st184000
it's not supported yet, responded here: Quantization aware training lower than 8-bits?
st184001
During static quantization of my model, I encounter the following error:

RuntimeError: Didn't find kernel to dispatch to for operator 'aten::_cat'. Tried to look up kernel for dispatch key 'QuantizedCPUTensorId'. Registered dispatch keys are: [CPUTensorId, VariableTensorId]

I have fused and quantized the model, as well as the input image, but it throws an error on the concat raised by:

y = torch.cat([sources[0], sources[1]], dim=1)

Any suggestions would be appreciated. Full code here: github.com/raghavgurbaxani/experiments/blob/master/try_static_quant.py

### CRAFT model here - https://github.com/clovaai/CRAFT-pytorch/blob/master/craft.py
import torch
import os
import time
from craft import CRAFT
import cv2
import numpy as np
import craft_utils
import imgproc
import file_utils
from torch.autograd import Variable

canvas_size=1280
mag_ratio=1.5
trained_model='craft_mlt_25k.pth'

def uninplace(model):
    if hasattr(model, 'inplace'):
        model.inplace = False
    if not model.children():
        return
    for child in model.children():
(file truncated in the original post)
st184002
Please see the usage of skip_add (the += operation) here: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#model-architecture The operators listed here https://pytorch.org/docs/stable/quantization.html#torch.nn.quantized.QFunctional should be replaced with their functional module counterparts in the network before post-training quantization.
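For illustration, a minimal sketch of what such a replacement can look like (the module and attribute names here are made up, not taken from the CRAFT code above): the torch.cat call is routed through an nn.quantized.FloatFunctional member, which carries its own observer and is swapped for a quantized QFunctional during convert.

import torch
import torch.nn as nn
import torch.nn.quantized as nnq

class ConcatBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # FloatFunctional records quantization stats for the cat output during calibration
        self.cat_op = nnq.FloatFunctional()

    def forward(self, sources):
        # instead of: y = torch.cat([sources[0], sources[1]], dim=1)
        return self.cat_op.cat([sources[0], sources[1]], dim=1)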
st184003
@dskhudia thank you for your suggestion. I replaced the 'cat' calls with nn.quantized.FloatFunctional().cat(), but I run into another error: TypeError: NumPy conversion for Variable[QuantizedCPUQUInt8Type] is not supported, from the line y[0,:,:,0].cpu().data.numpy() (line 52 in the link above). How do I convert this quantized variable to numpy? Thanks again for your help.
st184004
There are a couple of options depending on what you want: if you want the quantized integer data, use int_repr (https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor); if you want float data, dequantize and use your existing way of converting it to numpy.
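A small illustration of both options (q here is just a stand-in quantized tensor, like the y[0,:,:,1] output above):

import torch

x = torch.randn(4, 4)
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

int_view = q.int_repr().cpu().numpy()      # option 1: raw quantized integer values (uint8)
float_view = q.dequantize().cpu().numpy()  # option 2: dequantize back to float first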
st184005
@dskhudia Thank you very much, I changed score_link = y[0,:,:,1].cpu().data.numpy() to score_link = y[0,:,:,1].int_repr().cpu().data.numpy() as per your suggestion. But the prediction is very bad. Can you point me to how to dequantize the model ?
st184006
This is the final prediction, correct? If yes, you would need to dequantize the final tensor, e.g. using dequantized_y = y.dequantize()
st184007
thanks a lot @dskhudia, I tried both methods. The final prediction is really bad (original prediction from the FP32 model vs. prediction from the INT8 model shown in the attached images). Not sure where I'm going wrong.
st184008
You may want to try some quantization accuracy improvement techniques, such as: per-channel quantization for weights, quantization-aware training, and measuring torch.norm between the float model and the quantized model to see where it is off the most.
st184009
Is there an example for per-channel quantization and for measuring the torch norm between the 2 models?
st184010
For per-channel quantization see https://github.com/pytorch/tutorials/blob/master/advanced_source/static_quantization_tutorial.py and for the norm you can use something like the following:

SQNR = []
for i in range(len(ref_output)):
    SQNR.append(20 * torch.log10(torch.norm(ref_output[i][0]) / torch.norm(ref_output[i][0] - qtz_output[i][0])).numpy())
print('SQNR (dB)', SQNR)
st184011
@dskhudia The performance improved slightly after per-channel quantization, but it is still very bad. Do you think I should try float16 instead? If so, how do I change the qconfig to use float16? Also, in your earlier response, is ref_output the output from the net, i.e. ref_output = net(x)? Is that what you meant? Thanks again for your help, hope I can resolve this problem.
st184012
Float16 quantized operators do not exist for static quantization. Since current CPUs do not support float16 compute natively, converting to float16 doesn't provide much performance benefit for compute-bound cases. ref_output is from the float model. You might want to check the norm at a few different places in the network to see where it deviates too much from the floating point results.
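One way to check the norm at a few different places (a sketch, not from this thread: it assumes a float_model and a converted quant_model with matching submodule names, plus an input tensor x) is to record intermediate activations with forward hooks and compare them layer by layer:

import torch

def record_outputs(model, layer_names):
    # Attach forward hooks that store each named submodule's output in float.
    outputs, handles = {}, []
    modules = dict(model.named_modules())
    for name in layer_names:
        def make_hook(key):
            def hook(module, inp, out):
                outputs[key] = out.dequantize() if out.is_quantized else out.detach()
            return hook
        handles.append(modules[name].register_forward_hook(make_hook(name)))
    return outputs, handles

layers = ['basenet.slice1.3', 'basenet.slice2.14']   # hypothetical module names
ref_out, ref_handles = record_outputs(float_model, layers)
q_out, q_handles = record_outputs(quant_model, layers)
with torch.no_grad():
    float_model(x)
    quant_model(x)
for name in layers:
    rel = torch.norm(ref_out[name] - q_out[name]) / torch.norm(ref_out[name])
    print(name, 'relative error:', rel.item())
for h in ref_handles + q_handles:
    h.remove()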
st184013
In PyTorch there’s a way to compare the module level quantization error, which could help to debug and narrow down the issue. I’m working on an example and will post here later.
st184014
@Raghav_Gurbaxani, have you tried using the histogram observer for activations? In most cases this can improve the accuracy of the quantized model. You can do:

model.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_histogram_observer,
    weight=torch.quantization.default_per_channel_weight_observer)
st184015
thanks @hx89, it would be great if you could post that example for comparing module-level quantization error. In the meantime, I tried the histogram observer and the result is still pretty bad. Any other suggestions?
st184016
Have you checked the accuracy of fused_model? By checking the accuracy of fused_model before converting to the int8 model we can tell whether the issue is in the preprocessing part or in the quantized model. If fused_model has good accuracy, the next step is to check the quantization error of the weights. Could you try the following code:

def l2_error(ref_tensor, new_tensor):
    """Compute the l2 error between two tensors.

    Args:
        ref_tensor (numpy array): Reference tensor.
        new_tensor (numpy array): New tensor to compare with.

    Returns:
        abs_error: l2 error
        relative_error: relative l2 error
    """
    assert (
        ref_tensor.shape == new_tensor.shape
    ), "The shape between two tensors is different"

    diff = new_tensor - ref_tensor
    abs_error = np.linalg.norm(diff)
    ref_norm = np.linalg.norm(ref_tensor)
    if ref_norm == 0:
        if np.allclose(ref_tensor, new_tensor):
            relative_error = 0
        else:
            relative_error = np.inf
    else:
        relative_error = np.linalg.norm(diff) / ref_norm
    return abs_error, relative_error

float_model_dbg = fused_model
qmodel_dbg = quantized

for key in float_model_dbg.state_dict().keys():
    float_w = float_model_dbg.state_dict()[key]
    qkey = key
    # Get rid of extra hierarchy of the fused Conv in float model
    if key.endswith('.weight'):
        qkey = key[:-9] + key[-7:]
    if qkey in qmodel_dbg.state_dict():
        q_w = qmodel_dbg.state_dict()[qkey]
        if q_w.dtype == torch.float:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.detach().numpy())
        else:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.dequantize().numpy())
        print(key, ', abs error = ', abs_error, ", relative error = ", relative_error)

It should print out the quantization error for each Conv weight, such as:

features.0.0.weight , abs error = 0.21341866 , relative error = 0.01703797
features.3.squeeze.0.weight , abs error = 0.095942035 , relative error = 0.012483358
features.3.expand1x1.0.weight , abs error = 0.071949296 , relative error = 0.010309489
features.3.expand3x3.0.weight , abs error = 0.18284422 , relative error = 0.025256516
features.4.squeeze.0.weight , abs error = 0.088713735 , relative error = 0.011313644
features.4.expand1x1.0.weight , abs error = 0.0780085 , relative error = 0.0126931975
...
st184017
@hx89 the performance of the fused model is good. That means there's something wrong on the quantization side, not the fusion side. Here's the log of the relative norm errors - github.com/raghavgurbaxani/experiments/blob/master/quantization_error.txt

basenet.slice1.3.0.weight , abs error = 0.07433768 , relative error = 0.014456971
basenet.slice1.7.0.weight , abs error = 0.102016546 , relative error = 0.012097403
basenet.slice1.10.0.weight , abs error = 0.14640729 , relative error = 0.0126273325
basenet.slice2.14.0.weight , abs error = 0.13131897 , relative error = 0.011922532
basenet.slice2.17.0.weight , abs error = 0.17593716 , relative error = 0.011550295
basenet.slice3.20.0.weight , abs error = 0.21453191 , relative error = 0.011749155
basenet.slice3.24.0.weight , abs error = 0.29290414 , relative error = 0.012482245
basenet.slice3.27.0.weight , abs error = 0.5628253 , relative error = 0.011207958
basenet.slice4.30.0.weight , abs error = 0.19498727 , relative error = 0.010849289
basenet.slice4.34.0.weight , abs error = 0.53952134 , relative error = 0.010825124
basenet.slice4.37.bias , abs error = 0.0 , relative error = 0.0
basenet.slice5.1.bias , abs error = 0.0 , relative error = 0.0
basenet.slice5.2.bias , abs error = 0.0 , relative error = 0.0
upconv1.conv.0.0.weight , abs error = 0.064128004 , relative error = 0.009871936
upconv1.conv.3.0.weight , abs error = 0.13272223 , relative error = 0.011580461
upconv2.conv.0.0.weight , abs error = 0.36004254 , relative error = 0.00813957
upconv2.conv.3.0.weight , abs error = 0.11151659 , relative error = 0.010093152
upconv3.conv.0.0.weight , abs error = 0.19112265 , relative error = 0.0072073564
upconv3.conv.3.0.weight , abs error = 0.07007974 , relative error = 0.008535749
upconv4.conv.0.0.weight , abs error = 0.0805448 , relative error = 0.0067224735
(log truncated in the original post)

Can you suggest what to do next? Is there any way to reduce these errors, apart from QAT of course?
st184018
Looks like the first Conv basenet.slice1.3.0.weight has the largest error. Could you try skipping the quantization of that Conv and keeping it as a float module? We have previously seen that some CV models' first Conv is sensitive to quantization, and skipping it gives better accuracy.
st184019
@hx89 actually it seems like all of these have pretty high relative errors: [basenet.slice1.7.0.weight, basenet.slice1.10.0.weight, basenet.slice2.14.0.weight, basenet.slice2.17.0.weight, basenet.slice3.20.0.weight, basenet.slice3.24.0.weight, basenet.slice3.27.0.weight, basenet.slice4.30.0.weight, basenet.slice4.34.0.weight]. Keeping a few layers as float while converting the rest to int8 does seem like a good idea, but I am not sure how to pass the partial model to torch.quantization.convert() for quantization and then combine the partially quantized model and the unquantized layers together for inference on the image. Could you provide an example? Thanks a ton.
st184020
It's actually simpler. To skip the first conv, for example, there are two steps:

Step 1: Move the quant stub after the first conv in the forward function of the module. For example, in the original quantizable module the quant stub is at the beginning, before conv1:

class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x

To skip the quantization of conv1 we can move self.quant() after conv1:

class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.conv1(x)
        x = self.quant(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x

Step 2: Then we need to set the qconfig of conv1 to None after prepare(); this way PyTorch knows we want to keep conv1 as a float module and won't swap it with a quantized module:

model = QuantizableNet()
...
torch.quantization.prepare(model)
model.conv1.qconfig = None
st184021
I’m seeing unexpected behavior with post-training static quantization. My understanding is that after calibration, the instances of QuantStub and DeQuantStub are replaced by instances of torch.nn.quantized.Quantize and torch.nn.quantized.DeQuantize. What I’m seeing with torch 1.7.1 is that only DeQuantStub is being replaced. The output of the code below is: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. mwe.py:113: UserWarning: instance is type <class 'torch.quantization.stubs.QuantStub'>, not <class 'torch.nn.quantized.modules.Quantize'> Is my understanding correct? Am I missing a step? import time import os import warnings import torch from torchvision import models from torchvision import transforms from torchvision import datasets def get_loaders(data_dir, batch_size=256, num_workers=1, pin_memory=False): traindir = os.path.join(data_dir, 'train') valdir = os.path.join(data_dir, 'val') normalize = transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) train_dataset = datasets.ImageFolder( traindir, transforms.Compose( [ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize, ] ) ) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=pin_memory ) val_loader = torch.utils.data.DataLoader( datasets.ImageFolder( valdir, transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize, ] ) ), batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=pin_memory ) return train_loader, val_loader def calibrate(qmodel, data_loader, num_batches=1): for i, (input, target) in enumerate(data_loader): qmodel(input) if i == num_batches - 1: break def main(model_name, num_batches=1): try: func = getattr(models, model_name) model = func(pretrained=True) except AttributeError as ae: msg = f'Invalid model name {model_name}? {ae}' raise AttributeError(msg) """ Wrap model so inputs and outputs are quantized. """ quantized_model = torch.quantization.QuantWrapper(model) backend = "fbgemm" model.qconfig = torch.quantization.get_default_qconfig(backend) torch.backends.quantized.engine = backend """ Calibrate the quantizers. """ static_quantized_model = torch.quantization.prepare( quantized_model, inplace=False ) train_loader, val_loader = get_loaders( '/data/datasets/imagenet', pin_memory=False ) calibrate(static_quantized_model, train_loader, num_batches=num_batches) static_quantized_model = torch.quantization.convert( static_quantized_model, inplace=False ) """ After calling convert, quant and dequant should be instances of Quantize and DeQuantize. Strangely, quant remains an instance of torch.quantization.QuantStub.""" def warn_type(instance, _type): if not isinstance(instance, _type): msg = f'instance is type {type(instance)}, not {_type}' warnings.warn(msg) warn_type(static_quantized_model.quant, torch.nn.quantized.Quantize) warn_type(static_quantized_model.dequant, torch.nn.quantized.DeQuantize) if __name__ == '__main__': main('resnet50', num_batches=5)
st184022
Solved by raghuramank100 in post #2 I think the problem is that you are not setting the qconfig for the model that is being passed to prepare: model.qconfig = torch.quantization.get_default_qconfig(backend)
st184023
ndronen: model.qconfig = torch.quantization.get_default_qconfig(backend) I think the problem is that you are not setting the qconfig for the model that is being passed to prepare: model.qconfig = torch.quantization.get_default_qconfig(backend)
st184024
This is the correct answer. The stubs are properly replaced when I change model.qconfig = torch.quantization.get_default_qconfig(backend) to quantized_model.qconfig = torch.quantization.get_default_qconfig(backend) Thanks!
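For anyone skimming the thread, a condensed sketch of the fix (reusing names from the script above, e.g. the calibrate helper and the loaders): the qconfig has to live on the object that is actually passed to prepare, i.e. the QuantWrapper, not the inner model.

quantized_model = torch.quantization.QuantWrapper(model)
backend = "fbgemm"
torch.backends.quantized.engine = backend
# set the qconfig on the wrapped model that gets passed to prepare()
quantized_model.qconfig = torch.quantization.get_default_qconfig(backend)

prepared = torch.quantization.prepare(quantized_model, inplace=False)
calibrate(prepared, train_loader, num_batches=5)
converted = torch.quantization.convert(prepared, inplace=False)
assert isinstance(converted.quant, torch.nn.quantized.Quantize)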
st184025
Hello everyone, I am trying to quantize the retinanet for QAT. Firstly I wanted to quantize only some parts of the network and only then the whole net. In order to save time, I am using the Detectron2, but I suppose this issue is related to pytorch. First of all I tried to quantize RetinaNetHead (see the original one here - class RetinaNetHead: original retinanet in detectron2 5) my implementation of RetinaNetHead based on the original one as in tutorial for quantization: Quant and Dequant Stubs. 2. corresponding forward q_retinanet.py: class Q_RetinaNetHead(nn.Module): """ The head used in RetinaNet for object classification and box regression. It has two subnets for the two tasks, with a common structure but separate parameters. """ def __init__(self, cfg, input_shape: List[ShapeSpec]): super().__init__() # fmt: off in_channels = input_shape[0].channels num_classes = cfg.MODEL.RETINANET.NUM_CLASSES num_convs = cfg.MODEL.RETINANET.NUM_CONVS prior_prob = cfg.MODEL.RETINANET.PRIOR_PROB num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors # fmt: on assert ( len(set(num_anchors)) == 1 ), "Using different number of anchors between levels is not currently supported!" num_anchors = num_anchors[0] cls_subnet = [] # cls_subnet.append(QuantStub()) bbox_subnet = [] for _ in range(num_convs): cls_subnet.append( nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) ) cls_subnet.append(nn.ReLU()) bbox_subnet.append( nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) ) bbox_subnet.append(nn.ReLU()) # cls_subnet.append(DeQuantStub()) self.quant = QuantStub() # added line self.cls_subnet = nn.Sequential(*cls_subnet) # self.cls_dequant = DeQuantStub() #added line self.bbox_subnet = nn.Sequential(*bbox_subnet) self.cls_score = nn.Conv2d( in_channels, num_anchors * num_classes, kernel_size=3, stride=1, padding=1 ) self.bbox_pred = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=3, stride=1, padding=1) self.dequant = DeQuantStub() # added line # Initialization for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]: for layer in modules.modules(): if isinstance(layer, nn.Conv2d): torch.nn.init.normal_(layer.weight, mean=0, std=0.01) torch.nn.init.constant_(layer.bias, 0) # Use prior in model initialization to improve stability bias_value = -(math.log((1 - prior_prob) / prior_prob)) torch.nn.init.constant_(self.cls_score.bias, bias_value) def forward(self, features): """ Arguments: features (list[Tensor]): FPN feature map tensors in high to low resolution. Each tensor in the list correspond to different feature levels. Returns: logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi). The tensor predicts the classification probability at each spatial position for each of the A anchors and K object classes. bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi). The tensor predicts 4-vector (dx,dy,dw,dh) box regression values for every anchor. These values are the relative offset between the anchor and the ground truth box. 
""" logits = [] bbox_reg = [] for feature in features: logits.append( self.dequant(self.cls_score(self.cls_subnet(self.quant(feature))))) # added line: self,cls_quant() bbox_reg.append(self.dequant(self.bbox_pred(self.bbox_subnet(self.quant(feature))))) return logits, bbox_reg Fuse modules and configuration train_net.py: trainer.model.head.train() trainer.model.head.qconfig = torch.quantization.get_default_qconfig('fbgemm') modules_to_fuse = [['cls_subnet.0', 'cls_subnet.1'], ['cls_subnet.2', 'cls_subnet.3'], ['cls_subnet.4', 'cls_subnet.5'], ['cls_subnet.6', 'cls_subnet.7'], ['bbox_subnet.0', 'bbox_subnet.1'], ['bbox_subnet.2', 'bbox_subnet.3'], ['bbox_subnet.4', 'bbox_subnet.5'], ['bbox_subnet.6', 'bbox_subnet.7']] torch.quantization.fuse_modules(trainer.model.head, modules_to_fuse, inplace=True) torch.quantization.prepare_qat(trainer.model.head, inplace=True) do_train(cfg, trainer) trainer.model.head.eval() print("Convert->") torch.quantization.convert(trainer.model.head, inplace=True) The training precess is done successfully, but the last line with convert give me an error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! although I tried everything, even: cuda = torch.device('cuda:0') trainer.model.to(cuda) I also checked all the tensors and they are in cuda (please see Q_RetinaNetHead file after training) The entire architecture before training of the RetinaNet can be seen in (Q_RetinaNet file) My questiona are: how to get rid of this error Am I right, that qat process was done successfully and conver is only like an export of the already trained model? Best regard, yayapa files Q_RetinaNet and Q_RetinaNetHead can be found here as pdf 1
st184026
Solved by Vasiliy_Kuznetsov in post #2 can you try to move your model to CPU and see if that fixes the error? Currently quantized kernels are only supported on CPU. yes, that is correct
st184027
yayapa: "how to get rid of this error? Am I right that the QAT process was done successfully and convert is only like an export of the already trained model?" Can you try to move your model to CPU and see if that fixes the error? Currently quantized kernels are only supported on CPU. And yes, that is correct.
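In code that usually amounts to something like this (a sketch, assuming model is the QAT-prepared module that was trained on GPU):

# fake-quant training on GPU finishes, then:
model.eval()
model.to('cpu')                                        # quantized kernels only exist on CPU
quantized = torch.quantization.convert(model, inplace=False)

with torch.no_grad():
    preds = quantized(inputs.to('cpu'))                # inference also has to stay on CPU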
st184028
Thank you for the answer! Yes, if I move it to CPU (model.to(torch.device('cpu'))), it does work! Is there any workaround or something I can do to transfer it to GPU?
st184029
great to hear. Currently the convert function only works on CPU, because we do not have support for running the quantized kernels on GPU.
st184030
Thank you for explanation. Will this feature appear in 1.8.0? Or maybe already in the nightly build?
st184031
We are not planning to work on quantized kernel support on CUDA for v1.8, but we definitely welcome OSS contributions!
st184032
Hello. I am trying to post-training quantize a 3D ResNet using the new graph mode quantization. As far as I understand, both AdaptiveAvgPool3d and MaxPool3d have no quantized kernel and should be marked as "non traceable". From here ((prototype) FX Graph Mode Quantization User Guide — PyTorch Tutorials 1.8.1+cu102 documentation), I thought the following should do the trick:

prep_config_dict = {
    "non_traceable_module_class": [nn.MaxPool3d, nn.AdaptiveAvgPool3d]
}
prepared_model = prepare_fx(
    model_to_quantize, qconfig_dict,
    prepare_custom_config_dict=prep_config_dict)

Yet, it does not. Neither did some experiments using "non_traceable_module_name". Appreciate any help I can get. Thank you in advance.
st184033
@jerryzh168 I thought maxpool and avgpool were supported, can you comment on this, please?
st184034
Given I am using a 3D ResNet, I also need to use the 3D versions of these layers. As far as I can tell, only Average Pooling 3D is supported; neither Adaptive Average Pooling 3D nor Max Pooling 3D is supported.
st184035
you don't need to mark them as non-traceable modules I think; they are leaf modules and will not be traced by default. A non-traceable module is typically used to mark a submodule as untraceable (e.g. a submodule that contains conv - linear - other_ops - etc.). Adaptive avg pool3d: pytorch/quantization_patterns.py at master · pytorch/pytorch · GitHub and maxpool3d: pytorch/quantization_patterns.py at master · pytorch/pytorch · GitHub are supported; these ops work for both float and quantized inputs.
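So for a 3D ResNet the plain FX flow should work without any non-traceable markings; a minimal sketch (model_to_quantize is assumed to exist as in the post above, and calibration_loader is a hypothetical data loader):

import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

qconfig_dict = {"": get_default_qconfig("fbgemm")}
model_to_quantize.eval()
prepared_model = prepare_fx(model_to_quantize, qconfig_dict)

# calibrate with a few representative clips
with torch.no_grad():
    for clips, _ in calibration_loader:
        prepared_model(clips)

quantized_model = convert_fx(prepared_model)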
st184036
I'm trying to quantize (8 bits) a VGG16 model with the EMNIST (balanced) dataset. My non-quantized model works fine, but I get an error with the quantized model: when I redo the layers with quantization I get a NaN error, as below. Also attaching the function; I need help to see what I'm missing.

ValueError                                Traceback (most recent call last)
in ()
----> 1 testQuant(q_model, test_loader, quant=True, stats=stats)

4 frames
in calcScaleZeroPoint(min_val, max_val, num_bits)
     22   zero_point = initial_zero_point
     23
---> 24   zero_point = int(zero_point)
     25
     26   return scale, zero_point

ValueError: cannot convert float NaN to integer

def quantForward(model, x, stats):
    # Quantise before inputting into incoming layers
    x = quantize_tensor(x, min_val=stats['conv1_1']['min'], max_val=stats['conv1_1']['max'])
    x, scale_next, zero_point_next = quantizeLayer(x.tensor, model.conv1_1, stats['conv1_2'], x.scale, x.zero_point)
    x = F.max_pool2d(x, 2, 2)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv1_2, stats['conv2_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv2_1, stats['conv2_2'], scale_next, zero_point_next)
    x = F.max_pool2d(x, 2, 2)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv2_2, stats['conv3_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_1, stats['conv3_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_2, stats['conv3_3'], scale_next, zero_point_next)
    x = F.max_pool2d(x, 2, 2)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv3_3, stats['conv4_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_1, stats['conv4_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_2, stats['conv4_3'], scale_next, zero_point_next)
    #x = F.max_pool2d(x, 2, 2)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv4_3, stats['conv5_1'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_1, stats['conv5_2'], scale_next, zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_2, stats['conv5_3'], scale_next, zero_point_next)
    #x = F.max_pool2d(x, 2, 2)
    x, scale_next, zero_point_next = quantizeLayer(x, model.conv5_3, stats['fc6'], scale_next, zero_point_next)
    print(zero_point_next)
    x, scale_next, zero_point_next = quantizeLayer(x, model.fc6, stats['fc7'], scale_next, zero_point_next)
    x = x.view(x.shape[0], -1)
    x, scale_next, zero_point_next = quantizeLayer(x, model.fc7, stats['fc8'], scale_next, zero_point_next)
    # Back to dequant for final layer
    x = dequantize_tensor(QTensor(tensor=x, scale=scale_next, zero_point=zero_point_next))
    x = model.fc8(x)
    return F.log_softmax(x, dim=1)
st184037
Can you point me to the VGG model you are trying to quantize? Is it the one from torchvision?
st184038
I’m currently trying to statically quantize the AttentionCell that contains a LSTMCell on an x86 system: class AttentionCell(nn.Module): def __init__(self, input_size, hidden_size, num_embeddings): super(AttentionCell, self).__init__() self.i2h = nn.Linear(input_size, hidden_size, bias=False) self.h2h = nn.Linear(hidden_size, hidden_size) # either i2i or h2h should have bias self.score = nn.Linear(hidden_size, 1, bias=False) self.rnn = nn.LSTMCell(input_size + num_embeddings, hidden_size) self.hidden_size = hidden_size This is the custom config I’m using at the moment: custom_module_config = { 'float_to_observed_custom_module_class': { torch.nn.LSTMCell: torch.nn.quantizable.LSTMCell, torch.nn.LSTM: torch.nn.quantizable.LSTM } } However, after the preparing the model for quantization, during forwarding some samples to fine-tune the weights I get the following error: File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 149, in forward return self.module(*inputs, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/mlpococr/poc_ocr/recognition/model.py", line 121, in forward batch_max_length=self.opt.batch_max_length File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/mlpococr/poc_ocr/recognition/modules/prediction.py", line 55, in forward hidden, alpha = self.attention_cell(hidden, batch_H, char_onehots) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/mlpococr/poc_ocr/recognition/modules/prediction.py", line 91, in forward cur_hidden = self.rnn(concat_context, prev_hidden) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 893, in _call_impl hook_result = hook(self, input, result) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/quantization/quantize.py", line 83, in _observer_forward_hook return self.activation_post_process(output) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/cgarriga/projects/datascience-common/ocr_poc/venv/lib/python3.6/site-packages/torch/quantization/observer.py", line 900, in forward if x_orig.numel() == 0: AttributeError: 'tuple' object has no attribute 'numel' The forward method for the AttentionCell is the following: def forward(self, prev_hidden, batch_H, char_onehots): # [batch_size x num_encoder_step x num_channel] -> [batch_size x num_encoder_step x hidden_size] batch_H_proj = self.i2h(batch_H) prev_hidden_proj = self.h2h(prev_hidden[0]).unsqueeze(1) e = self.score(torch.tanh(batch_H_proj + prev_hidden_proj)) # batch_size x num_encoder_step * 1 alpha = 
F.softmax(e, dim=1) context = torch.bmm(alpha.permute(0, 2, 1), batch_H).squeeze(1) # batch_size x num_channel concat_context = torch.cat([context, char_onehots], 1) # batch_size x (num_channel + num_embedding) cur_hidden = self.rnn(concat_context, prev_hidden) return cur_hidden, alpha How could I solve this issue? Apparently the problem comes from the LSTMCell inside the AttentionCell. Thank you in advance!
st184039
Solved by raghuramank100 in post #2 Thanks for pointing this issue out. There is an error in pytorch/rnn.py at master · pytorch/pytorch · GitHub. Filing an issue to track this at: Quantizable LSTMCell does not work correctly. · Issue #55945 · pytorch/pytorch · GitHub
st184040
Thanks for pointing this issue out. There is an error in pytorch/rnn.py at master · pytorch/pytorch · GitHub. Filing an issue to track this at: Quantizable LSTMCell does not work correctly. · Issue #55945 · pytorch/pytorch · GitHub 8
st184041
I have applied the default torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8) to my models (TailorNet) then saved it using torch.save(model.state_dict() as usual. Loading the model again with multiprocessing via ProcessPoolExecutor produces the error below. Somehow using ThreadPoolExecutor works fine. Error message: Traceback (most recent call last): File "/usr/lib/python3.8/concurrent/futures/process.py", line 368, in _queue_management_worker result_item = result_reader.recv() File "/usr/lib/python3.8/multiprocessing/connection.py", line 251, in recv return _ForkingPickler.loads(buf.getbuffer()) File "/home/osboxes/.local/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 88, in rebuild_tensor t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride) File "/home/osboxes/.local/lib/python3.8/site-packages/torch/_utils.py", line 133, in _rebuild_tensor t = torch.tensor([], dtype=storage.dtype, device=storage.device) RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. How I collect the models: for t in concurrent.futures.as_completed(futures): runner, num = t.result()
st184042
hi @mfikryrizal , to clarify, you are able to load the same successfully with ThreadPoolExecutor but not with ProcessPoolExecutor? Do you have a repro (either your full model or a small test case) you can share?
st184043
I also come up with the same error when I try to use quantized distilbert model. Sample test case can be found in as follow. running under pytest, the first test case can pass but the second one shows error: import torch import torch.multiprocessing as mp from transformers import BertTokenizer, DistilBertConfig from transformers.models.distilbert import DistilBertPreTrainedModel, DistilBertModel def test_quantized_distil_bert_1(): model = DistilBertClassifier(distil_bert_config) model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8) tokenids, mask = _construct_model_inputs("hello world") result = model(tokenids, attention_mask=mask) assert result def test_quantized_distil_bert_2(): model = DistilBertClassifier(distil_bert_config) model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8) tokenids, mask = _construct_model_inputs("hello world") with mp.Pool(1) as pool: process = pool.apply_async(model, (tokenids, mask)) result = process.get(10) assert result def _construct_model_inputs(sentence: str): tokenids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) mask = torch.ones_like(tokenids, dtype=torch.int64) return tokenids, mask class DistilBertClassifier(DistilBertPreTrainedModel): def __init__(self, config: DistilBertConfig): super().__init__(config) self.num_labels = config.num_labels self.distilbert = DistilBertModel(config) self.pre_classifier = torch.nn.Linear(config.dim, config.dim) self.classifier = torch.nn.Linear(config.dim, config.num_labels) self.dropout = torch.nn.Dropout(config.seq_classif_dropout) torch.manual_seed(345) torch.cuda.manual_seed(345) self.init_weights() def forward(self, input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None): distilbert_output = self.distilbert( input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds ) hidden_state = distilbert_output[0] pooled_output = hidden_state[:, 0] pooled_output = self.pre_classifier(pooled_output) pooled_output = torch.nn.ReLU()(pooled_output) pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) outputs = (logits,) + distilbert_output[1:] if labels is not None: if self.num_labels == 1: loss_fct = torch.nn.MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = torch.nn.CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) outputs = (loss,) + outputs return outputs # (loss), logits, (hidden_states), (attentions) tokenizer = BertTokenizer.from_pretrained("tokenizer_path/") distil_bert_config = DistilBertConfig() Error stack trace: Process SpawnPoolWorker-1: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 313, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 114, in worker task = get() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 358, in get return _ForkingPickler.loads(res) File "/Users/dennis/.virtualenvs/fano_ms_intent/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 88, in rebuild_tensor t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride) File 
"/Users/dennis/.virtualenvs/fano_ms_intent/lib/python3.8/site-packages/torch/_utils.py", line 133, in _rebuild_tensor t = torch.tensor([], dtype=storage.dtype, device=storage.device) RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, MkldnnCPU, SparseCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
st184044
Hi, I know that static & dynamic quantization cannot run inference with CUDA, but I am wondering whether a QAT model can run inference with CUDA. Thanks.
st184045
From the PyTorch Quantization docs: Quantization-aware training (through FakeQuantize) supports both CPU and CUDA.
st184046
The quantization doc says that it does support both CPU and GPU. I tried the tutorial and it didn't work. I am still confused, because some users are saying it does not support GPU yet, e.g. in Is QAT Inference not support GPU?
st184047
You can run a QAT model prior to convert on GPU. Please look at the example in torchvision: vision/train_quantization.py at master · pytorch/vision · GitHub 42
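Boiled down, the pattern in that script looks roughly like this (a sketch with placeholder names, not the torchvision code itself; it assumes the model defines fuse_model() the way torchvision's quantizable models do): fake-quant training runs on CUDA, and only the final convert step runs on CPU.

import copy
import torch

model.train()
model.fuse_model()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

model.to('cuda')
for epoch in range(num_epochs):              # placeholder training loop
    train_one_epoch(model, train_loader)     # hypothetical helper

# convert a CPU copy for int8 evaluation; the quantized kernels are CPU-only
eval_model = copy.deepcopy(model).to('cpu').eval()
quantized_model = torch.quantization.convert(eval_model, inplace=False)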
st184048
After I applied QAT method and I tried to inference the model with GPU but I got this error below. However CPU is working fine. File “quantize_model.py”, line 359, in model = quantization_aware_training(model, device) File “quantize_model.py”, line 120, in quantization_aware_training torch.quantization.convert(quantized_eval_model, inplace=True) File “/opt/conda/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 471, in convert _convert( File “/opt/conda/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 507, in _convert _convert(mod, mapping, True, # inplace File “/opt/conda/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 509, in _convert reassign[name] = swap_module(mod, mapping, custom_module_class_mapping) File “/opt/conda/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 534, in swap_module new_mod = mapping[type(mod)].from_float(mod) File “/opt/conda/lib/python3.8/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py”, line 97, in from_float return super(ConvReLU2d, cls).from_float(mod) File “/opt/conda/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 418, in from_float return _ConvNd.from_float(cls, mod) File “/opt/conda/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 220, in from_float return cls.get_qconv(mod, activation_post_process, weight_post_process) File “/opt/conda/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 187, in get_qconv qweight = _quantize_weight(mod.weight.float(), weight_post_process) File “/opt/conda/lib/python3.8/site-packages/torch/nn/quantized/modules/utils.py”, line 14, in _quantize_weight qweight = torch.quantize_per_channel( RuntimeError: Could not run ‘aten::quantize_per_channel’ with arguments from the ‘CUDA’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes 1 for possible resolutions. ‘aten::quantize_per_channel’ is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. 
Quantization model.to(device) model.eval() model.fuse_model() model.qconfig = torch.quantization.get_default_qat_qconfig(‘fbgemm’) torch.quantization.prepare_qat(model, inplace=True) Evaluation with torch.no_grad(): model.eval() epoch_psnr = AverageMeter() quantized_eval_model = copy.deepcopy(model_without_ddp) quantized_eval_model.eval() quantized_eval_model.to(device) torch.quantization.convert(quantized_eval_model, inplace=True) for data in eval_dataloader: inputs, labels = data inputs = inputs.to(device, non_blocking=True) labels = labels.to(device, non_blocking=True) preds = quantized_eval_model(inputs).clamp(0.0, 1.0) Model self.quant = torch.quantization.QuantStub() self.conv_relu1 = ConvReLu(1, 64, _kernel_size=5, _padding=5//2) self.conv_relu2 = ConvReLu(64,32, _kernel_size=3, _padding=3//2) self.sub_pixel = nn.Sequential( nn.Conv2d(32, 1 * (scale_factor ** 2), kernel_size=3, stride=1, padding=1), nn.PixelShuffle(scale_factor), nn.Sigmoid() ) self.dequant = torch.quantization.DeQuantStub() def forward(self, x): x = self.quant(x) x = self.conv_relu1(x) x = self.conv_relu2(x) x = self.sub_pixel(x) x = self.dequant(x) return x class ConvReLu(nn.Sequential): def init(self, _in_channels, _out_channels, _kernel_size, _padding=0): super(ConvReLu, self).init( nn.Conv2d(_in_channels, _out_channels, kernel_size=_kernel_size, padding=_padding), nn.ReLU(_out_channels) )
st184049
How to quantize weights in pointwise convolution layer in MobileNet V2, not all the entire model’s parameters. I searched a lot, but most of the source code quantized the entire model’s parameters.
st184050
you can set the qconfig only for the submodule that you want to quantize, e.g.

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = ...
        self.conv2 = ...

m = M()
m.conv1.qconfig = qconfig  # if you only want to quantize conv1
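Expanding that into a self-contained sketch (a toy two-conv module, not MobileNetV2 itself): in eager mode the quantized region also needs QuantStub/DeQuantStub around it, and the stubs need a qconfig too so they get swapped during convert.

import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=1)              # pointwise conv we want quantized
        self.dequant = torch.quantization.DeQuantStub()
        self.conv2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)   # stays float

    def forward(self, x):
        x = self.quant(x)      # float -> quint8 after convert
        x = self.conv1(x)      # runs as a quantized conv after convert
        x = self.dequant(x)    # back to float for the unquantized part
        return self.conv2(x)

m = M().eval()
qconfig = torch.quantization.get_default_qconfig('fbgemm')
m.quant.qconfig = qconfig      # only the parts to be quantized get a qconfig
m.conv1.qconfig = qconfig
m.dequant.qconfig = qconfig

torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 32, 32))   # calibration pass
torch.quantization.convert(m, inplace=True)
print(type(m.conv1))           # -> torch.nn.quantized.Conv2d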
st184051
Just to be clear, this would be exactly like this: m.conv1.qconfig = qconfig right?
st184052
Hello, I am new to deep learning and PyTorch. I'm interested in making a fast deep-learning model, so I tried to run the dynamic quantized model from the BERT tutorial on pytorch.org. I ran the program on an Intel Xeon E5-2620 v4 system and checked that the quantized model is smaller than the original model (438M -> 181.5M), but the total evaluation time of the quantized model is slower than that of the original model (122.3 -> 123.2). I ran the program on a same-spec but different computer, and the result was the same. I also ran the program on AMD Ryzen 2600 and Ryzen 3950X systems; in this case execution speed was slower than on the Intel system, but the quantized model's total evaluation time was faster than the original model's.
st184053
As given in the tutorial here: https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html The size reduction from 438M -> 181.5M matches. For the time, we expect approximately a 2x performance improvement on Xeon E5-2620 v4. Can't say what the issue is without access to your env setup.
st184054
Hi, I’m in trouble with the same case. No speed-up. I ran the bert tutorial in a conda env on iMac. FP32: 437.98 MB, 106.6 sec, acc&f1 0.8811 QINT8: 181.44 MB, 108.2 sec, acc&f1 0.8799 python -V 3.8.5 torch.__version__ 1.6.0 <conda info> active environment : quant_bert active env location : /Users/*******/anaconda/envs/quant_bert shell level : 1 user config file : /Users/*******/.condarc populated config files : conda version : 4.5.11 conda-build version : 2.0.2 python version : 3.5.6.final.0 base environment : /Users/*******/anaconda (writable) channel URLs : https://repo.anaconda.com/pkgs/main/osx-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/free/osx-64 https://repo.anaconda.com/pkgs/free/noarch https://repo.anaconda.com/pkgs/r/osx-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/pro/osx-64 https://repo.anaconda.com/pkgs/pro/noarch package cache : /Users/*******/anaconda/pkgs /Users/*******/.conda/pkgs envs directories : /Users/*******/anaconda/envs /Users/*******/.conda/envs platform : osx-64 user-agent : conda/4.5.11 requests/2.14.2 CPython/3.5.6 Darwin/19.6.0 OSX/10.15.6 <H/W spec.> macOS Catalina 10.15.6 iMac (Retina 5K, 27-inch, Late 2014) Quad-Core Intel Core i5, 3.5GHz 32GB 1600 MHz DDR3 AMD Radeon R9 M290X 2 GB L2 cache: 256 KB / core L3 cache: 6 MB Thanks.
st184055
Thanks; My env setup of Intel system is as follows conda info active environment : danchu active env location : /home/danchu/anaconda3/envs/danchu shell level : 1 user config file : /home/danchu/.condarc populated config files : /home/danchu/.condarc conda version : 4.8.3 conda-build version : 3.18.11 python version : 3.8.3.final.0 virtual packages : __cuda=10.1 __glibc=2.17 base environment : /home/danchu/anaconda3 (writable) channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /home/danchu/anaconda3/pkgs /home/danchu/.conda/pkgs envs directories : /home/danchu/anaconda3/envs /home/danchu/.conda/envs platform : linux-64 user-agent : conda/4.8.3 requests/2.24.0 CPython/3.8.3 Linux/3.10.0-957.el7.x86_64 centos/7.6.1810 glibc/2.17 UID:GID : 1994:1994 netrc file : None offline mode : False package version numpy 1.19.2 torch 1.8.0.dev20201014+cu101 transformers 3.3.1 torch._config_.parallel_info() 1.8.0.dev20201014+cu101 ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 16 OpenMP 201511 (a.k.a. OpenMP 4.5) omp_get_max_threads() : 1 Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) std::thread::hardware_concurrency() : 32 Environment variables: OMP_NUM_THREADS : [not set] MKL_NUM_THREADS : [not set] ATen parallel backend: OpenMP torch._config_.show() PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) - OpenMP 201511 (a.k.a. OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 10.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 - CuDNN 7.6.3 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, HW SPEC cpu : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x 2 mem size : 65673788 kB gpu : Geforce GTX 1080ti x4
st184056
Can you two please try running with OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 ./binary?
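For reference, the same single-thread setting can also be forced from inside the script instead of the shell; a small sketch:

import torch

torch.set_num_threads(1)            # intra-op threads (OpenMP/MKL)
torch.set_num_interop_threads(1)    # inter-op threads; must be set before any parallel work starts
print(torch.__config__.parallel_info())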
st184057
I’ve tried as follows, export MKL_NUM_THREADS = 1 export OMP_NUM_THREADS = 1 python Bert_quantze_tutorial.py touch.config.parallel_info() 1.8.0.dev20201014+cu101 ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 16 OpenMP 201511 (a.k.a. OpenMP 4.5) omp_get_max_threads() : 1 Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) std::thread::hardware_concurrency() : 32 Environment variables: OMP_NUM_THREADS : 1 MKL_NUM_THREADS : 1 But can’t get any speed up with this setting; Result Size (MB): 438.021641 Size (MB): 181.502781 Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████| 51/51 [02:00<00:00, 2.37s/it] {'acc': 0.8602941176470589, 'f1': 0.9018932874354562, 'acc_and_f1': 0.8810937025412575} Evaluate total time (seconds): 120.8 Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████| 51/51 [02:01<00:00, 2.39s/it] {'acc': 0.8578431372549019, 'f1': 0.8999999999999999, 'acc_and_f1': 0.878921568627451} Evaluate total time (seconds): 122.0
st184058
Thanks. 1] What do you mean “.binary”? 2] Before applying your comment, torch.__config__.parallel_info() ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 2 OpenMP not found Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications mkl_get_max_threads() : 4 Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0) std::thread::hardware_concurrency() : 4 Environment variables: OMP_NUM_THREADS : [not set] MKL_NUM_THREADS : [not set] ATen parallel backend: OpenMP Size (MB): 438.017609 Size (MB): 181.499089 Evaluating: 100%|███████████████████████████████████████████████████████████████████████| 51/51 [01:52<00:00, 2.21s/it] {'acc': 0.8602941176470589, 'f1': 0.9018932874354562, 'acc_and_f1': 0.8810937025412575} Evaluate total time (seconds): 112.9 Evaluating: 100%|███████████████████████████████████████████████████████████████████████| 51/51 [01:51<00:00, 2.18s/it] {'acc': 0.8578431372549019, 'f1': 0.8993055555555555, 'acc_and_f1': 0.8785743464052287} Evaluate total time (seconds): 111.2 3] After, ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 2 OpenMP not found Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0) std::thread::hardware_concurrency() : 4 Environment variables: OMP_NUM_THREADS : 1 MKL_NUM_THREADS : 1 ATen parallel backend: OpenMP Size (MB): 438.017609 Size (MB): 181.499089 Evaluating: 100%|███████████████████████████████████████████████████████████████████████| 51/51 [01:49<00:00, 2.15s/it] {'acc': 0.8602941176470589, 'f1': 0.9018932874354562, 'acc_and_f1': 0.8810937025412575} Evaluate total time (seconds): 110.0 Evaluating: 100%|███████████████████████████████████████████████████████████████████████| 51/51 [01:47<00:00, 2.12s/it] {'acc': 0.8578431372549019, 'f1': 0.8993055555555555, 'acc_and_f1': 0.8785743464052287} Evaluate total time (seconds): 108.0 I couldn’t get a meaningful speed-up. p.s) Interface of function convert_examples_to_features from transformers was changed. So I modified run.py.
st184059
I’ve tried another spec Intel machine as follows: HW SPEC cpu : Intel(R) Xeon(R) silver 4110 cpu @ 2.10GHz (skylake) mem size : 65673788 kB gpu : Geforce GTX 1080ti x4 touch.__config__.parallel_info() 1.8.0.dev20201014+cu101 ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 8 OpenMP 201511 (a.k.a. OpenMP 4.5) omp_get_max_threads() : 1 Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) std::thread::hardware_concurrency() : 16 Environment variables: OMP_NUM_THREADS : [not set] MKL_NUM_THREADS : [not set] ATen parallel backend: OpenMP touch.__config__.show() PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) - OpenMP 201511 (a.k.a. OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 10.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 - CuDNN 7.6.3 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, Result Size (MB): 438.021641 Size (MB): 181.502781 Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████| 51/51 [03:19<00:00, 3.92s/it] {'acc': 0.8602941176470589, 'f1': 0.9018932874354562, 'acc_and_f1': 0.8810937025412575} Evaluate total time (seconds): 200.0 Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████| 51/51 [01:57<00:00, 2.31s/it] {'acc': 0.8578431372549019, 'f1': 0.8999999999999999, 'acc_and_f1': 0.878921568627451} Evaluate total time (seconds): 117.6 In this case, I can get considerable speed up with same env_setting&code; I think that Intel broadwell&haswell processors are related with this issue;
st184060
Micro-architecture of CPU seems to matter. My Macbook has a CPU based on Kaby Lake (successor of Sky Lake), and shows an expected speed-up at the quantized mode.
st184061
One thing that I missed to talk about; Another tutorial “DYNAMIC QUANTIZATION ON AN LSTM WORD LANGUAGE MODEL” shows speed-up with Xeon e5 2620 v4 system; Env-set and result as folows: torch._ config _.parallel_info() ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 16 OpenMP 201511 (a.k.a. OpenMP 4.5) omp_get_max_threads() : 1 Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) std::thread::hardware_concurrency() : 32 Environment variables: OMP_NUM_THREADS : [not set] MKL_NUM_THREADS : [not set] ATen parallel backend: OpenMP torch._config_.show PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) - OpenMP 201511 (a.k.a. OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 10.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 - CuDNN 7.6.3 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, Result loss: 5.167 elapsed time (seconds): 225.1 loss: 5.168 elapsed time (seconds): 148.0
st184062
danchu: "I think that Intel Broadwell & Haswell processors are related to this issue" We haven't benchmarked Haswell, but as long as it has AVX2 we should see similar speedups.
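A quick way to check what the local build and CPU actually support (torch.__config__.show() already appears in the config dumps in this thread; the quantized-backend query is part of core PyTorch):

import torch

print(torch.backends.quantized.supported_engines)   # e.g. ['none', 'fbgemm'] on an AVX2-capable x86 CPU
print(torch.backends.quantized.engine)              # currently selected backend
print(torch.__config__.show())                      # includes a "CPU capability usage" line (e.g. AVX2)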
st184063
I also tried dynamic quatization on transformer word language model. (It is in https://github.com/pytorch/examples/tree/master/word_language_model 1) And in this case, it shows some speed-up with Xeon e5 2620 v4 system too;;; Env-set and result as folows: torch._ config _.parallel_info() ATen/Parallel: at::get_num_threads() : 1 at::get_num_interop_threads() : 16 OpenMP 201511 (a.k.a. OpenMP 4.5) omp_get_max_threads() : 1 Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications mkl_get_max_threads() : 1 Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) std::thread::hardware_concurrency() : 32 Environment variables: OMP_NUM_THREADS : [not set] MKL_NUM_THREADS : [not set] ATen parallel backend: OpenMP torch.config.show PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f) - OpenMP 201511 (a.k.a. OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 10.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 - CuDNN 7.6.3 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, Result 캡처1003×174 4.08 KB
st184064
Hello, I am also trying to reproduce the results of the dynamic quantization example provided at (beta) Dynamic Quantization on BERT — PyTorch Tutorials 1.7.1 documentation 2

from transformers import BertForSequenceClassification, BertTokenizerFast
import torch
import os

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased-finetuned-mrpc")
model = BertForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")

batch_size = 32
sequence_length = 128
vocab_size = len(tokenizer.vocab)
token_ids = torch.randint(vocab_size, (batch_size, sequence_length))

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p")/1e6)
    os.remove('temp.p')

quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

I then compare the memory footprint and performance of the baseline model and its quantized counterpart:

print_size_of_model(model)
%timeit model(token_ids)
Size (MB): 433.3
2.16 s ± 30.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

print_size_of_model(quantized_model)
%timeit quantized_model(token_ids)
Size (MB): 176.8
2.08 s ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Quantization seems to be working fine based on the significantly reduced memory footprint, but I cannot reproduce the execution speed benefits highlighted in the example (close to 50%).

I am running on CentOS Linux release 7.9.2009 with an Intel® Xeon® CPU E5-2690 v3. I am running transformers v4.3.2 with torch v1.7.0. I installed the CUDA 11.0 version of PyTorch, but I am not moving the tensors or the models to the GPU in the example provided. I ran the same example on a notebook with an i5-8350U CPU and could observe a significant speed-up. Could this be linked to an issue with Haswell CPUs?

Detailed configuration:
ATen/Parallel:
at::get_num_threads() : 6
at::get_num_interop_threads() : 3
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 6
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 6
Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
std::thread::hardware_concurrency() : 6
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP

PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.0.4
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
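One thing worth checking when comparing the two timings is the thread configuration, since intra-op parallelism can affect the fp32 and int8 kernels differently. A small sketch, only as a measurement aid (not a claimed fix), reusing model, quantized_model, and token_ids from the snippet above:

import torch
from torch.utils import benchmark

torch.set_num_threads(1)  # pin intra-op threads so the fp32 and int8 runs are comparable

t_fp32 = benchmark.Timer(stmt="m(token_ids)", globals={"m": model, "token_ids": token_ids})
t_int8 = benchmark.Timer(stmt="m(token_ids)", globals={"m": quantized_model, "token_ids": token_ids})
print(t_fp32.timeit(10))
print(t_int8.timeit(10))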
st184065
Hi, have you solved this problem? I'd like to accelerate a model with static quantization on a Xeon E5-2680 v4 but am seeing the same thing. Is it really a problem with the CPU microarchitecture?
st184066
Hi, I’m trying to implement Quantization Aware Training as part of my Tiny YOLOv3 model (have mostly used ultralytics/yolov3 as the base for my code). This is what my model architecture looks like: Model( (model): Sequential( (0): Conv( (conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(16, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (2): Conv( (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (4): Conv( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Conv( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (8): Conv( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (11): ZeroPad2d(padding=[0, 1, 0, 1], value=0.0) (12): MaxPool2d(kernel_size=2, stride=1, padding=0, dilation=1, ceil_mode=False) (13): Conv( (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(1024, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (14): Conv( (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (15): Conv( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (16): Conv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (17): Upsample(scale_factor=2.0, mode=nearest) (18): Concat() (19): Conv( (conv): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): LeakyReLU(negative_slope=0.1, inplace=True) ) (20): Detect( (m): ModuleList( (0): Conv2d(256, 27, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(512, 27, kernel_size=(1, 1), 
stride=(1, 1)) ) ) ) ) and this is the code snippet that I have where I am trying to implement the QAT (as part of my train.py script): model.train() # Set backend to qnnpack torch.backends.quantized.engine = 'qnnpack' # QAT: Attach global qconfig model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack') # QAT: Fuse modules model_fused = torch.quantization.fuse_modules(model, [[model.model[2].conv, model.model[2].bn], [model.model[4].conv, model.model[4].bn], [model.model[6].conv, model.model[6].bn], [model.model[8].conv, model.model[8].bn], [model.model[10].conv, model.model[10].bn], [model.model[13].conv, model.model[13].bn], [model.model[14].conv, model.model[14].bn], [model.model[15].conv, model.model[15].bn], [model.model[16].conv, model.model[16].bn], [model.model[19].conv, model.model[19].bn]], inplace=True) # QAT: Perform "fake quantization" model_prepared = torch.quantization.prepare_qat(model_fused, inplace=True) The issue that I’m facing with this is that in the fuse_modules() method, I’m getting the following error Traceback (most recent call last): File "train.py", line 606, in <module> train(hyp, opt, device, tb_writer, wandb) File "train.py", line 123, in train [model.model[19].conv, model.model[19].bn]], inplace=True) File "/azureml-envs/azureml_0ae001c63ee102296c480ee5afc65405/lib/python3.6/site-packages/torch/quantization/fuse_modules.py", line 146, in fuse_modules _fuse_modules(model, module_list, fuser_func, fuse_custom_config_dict) File "/azureml-envs/azureml_0ae001c63ee102296c480ee5afc65405/lib/python3.6/site-packages/torch/quantization/fuse_modules.py", line 74, in _fuse_modules mod_list.append(_get_module(model, item)) File "/azureml-envs/azureml_0ae001c63ee102296c480ee5afc65405/lib/python3.6/site-packages/torch/quantization/fuse_modules.py", line 15, in _get_module tokens = submodule_key.split('.') File "/azureml-envs/azureml_0ae001c63ee102296c480ee5afc65405/lib/python3.6/site-packages/torch/nn/modules/module.py", line 948, in __getattr__ type(self).__name__, name)) AttributeError: 'Conv2d' object has no attribute 'split' I’ve followed the QAT tutorial on the PyTorch docs and can’t seem to understand why this error is occurring.
st184068
Hi,
torch.quantization.fuse_modules(model, list) expects a list of names of the operations to be fused as its second argument. However, you passed the operations themselves, which causes the error. Try changing the second argument to the names of your layers as they are defined in the init method of your model. A short example: if your model is defined like this:

class Net(nn.Module):
    def __init__(self, scale):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, padding=1)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=32, kernel_size=3, padding=1)
        self.relu2 = nn.ReLU(inplace=True)
        ...

you can fuse layers with the following:

model = torch.quantization.fuse_modules(model, [['conv1', 'relu1'], ["conv2", "relu2"]])

Hope this helps!
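Applying the same idea to the Tiny YOLOv3 model printed earlier in the thread, the fuse list would use dotted string paths into the nested Sequential. A rough sketch, assuming the layers are registered under model.model exactly as printed and fusing only the Conv+BatchNorm pairs the original snippet targeted:

fuse_list = [[f"model.{i}.conv", f"model.{i}.bn"] for i in (2, 4, 6, 8, 10, 13, 14, 15, 16, 19)]
model_fused = torch.quantization.fuse_modules(model, fuse_list, inplace=True)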
st184069
I had applied QuantWrapper() on a pre-trained FasterRCNN model with mobilenet v3-320 backbone. Though the model size has reduced to 25% of the original size, the inference time is approximately the same. QuantWrapper( (quant): QuantStub() (dequant): DeQuantStub() (module): QuantWrapper( (quant): QuantStub() (dequant): DeQuantStub() (module): FasterRCNN( (transform): GeneralizedRCNNTransform( Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) Resize(min_size=(320,), max_size=640, mode='bilinear') ) (backbone): BackboneWithFPN( (body): IntermediateLayerGetter( (0): ConvBNActivation( (0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): FrozenBatchNorm2d(16, eps=1e-05) (2): Hardswish() ) (1): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False) (1): FrozenBatchNorm2d(16, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(16, eps=1e-05) (2): Identity() ) ) ) (2): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(64, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False) (1): FrozenBatchNorm2d(64, eps=1e-05) (2): ReLU(inplace=True) ) (2): ConvBNActivation( (0): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(24, eps=1e-05) (2): Identity() ) ) ) (3): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(72, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=72, bias=False) (1): FrozenBatchNorm2d(72, eps=1e-05) (2): ReLU(inplace=True) ) (2): ConvBNActivation( (0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(24, eps=1e-05) (2): Identity() ) ) ) (4): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(72, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False) (1): FrozenBatchNorm2d(72, eps=1e-05) (2): ReLU(inplace=True) ) (2): SqueezeExcitation( (fc1): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(40, eps=1e-05) (2): Identity() ) ) ) (5): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(120, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False) (1): FrozenBatchNorm2d(120, eps=1e-05) (2): ReLU(inplace=True) ) (2): SqueezeExcitation( (fc1): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(40, eps=1e-05) 
(2): Identity() ) ) ) (6): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(120, eps=1e-05) (2): ReLU(inplace=True) ) (1): ConvBNActivation( (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False) (1): FrozenBatchNorm2d(120, eps=1e-05) (2): ReLU(inplace=True) ) (2): SqueezeExcitation( (fc1): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(40, eps=1e-05) (2): Identity() ) ) ) (7): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(240, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=240, bias=False) (1): FrozenBatchNorm2d(240, eps=1e-05) (2): Hardswish() ) (2): ConvBNActivation( (0): Conv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(80, eps=1e-05) (2): Identity() ) ) ) (8): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(80, 200, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(200, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(200, 200, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=200, bias=False) (1): FrozenBatchNorm2d(200, eps=1e-05) (2): Hardswish() ) (2): ConvBNActivation( (0): Conv2d(200, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(80, eps=1e-05) (2): Identity() ) ) ) (9): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(184, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False) (1): FrozenBatchNorm2d(184, eps=1e-05) (2): Hardswish() ) (2): ConvBNActivation( (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(80, eps=1e-05) (2): Identity() ) ) ) (10): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(184, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False) (1): FrozenBatchNorm2d(184, eps=1e-05) (2): Hardswish() ) (2): ConvBNActivation( (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(80, eps=1e-05) (2): Identity() ) ) ) (11): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(480, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False) (1): FrozenBatchNorm2d(480, eps=1e-05) (2): Hardswish() ) (2): SqueezeExcitation( (fc1): Conv2d(480, 120, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(120, 480, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(112, eps=1e-05) (2): Identity() ) ) ) (12): InvertedResidual( (block): 
Sequential( (0): ConvBNActivation( (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(672, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(672, 672, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=672, bias=False) (1): FrozenBatchNorm2d(672, eps=1e-05) (2): Hardswish() ) (2): SqueezeExcitation( (fc1): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(112, eps=1e-05) (2): Identity() ) ) ) (13): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(672, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=672, bias=False) (1): FrozenBatchNorm2d(672, eps=1e-05) (2): Hardswish() ) (2): SqueezeExcitation( (fc1): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(672, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(160, eps=1e-05) (2): Identity() ) ) ) (14): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(960, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False) (1): FrozenBatchNorm2d(960, eps=1e-05) (2): Hardswish() ) (2): SqueezeExcitation( (fc1): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(160, eps=1e-05) (2): Identity() ) ) ) (15): InvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(960, eps=1e-05) (2): Hardswish() ) (1): ConvBNActivation( (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False) (1): FrozenBatchNorm2d(960, eps=1e-05) (2): Hardswish() ) (2): SqueezeExcitation( (fc1): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1)) (relu): ReLU(inplace=True) (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1)) ) (3): ConvBNActivation( (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(160, eps=1e-05) (2): Identity() ) ) ) (16): ConvBNActivation( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d(960, eps=1e-05) (2): Hardswish() ) ) (fpn): FeaturePyramidNetwork( (inner_blocks): ModuleList( (0): Conv2d(160, 256, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(960, 256, kernel_size=(1, 1), stride=(1, 1)) ) (layer_blocks): ModuleList( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (extra_blocks): LastLevelMaxPool() ) ) (rpn): RegionProposalNetwork( (anchor_generator): AnchorGenerator() (head): RPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (cls_logits): Conv2d(256, 15, kernel_size=(1, 1), stride=(1, 1)) (bbox_pred): Conv2d(256, 60, 
kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): RoIHeads( (box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2) (box_head): TwoMLPHead( (fc6): Linear(in_features=12544, out_features=1024, bias=True) (fc7): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNPredictor( (cls_score): Linear(in_features=1024, out_features=91, bias=True) (bbox_pred): Linear(in_features=1024, out_features=364, bias=True) ) ) ) ) ) Any suggestions?
st184070
one good next step would be to run per-op profiling (PyTorch Profiler — PyTorch Tutorials 1.8.0 documentation 7) on both the fp32 and quantized versions of your model, to see which kernels are contributing the most to inference time
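A minimal profiling sketch for that comparison (model_fp32 / model_int8 and the 320×320 input are placeholder names and shapes, not taken from the thread):

import torch
from torch.profiler import profile, ProfilerActivity

example = [torch.rand(3, 320, 320)]  # torchvision detection models take a list of images
for name, m in [("fp32", model_fp32), ("int8", model_int8)]:
    m.eval()
    with torch.no_grad(), profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        m(example)
    print(name)
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=15))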
st184071
Thank you for the reply. I’m however getting an error on the quantized model Could not run ‘aten::quantize_per_tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. ‘aten::quantize_per_tensor’ is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
st184072
this means you are quantizing an already quantized Tensor. It happens when QuantStub is not placed correctly in the original model, so a Tensor ends up being quantized multiple times, e.g.

x = self.quant(x)
x = self.quant2(x)

It looks like you have a QuantWrapper over both the original model and the FasterRCNN model, which means the input of the FasterRCNN model is quantized twice.
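A small sketch of the intended structure, with a single quantize/dequantize pair around the float model (the class and variable names here are illustrative only):

import torch
import torch.nn as nn

class Wrapped(nn.Module):
    def __init__(self, float_model):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # quantizes exactly once on the way in
        self.module = float_model                        # the float model, not an already wrapped one
        self.dequant = torch.quantization.DeQuantStub()  # back to fp32 on the way out

    def forward(self, x):
        x = self.quant(x)
        x = self.module(x)
        return self.dequant(x)

This is essentially what a single torch.quantization.QuantWrapper(float_model) call gives you; applying it twice nests two QuantStubs and triggers the error above.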
st184073
Do you think it’s because of the (quant): QuantStub() (dequant): DeQuantStub() (module): QuantWrapper( (quant): QuantStub() (dequant): DeQuantStub() Portion? Anyways, now I fixed it, and found this error while using the Profiler > Expected input images to be of floating type (in range [0, 1]), but found type torch.quint8 instead
st184074
@jerryzh168 @Vasiliy_Kuznetsov Irrespective of the above error, I still don't understand how the inference time is almost the same even though the proper modules are shown as quantized when the model is printed.

# Normal model
model_inference 5.31% 24.628ms 99.99% 463.702ms 463.702ms
# Static quantized model
model_inference 4.93% 22.530ms 99.99% 456.504ms 456.504ms
# ROI quantized model (dynamically quantizing the Linear layers)
model_inference 5.31% 24.628ms 99.99% 461.05ms 461.05ms
st184075
I want to know what happens when I call the torch.ops.quantized.add API, but I have no idea where the corresponding C++ code is.
st184077
Hi @Jonson , you can find the code for quantized add here: pytorch/qadd.cpp at master · pytorch/pytorch · GitHub 7
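For reference, the op can also be exercised directly from Python on quantized tensors, which makes it easy to step into that C++ kernel with a debugger (the values below are arbitrary):

import torch

scale, zero_point = 0.1, 0
a = torch.quantize_per_tensor(torch.rand(4), scale, zero_point, torch.quint8)
b = torch.quantize_per_tensor(torch.rand(4), scale, zero_point, torch.quint8)
# quantized::add takes the output scale and zero_point explicitly.
c = torch.ops.quantized.add(a, b, scale, zero_point)
print(c.dequantize())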
st184078
Hi, I am trying to apply static quantization to a model which has an nn.ModuleList object as one of its modules. To fuse the layers we need to pass the list of layers as strings to torch.quantization.fuse_modules, so I tried:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.linears = nn.ModuleList([nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 10), nn.ReLU()])
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        x = self.quant(x)
        x = self.linears[0](x)
        x = self.linears[1](x)
        x = self.linears[2](x)
        x = self.linears[3](x)
        x = self.dequant(x)
        return x

model_fp32 = MyModule()
model_fp32.eval()
input_fp32 = torch.randn(4, 10, 10)
out = model_fp32(input_fp32)

model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32_prepared = torch.quantization.fuse_modules(model_fp32, ["linears[0]", "linears[1]"])

-> throws error: AttributeError: 'MyModule' object has no attribute 'linears[0]'
st184080
hi @hirenpatel1207 , if you use ["linears.0","linears.1"] instead of ["linears[0]","linears[1]" ], it should work.
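Applied to the MyModule example above, the call would look like this (pairing each Linear with its following ReLU; the variable name is just for illustration):

model_fp32_fused = torch.quantization.fuse_modules(
    model_fp32,
    [["linears.0", "linears.1"], ["linears.2", "linears.3"]],
)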
st184081
Since matrix multiplication is not supported for model quantization I’m performing it with a nn.Linear layer which I change its weigths in every forward pass. This approach works well for the FP32 model but it crashes when the model is quantized. The issue is that, when the model is converted to int8, the following lines of code are not valid self.linear.weight.requires_grad = False self.linear.weight.copy_ (input1[b]) because in the converted model self.linear.weight is not a torch.nn.Paramater but a method which returns a Tensor Any workarround on this? FULL CODE import torch import torch.nn as nn class BatchedMatMul(nn.Module): def __init__(self): super().__init__() self.quant = torch.quantization.QuantStub() self.linear = nn.Linear(3,3, bias=False) self.dequant = torch.quantization.DeQuantStub() def forward(self, input1, input2): y = [] for b in range(input1.shape[0]): print(f"Linear's type: {type(self.linear)}") print(f"Linear's weigth type: {type(self.linear.weight)}") self.linear.weight.requires_grad = False self.linear.weight.copy_ (self.quant(input1[b])) y.append(self.linear(self.quant(input2[b]))) return self.dequant(torch.stack(y)) print("Cronstruct model...") matmul = BatchedMatMul() print("Cronstruct model... [OK]") matmul.eval() print("Running FP32 inference...") inp = torch.ones(3, 3).repeat(2,1,1) y = matmul(inp, inp) print(y) print("Running FP32 inference... [OK]") print("Quantizing...") matmul.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') matmul_prepared = torch.quantization.prepare(matmul) matmul_prepared(inp, inp) model_int8 = torch.quantization.convert(matmul_prepared) print("Quantizing... [OK]") print("Running INT8 inference...") y = model_int8(inp, inp) print(y) print("Running INT8 inference..[OK]") OUTPUT Cronstruct model... Cronstruct model... [OK] Running FP32 inference... Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> tensor([[[3., 3., 3.], [3., 3., 3.], [3., 3., 3.]], [[3., 3., 3.], [3., 3., 3.], [3., 3., 3.]]]) Running FP32 inference... [OK] Quantizing... Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Quantizing... [OK] Running INT8 inference... Linear's weigth type: <class 'method'> /usr/local/lib/python3.6/dist-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. reduce_range will be deprecated in a future release of PyTorch." --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-81-024fd82f94de> in <module>() 34 print("Quantizing... [OK]") 35 print("Running INT8 inference...") ---> 36 y = model_int8(inp, inp) 37 print(y) 38 print("Running INT8 inference..[OK]") 1 frames <ipython-input-81-024fd82f94de> in forward(self, input1, input2) 10 for b in range(input1.shape[0]): 11 print(f"Linear's weigth type: {type(self.linear.weight)}") ---> 12 self.linear.weight.requires_grad = False 13 self.linear.weight.copy_ (input1[b]) 14 y.append(self.linear(input2[b])) AttributeError: 'method' object has no attribute 'requires_grad'
st184082
I come acrross with the following: class BatchedMatMul(nn.Module): def __init__(self): super().__init__() self.quant = torch.quantization.QuantStub() self.linear = nn.Linear(3,3, bias=False) self.dequant = torch.quantization.DeQuantStub() def forward(self, input1, input2): y = [] for b in range(input1.shape[0]): print(f"Linear's type: {type(self.linear)}") print(f"Linear's weigth type: {type(self.linear.weight)}") if isinstance(self.linear.weight, nn.Parameter): self.linear.weight.requires_grad = False self.linear.weight.copy_ (self.quant(input1[b])) y.append(self.linear(self.quant(input2[b]))) else: self.linear.set_weight_bias(self.quant(input1[b]), b=None) y.append(self.linear(self.quant(input2[b]))) return self.dequant(torch.stack(y)) self.linear has changeg from torch.nn.modules.linear.Linear to torch.nn.quantized.modules.linear.Linear so their methods and attributes are different. Nevertheless, this approach is still throwing an error because the quantized linear layer expects an signed integer as parameter but an unsigned is being given… --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-112-629b4f9ebdc0> in <module>() 42 print("Quantizing... [OK]") 43 print("Running INT8 inference...") ---> 44 y = model_int8.forward(inp, inp) 45 print(y) 46 print("Running INT8 inference..[OK]") 2 frames /usr/local/lib/python3.6/dist-packages/torch/nn/quantized/modules/linear.py in set_weight_bias(self, weight, bias) 21 def set_weight_bias(self, weight: torch.Tensor, bias: Optional[torch.Tensor]) -> None: 22 if self.dtype == torch.qint8: ---> 23 self._packed_params = torch.ops.quantized.linear_prepack(weight, bias) 24 elif self.dtype == torch.float16: 25 self._packed_params = torch.ops.quantized.linear_prepack_fp16(weight, bias) RuntimeError: expected scalar type QInt8 but found QUInt8
st184083
I think I’m close… what do you think? import torch import torch.nn as nn class BatchedMatMul(nn.Module): def __init__(self): super().__init__() self.quant = torch.quantization.QuantStub() self.linear = nn.Linear(3,3, bias=False) self.dequant = torch.quantization.DeQuantStub() def forward(self, input1, input2): y = [] for b in range(input1.shape[0]): print(f"Linear's type: {type(self.linear)}") print(f"Linear's weigth type: {type(self.linear.weight)}") if isinstance(self.linear.weight, nn.Parameter): self.linear.weight.requires_grad = False self.linear.weight.copy_ (self.quant(input1[b])) y.append(self.linear(self.quant(input2[b]))) else: scale = self.linear.weight().q_per_channel_scales() zero_point = self.linear.weight().q_per_channel_zero_points() w = torch.quantize_per_channel(input1[b], scale, zero_point, 1, torch.qint8) self.linear.set_weight_bias(w, b=None) y.append(self.linear(self.quant(input2[b]))) return self.dequant(torch.stack(y)) print("Cronstruct model...") matmul = BatchedMatMul() print("Cronstruct model... [OK]") matmul.eval() print("Running FP32 inference...") inp = torch.ones(3, 3).repeat(2,1,1) y = matmul(2*inp, inp) print("FP32 output...") print(y) print("Running FP32 inference... [OK]") print("Quantizing...") matmul.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') matmul_prepared = torch.quantization.prepare(matmul) matmul_prepared(2*inp, inp) model_int8 = torch.quantization.convert(matmul_prepared) print("Quantizing... [OK]") print("Running INT8 inference...") y = model_int8.forward(2*inp, inp) print("Int8 Output") print(y) print("Running INT8 inference..[OK]") OUT Cronstruct model... Cronstruct model... [OK] Running FP32 inference... Linear's type: <class 'torch.nn.modules.linear.Linear'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Linear's type: <class 'torch.nn.modules.linear.Linear'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> FP32 output... tensor([[[6., 6., 6.], [6., 6., 6.], [6., 6., 6.]], [[6., 6., 6.], [6., 6., 6.], [6., 6., 6.]]]) Running FP32 inference... [OK] Quantizing... Linear's type: <class 'torch.nn.modules.linear.Linear'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Linear's type: <class 'torch.nn.modules.linear.Linear'> Linear's weigth type: <class 'torch.nn.parameter.Parameter'> Quantizing... [OK] Running INT8 inference... Linear's type: <class 'torch.nn.quantized.modules.linear.Linear'> Linear's weigth type: <class 'method'> Linear's type: <class 'torch.nn.quantized.modules.linear.Linear'> Linear's weigth type: <class 'method'> Int8 Output tensor([[[5.9695, 5.9695, 5.9695], [5.9695, 5.9695, 5.9695], [5.9695, 5.9695, 5.9695]], [[5.9695, 5.9695, 5.9695], [5.9695, 5.9695, 5.9695], [5.9695, 5.9695, 5.9695]]]) Running INT8 inference..[OK] /usr/local/lib/python3.6/dist-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch. reduce_range will be deprecated in a future release of PyTorch."
st184084
this feels a bit hacky, and I'm not sure if it would work or not. bmm is not supported right now, so why not put a DeQuantStub and QuantStub around the bmm op to avoid quantizing it?
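For example, a rough, untested sketch of what that could look like (the module and names are illustrative only):

import torch
import torch.nn as nn

class BatchedMatMul(nn.Module):
    def __init__(self):
        super().__init__()
        self.dequant1 = torch.quantization.DeQuantStub()
        self.dequant2 = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, input1, input2):
        # run bmm in fp32: dequantize the (possibly quantized) inputs first
        y = torch.bmm(self.dequant1(input1), self.dequant2(input2))
        # re-quantize the result so downstream quantized ops can consume it
        return self.quant(y)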
st184085
Yes, it’s quite hacky… indeed, it only works for dynamic quantization (quantizing only the weights) I’m very interested in placing the quantsub and dequantsub to avoid quantization, could you please provide a pice of code? Do you mean something like this: # y and x Tensors previouslly created... x = torch.quantization.DeQuantStub(x) y = torch.quantization.DeQuantStub(y) Y = torch.torch.bmm(y, x) y = torch.quantization.QuantStub(y) Thx you very much!
st184086
Well, since quantization is not yet available for GPU inference, it is not worth it for me to try this out.
st184087
Following is my error message: Traceback (most recent call last): File “pose_estimation/test_on_single_image_quant_ver.py”, line 119, in main() File “pose_estimation/test_on_single_image_quant_ver.py”, line 92, in main output = quantized_model(input) File “/usr/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl result = self.forward(*input, **kwargs) RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript, serialized code (most recent call last): File “code/torch/models/pose_mobilenet.py”, line 17, in forward x0 = (self.quant).forward(x, ) x1 = (self.features).forward(x0, ) x2 = (self.conv_transpose_layers).forward(x1, ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE x3 = (self.final_layer).forward(x2, ) return (self.dequant).forward(x3, ) File “code/torch/torch/nn/modules/container/___torch_mangle_4.py”, line 26, in forward _8 = getattr(self, “8”) input0 = (_0).forward(input, None, ) input1 = (_1).forward(input0, ) ~~~~~~~~~~~ <— HERE input2 = (_2).forward(input1, ) input3 = (_3).forward(input2, None, ) File “code/torch/torch/nn/modules/container/___torch_mangle_4.py”, line 25, in forward _7 = getattr(self, “7”) _8 = getattr(self, “8”) input0 = (_0).forward(input, None, ) ~~~~~~~~~~~ <— HERE input1 = (_1).forward(input0, ) input2 = (_2).forward(input1, ) File “code/torch/torch/nn/modules/conv.py”, line 22, in forward output_size: Optional[List[int]]=None) -> Tensor: output_padding = (self)._output_padding(input, output_size, [2, 2], [1, 1], [4, 4], ) _0 = torch.conv_transpose2d(input, self.weight, self.bias, [2, 2], [1, 1], output_padding, 1, [1, 1]) ~~~~~~~~~~~~~~~~~~~~~~ <— HERE return _0 def _output_padding(self: torch.torch.nn.modules.conv.ConvTranspose2d, Traceback of TorchScript, original code (most recent call last): File “/usr/lib/python3.8/site-packages/torch/nn/modules/container.py”, line 117, in forward def forward(self, input): for module in self: input = module(input) ~~~~~~ <— HERE return input File “/usr/lib/python3.8/site-packages/torch/nn/modules/container.py”, line 117, in forward def forward(self, input): for module in self: input = module(input) ~~~~~~ <— HERE return input File “/usr/lib/python3.8/site-packages/torch/nn/modules/conv.py”, line 905, in forward output_padding = self._output_padding(input, output_size, self.stride, self.padding, self.kernel_size) return F.conv_transpose2d( ~~~~~~~~~~~~~~~~~~ <--- HERE input, self.weight, self.bias, self.stride, self.padding, output_padding, self.groups, self.dilation) RuntimeError: Could not run ‘aten::slow_conv_transpose2d’ with arguments from the ‘QuantizedCPU’ backend. ‘aten::slow_conv_transpose2d’ is only available for these backends: [CPU, CUDA, Autograd, Profiler, Tracer]. I’m new to this, I try to find something like torch.nn.quantized.conv_transpose2d but I can’t find it or is there any other ways? Thanks in advance
st184088
Solved by Zafar in post #32 At the moment there is no active work to implement the per channel observer for the convtranspose. The reason is that there is non-trivial task that requires observation of a proper channel, which is different for the conv and convtranspose. If you add a feature request on github, I will try to get …
st184089
hi @ruka, we landed support for quantized conv transpose recently (https://github.com/pytorch/pytorch/pull/40371 94 and the preceding PRs). It is not in v1.6, but you can try it out in the nightly!
st184090
Hi, @Vasiliy_Kuznetsov I updated my pytorch to nightly(Version: 1.7.0a0+60665ac). But when I try to convert my model, I get error: Traceback (most recent call last): File “pose_estimation/quantized.py”, line 67, in main() File “pose_estimation/quantized.py”, line 61, in main torch.quantization.convert(model, inplace = True) File “/home/yjwen/local/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 414, in convert _convert(module, mapping, inplace=True) File “/home/yjwen/local/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 458, in _convert _convert(mod, mapping, inplace=True) File “/home/yjwen/local/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 459, in _convert reassign[name] = swap_module(mod, mapping) File “/home/yjwen/local/lib/python3.8/site-packages/torch/quantization/quantize.py”, line 485, in swap_module new_mod = mapping[type(mod)].from_float(mod) File “/home/yjwen/local/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 507, in from_float qconv = cls(mod.in_channels, mod.out_channels, mod.kernel_size, File “/home/yjwen/local/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 641, in init super(ConvTranspose2d, self).init( File “/home/yjwen/local/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 476, in init super(_ConvTransposeNd, self).init( File “/home/yjwen/local/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 53, in init self.set_weight_bias(qweight, bias_float) File “/home/yjwen/local/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py”, line 650, in set_weight_bias self._packed_params = torch.ops.quantized.conv_transpose2d_prepack( RuntimeError: FBGEMM doesn’t support transpose packing yet! Did I miss anything(maybe some special that I need to do before quantized conv transpose) or this is a bug? Thanks
st184091
Currently, ConvTranspose is only supported using QNNPACK. The FBGEMM version is planned, but there is no specific date for it. Meanwhile, you have two options for eager mode:
1) Replace the instances of ConvTranspose with a dequant -> ConvTranspose -> quant construct, or
2) Set torch.backends.quantized.engine = 'qnnpack' before running your model. You also might need to set the qconfig = torch.quantization.get_default_qconfig('qnnpack') or qconfig = torch.quantization.get_default_qat_qconfig('qnnpack').
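A small sketch of option 1, with an illustrative block rather than the poster's actual model:

import torch
import torch.nn as nn

class FloatDeconvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        self.quant = torch.quantization.QuantStub()
        # keep the transposed conv in fp32 so convert() does not try to swap it
        self.deconv.qconfig = None

    def forward(self, x):
        x = self.dequant(x)   # back to fp32 just for this op
        x = self.deconv(x)
        return self.quant(x)  # re-quantize for the following quantized ops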
st184092
@Zafar Thank you so much for the reply!! I successfully converted the model by setting quantized.engine = 'qnnpack' and get_default_qconfig('qnnpack'). But the quantized model predicts a totally wrong result (my original model works fine). I carefully studied the official tutorial https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#model-architecture 5 and it seems that nothing special needs to be done when using a quantized model, just the usual way:

model.eval()
with torch.no_grad():
    output = model(image)

Any hints?
st184093
Can you elaborate on the wrong result, please – I wonder if it is within the quantization error.
st184094
Hi, @Zafar
This is the result from my original model: (image "original", 800×533)
This is the result from the quantized model: (image "quant_result", 800×533)
It seems that all 21 keypoints are in the wrong position. Following is my conversion code:

state_dict = torch.load('model_best.pth')
model.load_state_dict(state_dict)
model = model.to('cpu')
torch.backends.quantized.engine = 'qnnpack'
model.eval()
model.fuse_model()
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
torch.quantization.prepare(model, inplace = True)
evaluate(model)
torch.quantization.convert(model, inplace = True)
torch.jit.save(torch.jit.script(model), 'model_quantization_scripted_quantized.pth')
st184095
ruka: evaluate(model)
I am assuming that evaluate runs the model on the training data. Anyway, the results seem to be pretty bad. It would be worth investigating the SNR of the model. Is it by any chance an open-source model I could take a look at? If not, it's OK, I can try cooking up my own model and test it. What's the architecture?
st184096
@Zafar Sure, let me wrap up my model definition code and delete some unnecessary dependencies.
st184097
@Zafar Hi, I uploaded the code; you can find it here: https://drive.google.com/file/d/1tyS4lqdq9FWxy-96M5qwJjZno7ViB83p 9
st184098
Thank you for the model – Is there an open-source dataset I could use to reproduce the error? I ran the model with synthetic data, but it is better to repro with actual images. Ideally, I would want the pretrained model as well, but if it is not available – it’s OK, I can train it myself. I can run the model with random inputs, and get fairly good results: https://colab.research.google.com/drive/1T_jvh96gekf1OLh_ttbT3Tgi6VNTytTF?usp=sharing 12
st184099
Hi, @Zafar Thank you for the reply. I use this dataset as my training data: https://www.cs.cmu.edu/~tsimon/projects/mvbs.html 3
It seems that the original download link is dead, so I uploaded a torrent here: https://drive.google.com/file/d/1QDJdFuYJGS9Kp4lng_bbv5JF36l4tioc/view?usp=sharing 3
BTW, for the SNR result shown in your code, does it mean that a small number indicates the results of the original model and the quantized model are close to each other?