st184200 | I figured it out. For anyone with the same interest, I hope this template is helpful. Jerry, please comment if anything is missing here.
class qatCustom(nn.Module):
    def __init__(self, ..., qconfig=None):
        super().__init__(...)
        self.qconfig = qconfig
        # skip this if there are no learnable params
        self.weight_fake_quant = qconfig.weight()

    def forward(self, input):
        # implement the forward pass, but with the fake-quantized weight
        qweight = self.weight_fake_quant(self.weight)
        # do the compute
        return _forward_impl(input, qweight)

    @classmethod
    def from_float(cls, mod):
        qconfig = mod.qconfig
        qmod = cls(<constructor inputs taken from mod>, qconfig=qconfig)
        qmod.weight = mod.weight
        return qmod

torch.quantization.quantization_mappings.register_qat_module_mapping(Custom, qatCustom) |
st184201 | Hi,
I am trying to do post training static quantization, however, I am running into issues where certain operations are not defined for QuantizedCPUTensorId.
Minimal reproducible example:
>>> import torch
>>>
>>> A = torch.Tensor([[2,2], [3,3]]).unsqueeze(0)
>>> B = torch.Tensor([[2,3], [2,3]]).unsqueeze(0)
>>> scale, zero_point, dtype = 1.0, 2, torch.qint8
>>> qA = torch.quantize_per_tensor(A, scale, zero_point, dtype)
>>> qB = torch.quantize_per_tensor(B, scale, zero_point, dtype)
>>> torch.matmul(A,B)
tensor([[[ 8., 12.],
[12., 18.]]])
>>> torch.matmul(qA,qB)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Could not run 'aten::bmm' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::bmm' is only available for these backends: [CPUTensorId, VariableTensorId].
Are there alternatives to accomplishing the same?
I know there are certain operations that are defined here: https://pytorch.org/docs/stable/quantization.html#floatfunctional but what would be the optimal way? |
st184202 | If possible, try using nn.Linear instead of aten::bmm.
Currently the only way is to implement the quantized operator for aten::bmm.
One easy way could be to implement it with the quantized::linear operator, looping over the batch dimension. We will be looking into implementing this operator in the future. |
st184203 | Hi @supriyar, thanks for the response.
Yes, I had thought about that but wouldn’t that operation be suboptimal? However, if there is no alternative, I guess it would have to be so for now. |
st184204 | It seems like https://pytorch.org/docs/stable/quantization.html#torch.nn.quantized.functional.linear is not a viable option. It requires the input tensor to be unsigned; however, the operation here is explicitly between two tensors that are qint8.
>>> torch.nn.quantized.functional.linear(qA[0,], qB[0,])
RuntimeError: expected scalar type QUInt8 but found QInt8 |
st184205 | Do you need both of the inputs to be qint8? If you change qA to be quint8 it would work. |
st184206 | I’ve come up with the following code. As pointed out, I perform the matrix multiplication with nn.Linear.
What do you think?
import torch
import torch.nn as nn

class BatchedMatMul(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.linear = nn.Linear(3, 3, bias=False)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, input1, input2):
        y = []
        for b in range(input1.shape[0]):
            print(f"Linear's type: {type(self.linear)}")
            print(f"Linear's weight type: {type(self.linear.weight)}")
            if isinstance(self.linear.weight, nn.Parameter):
                self.linear.weight.requires_grad = False
                self.linear.weight.copy_(self.quant(input1[b]))
                y.append(self.linear(self.quant(input2[b])))
            else:
                scale = self.linear.weight().q_per_channel_scales()
                zero_point = self.linear.weight().q_per_channel_zero_points()
                w = torch.quantize_per_channel(input1[b], scale, zero_point, 1, torch.qint8)
                self.linear.set_weight_bias(w, b=None)
                y.append(self.linear(self.quant(input2[b])))
        return self.dequant(torch.stack(y))

print("Construct model...")
matmul = BatchedMatMul()
print("Construct model... [OK]")
matmul.eval()

print("Running FP32 inference...")
inp = torch.ones(3, 3).repeat(2, 1, 1)
y = matmul(2 * inp, inp)
print("FP32 output...")
print(y)
print("Running FP32 inference... [OK]")

print("Quantizing...")
matmul.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
matmul_prepared = torch.quantization.prepare(matmul)
matmul_prepared(2 * inp, inp)
model_int8 = torch.quantization.convert(matmul_prepared)
print("Quantizing... [OK]")

print("Running INT8 inference...")
y = model_int8.forward(2 * inp, inp)
print("Int8 Output")
print(y)
print("Running INT8 inference..[OK]")
Output:
Construct model...
Construct model... [OK]
Running FP32 inference...
Linear's type: <class 'torch.nn.modules.linear.Linear'>
Linear's weight type: <class 'torch.nn.parameter.Parameter'>
Linear's type: <class 'torch.nn.modules.linear.Linear'>
Linear's weight type: <class 'torch.nn.parameter.Parameter'>
FP32 output...
tensor([[[6., 6., 6.],
[6., 6., 6.],
[6., 6., 6.]],
[[6., 6., 6.],
[6., 6., 6.],
[6., 6., 6.]]])
Running FP32 inference... [OK]
Quantizing...
Linear's type: <class 'torch.nn.modules.linear.Linear'>
Linear's weight type: <class 'torch.nn.parameter.Parameter'>
Linear's type: <class 'torch.nn.modules.linear.Linear'>
Linear's weight type: <class 'torch.nn.parameter.Parameter'>
Quantizing... [OK]
Running INT8 inference...
Linear's type: <class 'torch.nn.quantized.modules.linear.Linear'>
Linear's weight type: <class 'method'>
Linear's type: <class 'torch.nn.quantized.modules.linear.Linear'>
Linear's weight type: <class 'method'>
Int8 Output
tensor([[[5.9695, 5.9695, 5.9695],
[5.9695, 5.9695, 5.9695],
[5.9695, 5.9695, 5.9695]],
[[5.9695, 5.9695, 5.9695],
[5.9695, 5.9695, 5.9695],
[5.9695, 5.9695, 5.9695]]])
Running INT8 inference..[OK]
/usr/local/lib/python3.6/dist-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
reduce_range will be deprecated in a future release of PyTorch." |
st184207 | Currently we only support quint8 for activations and qint8 for weights, I think.
We do not have plans to support bmm at the moment; one workaround is to put DeQuantStub and QuantStub around the bmm op to skip quantizing it. |
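For illustration, here is a minimal sketch (not from the thread) of that workaround: dequantize the two operands, run the matmul in fp32, and re-quantize the result so the rest of the model stays quantized. The module and tensor names are made up.
import torch
import torch.nn as nn

class MatMulFP32Island(nn.Module):
    def __init__(self):
        super().__init__()
        self.dequant_a = torch.quantization.DeQuantStub()
        self.dequant_b = torch.quantization.DeQuantStub()
        self.quant_out = torch.quantization.QuantStub()

    def forward(self, qa, qb):
        a = self.dequant_a(qa)      # back to fp32
        b = self.dequant_b(qb)
        out = torch.matmul(a, b)    # the bmm runs on the regular fp32 kernel
        return self.quant_out(out)  # re-quantize for the following quantized ops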
st184208 | Hello,
Is there any generalized way or code to fuse the layers of any convolutional model?
e.g. AlexNet, ResNet, VGG
Just one piece of code that will work for all sorts of models to fuse their conv + bn + relu layers. |
st184209 | Solved by jerryzh168 in post #4
this can be achieved by FX Graph Mode Quantization, which will be released as a prototype feature in PyTorch 1.8 |
st184210 | Check here.
Simply, for fusing conv + bn + relu, you can replace the convolution operators with these operators, and then replace the bn operator and relu operator with nn.Identity().
PS: Please check the correctness by yourself. |
st184211 | this can be achieved by FX Graph Mode Quantization, which will be released as a prototype feature in PyTorch 1.8 |
st184212 | I trained my quantized network, and I was able to export the weights and biases to perform inference on my platform.
However, I cannot find the data obtained from the PyTorch observers, and I have to manually calculate the parameters to scale the outputs of each layer (conv layers and fully connected layers). How can I extract this information directly from PyTorch, i.e. query the observers?
Thanks. |
st184213 | Solved by Vasiliy_Kuznetsov in post #4
You’d want to inspect the observer instances. After this line of code,
evaluate(model_quantized, criterion, loader_train, neval_batches=num_calibration_batches)
if you print your model, you should see observer modules attached, with statistics. You can query these module instances. For example… |
st184214 | You can use
scale, zero_point = observer.calculate_qparams()
Please see torch.quantization — PyTorch 1.7.0 documentation for more info. |
st184215 | Forgive me if I ask trivial things, but I am a beginner. This is the code:
model_quantized = model
eval_batch_size = value_batch_size
model_quantized.eval()
model_quantized.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'
torch.quantization.prepare(model_quantized, inplace=True)
evaluate(model_quantized, criterion, loader_train, neval_batches=num_calibration_batches)
torch.quantization.convert(model_quantized, inplace=True)
top1, top5 = evaluate(model_quantized, criterion, loader_valid, neval_batches=num_eval_batches)
temp_scale, temp_zero_point = torch.quantization.observer.MinMaxObserver.calculate_qparams()
but if I run it, I get:
Traceback (most recent call last):
File "main2D.py", line 785, in <module>
temp_scale, temp_zero_point = torch.quantization.observer.MinMaxObserver.calculate_qparams()
TypeError: calculate_qparams() missing 1 required positional argument: 'self'
Should I write the command differently or put it in a different place?
Thank you very much. |
st184216 | Matteo_Scrugli:
evaluate(model_quantized, criterion, loader_train, neval_batches=num_calibration_batches)
You’d want to inspect the observer instances. After this line of code,
evaluate(model_quantized, criterion, loader_train, neval_batches=num_calibration_batches)
if you print your model, you should see observer modules attached, with statistics. You can query these module instances. For example, if you have a model.conv1, you can then call model.conv1.activation_post_process.calculate_qparams() to get qparams for the activation of conv1. |
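As a small follow-up sketch (assumed, not from the thread): after calibration and before convert, you can walk the prepared model and query every attached observer in one go; the variable name model_quantized follows the code above.
for name, module in model_quantized.named_modules():
    # prepare() attaches observers as `activation_post_process`
    if hasattr(module, 'activation_post_process'):
        scale, zero_point = module.activation_post_process.calculate_qparams()
        print(name, scale, zero_point)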
st184217 | PyTorch: 1.7.0
I modified the quantized weights of a net post-quantization as follows:
# instantiate the quantized net (not shown here).
# get one of the conv layers
tmp = model_int8.state_dict()['features.0.weight']
scales = tmp.q_per_channel_scales()
zero_pts = tmp.q_per_channel_zero_points()
axis = tmp.q_per_channel_axis()
# get int repr
tmp_int8 = tmp.int_repr()
# change value (value change is dependent on the int8 value)
tmp_int8[0][0][0][0] = new_value
# how to convert tmp_int8 to torch.qint8 type?
new_tmp = torch._make_per_channel_quantized_tensor(tmp_int8, scales, zero_pts, axis)
# based on the above step:
model_int8.features[0].weight = new_tmp
However, model_int8.features[0].weight shows updated values, but model_int8.state_dict()['features.0.weight'] shows old values.
I also tried saving the modified model and reloading, but the problem persists.
Question is which weight values are being used for inference? I do not see change in the classification results. |
st184218 | Solved by Vasiliy_Kuznetsov in post #2
You need to use conv_obj._weight_bias() to get the weight, and conv_obj.set_weight_bias(w, b) to set the weight. Here is an example: gist:8ad0e472e8743e142a1e72cb0f9efb1d · GitHub |
st184219 | You need to use conv_obj._weight_bias() to get the weight, and conv_obj.set_weight_bias(w, b) to set the weight. Here is an example: gist:8ad0e472e8743e142a1e72cb0f9efb1d · GitHub |
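A hedged sketch of that approach, reusing the names from the question above (new_value is a placeholder, and the linked gist is the authoritative version):
conv = model_int8.features[0]
w, b = conv._weight_bias()          # unpack the quantized weight and bias
w_int8 = w.int_repr()               # int8 representation of the weight
w_int8[0][0][0][0] = new_value      # edit in int8 space
new_w = torch._make_per_channel_quantized_tensor(
    w_int8, w.q_per_channel_scales(), w.q_per_channel_zero_points(),
    w.q_per_channel_axis())
conv.set_weight_bias(new_w, b)      # write the modified weight back into the module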
st184220 | I’m using PyTorch 1.7.1 now.
I'm trying to use QAT for a project.
Everything works well during training.
But when I tried to run inference on GPU,
some errors appeared.
Does QAT inference not support GPU at this moment?
Will it be supported in a later patch? |
st184221 | Solved by Vasiliy_Kuznetsov in post #2
Currently int8 kernels are not supported on GPU. The workaround is to either eval on GPU without converting to int8 (with fake_quant emulating int8), or to eval int8 on CPU. Adding int8 support on CUDA is being considered but there is no timeline at the moment. |
st184222 | Currently int8 kernels are not supported on GPU. The workaround is to either eval on GPU without converting to int8 (with fake_quant emulating int8), or to eval int8 on CPU. Adding int8 support on CUDA is being considered but there is no timeline at the moment. |
st184223 | I have a post-training statically quantized NN. I want to change a couple of weight values of one of the convolution layers before inference. The weight change should be based on int8 values and not on the save format (which is torch.qint8 with corresponding scales and zero points). So far I have done the following:
# instantiate the quantized net (not shown here).
# get one of the conv layers
tmp = model_int8.state_dict()['features.0.weight']
# get int repr
tmp_int8 = tmp.int_repr()
# change value (value change is dependent on the int8 value)
tmp_int8[0][0][0][0] = new_value
# TODO: how to convert tmp_int8 to torch.qint8 type?
new_tmp = convert_int8_to_qint8(tmp_int8) # how to do this
# TODO: based on the above step:
model_int8.state_dict()['features.0.weight'] = new_tmp
My question is how to change the int8 tensor to torch.qint8 based on the scales and zero_points of the original weight tensor (something similar to torch.quantize_per_channel() but for int8 to qint8)? OR is there another way to do this?
Thank you. |
st184224 | Solved by jerryzh168 in post #2
if this is a per channel quantized tensor? you can call pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub to assemble a per channel quantized tensor with int_repr, scales and zero_points etc.
however, this api might be deprecated in future pytorch releases |
st184225 | If this is a per-channel quantized tensor, you can call pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub to assemble a per-channel quantized tensor with int_repr, scales and zero_points, etc.
However, this API might be deprecated in future PyTorch releases. |
st184226 | @jerryzh168 thanks for the reply. Yes, it is per-channel quantization.
So, I can now generate torch.qint8 tensor from tmp_int8 tensor.
I verified by printing the new_tmp tensor to see the new values are changed.
However, the following line is not updating the model weights:
model_int8.state_dict()['features.0.weight'] = new_tmp
When I print the model_int8.state_dict()['features.0.weight'] it still shows the old values. How can I fix this?
Thank you. |
st184227 | Hi @jerryzh168 , thank you, it worked.
Can you please tell me the difference between model_int8.state_dict()['features.0.weight'] & model_int8.features[0].weight?
Because after changing the code as you suggested above, when I try printing model_int8.features[0].weight it shows updated values. But model_int8.state_dict()['features.0.weight'] shows old values. |
st184228 | Oh really? model_int8.state_dict() is probably a read-only copy, and changing that won’t change the original weight. I’m not sure why model_int8.state_dict() is not updated after you modify the weight though; that is not expected, I think. Are you sure that it did not change? |
st184229 | I reran to confirm it just now: model_int8.features[0].weight shows updated values, but model_int8.state_dict()['features.0.weight'] shows old values.
Do I need to save and reload the model after a manual weight update – I do not see why?
Also, in the above case of mismatch, which weight values will be used for inference: model_int8.features[0].weight or model_int8.state_dict()['features.0.weight']? |
st184230 | @jerryzh168 Hey, my problem persists. Please check: Changed quantized weights not reflecting in state_dict()
Thanks for the help. |
st184231 | When running this code I can’t fuse Linear + ReLU, though the documentation says that it is possible (https://pytorch.org/docs/stable/_modules/torch/quantization/fuse_modules.html#fuse_modules)
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.linear = nn.Linear(32, 64)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, input):
        y = self.quant(input)
        y = self.linear(input)
        y = self.relu(y)
        y = self.dequant(y)
        return y

    def fuse_model(self):
        torch.quantization.fuse_modules(self, modules_to_fuse=[[self.linear, self.relu]], inplace=True)
print("Create model...")
model_ = MyModule()
print("Create model... [OK]")
in_ = torch.ones(32, 32)
print("Forward pass...")
y = model_(in_)
print("Forward pass... [OK]")
print("Fusing model... ")
model_.eval()
model_.fuse_model()
print("Fusing model... [OK]")
I got the following error:
---------------------------------------------------------------------------
ModuleAttributeError Traceback (most recent call last)
<ipython-input-38-7ffe2cc10aa7> in <module>()
24
25 model_.eval()
---> 26 model_.fuse_model()
27
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
777 return modules[name]
778 raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 779 type(self).__name__, name))
780
781 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
ModuleAttributeError: 'Linear' object has no attribute 'split'
Torch version: 1.7.0+cu101
Torch vision version: 0.8.1+cu101 |
st184232 | Solved by jerryzh168 in post #2
I think you need the following:
def fuse_model(self):
torch.quantization.fuse_modules(self, modules_to_fuse=[["linear", "relu"]], inplace=True) |
st184233 | I think you need the following:
def fuse_model(self):
    torch.quantization.fuse_modules(self, modules_to_fuse=[["linear", "relu"]], inplace=True) |
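A quick sanity check (hedged; exact module paths can differ across versions): after fusing by module names, self.linear should be replaced by a fused Linear+ReLU module and self.relu by an Identity.
model_ = MyModule()
model_.eval()
model_.fuse_model()          # now with modules_to_fuse=[["linear", "relu"]]
print(type(model_.linear))   # expected: a torch.nn.intrinsic LinearReLU module
print(type(model_.relu))     # expected: torch.nn.Identity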
st184234 | Hi, everyone!
I have a problem when I am doing quantization on a part of the model like below.
I had no error when I did quantization for the whole model.
import torch

class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        self.conv2 = torch.nn.Conv2d(1, 1, 1)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.conv(x)
        x = self.relu(x)
        x = self.quant(x)
        x = self.conv2(x)
        x = self.dequant(x)
        x = self.relu(x)
        return x

model_fp32 = M()
model_fp32.eval()
model_fp32.conv2.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32_prepared = torch.quantization.prepare(model_fp32)
input_fp32 = torch.randn(40, 1, 16, 16)
model_fp32_prepared(input_fp32)  # calibration
model_int8 = torch.quantization.convert(model_fp32_prepared)
result = model_int8(input_fp32)
The code gives a runtime error at the last inference step.
/usr/local/lib/python3.6/dist-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
reduce_range will be deprecated in a future release of PyTorch."
Traceback (most recent call last):
File "/notebooks/PycharmProjects/NetworkCompression/pytorch_quantization_prac/main_2.py", line 33, in <module>
result = model_int8(input_fp32)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/notebooks/PycharmProjects/NetworkCompression/pytorch_quantization_prac/main_2.py", line 17, in forward
x = self.conv2(x)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/quantized/modules/conv.py", line 332, in forward
input, self._packed_params, self.scale, self.zero_point)
RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Process finished with exit code 1 |
st184235 | Solved by Vasiliy_Kuznetsov in post #2
Hi @fred107, the reason it is not working is because model_fp32.quant does not have a qconfig specified. The quantization convert API only swaps modules with qconfig defined. You could fix this by doing something like model_fp32.quant.qconfig = torch.quantization.get_default_qconfig('fbgemm') befo… |
st184236 | Hi @fred107, the reason it is not working is because model_fp32.quant does not have a qconfig specified. The quantization convert API only swaps modules with qconfig defined. You could fix this by doing something like model_fp32.quant.qconfig = torch.quantization.get_default_qconfig('fbgemm') before calling prepare. |
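A hedged sketch of that fix applied to the example above: give the QuantStub its own qconfig (in addition to conv2) before calling prepare, so convert swaps it for a real Quantize module.
model_fp32 = M()
model_fp32.eval()
qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32.quant.qconfig = qconfig   # needed so the stub is converted too
model_fp32.conv2.qconfig = qconfig
model_fp32_prepared = torch.quantization.prepare(model_fp32)
input_fp32 = torch.randn(40, 1, 16, 16)
model_fp32_prepared(input_fp32)      # calibration
model_int8 = torch.quantization.convert(model_fp32_prepared)
result = model_int8(input_fp32)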
st184237 | Thank you @Vasiliy_Kuznetsov. It works now!!
Then what about model_fp32.dequant.qconfig? It works even when model_fp32.dequant.qconfig is not defined. |
st184238 | DeQuantStub() does nothing but change the variable from quint8 to float32. It does not store any specific information, scale factor, etc.
That’s why, when trying to quantize multiple blocks, we need multiple instances of QuantStub(), but a single instance of DeQuantStub() can dequantize them all. |
st184239 | cc @jerryzh168, is it expected that a DeQuantStub is swapped without a qconfig defined in Eager mode? |
st184240 | Previously that was true, but it’s fixed after [quant][eagermode][fix] Fix quantization for DeQuantStub by jerryzh168 · Pull Request #49428 · pytorch/pytorch · GitHub |
st184241 | Net: Alexnet
Dataset: Imagenet
Quantization: Post-training Static Quantization with ‘fbgemm’.
The net:
import os
import sys
import random
import torch
import torch.nn as nn
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
from utils import load_state_dict_from_url
import collections

######## AlexNet model ########
__all__ = ['AlexNet', 'alexnet']
model_urls = {'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth', }

class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        return x

def alexnet(pretrained=False, progress=True, **kwargs):
    r"""AlexNet model architecture from the
    `"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    model = AlexNet(**kwargs)
    if pretrained:
        state_dict = load_state_dict_from_url(model_urls['alexnet'], progress=progress)
        model.load_state_dict(state_dict)
    return model
However, I get the following error:
RuntimeError: Could not run 'aten::quantize_per_tensor' with arguments from the 'QuantizedCPU' backend. 'aten::quantize_per_tensor' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
How to resolve this?
Thanks. |
st184242 | Solved by jerryzh168 in post #2
This is because there is duplicated QuantStub placed in the model, I think forward should be:
def forward(self, x):
# manually specify where tensors will be converted from floating
# point to quantized in the quantized model
x = self.quant(x)
x = self.features(x)
x = self.avgpoo… |
st184243 | This is because there is duplicated QuantStub placed in the model, I think forward should be:
def forward(self, x):
    # manually specify where tensors will be converted from floating
    # point to quantized in the quantized model
    x = self.quant(x)
    x = self.features(x)
    x = self.avgpool(x)
    x = torch.flatten(x, 1)
    x = self.classifier(x)
    # manually specify where tensors will be converted from quantized
    # back to floating point in the quantized model
    x = self.dequant(x)
    return x
If you set the qconfig for the full model, then all modules will be quantized, and the output of self.classifier will be quantized as well. Why do you call self.quant(x) at the end instead of self.dequant(x)? |
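For reference, a hedged sketch of the standard eager-mode post-training static quantization flow around the corrected model (the calibration loop is a placeholder):
model = alexnet(pretrained=True)
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# run a few calibration batches through `model` here
torch.quantization.convert(model, inplace=True)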
st184244 | What’s the difference between torch.nn.quantized.FloatFunctional.add and torch.nn.quantized.FloatFunctional.add_scalar? Is the former used for tensor + tensor and the latter for tensor + scalar? Or is it something else? |
st184245 | Solved by Vasiliy_Kuznetsov in post #2
Yes, this is correct. You can check this out in the source code, here: pytorch/functional_modules.py at b89827b73f7881a8108518452768654894862e5d · pytorch/pytorch · GitHub |
st184246 | feiyuhuahuo:
Is the former one used for tensor + tensor and the latter one used for tensor + scalar ? Or anything other?
Yes, this is correct. You can check this out in the source code, here: pytorch/functional_modules.py at b89827b73f7881a8108518452768654894862e5d · pytorch/pytorch · GitHub |
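A tiny hedged example of the distinction (qa and qb stand in for two tensors flowing through the module):
ff = torch.nn.quantized.FloatFunctional()
out1 = ff.add(qa, qb)            # tensor + tensor
out2 = ff.add_scalar(qa, 2.0)    # tensor + Python scalar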
st184247 | Hello,
I’ve read the excellent static quantization tutorial ((beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials), which worked really well with my MobileNet_v2 pretrained weights.
Now I want to do the same with the Inception_v3 model. The thing is, when I try to load the state dictionary I get the following error (full trace below). Why, and how do I fix that? Do I need to create a new class as they did in the tutorial?
per_channel_quantized_model = torchvision.models.quantization.inception_v3(pretrained=True, aux_logits=False, quantize=True)
per_channel_quantized_model.fc = nn.Linear(2048, 2)
state_dict = torch.load(float_model_file, map_location=torch.device('cpu'))
per_channel_quantized_model.load_state_dict(state_dict)
File "", line 5, in
per_channel_quantized_model.load_state_dict(state_dict)
File "/home/nimrod/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1030, in load_state_dict
load(self)
File "/home/nimrod/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1028, in load
load(child, prefix + name + '.')
File "/home/nimrod/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1028, in load
load(child, prefix + name + '.')
File "/home/nimrod/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1025, in load
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
File "/home/nimrod/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/conv.py", line 120, in _load_from_state_dict
state_dict[prefix + 'weight'], state_dict[prefix + 'bias'])
KeyError: 'Conv2d_1a_3x3.conv.bias' |
st184248 | Solved by Vasiliy_Kuznetsov in post #8
Here is a high level of the differences:
# fp32
# torchvision.models.inception_v3(pretrained=False, aux_logits=False)
# state_dict keys of Conv2d_1a_3x3
# conv
Conv2d_1a_3x3.conv.weight
# bn
Conv2d_1a_3x3.bn.weight
Conv2d_1a_3x3.bn.bias
Conv2d_1a_3x3.bn.runnin… |
st184249 | Using eager mode quantization often changes the model hierarchy of the model, so it’s possible that the module hierarchy of your quantized model no longer matches your fp32 state_dict. What is the origin of the fp32 state_dict, is it also from torchvision?
In practice, people usually fix this by either loading the weights before fusing the model, or by writing custom state dict mappers which modify the state keys according to the module hierarchy changes. |
st184250 | Yes, the original state dict is an fp32 state_dict from a torchvision model.
I load the state_dict before fusing the model; I fuse it only afterward. So writing a state_dict mapper (or creating the model class “from scratch” as they did in the tutorial) is the only solution?
Can you give an example of how I can create such a mapper? |
st184251 | Nimrod_Daniel:
per_channel_quantized_model = torchvision.models.quantization.inception_v3(pretrained=True, aux_logits=False, quantize=True)
This line should already load a pretrained model. To clarify, is the reason you are loading weights again to populate the new fc layer, or something else? |
st184252 | It loads imagenet weights, but I need a different representation, not just the fc layer, meaning the weights of the whole net. You can ignore the pretrained value. |
st184253 | Makes sense. The torchvision.models.quantization.inception_v3(pretrained=True, aux_logits=False, quantize=True) line is torchvision’s best effort to provide a pretrained model ready for quantization for use cases where the default fp32 pretrained weights are fine. Unfortunately, if you need to load a different version of floating point weights, a mapping of the state dict is required.
Here is a code snippet which does this for an unrelated model, but the principle is the same:
def get_new_bn_key(old_bn_key):
    # tries to adjust the key for conv-bn fusion, where
    #
    # root
    #   - conv
    #   - bn
    #
    # becomes
    #
    # root
    #   - conv
    #     - bn
    return old_bn_key.replace(".bn.", ".conv.bn.")

non_qat_to_qat_state_dict_map = {}
for key in original_state_dict.keys():
    if key in new_state_dict.keys():
        non_qat_to_qat_state_dict_map[key] = key
    else:
        maybe_new_bn_key = get_new_bn_key(key)
        if maybe_new_bn_key in new_state_dict.keys():
            non_qat_to_qat_state_dict_map[key] = maybe_new_bn_key
...
# when loading the state dict, use the mapping created above
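And a hedged sketch of applying the mapping when loading (new_model is a placeholder for the model that is ready for quantization):
remapped = {}
for old_key, new_key in non_qat_to_qat_state_dict_map.items():
    remapped[new_key] = original_state_dict[old_key]
new_model.load_state_dict(remapped, strict=False)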
We are planning to release a tool soon (hopefully v1.8) to automate all of this, so this should get easier in the near future. |
st184254 | Yeah, I get the idea. You copy the keys if they have the same name, or change the name and then copy the keys into the new map. Afterward, you load the weights into the quantized model according to the new mapping.
If I look at the Inception and quantized Inception state_dicts (before fusing, of course), they have completely different names, and also different lengths of state_dict. How can I tackle that? |
st184255 | Here is a high level of the differences:
# fp32
# torchvision.models.inception_v3(pretrained=False, aux_logits=False)
# state_dict keys of Conv2d_1a_3x3
# conv
Conv2d_1a_3x3.conv.weight
# bn
Conv2d_1a_3x3.bn.weight
Conv2d_1a_3x3.bn.bias
Conv2d_1a_3x3.bn.running_mean
Conv2d_1a_3x3.bn.running_var
Conv2d_1a_3x3.bn.num_batches_tracked
# ready for quantization but not quantized
# mq = torchvision.models.quantization.inception_v3(pretrained=False, aux_logits=False, quantize=False)
# state_dict keys of Conv2d_1a_3x3
# conv
Conv2d_1a_3x3.conv.weight
# bn
Conv2d_1a_3x3.bn.weight
Conv2d_1a_3x3.bn.bias
Conv2d_1a_3x3.bn.running_mean
Conv2d_1a_3x3.bn.running_var
Conv2d_1a_3x3.bn.num_batches_tracked
# quantized and fused
# mq = torchvision.models.quantization.inception_v3(pretrained=False, aux_logits=False, quantize=True)
# state_dict keys of Conv2d_1a_3x3
# conv, including quantization-specific scale+zp
Conv2d_1a_3x3.conv.weight
Conv2d_1a_3x3.conv.bias
Conv2d_1a_3x3.conv.scale
Conv2d_1a_3x3.conv.zero_point
# no bn, it was fused into the conv
You could use the ready for quantization but not quantized model if you are running quantization yourself, the state_dict will be closer to fp32 version since it is before fusion. Then you’d have to go block by block and see if there are additional differences. |
st184256 | Nimrod_Daniel:
they have completely different names
which names in particular are different? It should be pretty similar. |
st184257 | The state_dicts have the same lengths, my mistake. That was quite odd and confusing, though the lengths make sense now. The naming differences shouldn’t be much of a problem.
I’ll deal with that and then perform the fusion.
It would have been great if I could perform QAT using CUDA instead of dealing with the conversion; hopefully CUDA support will be available soon.
So far the PTQ with optimized calibration works really well with my data, I get almost no drop in AP :), but maybe in slightly different cases it would be useful in order to avoid a drop in AP.
Thanks. |
st184258 | Nimrod_Daniel:
It would have been great if I could perform QAT using CUDA instead of dealing with the conversion; hopefully CUDA support will be available soon.
calling prepare and running the QAT fine-tuning is supported on CUDA. The only thing not supported is calling convert and running the quantized kernels. Not sure if that is what you were referring to. |
st184259 | Hello everyone, hope you are having a great time.
I have been playing with quantization recently, and during my last experiments, I tried to change something in my forward pass like this:
def forward(self, x):
    out1 = self.p1(x)
    out2 = self.p2(out1)
    out3 = self.p3(out2)
    out1 = out1.mean(3).mean(2)
    out2 = out2.mean(3).mean(2)
    out3 = out3.mean(3).mean(2)
    out5 = torch.cat((out1, out2, out3), 1)
    output = self.classifier(out5)
    return output
to
def forward(self, x):
    out1 = self.p1(x)
    out2 = self.p2(out1)
    out3 = self.p3(out2)
    out1 = F.avg_pool2d(out1, kernel_size=out1.size()[2:]).view(out1.size(0), -1)
    out2 = F.avg_pool2d(out2, kernel_size=out2.size()[2:]).view(out2.size(0), -1)
    out3 = F.avg_pool2d(out3, kernel_size=out3.size()[2:]).view(out3.size(0), -1)
    out5 = torch.cat((out1, out2, out3), 1)
    output = self.classifier(out5)
    return output
I did this so I can easily experiment with qconfig, for example, to prevent these operations from participating in the quantization process, with something like this:
qconfig = get_default_qconfig('fbgemm')
qconfig_dict = {
    '': qconfig,
    "object_type": [
        (torch.nn.Linear, None),
        (torch.nn.functional.avg_pool2d, None)
    ]
}
But upon trying to start the whole quantization process after this change I get the following error :
model = torch._fx.symbolic_trace(model)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\_fx\symbolic_trace.py", line 168, in symbolic_trace
return Tracer().trace(root)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\_fx\symbolic_trace.py", line 152, in trace
self.graph.output(self.create_arg(fn(*args)))
File "d:\Codes\fac_ver\python\Side_Projects\Facial-Landmark-PyTorch\models\slim.py", line 197, in forward
out1 = F.avg_pool2d(out1, kernel_size=out1.size()[2:]).view(out1.size(0), -1)
TypeError: avg_pool2d(): argument 'kernel_size' must be tuple of ints, not Proxy
and it seems out1.size()[2:] is a Proxy (getitem).
How should I get around this? I know I’m converting the model with symbolic_trace, but if I don’t, then I can’t call quantize_static_fx, which requires it.
How should I go about this?
PS: I know I can also use torch.mean just fine, but I’d like to know how to get around that Proxy thing in general.
Thanks a lot in advance |
st184260 | Solved by Vasiliy_Kuznetsov in post #2
The fix would be to specify kernel size directly instead of dynamically inferring it from the input tensor. At the moment, this is a restriction of symbolic tracing.
Note: there will be more docs posted on this when FX graph mode quantization is officially released, ideally soon. |
st184261 | The fix would be to specify kernel size directly instead of dynamically inferring it from the input tensor. At the moment, this is a restriction of symbolic tracing.
Note: there will be more docs posted on this when FX graph mode quantization is officially released, ideally soon. |
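A hedged sketch of that fix applied to the forward above: hard-code the pooling kernel sizes (the (7, 7) values are placeholders for whatever spatial sizes out1/out2/out3 actually have), so tracing never needs a concrete tensor shape.
out1 = F.avg_pool2d(out1, kernel_size=(7, 7)).view(out1.size(0), -1)
out2 = F.avg_pool2d(out2, kernel_size=(7, 7)).view(out2.size(0), -1)
out3 = F.avg_pool2d(out3, kernel_size=(7, 7)).view(out3.size(0), -1)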
st184262 | (Looks like undefined behavior)
Torch 1.7.0, Python 3.7, conda installation
I am trying to apply QAT to EfficientNet. Everything works fine until I wrap my model in DataParallel. Then it crashes randomly with an AssertionError during a calculate_qparams call, most often (~90%) during the first training batch, sometimes after 5-10 runs.
First, I assumed that some weights burst to NaN, but:
(1) why is everything OK without DataParallel?
(2) the initial weights are not NaN, and in most cases it cannot even do a single forward pass.
It never happens during evaluation. The layers the script crashes on are always different.
I can provide the full code via GitHub if needed - I could not create a compact minimal example; it is not reproduced with simple networks.
Three stack trace examples (the last two are shortened, because the first lines are equal):
#1
Traceback (most recent call last):
File "/home/researcher/project_src/src/quantization/qat.py", line 161, in <module>
main(main_settings, qat_settings)
File "/home/researcher/project_src/src/quantization/qat.py", line 126, in main
model_input_feature=train_settings.model_input_feature)
File "/home/researcher/project_src/src/train.py", line 34, in train_one_epoch
outputs = model(_input)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
AssertionError: Caught AssertionError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/single_frame_template.py", line 34, in forward
_, embedding = self.feature_extractor(x)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 127, in forward
features, x = self.extract_features(inputs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 114, in extract_features
x = block(x, drop_connect_rate=drop_connect_rate)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/mb_conv_block.py", line 81, in forward
x_squeezed = self._se_expand(self._activation(self._se_reduce(x_squeezed)))
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/utils.py", line 34, in forward
x = self.conv2d(x)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 731, in _call_impl
hook_result = hook(self, input, result)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/quantize.py", line 82, in _observer_forward_hook
return self.activation_post_process(output)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 90, in forward
_scale, _zero_point = self.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 85, in calculate_qparams
return self.activation_post_process.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 402, in calculate_qparams
return self._calculate_qparams(self.min_val, self.max_val)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 248, in _calculate_qparams
min_val, max_val
AssertionError: min 0.23065748810768127 should be less than max 0.23042967915534973
#2
AssertionError: Caught AssertionError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/single_frame_template.py", line 34, in forward
_, embedding = self.feature_extractor(x)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 127, in forward
features, x = self.extract_features(inputs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 106, in extract_features
x = self._activation_stem(self._bn0(self._conv_stem(inputs)))
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/utils.py", line 34, in forward
x = self.conv2d(x)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/qat/modules/conv.py", line 32, in forward
return self._conv_forward(input, self.weight_fake_quant(self.weight))
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 90, in forward
_scale, _zero_point = self.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 85, in calculate_qparams
return self.activation_post_process.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 632, in calculate_qparams
return self._calculate_qparams(self.min_vals, self.max_vals)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 252, in _calculate_qparams
min_val, max_val
AssertionError: min tensor([ -477068.8750, -36602.4375, -153710.1406, -673136.7500,
-1487181.5000, -887609.8750, -2090753.6250, -3073480.7500,
-135975.0625, -1857073.3750, -181464.7656, -228204.1094,
429050.0938, -678035.3750, 1770874.6250, -940542.6250,
-34900.1250, -92754.3359, -276413.1875, -346014.0938,
-771024.1875, nan, -2376894.5000, -2290334.0000,
-96295.6406, -910624.7500, -20395.1934, 411769.1875,
-206277.0938, -126536.9297, -240083.9688, nan],
device='cuda:0') should be less than max tensor([ 262472.2500, 523413.2500, 206003.6875, 138193.0625,
-625572.7500, -141716.9844, -521715.0000, -1475260.0000,
1374850.2500, 87139.2031, 52821.7578, 156602.1094,
1140287.7500, 897314.3750, 3182818.0000, 274146.6562,
445342.1875, 458844.4688, 382576.7812, 250239.7031,
378798.8438, nan, 70838.8828, -85726.8125,
107011.6406, -338712.8125, 377927.9375, 2790590.2500,
95255.9531, 304543.9688, 215083.4688, nan],
device='cuda:0')
#3
AssertionError: Caught AssertionError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/single_frame_template.py", line 34, in forward
_, embedding = self.feature_extractor(x)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 127, in forward
features, x = self.extract_features(inputs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/model.py", line 114, in extract_features
x = block(x, drop_connect_rate=drop_connect_rate)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/project_src/lib/models/qfriendly/template_efficientnet/mb_conv_block.py", line 76, in forward
x = self._activation(self._bn1(self._depthwise_conv(x)))
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 731, in _call_impl
hook_result = hook(self, input, result)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/quantize.py", line 82, in _observer_forward_hook
return self.activation_post_process(output)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 90, in forward
_scale, _zero_point = self.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 85, in calculate_qparams
return self.activation_post_process.calculate_qparams()
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 402, in calculate_qparams
return self._calculate_qparams(self.min_val, self.max_val)
File "/home/researcher/miniconda3/lib/python3.7/site-packages/torch/quantization/observer.py", line 248, in _calculate_qparams
min_val, max_val
AssertionError: min nan should be less than max nan |
st184263 | I’m discussing it now with my colleagues. It is part of a big project that cannot be made publicly accessible. I am trying to pull out the necessary code from there so that the issue is reproduced. |
st184264 | Repro is here: https://bitbucket.org/ms_lilibeth/qat_dataparallel_bug/src/master/
I’ve reproduced the bug only with my custom implementation of EfficientNet. In order to make it quantization-friendly I’ve done the following:
Multiplication and addition operations were replaced with quantization-friendly FloatFunctional.
Different MBConvBlocks are used: they contain a Conv2d instance instead of a conv2d() function call.
Paddings were changed to be symmetrical about the kernel size (3x3 kernel => 1x1 padding; 5x5 kernel => 2x2 padding). No more ZeroPad2d.
Run the script multiple times: sometimes the bug appears, sometimes not. The original implementation of EfficientNet never fails. |
st184265 | Hi @e_sh, apologies for the long delay, and thank you so much for providing the repro. Your repro helped us find a bug in PyTorch. The issue was that for per-channel observers, replication was not working properly in some cases in nn.DataParallel, specifically when the scale and zero_point buffers were not initialized. We have a fix upcoming in fix unflatten_dense_tensor when there is empty tensor inside by zhaojuanmao · Pull Request #50321 · pytorch/pytorch · GitHub. A short-term workaround you could use before the fix lands is to run an input through the network before replicating it - this will populate all the quantization buffers.
As an aside, I’d recommend looking into using nn.DistributedDataParallel; it is generally recommended over nn.DataParallel. |
st184266 | Is model pruning a sensible (or even possible) second model optimization method after the model has already been quantized? |
st184267 | I read this in the PyTorch docs.
For static quantization techniques which quantize activations, the user needs to do the following in addition:
Specify where activations are quantized and de-quantized. This is done using QuantStub and DeQuantStub modules.
Use torch.nn.quantized.FloatFunctional to wrap tensor operations that require special handling for quantization into modules. Examples are operations like add and cat which require special handling to determine output quantization parameters.
Fuse modules: combine operations/modules into a single module to obtain higher accuracy and performance. This is done using the torch.quantization.fuse_modules() API, which takes in lists of modules to be fused. We currently support the following fusions: [Conv, Relu], [Conv, BatchNorm], [Conv, BatchNorm, Relu], [Linear, Relu]
Seems that these three operations are not needed for quantization aware training. But from the API example and another tutorial ((beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.7.1 documentation), these three operations are also included. So should they be implemented? |
st184268 | Yes, all of these are used for QAT in Eager mode. Fusion is optional but recommended for higher performance and accuracy. Is there any information / doc stating otherwise? |
st184269 | Hi @Vasiliy_Kuznetsov, since add and cat are common operations in CNN models, does every add or cat need to be wrapped by torch.nn.quantized.FloatFunctional? And is there any other operation that needs this, or only add and cat? |
st184270 | In Eager mode - yes. There is a full list of ops supported by FloatFunctional here: torch.nn.quantized — PyTorch 1.7.0 documentation. |
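A minimal hedged example of the eager-mode wrapping pattern (module and attribute names are made up):
import torch.nn as nn

class AddBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x, y):
        # instead of `x + y`, so output qparams can be observed and converted
        return self.skip_add.add(x, y)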
st184271 | Hi,
I would like to use a quantization method to save my model state dict with an int8 data type so that I can save storage space. I also need to load these int8 states back into a float32 model for inference. How could I do this, please? |
st184272 | Solved by Vasiliy_Kuznetsov in post #4
You can use pytorch/observer.py at master · pytorch/pytorch · GitHub, this is also what the official quantization flows use. For example,
obs = torch.quantization.MinMaxObserver(...)
# run data through the observer
obs(your_tensor)
# get scale and zp from the data observer has seen
scale, zp = obs… |
st184273 | This isn’t a standard flow PyTorch quantization provides, but you could do something like this:
1. For a Tensor, use torch.quantize_per_tensor(x, ...) to convert fp32 -> int8, and x.dequantize() to convert from int8 back to fp32.
2. Override the _save_to_state_dict and _load_from_state_dict functions on the modules you’d like to do this on to use your custom logic. You can call quantize_per_tensor when saving, and convert it back with dequantize when loading. |
st184274 | Hi,
Thanks for replying! There is a scale and bias term in the function torch.quantize_per_tensor. When I convert a float32 parameter to an int8 parameter, how can I determine these args? Is there any built-in way to compute them, or should I figure out the max/min values of each tensor myself? |
st184275 | You can use pytorch/observer.py at master · pytorch/pytorch · GitHub, this is also what the official quantization flows use. For example,
obs = torch.quantization.MinMaxObserver(...)
# run data through the observer
obs(your_tensor)
# get scale and zp from the data observer has seen
scale, zp = obs.calculate_qparams()
One other thing to consider could be using fp16, that way you could just do x.half() without worrying about quantization params. |
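Putting the pieces together, a hedged sketch of the save/load round trip for a single tensor x (the dtype/qscheme choices here are just one option):
obs = torch.quantization.MinMaxObserver(dtype=torch.qint8, qscheme=torch.per_tensor_affine)
obs(x)                                   # let the observer see the fp32 data
scale, zp = obs.calculate_qparams()
qx = torch.quantize_per_tensor(x, float(scale), int(zp), torch.qint8)  # int8, store this
x_restored = qx.dequantize()             # back to fp32 when loading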
st184276 | Hi,
I tried this code:
net = nn.Sequential(torch.nn.Linear(32, 32),)
net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
net = torch.quantization.prepare(net)
inten = torch.randn(224, 32)
net(inten)
qnet = torch.quantization.convert(net)
print(qnet)
print(qnet.state_dict().keys())
print(qnet.state_dict()['weight'])
Only to find that there is no weight attribute in the qnet. How can I make the attribute the same as in the original nn.Linear? |
st184277 | Solved by Vasiliy_Kuznetsov in post #2
The weight for quantized Linear is packed. You can access it by doing linear_layer_instance._weight_bias() (see pytorch/linear.py at master · pytorch/pytorch · GitHub). |
st184278 | The weight for quantized Linear is packed. You can access it by doing linear_layer_instance._weight_bias() (see pytorch/linear.py at master · pytorch/pytorch · GitHub). |
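For the Sequential in the question above, a hedged sketch of unpacking the quantized Linear's weight:
w, b = qnet[0]._weight_bias()   # qnet[0] is the converted, quantized Linear
print(w)                        # the quantized (qint8) weight tensor
print(w.dequantize())           # fp32 view, comparable to the original nn.Linear weight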
st184279 | It seems disable_fake_quant and other similar functions are not working for the following case (PyTorch 1.7).
class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(28 * 28, 10)
        self.relu1 = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu1(self.l1(x.view(x.size(0), -1)))

fq_weight = torch.quantization.FakeQuantize.with_args(
    observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
    quant_min=0, quant_max=255, dtype=torch.quint8)
fq_activation = torch.quantization.FakeQuantize.with_args(
    observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
    quant_min=0, quant_max=255, dtype=torch.quint8)

model = LeNet()
model.l1.qconfig = torch.quantization.QConfig(activation=fq_activation, weight=fq_weight)
torch.quantization.prepare_qat(model, inplace=True)
model.l1.apply(torch.quantization.disable_fake_quant)
(screenshot of the printed model omitted)
The above case shows it works just fine. Now, I make my own FakeQuantize as follows, then it doesn’t work any more.
class MyFakeQuantize(torch.quantization.FakeQuantize):
    def __init__(self, observer, quant_min, quant_max, n_cluster=0, **observer_kwargs):
        super().__init__(observer, quant_min, quant_max, **observer_kwargs)

fq_weight = MyFakeQuantize.with_args(
    observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
    quant_min=0, quant_max=255, dtype=torch.quint8)
fq_activation = MyFakeQuantize.with_args(
    observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
    quant_min=0, quant_max=255, dtype=torch.quint8)

model2 = LeNet()
model2.l1.qconfig = torch.quantization.QConfig(activation=fq_activation, weight=fq_weight)
torch.quantization.prepare_qat(model2, inplace=True)
model2.l1.apply(torch.quantization.disable_fake_quant)
(screenshot of the printed model omitted)
Is this expected behavior? |
st184280 | thyeros:
class MyFakeQuantize(torch.quantization.FakeQuantize):
def __init__(self, observer, quant_min, quant_max, n_cluster=0, **observer_kwargs):
super().__init__(observer, quant_min, quant_max, **observer_kwargs)
fq_weight = MyFakeQuantize.with_args(\
observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
quant_min=0, quant_max=255, dtype=torch.quint8)
fq_activation = MyFakeQuantize.with_args(\
observer=torch.quantization.MovingAverageMinMaxObserver.with_args(),
quant_min=0, quant_max=255, dtype=torch.quint8)
model2 = LeNet()
model2.l1.qconfig = torch.quantization.QConfig(activation=fq_activation, weight=fq_weight)
torch.quantization.prepare_qat(model2, inplace=True)
model2.l1.apply(torch.quantization.disable_fake_quant)
hi @thyeros, I cannot reproduce this on master. Could you check if you reproduce this on v1.7 or master? Which version did you originally see the issue on? |
st184281 | thyeros:
(quoting the same MyFakeQuantize snippet again)
This was with 1.7.0. |
st184282 | hi @thyeros, unfortunately we cannot repro this on master and this is not a known problem. Could you try the debug script in gist:61ac9744858509e175d4ce50258782e4 · GitHub 1 and narrow down which exact part of disable_fake_quant is not working in your environment? In the debug script, I just copied the disable_fake_quant definition so it’s easy to debug. |
st184283 | Hi, Vasiliy:
First of all, thanks for the help in this matter; really appreciated. Your code doesn't work out of the box, as FakeQuantizeBase is not available.
Since I know that FakeQuantize is based on FakeQuantizeBase, this looked strange to me as well, so I checked torch/quantization/fake_quantize.py and found that the PyTorch 1.7.0 in my image doesn't have FakeQuantizeBase and has quite different implementations of the disable/enable fake_quant/observer helpers.
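The relevant helpers there look roughly like this (paraphrased, not an exact copy of the file):
# torch/quantization/fake_quantize.py in 1.7.0 (approximate)
def disable_fake_quant(mod):
    if type(mod) == FakeQuantize:
        mod.disable_fake_quant()

def enable_fake_quant(mod):
    if type(mod) == FakeQuantize:
        mod.enable_fake_quant()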
Can you tell if this is the right one for PyTorch 1.7.0? |
st184284 | Thanks for the additional info. Looks like what you are describing was not supported, and was added recently with https://github.com/pytorch/pytorch/pull/48072 1. Before that PR, the type(mod) == FakeQuantize check would not pass for custom fake quant classes. It should pass now. Could you try updating your PyTorch installation and see if it works with 1.7.1 or master? |
st184285 | Looking at Releases · pytorch/pytorch · GitHub, it doesn’t look like this PR was in 1.7.1. So, you could try master. Or, you could write your own function which disables fake_quants (for example, by copying the code after the PR above) and call that, without having to update your PyTorch installation. |
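A minimal sketch of such a helper (assuming the 1.7.0 layout, where custom subclasses still inherit disable_fake_quant() from FakeQuantize):
from torch.quantization import FakeQuantize

def my_disable_fake_quant(mod):
    # isinstance() matches subclasses such as MyFakeQuantize,
    # unlike the exact type(mod) == FakeQuantize check in 1.7.0
    if isinstance(mod, FakeQuantize):
        mod.disable_fake_quant()

# usage: model2.l1.apply(my_disable_fake_quant)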
st184286 | Thanks Vasiliy for double-confirming the latest PR. I will perhaps live with a custom implementation for now (like yours) and wait for the official release. Thanks again! |
st184287 | I used fbgemm as the qconfig, and I checked that my CPU (Intel Xeon Silver 4114) supports AVX2 operations. But my quantized model takes 3 times longer at inference than the original fp32 model.
My original model is as follows,
#Original model
class NormUnet(nn.Module):
def __init__(self, in_chans, out_chans, chans, num_pools):
super().__init__()
self.unet = Unet(
in_chans=in_chans,
out_chans=out_chans,
chans=chans,
num_pool_layers=num_pools
)
def norm(self, x):
b, h, w = x.shape
x = x.view(b, h * w)
mean = x.mean(dim=1).view(b, 1, 1)
std = x.std(dim=1).view(b, 1, 1)
x = x.view(b, h, w)
return (x - mean) / std, mean, std
def unnorm(self, x, mean, std):
return x * std + mean
def forward(self, x):
x, mean, std = self.norm(x)
x = x.unsqueeze(1)
x = self.unet(x)
x = x.squeeze(1)
x = self.unnorm(x, mean, std)
return x
As std in the norm method doesn't support quantization, I only quantized the unet, then put it back into the original model.
# Load pre-trained model.
model = NormUnet(in_chans=1, out_chans=1, chans=128, num_pools=4)
model = torch.nn.parallel.DataParallel(model)
checkpoint = torch.load(model_path)
model.load_state_dict(checkpoint['model'])
model = model.module
model.to('cpu')
#Extract only unet part
unet_before_quantize = model.unet
#For quantization
class QuantizedUNet(nn.Module):
def __init__(self, model_fp32):
super(QuantizedUNet, self).__init__()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
self.model_fp32 = model_fp32
def forward(self, x):
x = self.quant(x)
x = self.model_fp32(x)
x = self.dequant(x)
return x
quantized_model = QuantizedUNet(model_fp32=unet_before_quantize)
quantization_config = torch.quantization.get_default_qconfig("fbgemm")
quantized_model.qconfig = quantization_config
torch.quantization.prepare(quantized_model, inplace=True)
Then I calibrate and quantize. After all this, the quantized model's output is what I expected, but inference takes about 3~4 times longer.
#I calibrated in between
quantized_model = torch.quantization.convert(quantized_model, inplace=True)
quantized_model.eval()
#I replace only unet block
model.unet = quantized_model
Where did I make a mistake?
I tried torch.set_num_threads(1) and exporting to TorchScript and loading it back, but couldn't get it faster than the original fp32 model.
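(For reference, this is roughly how I time the two models; fp32_model stands for an untouched copy of the original float model, and the input shape is illustrative:)
import time
import torch

def bench(m, x, iters=50):
    with torch.no_grad():
        for _ in range(5):                # warm-up
            m(x)
        start = time.time()
        for _ in range(iters):
            m(x)
    return (time.time() - start) / iters  # seconds per forward pass

x = torch.randn(1, 320, 320)              # illustrative shape
print("fp32:", bench(fp32_model, x))
print("int8:", bench(model, x))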
Help will be really appreciated.
Thanks! |
st184288 | Hi @KURI, have you tried profiling both models to see which layers are the bottlenecks? That would help narrow down the issue. PyTorch Profiler — PyTorch Tutorials 1.7.1 documentation 8 has an example. |
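Something like this, for example (a rough sketch; the input shape is just a placeholder, and you would run it once for the fp32 model and once for the quantized one):
import torch
import torch.autograd.profiler as profiler

x = torch.randn(1, 320, 320)  # placeholder input; use a real batch

with torch.no_grad():
    with profiler.profile(record_shapes=True) as prof:
        model(x)

# the slowest ops show up at the top of this table
print(prof.key_averages(group_by_input_shape=True).table(
    sort_by="cpu_time_total", row_limit=15))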
st184289 | Thanks for the advice!
Unfortunately, the profiler doesn't work on the quantized model; its execution never ends. On the fp32 model, the profiler works well. |
st184290 | Hi @KURI, thanks for reporting, your code to call the quantization APIs looks good, and that sounds like a bug somewhere in PyTorch. Is your model architecture shareable? If it is, could you file a github issue with a repro and a description of your environment? Then we could get someone on the fbgemm team to take a look. |
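(For the environment description, the output of PyTorch's bundled collect_env script is usually enough:)
# prints OS, Python, PyTorch and CPU/GPU details to paste into the issue
from torch.utils import collect_env
collect_env.main()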
st184291 | Hello. This is a follow-up question from Torch Jit Modules without parameters 15. How can I get the original modules from a RecursiveScriptModule object? I want to get the quantized weights of the convolutional layers to compare their distribution with their full precision counterparts, but even after studying the documentation of TorchScript, I could not get around the RecursiveScriptModule objects.
I really appreciate any help you can provide. |
st184292 | Solved by Vasiliy_Kuznetsov in post #2
hi @AfonsoSalgadoSousa, if you have a scripted convolution layer qconv, you can inspect the weight and bias by doing qconv._packed_params.__getstate__(). |
st184293 | hi @AfonsoSalgadoSousa, if you have a scripted convolution layer qconv, you can inspect the weight and bias by doing qconv._packed_params.__getstate__(). |
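A rough sketch of how that can be used to pull quantized weights out of a loaded script module for plotting (the file name is a placeholder, and the exact structure returned by __getstate__() depends on the layer type and version, so print it once to see what you get):
import torch

scripted = torch.jit.load("quant_mobilenet3d.pth")  # placeholder path

for name, mod in scripted.named_modules():
    # quantized conv/linear script modules keep their weight inside _packed_params
    if hasattr(mod, "_packed_params"):
        state = mod._packed_params.__getstate__()
        print(name, type(state))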
st184294 | Hi. I was trying to plot the weight distribution of the full-precision model and its quantized counterpart and encountered the following issue: although .named_modules() lists all the modules that exist in the full-precision model, .named_parameters() does not return weights matching all of those modules; namely, only the nn.BatchNorm3d layers have parameters.
An extract of a print from q_model.named_parameters() (q_model is a MobileNet3D, post-training quantized and saved with torch.jit.save(torch.jit.script(model)…):
features.0.1.weight
features.0.1.bias
features.1.conv.1.weight
features.1.conv.1.bias
features.1.conv.4.weight
features.1.conv.4.bias
features.2.conv.1.weight
features.2.conv.1.bias
features.2.conv.4.weight
features.2.conv.4.bias
features.2.conv.7.weight
features.2.conv.7.bias
Notice layers 0 (Convolution), 2 (ReLU), 3 (Convolution), 5 (ReLU) and 6 (Convolution) are not present (have no weights or biases?).
The modules were not fused for this model, which would otherwise explain the behaviour, unless fusion happens under the hood. Does anyone have insights on this? |
st184295 | Solved by Vasiliy_Kuznetsov in post #6
model.named_parameters() only returns parameters, i.e. instances of torch.Parameter. quantized convs pack weight and bias into a special object which is not using torch.Parameter, which is why it does not show up via named_parameters(). You can inspect the weight and bias of quantized convs by usi… |
st184296 | Can you post the code you use to confirm that only nn.BatchNorm3d has parameters?
Also, what error is given by the compressed model? Does the same error not appear in the baseline model? |
st184297 | Hi. Thank you for the reply. Neither model gives errors in saving, loading, or inference. The code I used to check the behaviour was the following:
model, _ = generate_model(args)
fp_model = resume_model(
(args.model_path / 'full_precision' / 'mobilenet3d_ft_50ep.pth'), model)
quant_model = torch.jit.load((args.model_path / 'quantized' / 'quant_mobilenet3d_ft_50ep.pth').as_posix())
for fp_name, _ in fp_model.named_parameters():
print(fp_name)
for q_name, _ in quant_model.named_parameters():
print(q_name)
Just loading both models and checking their parameters. This example is from a MobileNet3D, and even though the layers are not named, I can easily infer their class from their index.
Extract from full precision model (missing indices 2 and 5, ReLUs, as expected):
features.17.conv.0.weight
features.17.conv.1.weight
features.17.conv.1.bias
features.17.conv.3.weight
features.17.conv.4.weight
features.17.conv.4.bias
features.17.conv.6.weight
features.17.conv.7.weight
features.17.conv.7.bias
Extract from quantized version (missing ReLUs and Convs, not expected):
features.17.conv.1.weight
features.17.conv.1.bias
features.17.conv.4.weight
features.17.conv.4.bias
features.17.conv.7.weight
features.17.conv.7.bias
Interestingly, I also checked the quantized version of a SqueezeNet3D, which was giving abysmal inference results (basically random), and its list of parameters is empty. This architecture has some 3D pooling modules which are not supported, so I had to compute them in floating point. Still, I cannot understand how I can get no errors in loading, saving, or inference. |
st184298 | @AfonsoSalgadoSousa do you have a smaller repro of the issue that we can take a look at? |
st184299 | Hello. Thanks for the reply. I think the easiest way is to try the PyTorch post-quantization tutorial ((beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.7.1 documentation 1) where I can reproduce the most extreme case of this behaviour (list(myModel.named_parameters()) == []), ‘myModel’ being the quantized version of the full precision model. I try listing the parameters right after ‘torch.quantization.convert(myModel, inplace=True)’. |
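For reference, the behaviour shows up even in a tiny eager-mode model (a sketch, not the tutorial code itself):
import torch
import torch.nn as nn

m = nn.Sequential(nn.Conv2d(3, 3, 1), nn.ReLU())
m.eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 8, 8))          # calibration pass
torch.quantization.convert(m, inplace=True)

print(list(m.named_parameters()))   # [] -- the packed conv weight is not an nn.Parameter
print(m[0]._weight_bias())          # but the weight and bias are still accessible here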