st184300 | model.named_parameters() only returns parameters, i.e. instances of torch.Parameter. quantized convs pack weight and bias into a special object which is not using torch.Parameter, which is why it does not show up via named_parameters(). You can inspect the weight and bias of quantized convs by using qconv._weight_bias(). e2e example: gist:2456e16d0f40366d830bcfe176fafe5c · GitHub 16 |
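For illustration, a minimal sketch of that inspection (assuming qmodel is a model that has already been converted with torch.quantization.convert and has a quantized conv named conv1; note that _weight_bias() is an internal, underscore-prefixed API and may change between releases):

import torch

qconv = qmodel.conv1                # torch.nn.quantized.Conv2d after convert()
w, b = qconv._weight_bias()         # w is a quantized tensor, b is an fp32 tensor (or None)
print(w.dtype)                      # e.g. torch.qint8
print(w.int_repr())                 # raw int8 values
print(w.dequantize())               # float view of the quantized weights
print(b)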
st184301 | Perfect. Thank you so much for the answer. Been struggling with this for a couple of weeks. |
st184302 | I have a deep convolutional LSTM network for which I am trying to do a very basic post-training quantization: just convert the trained weights from float32 to int8.
I'm new to this, so I might have done this wrong, but what I've done is take the list of model.parameters() after training (18 tensors), convert the value range from the float range to the int8 range, and then convert the data type to int8. The list output (qparameters) looks right; now I'm just unsure how to put these values back into a model to test.
qparameters = []

def convert_float_to_int8(i):
    while i < 18:
        x = list(model.parameters())[i]
        OldMin = torch.min(x)
        OldMax = torch.max(x)
        NewMin = float(-128)
        NewMax = float(127)
        OldRange = (OldMax - OldMin)
        NewRange = (NewMax - NewMin)
        y = (((x - OldMin) * NewRange) / OldRange) + NewMin
        y = torch.round(y)
        y = y.type(torch.int8)
        print(x.dtype)
        print(y, y.dtype)
        qparameters.append(y)
        i += 1

convert_float_to_int8(0)
print(qparameters) |
st184303 | Hi @FCF, you can check out Dynamic Quantization — PyTorch Tutorials 1.7.1 documentation 9 for an e2e example of this. |
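The linked tutorial boils down to a single call; a minimal sketch for the LSTM use case above (float_model is a placeholder for the trained network):

import torch
import torch.nn as nn

quantized_model = torch.quantization.quantize_dynamic(
    float_model,              # trained fp32 model
    {nn.LSTM, nn.Linear},     # module types to quantize dynamically
    dtype=torch.qint8,
)
# LSTM/Linear weights are now stored as int8 and dequantized on the fly at inference time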
st184304 | I want to quantize a model that I have created that uses a custom Parameter to hold the weights of several Conv2d() layers. I pass this Parameter to the forward function, which then assigns the different parts of the Parameter to the weights of the Conv2d() layers, which requires a cast to Parameter on every forward function call. This works fine for normal use, if inefficient, but when I want to use the Quantization package, the assignment throws this error:
KeyError: “attribute ‘weight’ already exists”
When I don’t use quantization, I can use the functional conv2d(), but I don’t think that is supported yet with nn.quantize. What’s the difference between a quantized model and a regular model, and how can I fix this error?
Thanks |
st184305 | Solved by louis in post #3 |
st184306 | Here is a minimal example:
import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub
import copy

class Block(nn.Module):
    def __init__(self):
        super(Block, self).__init__()
        self.conv1 = nn.Conv2d(1, 1, 3, padding=1, bias=False)

    def forward(self, x, conv_parameter):
        self.conv1.weight = torch.nn.parameter.Parameter(conv_parameter)
        return self.conv1(x)

class BigBlock(nn.Module):
    def __init__(self):
        super(BigBlock, self).__init__()
        self.conv_parameter = torch.nn.parameter.Parameter(torch.rand((1, 1, 3, 3)))
        self.block = Block()
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.block(x, self.conv_parameter)
        x = self.dequant(x)
        return x

x = torch.rand((1, 1, 5, 5))
net = BigBlock()
print(net(x))  # works fine

qnet = copy.deepcopy(net)
qnet = qnet.eval()
qnet.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(qnet, inplace=True)
net(torch.rand((1, 1, 5, 5)))
torch.quantization.convert(qnet, inplace=True)
print(qnet)
print(qnet(x))  # throws an error
I am using pytorch-1.6.0.dev202, Python 3.8.1. |
st184307 | So the issue is that I was passing a float tensor to what is now a torch.qint8 module and attempting to cast it to a Parameter. The solution is to add an if statement like so:
class Block(torch.nn.Module):
    def __init__(self):
        super(Block, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 1, 3, padding=1, bias=False)

    def forward(self, x, conv_parameter):
        if isinstance(self.conv1, torch.nn.quantized.Conv2d):
            return self.conv1(x)
        else:
            self.conv1.weight = torch.nn.parameter.Parameter(conv_parameter)
            return self.conv1(x)
which works since my passed-in conv_parameter should be equal to the weight stored in self.conv1, so we can just ignore the conversion to Parameter when using the quantized network. |
st184308 | Does this code reduce the model size? I implemented your code on my model and it doesn't change my model size. |
st184309 | Hello!
I am trying to train MobileNetV3 with Lite Reduced ASPP for semantic segmentation using quantization-aware training, but for some reason it does not train at all; the output of the model looks like random noise.
So I have a couple of questions.
Currently I have activations such as nn.ReLU6, nn.Sigmoid, nn.Hardsigmoid and nn.Hardswish. I tried both approaches - wrapping them with QuantWrapper or replacing them with ReLU. Neither helped. What is the correct way?
I've replaced functional.interpolate with nn.UpsamplingBilinear2d. It also didn't help. But is that relevant, or can I use the functional analogue?
I've replaced all add and mul operations with separate torch.nn.quantized.FloatFunctional() instances. It didn't work.
With the QAT approach, does model training usually take longer, or approximately the same time?
Am I right that it is possible to train a model with QAT on a GPU?
Here is the print of my model:
QuantizableMobileNetV3SmallLRASPP(
(_quant): QuantStub()
(_encoder): QuantizableMobileNetV3(
(_layers): ModuleList(
(0): Sequential(
(0): ConvBn2d(
(0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(1): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(16, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=16, bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_sqse): QuantizableSqueezeAndExcite(
(_avg_pool): AdaptiveAvgPool2d(output_size=1)
(_fc): Sequential(
(0): LinearReLU(
(0): Linear(in_features=16, out_features=4, bias=True)
(1): ReLU()
)
(1): Identity()
(2): Linear(in_features=4, out_features=16, bias=True)
(3): Sequential(
(0): DeQuantStub()
(1): Hardsigmoid()
(2): QuantStub()
)
)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(2): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(16, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(72, 72, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=72, bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(3): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(24, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(88, 88, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=88, bias=False)
(1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): ReLU6()
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(88, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(4): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(24, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(96, 96, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_sqse): QuantizableSqueezeAndExcite(
(_avg_pool): AdaptiveAvgPool2d(output_size=1)
(_fc): Sequential(
(0): LinearReLU(
(0): Linear(in_features=96, out_features=24, bias=True)
(1): ReLU()
)
(1): Identity()
(2): Linear(in_features=24, out_features=96, bias=True)
(3): Sequential(
(0): DeQuantStub()
(1): Hardsigmoid()
(2): QuantStub()
)
)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(96, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(5): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=240, bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_sqse): QuantizableSqueezeAndExcite(
(_avg_pool): AdaptiveAvgPool2d(output_size=1)
(_fc): Sequential(
(0): LinearReLU(
(0): Linear(in_features=240, out_features=60, bias=True)
(1): ReLU()
)
(1): Identity()
(2): Linear(in_features=60, out_features=240, bias=True)
(3): Sequential(
(0): DeQuantStub()
(1): Hardsigmoid()
(2): QuantStub()
)
)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(6): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=240, bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_sqse): QuantizableSqueezeAndExcite(
(_avg_pool): AdaptiveAvgPool2d(output_size=1)
(_fc): Sequential(
(0): LinearReLU(
(0): Linear(in_features=240, out_features=60, bias=True)
(1): ReLU()
)
(1): Identity()
(2): Linear(in_features=60, out_features=240, bias=True)
(3): Sequential(
(0): DeQuantStub()
(1): Hardsigmoid()
(2): QuantStub()
)
)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(7): QuantizableInvertedResidual(
(_expansion): Sequential(
(0): ConvBn2d(
(0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_conv): Sequential(
(0): ConvBn2d(
(0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False)
(1): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
(2): Hardswish()
)
(_sqse): QuantizableSqueezeAndExcite(
(_avg_pool): AdaptiveAvgPool2d(output_size=1)
(_fc): Sequential(
(0): LinearReLU(
(0): Linear(in_features=120, out_features=30, bias=True)
(1): ReLU()
)
(1): Identity()
(2): Linear(in_features=30, out_features=120, bias=True)
(3): Sequential(
(0): DeQuantStub()
(1): Hardsigmoid()
(2): QuantStub()
)
)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
)
(_reduce): Sequential(
(0): ConvBn2d(
(0): Conv2d(120, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Identity()
)
(_skip_add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
)
(_decoder): QuantizableLRASPP(
(_aspp_conv1): Sequential(
(0): ConvBnReLU2d(
(0): Conv2d(48, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(1): Identity()
(2): Identity()
)
(_upper2): UpsamplingBilinear2d(size=(49, 78), mode=bilinear)
(_aspp_conv2): Sequential(
(0): AvgPool2d(kernel_size=11, stride=(4, 4), padding=0)
(1): Conv2d(48, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): Sequential(
(0): DeQuantStub()
(1): Sigmoid()
(2): QuantStub()
)
)
(_aspp_conv12): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
(_upper12): UpsamplingBilinear2d(size=(98, 155), mode=bilinear)
(_aspp_conv3): Conv2d(24, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
(_upper123): UpsamplingBilinear2d(size=(780, 1240), mode=bilinear)
(_mul): FloatFunctional(
(activation_post_process): Identity()
)
(_add): FloatFunctional(
(activation_post_process): Identity()
)
)
(_dequant): DeQuantStub()
)
After the model I also apply an nn.Sigmoid activation for binary pixel classification.
If you can help me with a piece of advice, don't hesitate to reply.
Thanks in advance! |
st184310 | Solved by smivv in post #4 |
st184311 | @smivv, could you share the code you used to enable QAT on the model?
We currently do support quantization of Sigmoid, Hardsigmoid and ReLU6. cc @jerryzh168 to confirm
QAT of UpsamplingBilinear2d isn’t supported so you will have to wrap it with Quant-Dequant block.
Maybe looking at the code will help, we also have a tutorial here https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py 8 for reference
QAT training takes longer, due to the insertion of observers and fake quant modules in the model
It is possible to do QAT on GPU, but you will need to move the model to CPU before running convert. |
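A condensed sketch of the flow described above (assuming the model defines a fuse_model() helper, as in the torchvision reference script; model and the training loop are placeholders):

import torch

model.fuse_model()                                                   # fuse Conv+BN(+ReLU) pairs first
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

model.cuda().train()
# ... usual training loop, with observers and fake-quant modules active ...

model.cpu().eval()                                                   # move to CPU before convert
quantized_model = torch.quantization.convert(model, inplace=False)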
st184312 | Hello @supriyar,
Thanks for your comment!
Are you sure about Sigmoid and Hardsigmoid? When I look at the model's print after prepare_qat, I see that the only activations without a FakeQuantize module are Sigmoid and Hardsigmoid, while Hardswish has a FakeQuantize attached:
(2): Hardswish(
(activation_post_process): FakeQuantize(
fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0')
(activation_post_process): MovingAverageMinMaxObserver(min_val=inf, max_val=-inf)
)
)
........................................................
(2): Sequential(
(0): DeQuantStub()
(1): Sigmoid()
(2): QuantStub(
(activation_post_process): FakeQuantize(
fake_quant_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), observer_enabled=tensor([1], device='cuda:0', dtype=torch.uint8), quant_min=0, quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine, ch_axis=-1, scale=tensor([1.], device='cuda:0'), zero_point=tensor([0], device='cuda:0')
(activation_post_process): MovingAverageMinMaxObserver(min_val=inf, max_val=-inf)
)
)
)
Just did it and it didn't help.
I've already seen that, thanks, but there was nothing new in terms of code for me.
Does it take longer in terms of epochs? Because wall-clock time is certainly increased.
OK, this is how I do it.
I am using Catalyst framework, so what I am doing is that I am preparing model for QAT once stage is started. Here is my code:
class QuantizationAwareTrainingCallback(Callback):
    def __init__(
        self,
        backend: str = "fbgemm",
    ):
        super().__init__(order=CallbackOrder.Internal + 1)
        assert backend in ["fbgemm", "qnnpack"], "Unknown backend type"
        self.backend = backend

    def on_stage_start(self, runner: "IRunner") -> None:
        # model is already fused
        runner.model.train()
        torch.backends.quantized.engine = self.backend
        runner.model.qconfig = torch.quantization.get_default_qat_qconfig(self.backend)
        runner.model = torch.quantization.prepare_qat(runner.model, inplace=False)
        runner.model.apply(torch.quantization.enable_observer)
        runner.model.apply(torch.quantization.enable_fake_quant) |
st184313 | Finally I made it work.
The real reason it wasn't working is that I had PyTorch 1.7 installed, which probably does not have all quantized operations implemented, or maybe has some bugs.
Now I have installed PyTorch from source using the master branch and the model is training.
Thanks! |
st184314 | Hi, I'm running a MobileNet version with GDC (global depthwise convolution) as the last layer.
My GDC layer looks like this:
class GDC(Module):
    def __init__(self, embedding_size):
        super(GDC, self).__init__()
        self.conv_6_dw = Linear_block(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0))
        self.conv_6_flatten = Flatten()
        self.linear = Linear(512, embedding_size, bias=False)
        self.bn = BatchNorm1d(embedding_size)

    def forward(self, x):
        x = self.conv_6_dw(x)
        x = self.conv_6_flatten(x)
        x = self.linear(x)
        x = self.bn(x)
        return x
When I run it, the code gets stuck at the line x = self.linear(x) with the error:
RuntimeError: Expected self.scalar_type() == ScalarType::Float to be true, but got false.
I think the issue is that I do not fuse these modules, because fuse_modules only supports [Conv, Relu], [Conv, BatchNorm], [Conv, BatchNorm, Relu] and [Linear, Relu] (not [Linear, BatchNorm1d]), so self.linear(x) is using a float type instead of an int type.
How can I customize my network so that the model can be trained?
Sorry for any misunderstanding or inconvenience. |
st184315 | @manhntm3 even without fusion the linear operation can be quantized. You can control which operations are quantized by inserting QuantStub/DequantStub around them.
Could you share the code you use to quantize/transform the model for quantization? |
st184316 | @supriyar Thank you for your answer.
My model code looks like this:
class GDC(Module):
    def __init__(self, embedding_size):
        super(GDC, self).__init__()
        self.conv_6_dw = Linear_block(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0))
        self.conv_6_flatten = Flatten()
        self.linear = Linear(512, embedding_size, bias=False)
        self.bn = BatchNorm1d(embedding_size)

    def forward(self, x):
        x = self.conv_6_dw(x)
        x = self.conv_6_flatten(x)
        x = self.linear(x)
        x = self.bn(x)
        return x

class NetronNet(Module):
    def __init__(self, embedding_size):
        super(NetronNet, self).__init__()
        self.quant = QuantStub()
        self.conv = ...  # (some convolutional layers)
        self.linear = GDC(embedding_size, embedding_size)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.linear(x)
        x = self.dequant(x)
        return x

    def fuse_model(self):
        ...  # (fuse self.conv layers)
Here is the code to quantize the model:
netron.to(cpu_device)
netron.train()
netron.fuse_model()
netron.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
To elaborate: after the fuse_model code, my conv layers work just fine, but the GDC module (linear with batchnorm) fails to quantize even after I add QuantStub()/DeQuantStub() layers to the module. So how do I know whether a layer has been quantized/transformed correctly? |
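One way to answer that last question is to inspect module types after conversion; a small sketch, assuming the model has already been prepared (prepare_qat) and trained or calibrated:

import torch

quantized = torch.quantization.convert(netron.eval().cpu(), inplace=False)
for name, module in quantized.named_modules():
    print(name, type(module))
# a converted linear shows up as torch.nn.quantized.Linear (or a fused
# torch.nn.intrinsic.quantized.LinearReLU), while a layer that stayed in
# fp32 keeps its original torch.nn.* type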
st184317 | I am trying to run quantization on one of my models to make inference much faster. The model I am using to test this is the pretrained wide_resnet101_2 (I have noted below how to create it). The code runs on CPU. I put in a couple of breakpoints, printing the model and model size before and after quantization. Before quantization the model is 510MB, and after quantization it is down to 128MB, so the quantization itself seems to work. The problem arises when the quantized model is called again later in the code when running the tester.
How I call the model:
model = torchvision.models.wide_resnet101_2(pretrained=True)
Line 113 (shown in a screenshot in the original post) is where the code errors out:
(screenshot omitted)
This is the error I get:
File “C:\Users\NishchintUpadhyaya\anaconda3\envs\SoilMoist\lib\site-packages\torch\nn\intrinsic\quantized\modules\conv_relu.py”, line 69, in forward
return torch.ops.quantized.conv2d_relu(
RuntimeError: Could not run ‘quantized::conv2d_relu.new’ with arguments from the ‘CPU’ backend. ‘quantized::conv2d_relu.new’ is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
Any suggestions? |
st184318 | @nishchint98, this error usually means that the input to conv2d_relu is not a quantized tensor, but a floating point tensor. Please make sure in the code that the input being passed is a quantized tensor. |
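The usual way to guarantee that, sketched below, is to route the input through a QuantStub so it is converted to a quantized tensor before the first quantized conv (torch.quantization.QuantWrapper does the same wrapping for you); note that the stock torchvision ResNets also need their residual additions replaced with FloatFunctional before static quantization fully works. The wrapper class here is illustrative:

import torch
from torch.quantization import QuantStub, DeQuantStub

class Wrapped(torch.nn.Module):
    # illustrative wrapper around the float model to be quantized
    def __init__(self, float_model):
        super().__init__()
        self.quant = QuantStub()
        self.model = float_model
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)          # fp32 input -> quantized tensor, so quantized convs accept it
        x = self.model(x)
        return self.dequant(x)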
st184319 | Hi, I'd like to selectively quantize layers, as some layers in my project just serve as a regularizer. I tried a few ways and got confused by the following results.
class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(28 * 28, 10)
        self.relu1 = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu1(self.l1(x.view(x.size(0), -1)))
1. selective qconfig assignment and top level transform
model = LeNet()
model.l1.qconfig = torch.quantization.get_default_qat_qconfig()
torch.quantization.prepare_qat(model, inplace=True)
print(model)
(screenshot of the printed model omitted)
2. selective qconfig assignment and selective transform
model2 = LeNet()
model2.l1.qconfig = torch.quantization.get_default_qat_qconfig()
torch.quantization.prepare_qat(model2.l1, inplace=True)
print(model2)
(screenshot of the printed model omitted)
You can see that the second case doesn't have (weight_fake_quant): FakeQuantize. Is this the correct behavior? Shouldn't both yield the same transformed model?
Also, if there is a better way to do selective quantization (like different bits, quant vs no quant), please advise. |
st184320 | Solved by hx89 in post #4 |
st184321 | You can use the model.layer.qconfig = None syntax to turn off quantization for a layer and all of its children. Please feel free to see https://pytorch.org/docs/stable/quantization.html#model-preparation-for-quantization 1 for more context. |
st184322 | Right, so in both my examples, qconfig is set only for the 'l1' layer. But the question is: should I always call prepare_qat at the top level? Why does calling prepare_qat on a particular layer at a time yield a different result? Any explanation/insight would be appreciated |
st184323 | I think the reason is that prepare_qat() calls convert(), which doesn't convert the root module. So if you print the type of l1: in case 2, model2.l1 is the root module, thus it is not converted and still has type <class 'torch.nn.modules.linear.Linear'>, which doesn't have the weight_fake_quant attribute, while model.l1 in case 1 has type <class 'torch.nn.qat.modules.linear.Linear'>, which does. |
st184324 | Sounds like you're right: it doesn't convert the very module given to prepare_qat()
print(type(model2.l1))
<class 'torch.nn.modules.linear.Linear'> |
st184325 | Hello. I am trying to employ post-training quantization on a SqueezeNet 3D and am getting the following error:
RuntimeError: Could not run ‘aten::max_pool3d_with_indices’ with arguments from the ‘QuantizedCPU’ backend. ‘aten::max_pool3d_with_indices’ is only available for these backends: [CPU, CUDA, Named, Autograd, Profiler, Tracer].
I checked the list of supported operators (https://pytorch.org/docs/stable/quantization-support.html 2) and it seems that MaxPool3D is not supported. How can I circumvent this issue? |
st184326 | Solved by Vasiliy_Kuznetsov in post #2 |
st184327 | if an operator does not have a quantized kernel, you can run it in fp32. For example,
class M(torch.nn.Module):
    def __init__(...):
        ...
        self.dequant_1 = torch.quantization.DeQuantStub()
        self.quant_1 = torch.quantization.QuantStub()
        ...

    def forward(self, ...):
        ...
        # (computation in int8)
        ...
        # convert from int8 to fp32
        x_1 = self.dequant_1(x_0)
        # run in fp32
        x_2 = self.maxpool(x_1)
        # convert back to int8, if needed
        x_3 = self.quant_1(x_2)
        ... |
st184328 | Thank you very much for your reply. That actually answers my question, but hopefully, I can take the opportunity to ask you another one. I am using a pretrained SqueezeNet 3D and in order to be able to load saved models I need to keep the nn.Sequential modules (namely self.features and self.classifier). I tried doing something like this:
if self.quantize:
    # run Pooling operations and dropout in FP32
    x = self.quant(x)
    # x = self.features(x)
    for m in self.features:
        if isinstance(m, nn.MaxPool3d):
            x = self.dequant(x)
            x = m(x)
            x = self.quant(x)
        else:
            x = m(x)
    for c in self.classifier:
        if isinstance(c, nn.AvgPool3d) or isinstance(c, nn.Dropout):
            x = self.dequant(x)
            x = c(x)
            x = self.quant(x)
        else:
            x = c(x)
    x = x.view(x.size(0), -1)
    x = self.dequant(x)
    return x
else:
    x = self.features(x)
    x = self.classifier(x)
    return x.view(x.size(0), -1)
This gives the following error:
RuntimeError:
Cannot re-assign ‘c’ because it has type value of type ‘torch.torch.nn.modules.pooling.AvgPool3d’ and c is not a first-class value. Only reassignments to first-class values are allowed:
File “/home/ctm/afonso/easyride/acceleration/src/models/squeezenet.py”, line 152
for c in self.classifier:
if isinstance(c, nn.AvgPool3d) or isinstance(c, nn.Dropout):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
x = self.dequant(x)
x = c(x)
Although I don't understand why, I actually managed to fix the problem by using the following code extract instead of the classifier loop:
x = self.dequant(x)
x = self.classifier[0](x)
x = self.quant(x)
x = self.classifier[1](x)
x = self.classifier[2](x)
x = self.dequant(x)
x = self.classifier[-1](x)
I still do not fully understand how the TorchScript inspector works. I would appreciate if you could give me some insights on this behaviour. |
st184329 | Hello, I am trying to use the packages of quantization that PyTorch provides to quantize a MobileNet 3D. I followed the steps recommended in https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 1. Everything goes fine until I try to compile the model as a TorchScript with torch.jit.script for saving the model. The "Replacing addition with nn.quantized.FloatFunctional" step yields the following error:
Traceback (most recent call last):
File “main.py”, line 155, in
main()
File “main.py”, line 146, in main
torch.jit.save(torch.jit.script(fp_model), (opt.model_path / ‘quantized’ / ‘quant_mobilenet3d.pth’).as_posix())
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/init.py”, line 1516, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 318, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 372, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/init.py”, line 1900, in _construct
init_fn(script_module)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 353, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 372, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/init.py”, line 1900, in _construct
init_fn(script_module)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 353, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 376, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 292, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/init.py”, line 1359, in _recursive_compile_class
_compile_and_register_class(obj, rcb, _qual_name)
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/jit/init.py”, line 1363, in _compile_and_register_class
_jit_script_class_compile(qualified_name, ast, rcb)
RuntimeError:
undefined value super:
File “/home/ctm/.conda/envs/3dcnn/lib/python3.7/site-packages/torch/nn/quantized/modules/functional_modules.py”, line 35
def __init__(self):
super(FloatFunctional, self).__init__()
~~~~~ <— HERE
self.activation_post_process = torch.nn.Identity()
‘FloatFunctional.__init__’ is being compiled since it was called from ‘torch.torch.nn.quantized.modules.functional_modules.FloatFunctional’
File “/home/ctm/afonso/easyride/acceleration/src/models/mobilenetv2.py”, line 69
if self.use_res_connect:
if self.quantize:
return nn.quantized.FloatFunctional().add(x, self.conv(x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
else:
return x + self.conv(x)
‘torch.torch.nn.quantized.modules.functional_modules.FloatFunctional’ is being compiled since it was called from ‘InvertedResidual.forward’
File “/home/ctm/afonso/easyride/acceleration/src/models/mobilenetv2.py”, line 69
if self.use_res_connect:
if self.quantize:
return nn.quantized.FloatFunctional().add(x, self.conv(x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
else:
return x + self.conv(x)
Not using the FloatFunctional function and keeping the addition as is enables the model to be saved and loaded correctly, but will later give errors on the inference step because of the operation not being supported by the QuantizedCPU backend. Similarly, using the addition from QFunctional also gives backend-based errors. |
st184330 | Solved by AfonsoSalgadoSousa in post #2 |
st184331 | I found the answer in another thread, but cannot find it again. Basically, the nn.quantized.FloatFunctional() instance should be created in the __init__ function so that it is visible to the TorchScript inspector.
This will not work:
def forward(self, x):
    return nn.quantized.FloatFunctional().add(x, self.conv(x))
Yet something like this will:
def __init__(self):
    self.q_add = nn.quantized.FloatFunctional()
    ...

def forward(self, x):
    return self.q_add.add(x, self.conv(x)) |
st184332 | I'm trying to quantize my model using torch.quantization.quantize_dynamic(model, {nn.ConvTranspose1d}, dtype=torch.qint8), but my model size doesn't decrease and neither does computation time.
However, if I add nn.Linear to the set of layers to quantize, it does seem to have an effect. Naturally this suggests that ConvTranspose1d doesn't support dynamic quantization.
My question is very simple: Which layers support dynamic quantization at the moment? Is a list present somewhere? I couldn’t find anything in the docs.
I’m running the model on i7 processor (Macbook pro), pytorch version 1.7.0 installed using pip
Thanks |
st184333 | Solved by Vasiliy_Kuznetsov in post #2 |
st184334 | Hi @ayush-1506, the list of supported dynamic quantization layers is here: https://github.com/pytorch/pytorch/blob/master/torch/quantization/quantization_mappings.py#L76 61 |
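A quick way to see which of those supported types actually get swapped is to print the model after quantize_dynamic; a small sketch:

import torch
import torch.nn as nn

m = nn.Sequential(nn.Linear(16, 16), nn.ConvTranspose1d(4, 4, 3))
qm = torch.quantization.quantize_dynamic(m, {nn.Linear, nn.ConvTranspose1d}, dtype=torch.qint8)
print(qm)
# the Linear is replaced by a dynamically quantized Linear module, while the
# ConvTranspose1d is left untouched because it has no dynamic quantized kernel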
st184335 | Thanks for sharing this, @Vasiliy_Kuznetsov ! Are there plans for supporting other layers ? |
st184336 | I’m not aware of plans to add more layers to dynamic quantization specifically. What would be the use case? |
st184337 | The currently supported layers look like a very small subset of all possible out-of-the-box layers. Suppose I'm interested in quantizing a ConvNet's Conv2d/Conv1d layers. This won't work at the moment, right? |
st184338 | @ayush-1506, you can check out static quantization or QAT which support convolutions. |
st184339 | Thanks, will check it out. I’ve statically-quantized a model and stored the state dict locally, but loading it back gives me this error :
RuntimeError: Could not run 'quantized::conv_transpose1d_prepack' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'quantized::conv_transpose1d_prepack' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
Any idea what this is about ? Thanks again |
st184340 | this error message means that you are trying to give a fp32 tensor to a quantized layer. The way to fix it is to use QuantStub and DeQuantStub to control the conversions, feel free to check out the examples on https://pytorch.org/docs/stable/quantization.html 21 . |
st184341 | Hello all,
This is a followup question to this one 3.
I tried to quantize a model of mine using the eager mode post-training quantization. The quantization process seemed to complete just fine as the model stats show significant changes (the model size shrunk from 22 to 5MB and performance-wise, it became 3x faster).
However, when trying to save the model with
torch.jit.save(model, save_path)
I encounter the following error :
/root/anaconda3/envs/shishosama/lib/python3.7/site-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
reduce_range will be deprecated in a future release of PyTorch."
Quantized Model: took 310.891 ms [min/max: 310.9/310.9] ms
Size (MB): 5.798671
Traceback (most recent call last):
File "/mnt/internet/hasanpour/embeder_moder_training/simpnet_quantizer.py", line 242, in <module>
torch.jit.save(model, save_path)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/jit/_serialization.py", line 81, in save
m.save(f, _extra_files=_extra_files)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'simpnet_imgnet_drpall_q' object has no attribute 'save'
Saving the model using torch.save, on the other hand, like this:
torch.save(model.state_dict(), save_path)
works just fine and the model gets saved. This is how I'm doing the quantization:
def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p")/1e6)
    os.remove('temp.p')

def calibrate(model, data_loader):
    model.eval()
    with torch.no_grad():
        for image, target in data_loader:
            model(image)

train_dataset = ArcDataset(1000)
dtloader = torch.utils.data.DataLoader(train_dataset, batch_size=256, shuffle=True, num_workers=4)
dummy_input = torch.randn(size=(1, 3, 112, 112))

checkpoint_path = 'test_checkpoint.tar'
save_path = 'test_checkpoint_q.tar'

model = simpnet(512, scale=1.0, network_idx=0, mode=1, simpnet_name="simpnet5mq")
checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'], strict=True)
print('\n \n', model)
model.eval()

with Benchmark_Block("Default Model: ") as blk:
    for i in range(100):
        _ = model(dummy_input)
print_size_of_model(model)

print(f'\n\n-------------------------------\n\n')

model.fuse_model()
print(f'quantized model: {model}')
model.eval()

# Specify quantization configuration
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
calibrate(model, dtloader)
# Convert to quantized model
torch.quantization.convert(model, inplace=True)

with Benchmark_Block("Quantized Model: ") as blk:
    for i in range(100):
        _ = model(dummy_input)
print_size_of_model(model)

model = model.cpu()
# also doing a forward pass fails as well
lfw_acc, threshold = lfw_test(model)

torch.save(model.state_dict(), save_path)  # saves successfully
torch.jit.save(model, save_path)  # fails with the error message posted above
As also noted in the comments in the code, doing a forward pass fails as well, with the same error I get when doing this in graph mode, that is:
Evaluating data/angles.txt...
0%| | 0/6000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/mnt/internet/hasanpour/embeder_moder_training/simpnet_quantizer.py", line 241, in <module>
lfw_acc, threshold = lfw_test(model)
File "/mnt/internet/hasanpour/embeder_moder_training/lfw_eval.py", line 350, in lfw_test
evaluate(model)
File "/mnt/internet/hasanpour/embeder_moder_training/lfw_eval.py", line 111, in evaluate
output = model(imgs)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/internet/hasanpour/embeder_moder_training/models_new.py", line 769, in forward
out = self.features(x)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/seyyedhossein/lib/python3.7/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 70, in forward
input, self._packed_params, self.scale, self.zero_point)
RuntimeError: Could not run 'quantized::conv2d_relu.new' with arguments from the 'QuantizedCUDA' backend. 'quantized::conv2d_relu.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
What am I missing here?
Any help is greatly appreciated. |
st184342 | Solved by Shisho_Sama in post #2 |
st184343 | OK, I guess I found my mistake!
Silly me hadn't traced the model beforehand; that's why jit.save didn't work! |
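For completeness, a sketch of the missing step (paths and input shape are placeholders):

import torch

model.eval()
example_input = torch.randn(1, 3, 112, 112)
traced = torch.jit.trace(model, example_input)   # produces a ScriptModule
torch.jit.save(traced, save_path)                # jit.save expects a ScriptModule

reloaded = torch.jit.load(save_path)
_ = reloaded(example_input)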
st184344 | Hello everyone, hope you are having a great time.
I tried both quantization approaches and noticed that graph mode post-training static quantization does not work properly, whereas manual (eager mode) static quantization results in nearly 5x model size reduction and nearly 3x runtime speedup.
I'm using PyTorch 1.6, and this is how I'm doing it:
import os
import pickle
import numpy as np
import torch
from torch.quantization import per_channel_dynamic_qconfig
from torch.quantization import quantize_dynamic_jit
from torch.quantization import get_default_qconfig
from torch.quantization import quantize_jit
from dataset import ArcDataset
from utils import benchmark, Benchmark_Block

def calibrate(model, data_loader):
    model.eval()
    with torch.no_grad():
        for image, target in data_loader:
            model(image)

def quantize_post_training(jit_model_path, path_to_save, data_loader_test, dummy_input=torch.randn(size=(1, 3, 112, 112))):
    jit_model = torch.jit.load(jit_model_path)
    qconfig = get_default_qconfig('fbgemm')
    qconfig_dict = {'': qconfig}
    quantized_model = quantize_jit(jit_model,
                                   qconfig_dict,
                                   calibrate,
                                   [data_loader_test],
                                   inplace=False,
                                   debug=False)
    torch.jit.save(quantized_model, path_to_save)
    print(f'Quantization is done!')

    with Benchmark_Block("Default Model: ") as blk:
        for i in range(100):
            _ = jit_model(dummy_input)

    with Benchmark_Block("Quantized Model: ") as blk:
        for i in range(100):
            _ = quantized_model(dummy_input)

    print(f'default model size: {os.path.getsize(jit_model_path)/1e6} MB')
    print(f'quantized model size: {os.path.getsize(path_to_save)/1e6} MB')

def run_quantization():
    train_dataset = ArcDataset(sample_count=1000)
    dtloader = torch.utils.data.DataLoader(train_dataset, batch_size=256, shuffle=True, num_workers=4)
    model_path = "checkpoint_test.jit"
    model_save_path = "checkpoint_test_q.jit"
    quantize_post_training(model_path, model_save_path, dtloader)

run_quantization()
This finishes and the results are as follows :
torch version: 1.6.0
/root/anaconda3/envs/ShishoSama/lib/python3.7/site-packages/torch/nn/modules/module.py:385: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
if param.grad is not None:
Quantization is done!
Default Model: took 3216.330 ms [min/max: 3216.3/3216.3] ms
Quantized Model: took 3799.413 ms [min/max: 3799.4/3799.4] ms
default model size: 22.936544 MB
quantized model size: 22.119849 MB
As you can see, nearly nothing changed between the two models.
Now, running static quantization in eager mode results in these numbers:
Default Model: took 2975.428 ms [min/max: 2975.4/2975.4] ms
Size (MB): 22.853838
Quantized Model: took 373.182 ms [min/max: 373.2/373.2] ms
Size (MB): 5.798671
Based on the information provided here 1, we should be able to achieve the same, or very close to the same, result as in eager mode, so I wonder what it is that I'm doing wrong here.
Any help is greatly appreciated.
Extra notes:
The base model was trained in PyTorch 1.5.1, and the quantization process (both graph mode and eager mode) is being done using PyTorch 1.6. |
st184345 | Solved by Shisho_Sama in post #2 |
st184346 | OK, it seems like a bug in 1.6 only as upgrading to 1.7 fixed this issue!
here are the results in 1.7:
Graph mode static quantization :
Quantization is done!
Default Model: took 895.521 ms [min/max: 895.5/895.5] ms
Quantized Model: took 337.002 ms [min/max: 337.0/337.0] ms
default model size: 22.936544 MB
quantized model size: 5.780453 MB
Eager mode static quantization :
Default Model: took 1211.988 ms [min/max: 1212.0/1212.0] ms
Size (MB): 22.853838
Quantized Model: took 336.531 ms [min/max: 336.5/336.5] ms
Size (MB): 5.798671
Although the timing for eager mode static quantization is a bit off, I guess this is normal |
st184347 | Greetings. I have gone through two quantization attempts for resnet50 that comes with pytorch and had mixed results:
dynamic quantization works but is limited to the only Linear layer used in ResNet, thus the resulting improvements in model size and inference latency are just a few percent.
static quantization nominally succeeds, but at runtime the new model throws the exception described in Supported quantized tensor operations 9, which I presume is caused by the “+” operation used to implement skip connections. It doesn’t seem feasible to exclude those as they repeat throughout the entire depth of the model. Am I correct in deducing then that the resnet implementation that ships with pytorch cannot be (correctly) statically quantized by the current API?
I understand that quantization support is marked experimental – I’d like to confirm that the limitations I am seeing are expected at this stage.
Thank you. |
st184348 | Solved by supriyar in post #4 |
st184349 | Incidentally, I can reproduce the issue with a tiny test model: adding a += step to forward() makes it non-quantizable.
(BTW, I am aware of torch.nn.quantized.FloatFunctional 4 – my use case prevents such intrusive model modifications) |
st184350 | At this point, eager mode quantization might require changes to the model in order to make it work. Here is an example of how resnet50 is quantized in pytorch - https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/resnet.py 53
Going forward we are planning on graph mode quantization where such invasive model changes won’t be required. |
st184351 | Thanks. I wanted to make sure I wasn’t missing anything obvious. The pre-quantized model works because of the changes including
def __init__(self, *args, **kwargs):
    ...
    self.add_relu = torch.nn.quantized.FloatFunctional()   # added for quantization
    ...

def forward(self, x):
    identity = x
    out = self.conv1(x)
    ...
    out = self.add_relu.add_relu(out, identity)             # added for quantization
    return out |
st184352 | Hi,I keep getting this error :
RuntimeError: Could not run ‘quantized::conv2d’ with arguments from the ‘CPUTensorId’ backend. ‘quantized::conv2d’ is only available for these backends: [QuantizedCPUTensorId].
Can someone help me with this?
Thanks! |
st184353 | hi @Bryan_Wang, we cannot commit to a timeline yet but we are hoping to release it as a prototype this year. You are welcome to check out the test cases demonstrating the current API in https://github.com/pytorch/pytorch/blob/master/test/quantization/test_quantize_fx.py 9, although it will be in flux for the near future and we don’t have documentation just yet. |
st184354 | Hi all, some layers are missing in the pretrained ResNet50 model. I don't see QuantStub() and DeQuantStub(). Also, (skip_add): FloatFunctional((activation_post_process): Identity()) is missing after every layer. I guess this is causing inference issues on the quantized model and keeps giving this error:
RuntimeError: Could not run ‘quantized::conv2d’ with arguments from the ‘CPUTensorId’ backend. ‘quantized::conv2d’ is only available for these backends: [QuantizedCPUTensorId].
Does anyone have any suggestions about this, please? |
st184355 | Hi @samhithaaaa, could you share the code/script that you are using to quantize the model?
If you are using fx based quantization you will likely not see QuantStub()/DeQuantStub() in the graph. cc @jerryzh168 |
st184356 | are you getting the model from https://github.com/pytorch/vision/tree/master/torchvision/models/quantization 47? |
st184357 | Hello,
I am trying to access the scale and zero_point of the weight inside a QuantizedLinearReLU module by using the methods scale and q_scale. Both attempts fail, as shown below:
model.fc1
Out[7]: QuantizedLinearReLU(in_features=4, out_features=4, scale=0.04960649833083153, zero_point=0, qscheme=torch.per_channel_affine)
model.fc1.weight().scale
Out[8]: AttributeError: 'Tensor' object has no attribute 'scale'
model.fc1.weight().q_scale()
Out[9]: RuntimeError: Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
The scale and zero_point that I want to access are zero_point=tensor([0, 0, 0, 0]), axis=0 and scale=tensor([0.0145, 0.0016, 0.0132, 0.0124], dtype=torch.float64) as below:
model.fc1.weight()
Out[9]:
tensor([[-0.1880, -1.3018, 0.8100, 1.8369],
[-0.0033, 0.1172, -0.0065, -0.2083],
[-0.9236, 0.3299, -1.6889, -1.3195],
[-0.2718, 0.4078, 1.0997, 1.5693]], size=(4, 4), dtype=torch.qint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.0145, 0.0016, 0.0132, 0.0124], dtype=torch.float64),
zero_point=tensor([0, 0, 0, 0]), axis=0)
Is there any way to access these parameters without manually copying them from the prompt? Thank you. |
st184358 | Solved by dskhudia in post #2 |
st184359 | For per_channel_affine quant scheme, please use q_per_channel_scales() and q_per_channel_zero_points() |
st184360 | dskhudia:
For per_channel_affine quant scheme, please use q_per_channel_scales() and q_per_channel_zero_points()
It works perfectly, thank you @dskhudia. |
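For reference, a minimal sketch of the calls mentioned above, using the fc1 weight from the earlier post:

w = model.fc1.weight()                    # per-channel quantized tensor
print(w.q_per_channel_scales())           # one scale per output channel
print(w.q_per_channel_zero_points())      # one zero point per output channel
print(w.q_per_channel_axis())             # the channel axis (0 here)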
st184361 | I tried quantizing a model using both static and dynamic quantization. Both schemes quantized the weights of the layers but did not quantize the biases. Is there a reason why, and how can I quantize the biases?
Implementation is similar to this 3 |
st184362 | Solved by dskhudia in post #6 |
st184363 | Biases are not quantized and are kept in fp32. For convs and Linears, the bias is dynamically quantized before the addition while running the conv/Linear. |
st184364 | Hi, Thanks for your reply.
If you’re saying that biases are not quantized, what do you then mean by biases are dynamically quantized for linears? |
st184365 | When the linear is run, it converts biases to int32 before adding to matmul result. |
st184366 | if quantized, biases are usually quantized with a scale = activation_scale * weight_scale so that quantized bias can directly be added to matmul output in quantized domain. In pytorch eager mode (due to dynamic nature of pytorch graph), knowing activation scale statically is impossible. |
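As a numeric illustration of the scale relationship described above (example values, not taken from a real model):

import torch

activation_scale = 0.02
weight_scale = 0.005
bias_fp32 = torch.tensor([0.1, -0.03])

bias_scale = activation_scale * weight_scale                      # scale used for the bias
bias_int32 = torch.round(bias_fp32 / bias_scale).to(torch.int32)
# bias_int32 can be added directly to the int32 matmul accumulator
print(bias_int32)   # tensor([1000, -300], dtype=torch.int32)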
st184367 | I’d like to quantize my model weights to 16 bits for speed/memory savings in deployment. The torch.cuda.AMP package – which appears to be the strong recommendation for training acceleration – returns model weights as 32 bit floats which appear to require a full 32 bits of precision to represent in model saving and loading (additionally, casting them to float16s for inference leads to performance loss). The built-in torch.quantization tools appear to be limited to int8 outputs. Simply calling ‘.half()’ on my model/inputs and attempting to train as normal has the stability issues you might expect. While the documentation is a bit ambiguous about being able to explicitly set the types of particular layers when using AMP, trying to set specific Linear layers to float16 results in the same GradScaler errors as reported elsewhere.
Is there a recommended best practice for training a model ultimately destined for export in 16-bit floating point format? |
st184368 | I see decreasing validation accuracy in this tutorial: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training 2
In the article, validation accuracy decreases from
Epoch 0: Evaluation accuracy on 300 images, 77.67
to
Epoch 7: Evaluation accuracy on 300 images, 75.67 |
st184369 | Solved by Vasiliy_Kuznetsov in post #3 |
st184370 | hi @icyhearts, this is expected for the tutorial as written, as it is using a toy dataset for speed (a tiny subsample of ImageNet, without enough images in each class). You would need to run it on a real dataset to get better numerics on the validation set. |
st184371 | Hello!
I want to deploy a PyTorch model trained with quantization-aware training, and then convert the model to other inference deep learning frameworks (such as MACE) to deploy on mobile devices.
I am using a model based on https://github.com/pytorch/vision/blob/master/torchvision/models/mobilenet.py
My modifications are:
1: change bias=False to bias=True
2: change out_features of the classifier to 20, because I have only 20 classes.
I use torch.jit to trace my model and save it.
The traced model is available in:
github.com
icyhearts/mobilenetv2_bias/blob/master/33.pt
I find the graph created by jit is not optimized:
[screenshot of the traced graph]
The constants are not folded, so there are add, sqrt, div and mul ops before the conv layer; the bias of the conv layer is added twice; and the BN parameters are not folded into the Conv layer.
Is there any suggestion for optimizing a jit model to perform these folds?
I want to deploy the model on a mobile device, so these folds would be useful.
Many thanks. |
st184372 | Did you quantize the model before transferring? See here https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training 2 for the steps. |
st184373 | From PyTorch official quantization tutorial:
pytorch.org
(beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials... 3
In the following setting, we used torch.per_tensor_symmetric. However, the zero point for QuantizedConv2d is 63 instead of 0. At first, I thought it was due to the kernel size being 1 x 1, since we could not use a scale factor of 0 and a zero point of 0 for symmetric quantization.
QConfig(activation=functools.partial(<class 'torch.quantization.observer.MinMaxObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.MinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
Post Training Quantization Prepare: Inserting Observers
Inverted Residual Block:After observer insertion
Sequential(
(0): ConvBNReLU(
(0): ConvReLU2d(
(0): Conv2d(
32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32
(activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
)
(1): ReLU(
(activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
)
)
(1): Identity()
(2): Identity()
)
(1): Conv2d(
32, 16, kernel_size=(1, 1), stride=(1, 1)
(activation_post_process): MinMaxObserver(min_val=inf, max_val=-inf)
)
(2): Identity()
)
..........Post Training Quantization: Calibration done
Post Training Quantization: Convert done
Inverted Residual Block: After fusion and quantization, note fused modules:
Sequential(
(0): ConvBNReLU(
(0): QuantizedConvReLU2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.1516050398349762, zero_point=0, padding=(1, 1), groups=32)
(1): Identity()
(2): Identity()
)
(1): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.17719413340091705, zero_point=63)
(2): Identity()
)
Size of model after quantization
Size (MB): 3.631847
..........Evaluation accuracy on 300 images, 66.67
Later, in some of my own experiments, I found that for some QuantizedConv2d layers whose kernel size is 3 x 3, there can be non-zero zero points under symmetric quantization as well. How should the zero point be understood in this context? Thank you.
st184374 | Currently static quantization doesn’t seem to be supported for recurrent networks (RNNs/LSTMs/GRUs). Only dynamic quantization seems to be supported. Can offline calibration for activations for any model with recurrent networks be done using native PyTorch quantization primitives to mimic static quantization? |
st184375 | I have a quantization script (link to full code here) 2. Quoting from the code, here are the relevant model layers:
class UNet(nn.Module):
def __init__(self):
super().__init__()
self.quant_1 = torch.quantization.QuantStub()
self.conv_1_1 = nn.Conv2d(3, 64, 3)
torch.nn.init.kaiming_normal_(self.conv_1_1.weight)
self.relu_1_2 = nn.ReLU()
self.norm_1_3 = nn.BatchNorm2d(64)
self.dequant_1 = torch.quantization.DeQuantStub()
self.conv_1_4 = nn.Conv2d(64, 64, 3)
torch.nn.init.kaiming_normal_(self.conv_1_4.weight)
# continued...
def forward(self, x):
x = self.quant_1(x)
x = self.conv_1_1(x)
x = self.relu_1_2(x)
x = self.norm_1_3(x)
x = self.dequant_1(x)
x = self.conv_1_4(x)
x = self.relu_1_5(x)
# continued...
I am attempting to quantize the model, then evaluate its performance on a dataset of interest:
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
checkpoints_dir = '/spell/checkpoints'
model.load_state_dict(
torch.load(f"{checkpoints_dir}/model_50.pth", map_location=torch.device('cpu'))
)
model.eval()
# NEW
model = torch.quantization.prepare(model)
print(f"Quantizing the model...")
start_time = time.time()
for i, (batch, segmap) in enumerate(dataloader):
# batch = batch.cuda()
# segmap = segmap.cuda()
model(batch)
model = torch.quantization.convert(model)
print(f"Quantization done in {str(time.time() - start_time)} seconds.")
print(f"Evaluating the model...")
start_time = time.time()
for i, (batch, segmap) in enumerate(dataloader):
# batch = batch.cuda()
# segmap = segmap.cuda()
model(batch)
print(f"Evaluation done in {str(time.time() - start_time)} seconds.")
The code, as written, fails with the following log output:
Loading the model...
Quantizing the model...
Quantization done in 84.06090354919434 seconds.
Evaluating the model...
Traceback (most recent call last):
File "/spell/servers/eval_quantized.py", line 277, in <module>
main()
File "/spell/servers/eval_quantized.py", line 269, in main
model(batch)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/spell/servers/eval_quantized.py", line 178, in forward
x = self.conv_1_4(x)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/conv.py", line 215, in forward
self.dilation, self.groups, self.scale, self.zero_point)
RuntimeError: Could not run 'quantized::conv2d' with arguments from the 'CPUTensorId' backend. 'quantized::conv2d' is only available for these backends: [QuantizedCPUTensorId].
This implies that x = self.conv_1_4(x) is wrong, because conv_1_4 is quantized. But I don’t understand why conv_1_4 is quantized in the first place.
(apologies for posting an incomplete question–hit the wrong button) |
st184376 | Solved by Vasiliy_Kuznetsov in post #2
If the intent is to dequantize and want to do conv_1_4 in fp32, then the problem is that by default, the quantization APIs quantize all convolutions in the model. The workaround would be to disable quantization for conv_1_4, like this:
model.qconfig = ...
# disable quant for a specific layer
mode… |
st184377 | ResidentMario:
x = self.dequant_1(x)
x = self.conv_1_4(x)
If the intent is to dequantize and want to do conv_1_4 in fp32, then the problem is that by default, the quantization APIs quantize all convolutions in the model. The workaround would be to disable quantization for conv_1_4, like this:
model.qconfig = ...
# disable quant for a specific layer
model.conv_1_4.qconfig = None
# continue with the quantization APIs, conv_1_4 will not be quantized |
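In the context of the script above, the whole flow would then look roughly like this (a sketch):
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model.conv_1_4.qconfig = None                      # conv_1_4 stays in fp32
model = torch.quantization.prepare(model)
for i, (batch, segmap) in enumerate(dataloader):   # calibration pass
    model(batch)
model = torch.quantization.convert(model)          # conv_1_4 remains a float nn.Conv2d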
st184378 | by default, the quantization APIs quantize all convolutions in the model
Ah! This was not obvious from reading the documentation; the Quantization quickstart 2 does not mention this behavior, nor, as best I can tell, does the Static quantization tutorial.
May I suggest making a docs task for this? Sorry, I know this is my second one in as many days. |
st184379 | Hi!
I have some model:
class SomelLayer(nn.Module):
# some code here
def forward(self, x):
dim = 3
mean_x = x.mean(dim)
mean_x2 = (x * x).mean(dim)
std_x = torch.nn.functional.relu(mean_x2 - mean_x*mean_x).sqrt()
#some code
I want to use static quantization.
class SomelLayer(nn.Module):
def __init__(self, mode):
super(SomelLayer, self).__init__()
self.quant = QuantStub()
self.dequant = DeQuantStub()
self.f_mul = torch.nn.quantized.FloatFunctional()
self.f_add = torch.nn.quantized.FloatFunctional()
self.f_add_relu = torch.nn.quantized.FloatFunctional()
self.f_mul_scalar = torch.nn.quantized.FloatFunctional()
def forward(self, x):
dim = 3
x = self.quant(x)
mean_x = x.mean(dim)
mean_x2 = self.f_mul.mul(x, x).mean(dim)
std_x = self.f_add_relu.add_relu(mean_x2, self.f_mul_scalar.mul_scalar(self.f_mul.mul(mean_x, mean_x), -1.0))
std_x = std_x.sqrt()
# some code
x = self.dequant(x)
return x
But I’m getting error during inference:
Traceback of TorchScript, original code (most recent call last):
File ".../model.py", line 235, in forward
std_x = self.f_add_relu.add_relu(mean_x2,
self.f_mul_scalar.mul_scalar(self.f_mul.mul(mean_x, mean_x), -1.0))
std_x = std_x.sqrt()
~~~~~~~~~~ <--- HERE
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Do I have to implement the sqrt operator myself to solve this problem? As here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/README.md 2
I have similar problems with other operators. |
st184380 | Solution for this problem:
self.quant0 = QuantStub()
self.dequant0 = DeQuantStub()
std_x = self.dequant0(std_x)
std_x = std_x.sqrt()
std_x = self.quant0(std_x) |
st184381 | Hi @pizuzadan, yes, a quantized kernel for sqrt is not implemented at the moment. Doing it in fp32 is the workaround. |
st184382 | The Quantization Release Blog Post and various demos in the docs make heavy use of the torch.quantization.get_default_qconfig method for configuring quantization algorithm behavior. Oddly, the torch.quantization API Reference 4 makes no reference to this function.
This is unfortunate, because it means there isn’t anywhere to go to get a list of valid parameters for this method. If others agree, and this isn’t intentional, I’d be happy to file a ticket for this in the GH tracker.
st184383 | Solved by Vasiliy_Kuznetsov in post #2
Hi @ResidentMario, that’s a good point. You can check out the definition of the function here: https://github.com/pytorch/pytorch/blob/20ac7362009dd8e0aca6e72fc9357773136a83b8/torch/quantization/qconfig.py#L81. Filed https://github.com/pytorch/pytorch/issues/48106 to track adding it to the docs. |
st184384 | Hi @ResidentMario, that’s a good point. You can check out the definition of the function here: https://github.com/pytorch/pytorch/blob/20ac7362009dd8e0aca6e72fc9357773136a83b8/torch/quantization/qconfig.py#L81 2. Filed https://github.com/pytorch/pytorch/issues/48106 3 to track adding it to the docs. |
st184385 | I want to get the int value of each weight in the quantized model, using int_repr(), but I get a RuntimeError
This is the code:
for weight in myModel.state_dict():
    if "weight" in weight:
        m = myModel.state_dict()[weight].int_repr()
        print(m)
RuntimeError: Could not run ‘aten::int_repr’ with arguments from the ‘CPU’ backend. ‘aten::int_repr’ is only available for these backends: [QuantizedCPU, Autograd, Profiler, Tracer]. |
st184386 | I want to get the output of each layer of the network weight to verify that the quantized result is equal to the formula and, when the network model is large, how do I call int_repr() to automatically output the int values of all the weight parameters |
st184387 | Dear all,
I have a Pruned Quantized MobileNet v2 model,
and am now trying to simulate its inference from scratch.
Here is a brief sample of my code:
(not the actual code, but very similar)
(m is the layer)
# scan through the feature map xin (for one output channel _kout)
for _bgn_y in range(_xin_h):
    for _bgn_x in range(_xin_w):
        _all_channel = 0.0
        # scan through the kernel
        for _kin in range(_w_input_ch):
            _ftmp = 0.0
            for _x in range(_w_kernel):
                for _y in range(_w_kernel):
                    fx = torch.dequantize(xin).numpy()[args.img_in_batch][_kin][_bgn_y+_y][_bgn_x+_x]
                    fw = torch.dequantize(m.weight()).numpy()[_kout][_kin][_y][_x]
                    _ftmp += (fw * fx)
            _all_channel += _ftmp
        out[_kout][_bgn_y][_bgn_x] = (_all_channel + m.bias()[_kout]) / m.scale + m.zero_point
I assumed the results would be exactly the same as PyTorch’s quantized model.
However, they are 99.999975% the same:
out of the roughly 40 million feature map values, only 1 or 2 points are off by 1.
After some investigation, I found that the points that were off all had .5 fractional values before rounding,
and they are randomly distributed.
Ex:
My Calculation:
pyTorch’s Output:
72
It seems like it’s a rounding issue, so I have tried different rounding methods, but all in vain.
(I’m currently using banker’s rounding, the one Python 3 uses.)
Any help would be appreciated!
Thanks
Best wishes,
James |
st184388 | Hello. I am struggling to assess the storage memory footprint of quantized models. I would like to compare the storage savings from using quantization. We can estimate this theoretically, but I would also like to save my model for deployment.
So, basically, my question is: how can I store a PyTorch quantized model (not quantized using the built-in quantization methods) encoded so as to minimize its size in accordance with the bit width used?
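One rough sketch (assuming a simple symmetric int8 scheme of your own; names and filenames below are placeholders): store the integer codes plus their per-tensor scales in a compressed archive instead of fp32 tensors, so the file size actually reflects the reduced bit width.
import numpy as np
import torch

payload = {}
for name, w in model.state_dict().items():
    if not torch.is_floating_point(w):
        continue
    scale = w.abs().max().item() / 127.0 or 1.0       # per-tensor scale (avoid 0)
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    payload[name] = q.numpy()
    payload[name + "_scale"] = np.float32(scale)
np.savez_compressed("model_int8.npz", **payload)
For bit widths below 8 you would additionally pack several codes per byte (e.g. with np.packbits or manual bit shifting) before saving.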
st184389 | Hello. I am trying to run the work of DSQ (paper: https://arxiv.org/pdf/1908.05033.pdf 1, code: https://github.com/ricky40403/DSQ 1) and faced a strange issue. I get “torch.nn.modules.module.ModuleAttributeError: ‘DSQConv’ object has no attribute ‘running_lw’”. Almost every attribute of the DSQConv class seems to be unrecognized inside forward().
Also, can someone tell me how I should store an encoded version of the models to analyse the storage size difference between different bit widths?
st184390 | Hello, everyone.
I have a question on how to use prepare_qat properly.
I’m interested in quantizing a subset of a model and would like to apply prepare_qat selectively. However, a simple test (below) reveals that such a selective application does not lead to the same quant prep as applying it at the top level.
class LeNet(nn.Module):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(28 * 28, 10)
self.relu1 = nn.ReLU(inplace=True)
def forward(self, x):
return self.relu1(self.l1(x.view(x.size(0), -1)))
model_a = LeNet()
model_b = LeNet()
qconfig = torch.quantization.get_default_qat_qconfig()
Now, when comparing two ways of apply prepare_qat
top-level prepare_qat
model_a.qconfig = qconfig
torch.quantization.prepare_qat(model_a, inplace=True)
print(model_a)
layer-wise prepare_qat
model_b.l1.qconfig = qconfig
torch.quantization.prepare_qat(model_b.l1, inplace=True)
model_b.relu1.qconfig = qconfig
torch.quantization.prepare_qat(model_b.relu1, inplace=True)
print(model_b)
You can see below that print(model_a) and print(model_b) yield different results; notably, weight_fake_quant is missing in model_b.l1. Can someone explain why these two behave differently, and what the right way to do selective quantization is?
[screenshot comparing the printed model_a and model_b]
st184391 | Hello. I have a question about convert in torch.quantization.
For a model like this,
(module): LeNet(
(l1): Linear(in_features=784, out_features=10, bias=True)
(relu1): ReLU(inplace=True)
)
After QAT and convert, I got
(module): LeNet(
(l1): QuantizedLinear(in_features=784, out_features=10, scale=0.5196203589439392, zero_point=78, qscheme=torch.per_channel_affine)
(relu1): QuantizedReLU(inplace=True)
)
But I’m looking for a way to do an evaluation on CUDA, and in that sense I need to convert it back to the pre-QAT model, yet with ‘quantized FP32’ weights and perhaps a custom forward_hook to perform activation quantization. Can someone advise the best way to achieve this? In my understanding, these are the steps, but I’d like to ensure I don’t reinvent the wheel here.
write a new converter to get the pre-QAT model architecture and load quantized weight (but, in FP32).
add forward_prehook that does quantization per scale/zero_point from activation_post_process
(should it be forward_prehook or forward_posthook??)
Any suggestions would be appreciated! |
st184392 | Hi @thyeros, you can use the QAT model after prepare and before convert to evaluate in fp32 emulating int8. It will model the quantized numerics in fp32 with the FakeQuantize modules, and it works on CUDA. Here is an example from torchvision: https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py#L134 6 |
st184393 | oh, I see. Just using the QAT model in eval will run just find on CUDA, with all int8 emulation effects. So, there is literally nothing special to do. Is that right?
In terms of eval speed on CUDA, then would it be still a good idea to drop observers? or just disabling them would be sufficient?
Thanks! |
st184394 | In terms of eval speed on CUDA, would it still be a good idea to drop the observers, or would just disabling them be sufficient?
You can use model.apply(torch.quantization.disable_observer) and model.apply(torch.quantization.enable_observer) to toggle them (example: https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py#L128 6). |
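As a sketch of how this is typically wired into a QAT loop (train_one_epoch, evaluate, and the epoch threshold are placeholders for your own code):
for epoch in range(num_epochs):
    train_one_epoch(model, data_loader)                   # placeholder training step
    if epoch >= freeze_observers_after:
        # stop updating scale/zero_point estimates; keep fake-quantizing with frozen values
        model.apply(torch.quantization.disable_observer)
    evaluate(model, data_loader_eval)                     # placeholder eval step (works on CUDA)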
st184395 | What is the pseudo code for a quantized conv1d ? I haven’t been able to work out the requantization and bias portion.
At the bottom is a conv1d model with 1x1 filter and [1] input, if that helps.
In the forums, I’ve seen the requantization scale param is defined as
requant_scale = input_scale * weight_scale / output_scale
Even with all zero_points=0, the following equation is close but not exactly correct
output = (input * weight * requant_scale + bias_quant) * model.conv.scale
Simple Model:
M(
(quant): Quantize(scale=tensor([0.0160]), zero_point=tensor([127]), dtype=torch.quint8)
(conv): QuantizedConv1d(1, 1, kernel_size=(1, 1), stride=(1, 1), scale=0.0021364481654018164, zero_point=0)
(dequant): DeQuantize()
)
input=tensor([[[-2.0260]]])
input_quant=tensor([[[0]]], dtype=torch.uint8)
weight_quant=tensor([[[-128]]], dtype=torch.int8)
weight=tensor([[[-0.6016]]], size=(1, 1, 1), dtype=torch.qint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.0047], dtype=torch.float64), zero_point=tensor([0]),
axis=0)
bias=tensor([-0.9427], requires_grad=True)
output=tensor([[[0.2756]]])
@supriyar @jianyuhuang |
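For reference, a hedged sketch of the integer pipeline that int8 backends generally follow for a single output value (variable names are placeholders; the backend’s exact rounding/clamping during requantization may differ slightly from Python’s round, which can matter when a value lands exactly on .5):
acc = 0                                                    # int32 accumulator
for k in range(kernel_size):
    acc += (int(x_q[k]) - x_zero_point) * (int(w_q[k]) - w_zero_point)
acc += round(bias_fp32 / (input_scale * weight_scale))     # bias quantized to the accumulator scale
out_q = round(acc * input_scale * weight_scale / output_scale) + output_zero_point
out_q = max(0, min(255, out_q))                            # clamp to the quint8 range
out_fp32 = (out_q - output_zero_point) * output_scale      # what dequant() would return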
st184396 | Hi all
Recently I have been trying to deploy my model to Android. I built libtorch for Android with NDK r19c. My model consists of Linear, ReLU and LayerNorm layers, and there are some cat ops in forward. I found that my model becomes slower when I quantize it, and the CPU usage also becomes high. Although my device is aarch64, I built the arm32 libtorch.so so that it stays compatible with arm32 devices. Why might my model become slower after quantization? Does the shape of the inputs and weights affect QNNPACK?
I think qnnpack should be efficient on arm, so I don’t know why this happens.
The following is my code:
class Conv1dSubsampling(torch.nn.Module):
    def __init__(self, idim, odim, groups=1, use_bias=False, do_quant=False, svd_dim=-1):
        """Construct a Conv1dSubsampling object."""
        super(Conv1dSubsampling, self).__init__()
        self.relu = torch.nn.ReLU()
        self.norm = torch.nn.LayerNorm(odim)
        self.svd_dim = svd_dim
        self.quant1 = QuantStub()
        self.quant2 = QuantStub()
        self.dequant1 = DeQuantStub()
        self.dequant2 = DeQuantStub()
        self.cnn_linear1 = torch.nn.Sequential(
            torch.nn.Linear(5 * idim, 256, bias=use_bias),
            torch.nn.ReLU(),)
        self.cnn_linear2 = torch.nn.Sequential(
            torch.nn.Linear(256 * 3, svd_dim, bias=use_bias),
            torch.nn.Linear(svd_dim, 256, bias=use_bias),
            torch.nn.ReLU())
        self.cnn_linear3 = torch.nn.Sequential(
            torch.nn.Linear(256, svd_dim, bias=use_bias),
            torch.nn.Linear(svd_dim, odim, bias=use_bias),
            torch.nn.ReLU())

    def forward(self, x):
        # slice and cat the input
        x0 = x[:-4, :]
        x1 = x[1:-3, :]
        x2 = x[2:-2, :]
        x3 = x[3:-1, :]
        x4 = x[4:, :]
        x = torch.cat((x0, x1, x2, x3, x4), dim=-1)
        x = x[::2, :]
        x = torch.cat((x, torch.zeros(16 - x.shape[0], x.shape[1])))
        x = self.quant1(x)
        x = self.cnn_linear1(x)  # (t//2, 256)
        x = self.dequant1(x)
        x0 = x[0:-2, :]
        x1 = x[1:-1, :]
        x2 = x[2:, :]
        x = torch.cat((x0, x1, x2), dim=-1)
        x = x[::2, :]
        # x = x[0:4, :]
        x = torch.cat((x, torch.zeros(8 - x.shape[0], x.shape[1])))
        x = self.quant2(x)
        x = self.cnn_linear2(x)  # (1, t//2, 256)
        x = self.cnn_linear3(x)
        x = self.norm(x)
        x = self.dequant2(x)
        return x[0:4, :]

model = Conv1dSubsampling(40, 400, do_quant=True)
model.eval()
if do_quant:
    qconfig = torch.quantization.get_default_qconfig('qnnpack')
    print(qconfig)
    model.qconfig = qconfig
    torch.backends.quantized.engine = 'qnnpack'
    torch.quantization.prepare(model, inplace=True)
    for i in range(1, 100):
        x = model(torch.randn(batch, idim))
    torch.quantization.convert(model, inplace=True)
traced_module = torch.jit.trace(model, (torch.randn(batch, idim)))
script_module = torch.jit.script(model)
script_module.save('model.pt')
Finally, I load it using C++ and run forward. |
st184397 | Hi @Sining_Sun, quantized LayerNorm currently has an efficient kernel in fbgemm (x86), but it does not have an efficient kernel in qnnpack (ARM). So, you are likely seeing the slow fallback path of the kernel on ARM.
A workaround for now could be to let LayerNorm stay in fp32. You can do this by setting the qconfig to None for the LayerNorm module, and moving the dequant to be before LayerNorm. |
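For the Conv1dSubsampling model above, that workaround would look roughly like this (a sketch):
# before prepare()/convert(): keep the LayerNorm in fp32
model.norm.qconfig = None

# and in forward(), dequantize before the LayerNorm instead of after it:
#     x = self.cnn_linear3(x)
#     x = self.dequant2(x)
#     x = self.norm(x)
#     return x[0:4, :]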
st184398 | @Vasiliy_Kuznetsov
Thanks. Apart from the LayerNorm problem, I found another problem. Even when my network is very simple, for example just one Linear layer without LayerNorm, the CPU usage is very high after quantization. More details can be found in this post:
Quantized model has higer cpu usage on Android Mobile
Hi all,
Recently, I have been deploying my model to Android. I found that my quantized model has very high CPU usage, even much higher than the fp32 float model. So I tried to run a model with only one Linear layer, like:
class Linear(torch.nn.Module):
    def __init__(self, idim, odim):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.linear = torch.nn.Sequential(torch.nn.Linear(idim, 60), torch.nn.Linear(60, odim), torch.nn.ReLU())
    def forward(self, …
This problem has confused me for a long time.
st184399 | Hi everyone,
I am building an image classification model.
I want to compress the trained model. I came across this tutorial by - pytorch 4
Is there a defined way or checklist for approaching pruning, e.g. how to decide which pruning method to choose, which modules to prune, and how much to prune?
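As a concrete starting point (not a full answer to the checklist question), the torch.nn.utils.prune API can be applied like this (a rough sketch; the 30% amount and the choice of Conv2d/Linear modules are arbitrary and meant to be tuned by pruning, fine-tuning, and measuring accuracy):
import torch
import torch.nn.utils.prune as prune

for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)   # zero 30% of weights by |w|

# once accuracy after fine-tuning is acceptable, make the pruning permanent
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.remove(module, "weight")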