st183800 | I will take a look at it later this week. Can you elaborate on what you mean by “Its output is very bad; it doesn’t decrease in accuracy but is totally wrong”? |
st183801 | I figured that much, but can you give a little more info about what you mean by “output is very bad”? Is it slow? Is the accuracy low? If so, what is the difference in accuracies? |
st183802 | I see – the reason I am asking is that we usually use accuracy as a metric. If you are using any proxy for performance – let me know. Otherwise, I will just assume that the accuracy degrades after quantization. Also, I have my own implementation of the unet, but it would probably be helpful if I could get yours – that way, I will be able to reproduce the issue locally |
st183803 | Here I uploaded all the files needed to reproduce the case.
First, run python trace_model.py to produce a simple TorchScript version of the weights.
Then test the results on an image with python demo.py.
After that, uncomment the following lines in trace_model.py:
# backend = "qnnpack"
# model.qconfig = torch.quantization.get_default_qconfig(backend)
# torch.backends.quantized.engine = backend
# model = torch.quantization.prepare(model, inplace=False)
# model = torch.quantization.convert(model, inplace=False)
and repeat steps 1 and 2 again.
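For reference, the uncommented lines correspond to the standard eager-mode post-training static quantization flow. A minimal sketch of that flow with a small placeholder model standing in for the UNet (note the calibration pass, which the flow normally includes before convert):
import torch
import torch.nn as nn

# Placeholder model standing in for the UNet loaded in trace_model.py
model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),
).eval()

backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model = torch.quantization.prepare(model, inplace=False)
model(torch.rand(1, 3, 64, 64))  # calibration pass on representative data
model = torch.quantization.convert(model, inplace=False)

traced = torch.jit.trace(model, torch.rand(1, 3, 64, 64))
traced.save("model_quantized.pt") |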
st183804 | Happy new year!
I quantized the UNet except for the last layer as follows, because I need full precision at the last layer.
class QuantizedUNet(nn.Module):
    def __init__(self, model_fp32):
        super(QuantizedUNet, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.until_last = copy.deepcopy(model_fp32)
        # Remove last layer from fp32 model and keep it in another variable
        del self.until_last.conv2[2]
        self.last_conv = model_fp32.conv2[2]

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.until_last(x)
        x = self.dequant(x)
        x = self.last_conv(x)
        return x
After static quantization and calibration, this is the model that I got.
QuantizedUNet(
(quant): QuantStub()
(dequant): DeQuantStub()
(until_last): Unet(
(down_sample_layers): ModuleList(
(0): Sequential(
(0): QuantizedConv2d(1, 8, kernel_size=(3, 3), stride=(1, 1), scale=0.09462987631559372, zero_point=64, padding=(1, 1))
(1): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), scale=0.6255205273628235, zero_point=83, padding=(1, 1))
(4): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(1): Sequential(
(0): QuantizedConv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), scale=1.403043270111084, zero_point=87, padding=(1, 1))
(1): QuantizedBNReLU2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), scale=2.315826654434204, zero_point=60, padding=(1, 1))
(4): QuantizedBNReLU2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(2): Sequential(
(0): QuantizedConv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), scale=5.481112957000732, zero_point=56, padding=(1, 1))
(1): QuantizedBNReLU2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=12.060239791870117, zero_point=77, padding=(1, 1))
(4): QuantizedBNReLU2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(3): Sequential(
(0): QuantizedConv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), scale=16.808162689208984, zero_point=69, padding=(1, 1))
(1): QuantizedBNReLU2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=27.680782318115234, zero_point=80, padding=(1, 1))
(4): QuantizedBNReLU2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
)
(conv): Sequential(
(0): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=39.90061950683594, zero_point=66, padding=(1, 1))
(1): QuantizedBNReLU2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=102.32366180419922, zero_point=65, padding=(1, 1))
(4): QuantizedBNReLU2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(up_sample_layers): ModuleList(
(0): Sequential(
(0): QuantizedConv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), scale=1064.0137939453125, zero_point=71, padding=(1, 1))
(1): QuantizedBNReLU2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1038.538330078125, zero_point=73, padding=(1, 1))
(4): QuantizedBNReLU2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(1): Sequential(
(0): QuantizedConv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), scale=3193.4365234375, zero_point=99, padding=(1, 1))
(1): QuantizedBNReLU2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), scale=1721.619873046875, zero_point=87, padding=(1, 1))
(4): QuantizedBNReLU2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(2): Sequential(
(0): QuantizedConv2d(32, 8, kernel_size=(3, 3), stride=(1, 1), scale=2268.27001953125, zero_point=71, padding=(1, 1))
(1): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), scale=856.855712890625, zero_point=71, padding=(1, 1))
(4): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
(3): Sequential(
(0): QuantizedConv2d(16, 8, kernel_size=(3, 3), stride=(1, 1), scale=493.1239318847656, zero_point=105, padding=(1, 1))
(1): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), scale=84.60382080078125, zero_point=26, padding=(1, 1))
(4): QuantizedBNReLU2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): Identity()
)
)
(conv2): Sequential(
(0): QuantizedConv2d(8, 4, kernel_size=(1, 1), stride=(1, 1), scale=15.952274322509766, zero_point=86)
(1): QuantizedConv2d(4, 1, kernel_size=(1, 1), stride=(1, 1), scale=9.816205978393555, zero_point=58)
)
)
(last_conv): Conv2d(1, 1, kernel_size=(1, 1), stride=(1, 1))
)
It looks like all layers except the last one were quantized properly, as I expected.
But when I run inference with this model as follows, it raises a runtime error in the quantized part of the model.
quantized_model.eval()
quantized_ouput = quantized_model(norm_input[0:1])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-127-34dec8af2a31> in <module>
1 quantized_model.eval()
----> 2 quantized_ouput = quantized_model(norm_input[0:1])
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-118-db9442ff2b9c> in forward(self, x)
14 # point to quantized in the quantized model
15 x = self.quant(x)
---> 16 x = self.until_last(x)
17 x = self.dequant(x)
18 x = self.last_conv(x)
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/mnt/hdd2/jinwoo/airs_project/sample_forward/sample_forward/utils/unet.py in forward(self, input)
38 # Apply down-sampling layers
39 for layer in self.down_sample_layers:
---> 40 output = layer(output)
41 stack.append(output)
42 output = F.max_pool2d(output, kernel_size=2)
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/quantized/modules/conv.py in forward(self, input)
329 if len(input.shape) != 4:
330 raise ValueError("Input shape must be `(N, C, H, W)`!")
--> 331 return ops.quantized.conv2d(
332 input, self._packed_params, self.scale, self.zero_point)
333
RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
I suspected that the non-quantized module could be the one raising this error, so I ran inference with only the quantized module.
#Except last unquantized layer
quantized_model.until_last.eval()
quantized_model.until_last(norm_input[0:1])
But it raises the same error.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-129-2ede05a56b57> in <module>
1 quantized_model.until_last.eval()
----> 2 quantized_model.until_last(norm_input[0:1])
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/mnt/hdd2/jinwoo/airs_project/sample_forward/sample_forward/utils/unet.py in forward(self, input)
38 # Apply down-sampling layers
39 for layer in self.down_sample_layers:
---> 40 output = layer(output)
41 stack.append(output)
42 output = F.max_pool2d(output, kernel_size=2)
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/.conda/envs/airs_project/lib/python3.9/site-packages/torch/nn/quantized/modules/conv.py in forward(self, input)
329 if len(input.shape) != 4:
330 raise ValueError("Input shape must be `(N, C, H, W)`!")
--> 331 return ops.quantized.conv2d(
332 input, self._packed_params, self.scale, self.zero_point)
333
RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
As far as I can tell, all layers in this module are quantized properly, too.
I searched Google and the PyTorch docs thoroughly, but I'm not sure what to debug next.
Any suggestions would be very welcome.
Thanks. |
st183805 | Please see quant docs: add common errors section by vkuzo · Pull Request #49902 · pytorch/pytorch · GitHub, which landed recently and adds documentation about this error. It looks like one of your layers expects an int8 tensor but is being passed an fp32 tensor. The fix is to figure out where exactly in your model this is happening (it's in the stack trace) and then add a QuantStub right before it. Also, the code that prepares and converts the model needs to include the QuantStub objects, as those are part of the quantization flow.
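In code, the pattern looks roughly like this (a toy model rather than your UNet, and softplus just stands in for whichever op has to stay in fp32):
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant1 = torch.quantization.QuantStub()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.dequant1 = torch.quantization.DeQuantStub()
        self.quant2 = torch.quantization.QuantStub()  # re-quantizes before the next quantized layer
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)
        self.dequant2 = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant1(x)    # fp32 -> quint8
        x = self.conv1(x)     # quantized conv after convert
        x = self.dequant1(x)  # back to fp32
        x = F.softplus(x)     # op that stays in fp32
        x = self.quant2(x)    # fp32 -> quint8 again, so the next conv gets a quantized input
        x = self.conv2(x)
        return self.dequant2(x)

m = Mixed().eval()
m.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 16, 16))  # calibration
torch.quantization.convert(m, inplace=True)
print(m(torch.randn(1, 3, 16, 16)).shape)
The same pattern applies inside your UNet right before whichever layer the stack trace points at. |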
st183806 | Thanks! I figured it out with your help.
Now I’m wondering if it’s possible not to quantize the input. I’m currently working on a super-resolution task, which needs the input image at full precision, but a quantized model needs a quantized input. I’m looking forward to any advice.
Best,
st183807 | Hi KURI, hope you are fine.
I am facing almost the same issue as yours, so kindly help me with this.
I have trained the model using the fastai and timm libraries.
Currently, I am doing the following:
effb3_model=learner_effb3.model.eval()
backend = "qnnpack"
effb3_model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(effb3_model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_size_of_model(model_static_quantized)
But I am facing the following error while calling the model for inference:
RuntimeError: Could not run 'aten::thnn_conv2d_forward' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::thnn_conv2d_forward' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
And this is my quantized_model:
Sequential(
(0): Sequential(
(0): Conv2dSame(3, 40, kernel_size=(3, 3), stride=(2, 2), bias=False)
(1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
(3): Sequential(
(0): Sequential(
(0): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(40, 40, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=40)
(bn1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(40, 10, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(10, 40, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(40, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
(1): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=24)
(bn1): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(24, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
)
(1): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(144, 144, kernel_size=(3, 3), stride=(2, 2), groups=144, bias=False)
(bn2): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(144, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(192, 192, kernel_size=(5, 5), stride=(2, 2), groups=192, bias=False)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(288, 288, kernel_size=(3, 3), stride=(2, 2), groups=288, bias=False)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(816, 816, kernel_size=(5, 5), stride=(2, 2), groups=816, bias=False)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(5): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(384, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(2304, 2304, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=2304)
(bn2): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(2304, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(96, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(2304, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(4): QuantizedConv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(5): QuantizedBatchNorm2d(1536, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(6): SiLU(inplace=True)
)
(1): Sequential(
(0): AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=1)
(mp): AdaptiveMaxPool2d(output_size=1)
)
(1): Flatten(full=False)
(2): BatchNorm1d(3072, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.25, inplace=False)
(4): QuantizedLinear(in_features=3072, out_features=512, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
(5): ReLU(inplace=True)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Dropout(p=0.5, inplace=False)
(8): QuantizedLinear(in_features=512, out_features=73, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
)
)
Thanks for any help. |
st183808 | @Muhammad_Ali
Hi Ali,
I am facing the same issue.
Have you found a solution for this?
Thanks. |
st183809 | I was using PyTorch for post-training quantization of my resnet18 model. The following is part of the code.
net.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.quint8, qscheme=torch.per_tensor_symmetric),
    weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
I wanted to print the bias and scale that are used internally for each tensor.
Can someone please help me do it the right way?
Thanks. |
st183810 | Hi @shas19 , if you print out the quantized network it should show the scales and zero_points of the various layers. Is there something else you are looking for? Could you be more specific?
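For example, something along these lines prints the output quantization parameters and the raw int8 weight values (a minimal sketch with a small stand-in model rather than resnet18):
import torch
import torch.nn as nn

m = nn.Sequential(torch.quantization.QuantStub(),
                  nn.Conv2d(3, 8, 3),
                  torch.quantization.DeQuantStub()).eval()
m.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.quint8, qscheme=torch.per_tensor_symmetric),
    weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 8, 8))  # calibration
torch.quantization.convert(m, inplace=True)

for name, mod in m.named_modules():
    if isinstance(mod, torch.nn.quantized.Conv2d):
        print(name, "output scale:", mod.scale, "output zero_point:", mod.zero_point)
        w = mod.weight()  # the quantized weight tensor
        print("weight scale:", w.q_scale(), "weight zero_point:", w.q_zero_point())
        print("int8 weight values:", w.int_repr()[0, 0])
The weight() method returns the quantized weight, and int_repr() exposes its raw integer values. |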
st183811 | Hi,
I have the same question for printing quantized weights. Is there a way to see the quantized int values which are used in the convolution operators, like the example shown below:
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.bn = torch.nn.BatchNorm2d(1)
        self.relu = torch.nn.ReLU()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        print(x)  # <- input is quantized
        print(self.conv.weight)  # <- prints: <bound method Conv2d.weight of QuantizedConv2d(1, 1, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)>
        print(self.conv.bias)  # <- prints: <bound method Conv2d.bias of QuantizedConv2d(1, 1, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)>
        x = self.bn(x)
        x = self.relu(x)
        x = self.dequant(x)
        return x
# convert the model to QAT
qconfig = QConfig(
    activation=FakeQuantize.with_args(observer=MovingAverageMinMaxObserver),
    weight=FakeQuantize.with_args(
        observer=MovingAverageMinMaxObserver,
        quant_min=-128, quant_max=127,
        dtype=torch.qint8)
)
model_fp32 = M()
model_fp32.train()
model_fp32.qconfig = qconfig
model_fp32_prepared = prepare_qat(model_fp32)
model_fp32_prepared.eval()
model_int8 = convert(model_fp32_prepared)

# run the model
input_fp32 = torch.randn(4, 1, 4, 4)
res = model_int8(input_fp32) |
st183812 | Also, I did look into the code. I found the QAT conv module calling fake quant before doing the convolution:
github.com/pytorch/pytorch/blob/master/torch/nn/qat/modules/conv.py#L36
        factory_kwargs = {'device': device, 'dtype': dtype}
        super().__init__(in_channels, out_channels, kernel_size,
                         stride=stride, padding=padding, dilation=dilation,
                         groups=groups, bias=bias, padding_mode=padding_mode,
                         **factory_kwargs)
        assert qconfig, 'qconfig must be provided for QAT module'
        self.qconfig = qconfig
        self.weight_fake_quant = qconfig.weight(factory_kwargs=factory_kwargs)

    def forward(self, input):
        return self._conv_forward(input, self.weight_fake_quant(self.weight), self.bias)

    @classmethod
    def from_float(cls, mod):
        r"""Create a qat module from a float module or qparams_dict

        Args: `mod` a float module, either produced by torch.quantization utilities
        or directly from user
        """
        assert type(mod) == cls._FLOAT_MODULE, 'qat.' + cls.__name__ + '.from_float only works for ' + \
            cls._FLOAT_MODULE.__name__
github.com/pytorch/pytorch/blob/master/torch/quantization/fake_quantize.py#L141
            _scale, _zero_point = self.calculate_qparams()
            _scale, _zero_point = _scale.to(self.scale.device), _zero_point.to(self.zero_point.device)
            if self.scale.shape != _scale.shape:
                self.scale.resize_(_scale.shape)
                self.zero_point.resize_(_zero_point.shape)
            self.scale.copy_(_scale)
            self.zero_point.copy_(_zero_point)

        if self.fake_quant_enabled[0] == 1:
            if self.is_per_channel:
                X = torch.fake_quantize_per_channel_affine(
                    X, self.scale, self.zero_point,
                    self.ch_axis, self.quant_min, self.quant_max)
            else:
                X = torch.fake_quantize_per_tensor_affine(
                    X, float(self.scale), int(self.zero_point),
                    self.quant_min, self.quant_max)
        return X

    @torch.jit.export
    def extra_repr(self):
And the fake quant calls a fake-quantize affine function, for example the fake_quantize_per_tensor_affine function in pytorch/torch/onnx/symbolic_opset10.py.
I printed the output of self.weight_fake_quant(self.weight) and I see floating-point numbers that have been quantized and then dequantized. My question is: the convolution seems to be using the floating-point r instead of the integers q from the formula in the QAT paper, r = S*(q - Z)?
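For reference, here is a small numeric sketch of what the fake-quantize op computes — the values stay in fp32, but they land exactly on the int8 grid defined by r = S*(q - Z):
import torch

x = torch.randn(5)
scale, zero_point = 0.1, 0

# q = clamp(round(x / scale) + zero_point, qmin, qmax); fake-quantized value r = scale * (q - zero_point)
fq = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, -128, 127)
q = ((x / scale).round() + zero_point).clamp(-128, 127)
print(torch.allclose(fq, scale * (q - zero_point)))  # True
If that is the case, then during training the convolution runs on these floating-point values and the integer q only appears after convert — is that understanding correct? |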
st183813 | Most of the discussion around quantized exports that I’ve found is on this thread. However, most users are talking about int8, not fp16 - I’m not sure how similar the approaches/issues are between the two precisions |
st183814 | I trained a model(model A) using pytorch whose forward method depends on the outputs from another model(model B). Model B can be thought of as a feature extractor for model A.
I saved the state_dict of model A only after training it. As expected, the size of the saved .pth file was small since I only saved the trained parameters of model A.
However, when I try to export model A to ONNX format using torch.onnx.export, the resulting ONNX model is very large and includes the parameters of model B too. Is this the expected behavior? I know the ONNX export runs the forward method of model A (which involves calling model B) before exporting it. But does it necessarily have to include the parameters of model B along with those of model A? |
st183815 | For the ONNX file to run standalone, it has to contain both the architecture definition and all model weights required to compute the forward path. Given this, it makes sense to me that model B parameters would need to be included |
st183816 | I’m new to quantization so I couldn’t figure out a way to easily reproduce this without going through the whole flow.
From pdb during a forward pass of a quantized model:
print(x.dtype)
# >> torch.quint8
print(x.shape)
# >> torch.Size([1, 40, 64, 384])
print(x.mean((2,3), keepdim=True).shape)
# >> torch.Size([1, 40])
This happens when I run the forward pass just after setting torch.backends.quantized.engine = 'qnnpack'.
If I do not set it, the forward pass runs fine, and 10x faster than the non-quantized version of my model (in other words, as expected)
Running this on Android causes the same issue. |
st183817 | Solved by tom in post #3 |
st183818 | I’m unable to reproduce:
import torch
import copy
# define a floating point model where some layers could benefit from QAT
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.bn = torch.nn.BatchNorm2d(1)
        self.relu = torch.nn.ReLU()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        x = x.mean((2, 3), keepdim=True)
        x = self.dequant(x)
        return x

model_fp32 = M()
model_fp32.train()
model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
model_fp32_fused = torch.quantization.fuse_modules(model_fp32, [['conv', 'bn', 'relu']])
model_fp32_prepared = torch.quantization.prepare_qat(model_fp32_fused)
x = torch.rand(2, 1, 224, 224)
model_fp32_prepared.eval()
model_fp32_prepared(x)
model_int8 = torch.quantization.convert(model_fp32_prepared)
y = model_int8(x)
print(y)
print(y.shape)
tensor([[[[0.]]],
[[[0.]]]])
torch.Size([2, 1, 1, 1])
Do you have a better repro? |
st183819 | Hello Alexander,
I can confirm this and took the liberty to file QNNPACK mean with keepdim doesn't work · Issue #58668 · pytorch/pytorch · GitHub. Thank you so much for reporting this with very precise repro information! This makes things much easier.
For reference: This illustrates the problem:
torch.backends.quantized.engine = 'qnnpack'
print(torch.backends.quantized.engine, torch.quantize_per_tensor(torch.randn(5, 5, 5, 5), scale=0.2, zero_point=0, dtype=torch.quint8).mean((2,3), keepdim=True).shape)
torch.backends.quantized.engine = 'fbgemm'
print(torch.backends.quantized.engine, torch.quantize_per_tensor(torch.randn(5, 5, 5, 5), scale=0.2, zero_point=0, dtype=torch.quint8).mean((2,3), keepdim=True).shape)
Best regards
Thomas |
st183820 | I’m happy to report that the issue linked above has been closed, so we should see nightlies that have the problem fixed. I don’t think it made it into 1.9, though, but I hope to make Raspberry Pi wheels with the fix soon enough.
Best regards
Thomas |
st183821 | Hi All,
I tried quantizing a finetuned T5 model :
model = T5ForConditionalGeneration.from_pretrained('/path/to/finetuned/model')
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
tokenizer = T5Tokenizer.from_pretrained('t5-base')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

list = ["What is you name ?",
        "Where do you live ?",
        "You should say hi and greet him.",
        "If John has cash 80 dollars then tell him to transfer",
        "Tell John to buy some vegetables",]
l = []
t = []
for s in list:
    start = time.perf_counter()
    sentence = s
    text = "paraphrase: " + sentence + " </s>"
    max_len = 256
    encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
    input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
    # set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3
    beam_outputs = model.generate(
        input_ids=input_ids, attention_mask=attention_masks,
        do_sample=True,
        max_length=256,
        top_k=120,
        top_p=0.98,
        early_stopping=True,
        num_return_sequences=1
    )
    l.append(tokenizer.decode(beam_outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
    t.append("time taken = {}".format(time.perf_counter() - start))

for i in l:
    print(i)
for j in t:
    print(j)
While getting inferences it says:
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
and I am getting an empty string as the output. What is wrong? |
st183822 | I am trying to run quantization on a model. The model I am using is the pretrained wide_resnet101_2. The code is running on CPU. Before quantization, the model is 510MB and after quantization it is down to 129MB. It seems like the quantization is working. The problem arises when the quantized model is called later in the code to run the tester.
The error is in the line 70: RuntimeError: Could not run ‘aten::add_.Tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes 2 for possible resolutions. ‘aten::add_.Tensor’ is only available for these backends: [CPU, MkldnnCPU, SparseCPU, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Any help with this? |
st183823 | Solved by jerryzh168 in post #2 |
st183824 | aidanaranjo:
The error is in the line 70: RuntimeError: Could not run ‘aten::add_.Tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. ‘aten::add_.Tensor’ is only available for these backends: [CPU, MkldnnCPU, SparseCPU, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
This means the input of aten::add_ is a quantized Tensor. To address the problem you can either:
(1) place a DeQuantStub and QuantStub around the aten::add_ op,
e.g.
def __init__(...):
    ...
    self.quant = torch.quantization.QuantStub()
    self.dequant = torch.quantization.DeQuantStub()
    ...

def forward(...):
    ...
    x = self.dequant(x)
    x += ...
    x = self.quant(x)
    ...
or
(2) quantize aten::add_ by replacing it with FloatFunctional (pytorch/functional_modules.py at master · pytorch/pytorch · GitHub) |
st183825 | Providing an example code snippet for approach 2
def __init__(...):
    ...
    self.quant_x = torch.quantization.QuantStub()
    self.quant_y = torch.quantization.QuantStub()
    self.dequant = torch.quantization.DeQuantStub()
    self.ff = torch.nn.quantized.FloatFunctional()
    ...

def forward(self, x, y):
    ...
    x = self.quant_x(x)
    y = self.quant_y(y)
    out = self.ff.add(x, y)
    ... |
st183826 | Hi,
There seems to be a quantized::add operator but I can’t find how to use it
class QuantAdd(torch.nn.Module):
    def __init__(self):
        super(QuantAdd, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x, y):
        x = self.quant(x)
        y = self.quant(y)
        out = x + y
        return self.dequant(out)

model = QuantAdd()
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(model, inplace=True)
a = torch.rand(1, 3, 4, 4)
b = torch.rand(1, 3, 4, 4)
_ = model(a, b)
torch.quantization.convert(model, inplace=True)
traced_model = torch.jit.trace(model, (a, b))
The above will result in the error:
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend.
I tried using out.copy_(x + y) on a pre-allocated tensor but still get the ‘aten::empty.memory_format’ error which I guess is related to x + y return. So I tried in-place addition (both with x += y and x.add_(y) ) and I get:
NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend.
Am I missing something obvious? What’s the right way to use quantized addition in a model?
Thanks,
Julien |
st183827 | Actually I think I found the way.
I need to use a torch.nn.quantized.FloatFunctional as a functor and then swap it with a torch.nn.quantized.QFunctional after model conversion but before tracing.
Is that the recommended way to proceed?
Or is there a more straightforward alternative? |
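For anyone landing here later, this is roughly the pattern I mean (a sketch, not checked against every PyTorch version; as far as I can tell convert() already maps nn.quantized.FloatFunctional to nn.quantized.QFunctional through its default module mappings, so the swap happens during conversion rather than by hand):
import torch

class QuantAddFF(torch.nn.Module):  # hypothetical name, mirrors the failing QuantAdd above
    def __init__(self):
        super().__init__()
        self.quant_x = torch.quantization.QuantStub()
        self.quant_y = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.add_fn = torch.nn.quantized.FloatFunctional()  # stands in for the + operator

    def forward(self, x, y):
        x = self.quant_x(x)
        y = self.quant_y(y)
        out = self.add_fn.add(x, y)  # instead of out = x + y
        return self.dequant(out)

model = QuantAddFF().eval()
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(model, inplace=True)

a = torch.rand(1, 3, 4, 4)
b = torch.rand(1, 3, 4, 4)
_ = model(a, b)  # calibration pass

torch.quantization.convert(model, inplace=True)  # FloatFunctional -> QFunctional swap happens here
traced_model = torch.jit.trace(model, (a, b))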
st183828 | I’m wondering if it is possible to quantize a model that has 2 inputs (2 tensors). Basically, I have a GAN model in which the generator module requires 2 inputs for inference. Right now, I’m looking at the example with Static Quantization here, but the sample model to be quantized just has 1 input. |
st183829 | Solved by Zafar in post #2
Yes it is. I am not sure what exactly your model is, but here is an example on how you could make a model with two inputs in the forward
class SomeAwesomeModel(nn.Module):
def __init__(self):
self.quant_x = torch.quantization.QuantStub()
self.quant_y = torch.quantization.QuantSt… |
st183830 | Yes it is. I am not sure what exactly your model is, but here is an example of how you could make a model with two inputs in the forward pass:
class SomeAwesomeModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant_x = torch.quantization.QuantStub()
        self.quant_y = torch.quantization.QuantStub()
        self.func = nn.quantized.FloatFunctional()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x, y):
        qx = self.quant_x(x)
        qy = self.quant_y(y)
        qz = self.func.add(qx, qy)
        z = self.dequant(qz)
        return z
The model above expects two floating point tensors as input, quantizes them, and after performing some function, returns the dequantized version. |
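The surrounding flow is the usual eager-mode one, just with two tensors per forward call during calibration (a sketch; it assumes the class above with the usual torch / torch.nn imports, and the qconfig, shapes and number of calibration iterations are placeholders):
import torch

model = SomeAwesomeModel().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)

# calibration: feed representative pairs of inputs
for _ in range(16):
    model(torch.randn(1, 8), torch.randn(1, 8))

torch.quantization.convert(model, inplace=True)
z = model(torch.randn(1, 8), torch.randn(1, 8))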
st183831 | @ptrblck Perhaps the need for one QuantStub per model input could be mentioned in the QAT docs? On my first read-through this distinction was not clear to me until I started getting errors in my project code. |
st183832 | I’m working with timm’s mobilenetv2_100
import timm
model = timm.create_model('mobilenetv2_100')
and FX post-training static quantization.
I’m getting a very strange behaviour regarding the quantized model accuracy. I’d be happy to provide more detail if this question gets interest but for now here are the clues:
Using get_default_qconfig("fbgemm") I get 100% accuracy (I’m only testing 10 samples, so this is fair).
Using get_default_qconfig("qnnpack") I get 0% accuracy BUT read on for the interesting clues.
If I only quantize some of the backbone blocks rather than the whole model I can recover 100% accuracy with get_default_qconfig("qnnpack")
Quantize only blocks [0] → 100%
[0, 2, 3, 4, 5] → 100%
[0, 1, 2, 3, 4, 5] → 0%
[1] → 100%
[0, 2, 3, 4, 5, 6] → 0%
[6] → 0%
All the above results were gathered by running the model with the regular torch backend. When I set `torch.backends.quantized.engine = 'qnnpack'`, even the cases where I previously recovered 100% go to 0%.
Where could I go from here to understand what’s going on? I’m new to quantization so I’m not necessarily aware of the options.
Could it be that there’s some under/overflow issue which only shows up for certain combinations of quantized blocks?
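For context, this kind of per-block selection can be expressed in FX graph mode through qconfig_dict (a hedged sketch, not necessarily identical to what I ran: I’m assuming a None qconfig under "module_name" leaves that submodule in floating point, and newer versions of prepare_fx may additionally require example_inputs):
import timm
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = timm.create_model('mobilenetv2_100').eval()
qconfig = get_default_qconfig('qnnpack')

# quantize the whole model except blocks 1 and 6 (None skips them)
qconfig_dict = {
    "": qconfig,
    "module_name": [("blocks.1", None), ("blocks.6", None)],
}

prepared = prepare_fx(model, qconfig_dict)
# ... run calibration batches through `prepared` here ...
quantized = convert_fx(prepared)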
For reference, the blocks of the model look like:
(blocks): Sequential(
(0): Sequential(
(0): DepthwiseSeparableConv(
(conv_dw): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(se): Identity()
(conv_pw): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
)
(1): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(bn2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
(bn2): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
(bn2): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
(bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
(bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): InvertedResidual(
(conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act1): ReLU6(inplace=True)
(conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
) |
st183833 | @jerryzh168 here it is (part 1 of 3)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=0.2908034026622772, zero_point=129)
)
)
(3): Module(
(0): Module(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.13633513450622559, zero_point=106)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), scale=0.11250900477170944, zero_point=133, padding=(1, 1), groups=192)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.3205368220806122, zero_point=131)
)
(1): Module(
(conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.10603202879428864, zero_point=118)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.1964792162179947, zero_point=146, padding=(1, 1), groups=384)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.21481290459632874, zero_point=135)
)
(2): Module(
(conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.08830209076404572, zero_point=117)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.2017802745103836, zero_point=95, padding=(1, 1), groups=384)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.16327263414859772, zero_point=126)
)
(3): Module(
(conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.08116503804922104, zero_point=117)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.10571161657571793, zero_point=157, padding=(1, 1), groups=384)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.1567448079586029, zero_point=134)
)
)
(4): Module(
(0): Module(
(conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.13676510751247406, zero_point=108)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.26821452379226685, zero_point=88, padding=(1, 1), groups=384)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.2556881308555603, zero_point=119)
)
(1): Module(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.0698079839348793, zero_point=121)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=0.08290387690067291, zero_point=148, padding=(1, 1), groups=576)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.16372020542621613, zero_point=123)
)
(2): Module(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.06444422900676727, zero_point=141)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=0.07609664648771286, zero_point=142, padding=(1, 1), groups=576)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.1533583104610443, zero_point=131)
)
)
(5): Module(
(0): Module(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.0860033854842186, zero_point=141)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), scale=0.07259328663349152, zero_point=121, padding=(1, 1), groups=576)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.22664271295070648, zero_point=119)
)
(1): Module(
(conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.0699293464422226, zero_point=116)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.07911744713783264, zero_point=144, padding=(1, 1), groups=960)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.12189313769340515, zero_point=130)
)
(2): Module(
(conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.06675049662590027, zero_point=129)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.07856452465057373, zero_point=159, padding=(1, 1), groups=960)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.1353532373905182, zero_point=132)
)
)
(6): Module(
(0): Module(
(conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.10627786070108414, zero_point=105)
(act1): ReLU6(inplace=True)
(conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.15888939797878265, zero_point=161, padding=(1, 1), groups=960)
(act2): ReLU6(inplace=True)
(se): Identity()
(conv_pwl): QuantizedConv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), scale=0.09197470545768738, zero_point=130)
)
)
)
(conv_head): QuantizedConv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), scale=0.13624273240566254, zero_point=160)
(act2): ReLU6(inplace=True)
(global_pool): Module(
(pool): Identity()
)
(classifier): Identity()
)
(rnn): GRU(1280, 128, batch_first=True, bidirectional=True)
(relu): ReLU()
(dropout): Dropout(p=0.3, inplace=False)
(fc): QuantizedLinear(in_features=256, out_features=11, scale=0.06948493421077728, zero_point=120, qscheme=torch.per_tensor_affine)
) |
st183834 | (part 2 of 3)
import torch
def forward(self, img):
feature_extractor_conv_stem_input_scale_0 = self.feature_extractor_conv_stem_input_scale_0
feature_extractor_conv_stem_input_zero_point_0 = self.feature_extractor_conv_stem_input_zero_point_0
feature_extractor_conv_stem_input_dtype_0 = self.feature_extractor_conv_stem_input_dtype_0
quantize_per_tensor_1 = torch.quantize_per_tensor(img, feature_extractor_conv_stem_input_scale_0, feature_extractor_conv_stem_input_zero_point_0, feature_extractor_conv_stem_input_dtype_0); img = feature_extractor_conv_stem_input_scale_0 = feature_extractor_conv_stem_input_zero_point_0 = feature_extractor_conv_stem_input_dtype_0 = None
feature_extractor_conv_stem = self.feature_extractor.conv_stem(quantize_per_tensor_1); quantize_per_tensor_1 = None
feature_extractor_act1 = self.feature_extractor.act1(feature_extractor_conv_stem); feature_extractor_conv_stem = None
feature_extractor_blocks_0_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "0"), "0").conv_dw(feature_extractor_act1); feature_extractor_act1 = None
feature_extractor_blocks_0_0_act1 = getattr(getattr(self.feature_extractor.blocks, "0"), "0").act1(feature_extractor_blocks_0_0_conv_dw); feature_extractor_blocks_0_0_conv_dw = None
dequantize_1 = feature_extractor_blocks_0_0_act1.dequantize(); feature_extractor_blocks_0_0_act1 = None
feature_extractor_blocks_0_0_se = getattr(getattr(self.feature_extractor.blocks, "0"), "0").se(dequantize_1); dequantize_1 = None
feature_extractor_blocks_0_0_conv_pw_input_scale_0 = self.feature_extractor_blocks_0_0_conv_pw_input_scale_0
feature_extractor_blocks_0_0_conv_pw_input_zero_point_0 = self.feature_extractor_blocks_0_0_conv_pw_input_zero_point_0
feature_extractor_blocks_0_0_conv_pw_input_dtype_0 = self.feature_extractor_blocks_0_0_conv_pw_input_dtype_0
quantize_per_tensor_2 = torch.quantize_per_tensor(feature_extractor_blocks_0_0_se, feature_extractor_blocks_0_0_conv_pw_input_scale_0, feature_extractor_blocks_0_0_conv_pw_input_zero_point_0, feature_extractor_blocks_0_0_conv_pw_input_dtype_0); feature_extractor_blocks_0_0_se = feature_extractor_blocks_0_0_conv_pw_input_scale_0 = feature_extractor_blocks_0_0_conv_pw_input_zero_point_0 = feature_extractor_blocks_0_0_conv_pw_input_dtype_0 = None
feature_extractor_blocks_0_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "0"), "0").conv_pw(quantize_per_tensor_2); quantize_per_tensor_2 = None
dequantize_2 = feature_extractor_blocks_0_0_conv_pw.dequantize(); feature_extractor_blocks_0_0_conv_pw = None
feature_extractor_blocks_0_0_act2 = getattr(getattr(self.feature_extractor.blocks, "0"), "0").act2(dequantize_2); dequantize_2 = None
feature_extractor_blocks_1_0_conv_pw_input_scale_0 = self.feature_extractor_blocks_1_0_conv_pw_input_scale_0
feature_extractor_blocks_1_0_conv_pw_input_zero_point_0 = self.feature_extractor_blocks_1_0_conv_pw_input_zero_point_0
feature_extractor_blocks_1_0_conv_pw_input_dtype_0 = self.feature_extractor_blocks_1_0_conv_pw_input_dtype_0
quantize_per_tensor_3 = torch.quantize_per_tensor(feature_extractor_blocks_0_0_act2, feature_extractor_blocks_1_0_conv_pw_input_scale_0, feature_extractor_blocks_1_0_conv_pw_input_zero_point_0, feature_extractor_blocks_1_0_conv_pw_input_dtype_0); feature_extractor_blocks_0_0_act2 = feature_extractor_blocks_1_0_conv_pw_input_scale_0 = feature_extractor_blocks_1_0_conv_pw_input_zero_point_0 = feature_extractor_blocks_1_0_conv_pw_input_dtype_0 = None
feature_extractor_blocks_1_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_pw(quantize_per_tensor_3); quantize_per_tensor_3 = None
feature_extractor_blocks_1_0_act1 = getattr(getattr(self.feature_extractor.blocks, "1"), "0").act1(feature_extractor_blocks_1_0_conv_pw); feature_extractor_blocks_1_0_conv_pw = None
feature_extractor_blocks_1_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_dw(feature_extractor_blocks_1_0_act1); feature_extractor_blocks_1_0_act1 = None
feature_extractor_blocks_1_0_act2 = getattr(getattr(self.feature_extractor.blocks, "1"), "0").act2(feature_extractor_blocks_1_0_conv_dw); feature_extractor_blocks_1_0_conv_dw = None
dequantize_3 = feature_extractor_blocks_1_0_act2.dequantize(); feature_extractor_blocks_1_0_act2 = None
feature_extractor_blocks_1_0_se = getattr(getattr(self.feature_extractor.blocks, "1"), "0").se(dequantize_3); dequantize_3 = None
feature_extractor_blocks_1_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_scale_0
feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_1_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_dtype_0
quantize_per_tensor_4 = torch.quantize_per_tensor(feature_extractor_blocks_1_0_se, feature_extractor_blocks_1_0_conv_pwl_input_scale_0, feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_1_0_conv_pwl_input_dtype_0); feature_extractor_blocks_1_0_se = feature_extractor_blocks_1_0_conv_pwl_input_scale_0 = feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_1_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_1_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_pwl(quantize_per_tensor_4); quantize_per_tensor_4 = None
feature_extractor_blocks_1_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_pw(feature_extractor_blocks_1_0_conv_pwl)
feature_extractor_blocks_1_1_act1 = getattr(getattr(self.feature_extractor.blocks, "1"), "1").act1(feature_extractor_blocks_1_1_conv_pw); feature_extractor_blocks_1_1_conv_pw = None
feature_extractor_blocks_1_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_dw(feature_extractor_blocks_1_1_act1); feature_extractor_blocks_1_1_act1 = None
feature_extractor_blocks_1_1_act2 = getattr(getattr(self.feature_extractor.blocks, "1"), "1").act2(feature_extractor_blocks_1_1_conv_dw); feature_extractor_blocks_1_1_conv_dw = None
dequantize_4 = feature_extractor_blocks_1_1_act2.dequantize(); feature_extractor_blocks_1_1_act2 = None
feature_extractor_blocks_1_1_se = getattr(getattr(self.feature_extractor.blocks, "1"), "1").se(dequantize_4); dequantize_4 = None
feature_extractor_blocks_1_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_scale_0
feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0
feature_extractor_blocks_1_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_dtype_0
quantize_per_tensor_5 = torch.quantize_per_tensor(feature_extractor_blocks_1_1_se, feature_extractor_blocks_1_1_conv_pwl_input_scale_0, feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_1_1_conv_pwl_input_dtype_0); feature_extractor_blocks_1_1_se = feature_extractor_blocks_1_1_conv_pwl_input_scale_0 = feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_1_1_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_1_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_pwl(quantize_per_tensor_5); quantize_per_tensor_5 = None
feature_extractor_blocks_1_1_scale_0 = self.feature_extractor_blocks_1_1_scale_0
feature_extractor_blocks_1_1_zero_point_0 = self.feature_extractor_blocks_1_1_zero_point_0
add_1 = torch.ops.quantized.add(feature_extractor_blocks_1_1_conv_pwl, feature_extractor_blocks_1_0_conv_pwl, feature_extractor_blocks_1_1_scale_0, feature_extractor_blocks_1_1_zero_point_0); feature_extractor_blocks_1_1_conv_pwl = feature_extractor_blocks_1_0_conv_pwl = feature_extractor_blocks_1_1_scale_0 = feature_extractor_blocks_1_1_zero_point_0 = None
feature_extractor_blocks_2_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_pw(add_1); add_1 = None
feature_extractor_blocks_2_0_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "0").act1(feature_extractor_blocks_2_0_conv_pw); feature_extractor_blocks_2_0_conv_pw = None
feature_extractor_blocks_2_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_dw(feature_extractor_blocks_2_0_act1); feature_extractor_blocks_2_0_act1 = None
feature_extractor_blocks_2_0_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "0").act2(feature_extractor_blocks_2_0_conv_dw); feature_extractor_blocks_2_0_conv_dw = None
dequantize_5 = feature_extractor_blocks_2_0_act2.dequantize(); feature_extractor_blocks_2_0_act2 = None
feature_extractor_blocks_2_0_se = getattr(getattr(self.feature_extractor.blocks, "2"), "0").se(dequantize_5); dequantize_5 = None
feature_extractor_blocks_2_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_scale_0
feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_2_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_dtype_0
quantize_per_tensor_6 = torch.quantize_per_tensor(feature_extractor_blocks_2_0_se, feature_extractor_blocks_2_0_conv_pwl_input_scale_0, feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_0_conv_pwl_input_dtype_0); feature_extractor_blocks_2_0_se = feature_extractor_blocks_2_0_conv_pwl_input_scale_0 = feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_2_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_pwl(quantize_per_tensor_6); quantize_per_tensor_6 = None
feature_extractor_blocks_2_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_pw(feature_extractor_blocks_2_0_conv_pwl)
feature_extractor_blocks_2_1_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "1").act1(feature_extractor_blocks_2_1_conv_pw); feature_extractor_blocks_2_1_conv_pw = None
feature_extractor_blocks_2_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_dw(feature_extractor_blocks_2_1_act1); feature_extractor_blocks_2_1_act1 = None
feature_extractor_blocks_2_1_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "1").act2(feature_extractor_blocks_2_1_conv_dw); feature_extractor_blocks_2_1_conv_dw = None
dequantize_6 = feature_extractor_blocks_2_1_act2.dequantize(); feature_extractor_blocks_2_1_act2 = None
feature_extractor_blocks_2_1_se = getattr(getattr(self.feature_extractor.blocks, "2"), "1").se(dequantize_6); dequantize_6 = None
feature_extractor_blocks_2_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_scale_0
feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0
feature_extractor_blocks_2_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_dtype_0
quantize_per_tensor_7 = torch.quantize_per_tensor(feature_extractor_blocks_2_1_se, feature_extractor_blocks_2_1_conv_pwl_input_scale_0, feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_1_conv_pwl_input_dtype_0); feature_extractor_blocks_2_1_se = feature_extractor_blocks_2_1_conv_pwl_input_scale_0 = feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_1_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_2_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_pwl(quantize_per_tensor_7); quantize_per_tensor_7 = None
feature_extractor_blocks_2_1_scale_0 = self.feature_extractor_blocks_2_1_scale_0
feature_extractor_blocks_2_1_zero_point_0 = self.feature_extractor_blocks_2_1_zero_point_0
add_2 = torch.ops.quantized.add(feature_extractor_blocks_2_1_conv_pwl, feature_extractor_blocks_2_0_conv_pwl, feature_extractor_blocks_2_1_scale_0, feature_extractor_blocks_2_1_zero_point_0); feature_extractor_blocks_2_1_conv_pwl = feature_extractor_blocks_2_0_conv_pwl = feature_extractor_blocks_2_1_scale_0 = feature_extractor_blocks_2_1_zero_point_0 = None
feature_extractor_blocks_2_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_pw(add_2)
feature_extractor_blocks_2_2_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "2").act1(feature_extractor_blocks_2_2_conv_pw); feature_extractor_blocks_2_2_conv_pw = None
feature_extractor_blocks_2_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_dw(feature_extractor_blocks_2_2_act1); feature_extractor_blocks_2_2_act1 = None
feature_extractor_blocks_2_2_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "2").act2(feature_extractor_blocks_2_2_conv_dw); feature_extractor_blocks_2_2_conv_dw = None
dequantize_7 = feature_extractor_blocks_2_2_act2.dequantize(); feature_extractor_blocks_2_2_act2 = None
feature_extractor_blocks_2_2_se = getattr(getattr(self.feature_extractor.blocks, "2"), "2").se(dequantize_7); dequantize_7 = None
feature_extractor_blocks_2_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_scale_0
feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0
feature_extractor_blocks_2_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_dtype_0
quantize_per_tensor_8 = torch.quantize_per_tensor(feature_extractor_blocks_2_2_se, feature_extractor_blocks_2_2_conv_pwl_input_scale_0, feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_2_conv_pwl_input_dtype_0); feature_extractor_blocks_2_2_se = feature_extractor_blocks_2_2_conv_pwl_input_scale_0 = feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_2_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_2_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_pwl(quantize_per_tensor_8); quantize_per_tensor_8 = None
feature_extractor_blocks_2_2_scale_0 = self.feature_extractor_blocks_2_2_scale_0
feature_extractor_blocks_2_2_zero_point_0 = self.feature_extractor_blocks_2_2_zero_point_0
add_3 = torch.ops.quantized.add(feature_extractor_blocks_2_2_conv_pwl, add_2, feature_extractor_blocks_2_2_scale_0, feature_extractor_blocks_2_2_zero_point_0); feature_extractor_blocks_2_2_conv_pwl = add_2 = feature_extractor_blocks_2_2_scale_0 = feature_extractor_blocks_2_2_zero_point_0 = None
feature_extractor_blocks_3_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_pw(add_3); add_3 = None
feature_extractor_blocks_3_0_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "0").act1(feature_extractor_blocks_3_0_conv_pw); feature_extractor_blocks_3_0_conv_pw = None
feature_extractor_blocks_3_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_dw(feature_extractor_blocks_3_0_act1); feature_extractor_blocks_3_0_act1 = None
feature_extractor_blocks_3_0_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "0").act2(feature_extractor_blocks_3_0_conv_dw); feature_extractor_blocks_3_0_conv_dw = None
dequantize_8 = feature_extractor_blocks_3_0_act2.dequantize(); feature_extractor_blocks_3_0_act2 = None
feature_extractor_blocks_3_0_se = getattr(getattr(self.feature_extractor.blocks, "3"), "0").se(dequantize_8); dequantize_8 = None
feature_extractor_blocks_3_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_scale_0
feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_3_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_dtype_0
quantize_per_tensor_9 = torch.quantize_per_tensor(feature_extractor_blocks_3_0_se, feature_extractor_blocks_3_0_conv_pwl_input_scale_0, feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_0_conv_pwl_input_dtype_0); feature_extractor_blocks_3_0_se = feature_extractor_blocks_3_0_conv_pwl_input_scale_0 = feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_3_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_pwl(quantize_per_tensor_9); quantize_per_tensor_9 = None
feature_extractor_blocks_3_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_pw(feature_extractor_blocks_3_0_conv_pwl)
feature_extractor_blocks_3_1_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "1").act1(feature_extractor_blocks_3_1_conv_pw); feature_extractor_blocks_3_1_conv_pw = None
feature_extractor_blocks_3_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_dw(feature_extractor_blocks_3_1_act1); feature_extractor_blocks_3_1_act1 = None
feature_extractor_blocks_3_1_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "1").act2(feature_extractor_blocks_3_1_conv_dw); feature_extractor_blocks_3_1_conv_dw = None
dequantize_9 = feature_extractor_blocks_3_1_act2.dequantize(); feature_extractor_blocks_3_1_act2 = None
feature_extractor_blocks_3_1_se = getattr(getattr(self.feature_extractor.blocks, "3"), "1").se(dequantize_9); dequantize_9 = None
feature_extractor_blocks_3_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_scale_0
feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0
feature_extractor_blocks_3_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_dtype_0
quantize_per_tensor_10 = torch.quantize_per_tensor(feature_extractor_blocks_3_1_se, feature_extractor_blocks_3_1_conv_pwl_input_scale_0, feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0, |
st183835 | (part 3 of 3)
feature_extractor_blocks_3_1_conv_pwl_input_dtype_0); feature_extractor_blocks_3_1_se = feature_extractor_blocks_3_1_conv_pwl_input_scale_0 = feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_1_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_3_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_pwl(quantize_per_tensor_10); quantize_per_tensor_10 = None
feature_extractor_blocks_3_1_scale_0 = self.feature_extractor_blocks_3_1_scale_0
feature_extractor_blocks_3_1_zero_point_0 = self.feature_extractor_blocks_3_1_zero_point_0
add_4 = torch.ops.quantized.add(feature_extractor_blocks_3_1_conv_pwl, feature_extractor_blocks_3_0_conv_pwl, feature_extractor_blocks_3_1_scale_0, feature_extractor_blocks_3_1_zero_point_0); feature_extractor_blocks_3_1_conv_pwl = feature_extractor_blocks_3_0_conv_pwl = feature_extractor_blocks_3_1_scale_0 = feature_extractor_blocks_3_1_zero_point_0 = None
feature_extractor_blocks_3_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_pw(add_4)
feature_extractor_blocks_3_2_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "2").act1(feature_extractor_blocks_3_2_conv_pw); feature_extractor_blocks_3_2_conv_pw = None
feature_extractor_blocks_3_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_dw(feature_extractor_blocks_3_2_act1); feature_extractor_blocks_3_2_act1 = None
feature_extractor_blocks_3_2_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "2").act2(feature_extractor_blocks_3_2_conv_dw); feature_extractor_blocks_3_2_conv_dw = None
dequantize_10 = feature_extractor_blocks_3_2_act2.dequantize(); feature_extractor_blocks_3_2_act2 = None
feature_extractor_blocks_3_2_se = getattr(getattr(self.feature_extractor.blocks, "3"), "2").se(dequantize_10); dequantize_10 = None
feature_extractor_blocks_3_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_scale_0
feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0
feature_extractor_blocks_3_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_dtype_0
quantize_per_tensor_11 = torch.quantize_per_tensor(feature_extractor_blocks_3_2_se, feature_extractor_blocks_3_2_conv_pwl_input_scale_0, feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_2_conv_pwl_input_dtype_0); feature_extractor_blocks_3_2_se = feature_extractor_blocks_3_2_conv_pwl_input_scale_0 = feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_2_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_3_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_pwl(quantize_per_tensor_11); quantize_per_tensor_11 = None
feature_extractor_blocks_3_2_scale_0 = self.feature_extractor_blocks_3_2_scale_0
feature_extractor_blocks_3_2_zero_point_0 = self.feature_extractor_blocks_3_2_zero_point_0
add_5 = torch.ops.quantized.add(feature_extractor_blocks_3_2_conv_pwl, add_4, feature_extractor_blocks_3_2_scale_0, feature_extractor_blocks_3_2_zero_point_0); feature_extractor_blocks_3_2_conv_pwl = add_4 = feature_extractor_blocks_3_2_scale_0 = feature_extractor_blocks_3_2_zero_point_0 = None
feature_extractor_blocks_3_3_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_pw(add_5)
feature_extractor_blocks_3_3_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "3").act1(feature_extractor_blocks_3_3_conv_pw); feature_extractor_blocks_3_3_conv_pw = None
feature_extractor_blocks_3_3_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_dw(feature_extractor_blocks_3_3_act1); feature_extractor_blocks_3_3_act1 = None
feature_extractor_blocks_3_3_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "3").act2(feature_extractor_blocks_3_3_conv_dw); feature_extractor_blocks_3_3_conv_dw = None
dequantize_11 = feature_extractor_blocks_3_3_act2.dequantize(); feature_extractor_blocks_3_3_act2 = None
feature_extractor_blocks_3_3_se = getattr(getattr(self.feature_extractor.blocks, "3"), "3").se(dequantize_11); dequantize_11 = None
feature_extractor_blocks_3_3_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_scale_0
feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0
feature_extractor_blocks_3_3_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_dtype_0
quantize_per_tensor_12 = torch.quantize_per_tensor(feature_extractor_blocks_3_3_se, feature_extractor_blocks_3_3_conv_pwl_input_scale_0, feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_3_conv_pwl_input_dtype_0); feature_extractor_blocks_3_3_se = feature_extractor_blocks_3_3_conv_pwl_input_scale_0 = feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_3_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_3_3_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_pwl(quantize_per_tensor_12); quantize_per_tensor_12 = None
feature_extractor_blocks_3_3_scale_0 = self.feature_extractor_blocks_3_3_scale_0
feature_extractor_blocks_3_3_zero_point_0 = self.feature_extractor_blocks_3_3_zero_point_0
add_6 = torch.ops.quantized.add(feature_extractor_blocks_3_3_conv_pwl, add_5, feature_extractor_blocks_3_3_scale_0, feature_extractor_blocks_3_3_zero_point_0); feature_extractor_blocks_3_3_conv_pwl = add_5 = feature_extractor_blocks_3_3_scale_0 = feature_extractor_blocks_3_3_zero_point_0 = None
feature_extractor_blocks_4_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_pw(add_6); add_6 = None
feature_extractor_blocks_4_0_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "0").act1(feature_extractor_blocks_4_0_conv_pw); feature_extractor_blocks_4_0_conv_pw = None
feature_extractor_blocks_4_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_dw(feature_extractor_blocks_4_0_act1); feature_extractor_blocks_4_0_act1 = None
feature_extractor_blocks_4_0_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "0").act2(feature_extractor_blocks_4_0_conv_dw); feature_extractor_blocks_4_0_conv_dw = None
dequantize_12 = feature_extractor_blocks_4_0_act2.dequantize(); feature_extractor_blocks_4_0_act2 = None
feature_extractor_blocks_4_0_se = getattr(getattr(self.feature_extractor.blocks, "4"), "0").se(dequantize_12); dequantize_12 = None
feature_extractor_blocks_4_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_scale_0
feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_4_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_dtype_0
quantize_per_tensor_13 = torch.quantize_per_tensor(feature_extractor_blocks_4_0_se, feature_extractor_blocks_4_0_conv_pwl_input_scale_0, feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_0_conv_pwl_input_dtype_0); feature_extractor_blocks_4_0_se = feature_extractor_blocks_4_0_conv_pwl_input_scale_0 = feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_4_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_pwl(quantize_per_tensor_13); quantize_per_tensor_13 = None
feature_extractor_blocks_4_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_pw(feature_extractor_blocks_4_0_conv_pwl)
feature_extractor_blocks_4_1_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "1").act1(feature_extractor_blocks_4_1_conv_pw); feature_extractor_blocks_4_1_conv_pw = None
feature_extractor_blocks_4_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_dw(feature_extractor_blocks_4_1_act1); feature_extractor_blocks_4_1_act1 = None
feature_extractor_blocks_4_1_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "1").act2(feature_extractor_blocks_4_1_conv_dw); feature_extractor_blocks_4_1_conv_dw = None
dequantize_13 = feature_extractor_blocks_4_1_act2.dequantize(); feature_extractor_blocks_4_1_act2 = None
feature_extractor_blocks_4_1_se = getattr(getattr(self.feature_extractor.blocks, "4"), "1").se(dequantize_13); dequantize_13 = None
feature_extractor_blocks_4_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_scale_0
feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0
feature_extractor_blocks_4_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_dtype_0
quantize_per_tensor_14 = torch.quantize_per_tensor(feature_extractor_blocks_4_1_se, feature_extractor_blocks_4_1_conv_pwl_input_scale_0, feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_1_conv_pwl_input_dtype_0); feature_extractor_blocks_4_1_se = feature_extractor_blocks_4_1_conv_pwl_input_scale_0 = feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_1_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_4_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_pwl(quantize_per_tensor_14); quantize_per_tensor_14 = None
feature_extractor_blocks_4_1_scale_0 = self.feature_extractor_blocks_4_1_scale_0
feature_extractor_blocks_4_1_zero_point_0 = self.feature_extractor_blocks_4_1_zero_point_0
add_7 = torch.ops.quantized.add(feature_extractor_blocks_4_1_conv_pwl, feature_extractor_blocks_4_0_conv_pwl, feature_extractor_blocks_4_1_scale_0, feature_extractor_blocks_4_1_zero_point_0); feature_extractor_blocks_4_1_conv_pwl = feature_extractor_blocks_4_0_conv_pwl = feature_extractor_blocks_4_1_scale_0 = feature_extractor_blocks_4_1_zero_point_0 = None
feature_extractor_blocks_4_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_pw(add_7)
feature_extractor_blocks_4_2_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "2").act1(feature_extractor_blocks_4_2_conv_pw); feature_extractor_blocks_4_2_conv_pw = None
feature_extractor_blocks_4_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_dw(feature_extractor_blocks_4_2_act1); feature_extractor_blocks_4_2_act1 = None
feature_extractor_blocks_4_2_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "2").act2(feature_extractor_blocks_4_2_conv_dw); feature_extractor_blocks_4_2_conv_dw = None
dequantize_14 = feature_extractor_blocks_4_2_act2.dequantize(); feature_extractor_blocks_4_2_act2 = None
feature_extractor_blocks_4_2_se = getattr(getattr(self.feature_extractor.blocks, "4"), "2").se(dequantize_14); dequantize_14 = None
feature_extractor_blocks_4_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_scale_0
feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0
feature_extractor_blocks_4_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_dtype_0
quantize_per_tensor_15 = torch.quantize_per_tensor(feature_extractor_blocks_4_2_se, feature_extractor_blocks_4_2_conv_pwl_input_scale_0, feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_2_conv_pwl_input_dtype_0); feature_extractor_blocks_4_2_se = feature_extractor_blocks_4_2_conv_pwl_input_scale_0 = feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_2_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_4_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_pwl(quantize_per_tensor_15); quantize_per_tensor_15 = None
feature_extractor_blocks_4_2_scale_0 = self.feature_extractor_blocks_4_2_scale_0
feature_extractor_blocks_4_2_zero_point_0 = self.feature_extractor_blocks_4_2_zero_point_0
add_8 = torch.ops.quantized.add(feature_extractor_blocks_4_2_conv_pwl, add_7, feature_extractor_blocks_4_2_scale_0, feature_extractor_blocks_4_2_zero_point_0); feature_extractor_blocks_4_2_conv_pwl = add_7 = feature_extractor_blocks_4_2_scale_0 = feature_extractor_blocks_4_2_zero_point_0 = None
feature_extractor_blocks_5_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_pw(add_8); add_8 = None
feature_extractor_blocks_5_0_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "0").act1(feature_extractor_blocks_5_0_conv_pw); feature_extractor_blocks_5_0_conv_pw = None
feature_extractor_blocks_5_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_dw(feature_extractor_blocks_5_0_act1); feature_extractor_blocks_5_0_act1 = None
feature_extractor_blocks_5_0_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "0").act2(feature_extractor_blocks_5_0_conv_dw); feature_extractor_blocks_5_0_conv_dw = None
dequantize_15 = feature_extractor_blocks_5_0_act2.dequantize(); feature_extractor_blocks_5_0_act2 = None
feature_extractor_blocks_5_0_se = getattr(getattr(self.feature_extractor.blocks, "5"), "0").se(dequantize_15); dequantize_15 = None
feature_extractor_blocks_5_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_scale_0
feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_5_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_dtype_0
quantize_per_tensor_16 = torch.quantize_per_tensor(feature_extractor_blocks_5_0_se, feature_extractor_blocks_5_0_conv_pwl_input_scale_0, feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_0_conv_pwl_input_dtype_0); feature_extractor_blocks_5_0_se = feature_extractor_blocks_5_0_conv_pwl_input_scale_0 = feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_5_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_pwl(quantize_per_tensor_16); quantize_per_tensor_16 = None
feature_extractor_blocks_5_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_pw(feature_extractor_blocks_5_0_conv_pwl)
feature_extractor_blocks_5_1_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "1").act1(feature_extractor_blocks_5_1_conv_pw); feature_extractor_blocks_5_1_conv_pw = None
feature_extractor_blocks_5_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_dw(feature_extractor_blocks_5_1_act1); feature_extractor_blocks_5_1_act1 = None
feature_extractor_blocks_5_1_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "1").act2(feature_extractor_blocks_5_1_conv_dw); feature_extractor_blocks_5_1_conv_dw = None
dequantize_16 = feature_extractor_blocks_5_1_act2.dequantize(); feature_extractor_blocks_5_1_act2 = None
feature_extractor_blocks_5_1_se = getattr(getattr(self.feature_extractor.blocks, "5"), "1").se(dequantize_16); dequantize_16 = None
feature_extractor_blocks_5_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_scale_0
feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0
feature_extractor_blocks_5_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_dtype_0
quantize_per_tensor_17 = torch.quantize_per_tensor(feature_extractor_blocks_5_1_se, feature_extractor_blocks_5_1_conv_pwl_input_scale_0, feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_1_conv_pwl_input_dtype_0); feature_extractor_blocks_5_1_se = feature_extractor_blocks_5_1_conv_pwl_input_scale_0 = feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_1_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_5_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_pwl(quantize_per_tensor_17); quantize_per_tensor_17 = None
feature_extractor_blocks_5_1_scale_0 = self.feature_extractor_blocks_5_1_scale_0
feature_extractor_blocks_5_1_zero_point_0 = self.feature_extractor_blocks_5_1_zero_point_0
add_9 = torch.ops.quantized.add(feature_extractor_blocks_5_1_conv_pwl, feature_extractor_blocks_5_0_conv_pwl, feature_extractor_blocks_5_1_scale_0, feature_extractor_blocks_5_1_zero_point_0); feature_extractor_blocks_5_1_conv_pwl = feature_extractor_blocks_5_0_conv_pwl = feature_extractor_blocks_5_1_scale_0 = feature_extractor_blocks_5_1_zero_point_0 = None
feature_extractor_blocks_5_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_pw(add_9)
feature_extractor_blocks_5_2_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "2").act1(feature_extractor_blocks_5_2_conv_pw); feature_extractor_blocks_5_2_conv_pw = None
feature_extractor_blocks_5_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_dw(feature_extractor_blocks_5_2_act1); feature_extractor_blocks_5_2_act1 = None
feature_extractor_blocks_5_2_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "2").act2(feature_extractor_blocks_5_2_conv_dw); feature_extractor_blocks_5_2_conv_dw = None
dequantize_17 = feature_extractor_blocks_5_2_act2.dequantize(); feature_extractor_blocks_5_2_act2 = None
feature_extractor_blocks_5_2_se = getattr(getattr(self.feature_extractor.blocks, "5"), "2").se(dequantize_17); dequantize_17 = None
feature_extractor_blocks_5_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_scale_0
feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0
feature_extractor_blocks_5_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_dtype_0
quantize_per_tensor_18 = torch.quantize_per_tensor(feature_extractor_blocks_5_2_se, feature_extractor_blocks_5_2_conv_pwl_input_scale_0, feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_2_conv_pwl_input_dtype_0); feature_extractor_blocks_5_2_se = feature_extractor_blocks_5_2_conv_pwl_input_scale_0 = feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_2_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_5_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_pwl(quantize_per_tensor_18); quantize_per_tensor_18 = None
feature_extractor_blocks_5_2_scale_0 = self.feature_extractor_blocks_5_2_scale_0
feature_extractor_blocks_5_2_zero_point_0 = self.feature_extractor_blocks_5_2_zero_point_0
add_10 = torch.ops.quantized.add(feature_extractor_blocks_5_2_conv_pwl, add_9, feature_extractor_blocks_5_2_scale_0, feature_extractor_blocks_5_2_zero_point_0); feature_extractor_blocks_5_2_conv_pwl = add_9 = feature_extractor_blocks_5_2_scale_0 = feature_extractor_blocks_5_2_zero_point_0 = None
feature_extractor_blocks_6_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_pw(add_10); add_10 = None
feature_extractor_blocks_6_0_act1 = getattr(getattr(self.feature_extractor.blocks, "6"), "0").act1(feature_extractor_blocks_6_0_conv_pw); feature_extractor_blocks_6_0_conv_pw = None
feature_extractor_blocks_6_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_dw(feature_extractor_blocks_6_0_act1); feature_extractor_blocks_6_0_act1 = None
feature_extractor_blocks_6_0_act2 = getattr(getattr(self.feature_extractor.blocks, "6"), "0").act2(feature_extractor_blocks_6_0_conv_dw); feature_extractor_blocks_6_0_conv_dw = None
dequantize_18 = feature_extractor_blocks_6_0_act2.dequantize(); feature_extractor_blocks_6_0_act2 = None
feature_extractor_blocks_6_0_se = getattr(getattr(self.feature_extractor.blocks, "6"), "0").se(dequantize_18); dequantize_18 = None
feature_extractor_blocks_6_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_scale_0
feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0
feature_extractor_blocks_6_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_dtype_0
quantize_per_tensor_19 = torch.quantize_per_tensor(feature_extractor_blocks_6_0_se, feature_extractor_blocks_6_0_conv_pwl_input_scale_0, feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_6_0_conv_pwl_input_dtype_0); feature_extractor_blocks_6_0_se = feature_extractor_blocks_6_0_conv_pwl_input_scale_0 = feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_6_0_conv_pwl_input_dtype_0 = None
feature_extractor_blocks_6_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_pwl(quantize_per_tensor_19); quantize_per_tensor_19 = None
feature_extractor_conv_head = self.feature_extractor.conv_head(feature_extractor_blocks_6_0_conv_pwl); feature_extractor_blocks_6_0_conv_pwl = None
feature_extractor_act2 = self.feature_extractor.act2(feature_extractor_conv_head); feature_extractor_conv_head = None
dequantize_19 = feature_extractor_act2.dequantize(); feature_extractor_act2 = None
feature_extractor_global_pool_pool = self.feature_extractor.global_pool.pool(dequantize_19); dequantize_19 = None
feature_extractor_classifier = self.feature_extractor.classifier(feature_extractor_global_pool_pool); feature_extractor_global_pool_pool = None
getattr_1 = feature_extractor_classifier.shape
getitem = getattr_1[-2]; getattr_1 = None
max_pool2d_1 = torch.nn.functional.max_pool2d(feature_extractor_classifier, (getitem, 1), stride = 1, padding = 0, dilation = 1, ceil_mode = False, return_indices = False); feature_extractor_classifier = getitem = None
squeeze_1 = max_pool2d_1.squeeze(2); max_pool2d_1 = None
permute = squeeze_1.permute(0, 2, 1); squeeze_1 = None
rnn = self.rnn(permute); permute = None
getitem_1 = rnn[0]
getitem_2 = rnn[1]; rnn = None
relu_1 = self.relu(getitem_1); getitem_1 = None
dropout_1 = self.dropout(relu_1); relu_1 = None
fc_input_scale_0 = self.fc_input_scale_0
fc_input_zero_point_0 = self.fc_input_zero_point_0
fc_input_dtype_0 = self.fc_input_dtype_0
quantize_per_tensor_20 = torch.quantize_per_tensor(dropout_1, fc_input_scale_0, fc_input_zero_point_0, fc_input_dtype_0); dropout_1 = fc_input_scale_0 = fc_input_zero_point_0 = fc_input_dtype_0 = None
fc = self.fc(quantize_per_tensor_20); quantize_per_tensor_20 = None
dequantize_20 = fc.dequantize(); fc = None
return dequantize_20
st183836 | fbgemm does have a reduce_range config: pytorch/qconfig.py at master · pytorch/pytorch · GitHub, so if you are using the qnnpack qconfig but running on the fbgemm backend (which is the default backend), I think it will have overflows. But I'm not sure why setting the backend to qnnpack would result in an error; did you set the qengine before convert?
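To make that concrete, here is a minimal sketch (a hypothetical toy model, not the network from this thread) of keeping the qconfig and the runtime engine consistent; the key point is that the qconfig string and torch.backends.quantized.engine name the same backend, and that the engine is set before convert and inference:
import torch
from torch import nn

float_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
model = torch.quantization.QuantWrapper(float_model)   # adds Quant/DeQuant stubs around the model
torch.backends.quantized.engine = "qnnpack"            # must match the qconfig below
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")
prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 3, 32, 32))                    # calibration pass
quantized = torch.quantization.convert(prepared)
quantized(torch.randn(1, 3, 32, 32))                   # runs on the qnnpack engine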
st183837 | Yes I did set the qengine before convert. Perhaps I can make you a full working example with code + data and all? If so, I’ll go ahead and make one on Github and share it with you |
st183838 | I’m trying to do QAT for yolov5, but I got the following error:
RuntimeError: Expected self.scalar_type() == ScalarType::Float to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
This is the full log:
Traceback (most recent call last):
File "train.py", line 464, in <module>
train(hyp, opt, device, tb_writer)
File "train.py", line 269, in train
pred = model(imgs) # forward
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/feiyu/yolov5/models/yolo.py", line 132, in forward
x = self.forward_once(x, profile) # single-scale inference, train
File "/home/feiyu/yolov5/models/yolo.py", line 150, in forward_once
x = m(x) # run
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/feiyu/yolov5/models/common.py", line 114, in forward
return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/feiyu/yolov5/models/common.py", line 48, in fuseforward
return self.act(self.conv(x))
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 731, in _call_impl
hook_result = hook(self, input, result)
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/quantization/quantize.py", line 82, in _observer_forward_hook
return self.activation_post_process(output)
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/feiyu/anaconda3/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 104, in forward
self.quant_max)
RuntimeError: Expected self.scalar_type() == ScalarType::Float to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
And here is the code where the error occurs:
class Conv(nn.Module):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super().__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
self.bn = nn.BatchNorm2d(c2)
self.act = nn.Hardswish() if act else nn.Identity()
def forward(self, x):
return self.act(self.bn(self.conv(x)))
def fuseforward(self, x):
return self.act(self.conv(x))
I tried using nn.ReLU to replace nn.Hardswish, but it didn't work. And I checked the input x dtype, which is always torch.float32.
st183839 | Solved by feiyuhuahuo in post #3
My bad. I didn't notice that the forward pass is wrapped by torch.cuda.amp.autocast. It's OK now.
st183840 | I directly added X = X.float() above where the error appears in torch/quantization/fake_quantize.py, because I found the dtype of X is torch.float16. Surprisingly, it works: the model can be trained successfully.
if self.fake_quant_enabled[0] == 1:
if self.qscheme == torch.per_channel_symmetric or self.qscheme == torch.per_channel_affine:
X = torch.fake_quantize_per_channel_affine(X, self.scale, self.zero_point,
self.ch_axis, self.quant_min, self.quant_max)
else:
import pdb
# try:
# I modified the code here. $$$$$$$$$$$$$$$$$
X= X.float()
X = torch.fake_quantize_per_tensor_affine(X, float(self.scale),
int(self.zero_point), self.quant_min,
self.quant_max)
# except:
# pdb.set_trace()
return X
Is this normal?
I haven't finished training yet, so I can't tell whether I will get a useful model. My PyTorch version is 1.7.0.
st183841 | My bad. I didn't notice that the forward pass is wrapped by torch.cuda.amp.autocast. It's OK now.
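For anyone else landing here, a minimal sketch of the workaround, assuming (as in this thread) that the prepared QAT model's forward is currently wrapped in torch.cuda.amp.autocast; the fake-quantize observers expect float32, so keep that part out of autocast (qat_model and imgs are placeholders for the prepared model and the batch from the training loop):
import torch

with torch.cuda.amp.autocast(enabled=False):
    pred = qat_model(imgs.float())   # fake_quantize now sees float32 instead of float16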
st183842 | hi, I met the same problem. Could you please paste the updated code here, thanks. |
st183843 | Hi,
I am trying to use QAT to speed up a segmentation model on CPU.
The preparation, training and conversion to a quantized model all seem to work fine: a negligible drop in performance and a reduction in model size by ~4x.
However, I am getting some strange latency measurements with the quantized model, where larger images take more inference time than with the original model.
Here are a few numbers for a MobileNetV3 large with dilation and reduced tail (see Everything you need to know about TorchVision’s MobileNetV3 implementation | PyTorch) with the LR-ASPP head on top for the segmentation:
Fused model CPU latency:
256x256: 76 ms
512x512: 206 ms
1024x1024: 706 ms
Quantized model CPU latency:
256x256: 53 ms
512x512: 211 ms
1024x1024: 849 ms
These numbers were obtained with torch.set_num_threads(4) on a Ryzen 7 3700X.
For some reason, at higher resolutions, the model is slower with quantization. I am also using torchvision’s implementation of quantizable MobileNetV3 (vision/mobilenetv3.py at master · pytorch/vision · GitHub).
Any idea where this could come from? |
st183844 | After some investigation, it seems that the culprit here is dilation.
When removing dilation from MobileNetV3 (used in the last 3 blocks), the latency drops significantly. Here are the latency measurements:
Fused model CPU latency:
256x256: 62 ms
512x512: 148 ms
1024x1024: 494 ms
Quantized model CPU latency:
256x256: 5 ms
512x512: 16 ms
1024x1024: 59 ms
Evaluating a simple Conv(3, 64, kernel_size=5, stride=2) → BN → ReLU on 512x512 inputs, we get the following profiles:
Fused model without dilation:
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::conv2d 0.10% 8.000us 70.95% 5.660ms 5.660ms 1
aten::convolution 0.15% 12.000us 70.85% 5.652ms 5.652ms 1
aten::_convolution 0.15% 12.000us 70.70% 5.640ms 5.640ms 1
aten::mkldnn_convolution 70.40% 5.616ms 70.55% 5.628ms 5.628ms 1
aten::batch_norm 0.13% 10.000us 23.15% 1.847ms 1.847ms 1
aten::_batch_norm_impl_index 0.11% 9.000us 23.03% 1.837ms 1.837ms 1
aten::native_batch_norm 22.74% 1.814ms 22.90% 1.827ms 1.827ms 1
aten::relu_ 0.20% 16.000us 5.89% 470.000us 470.000us 1
aten::threshold_ 5.69% 454.000us 5.69% 454.000us 454.000us 1
aten::empty 0.19% 15.000us 0.19% 15.000us 3.000us 5
aten::empty_like 0.11% 9.000us 0.16% 13.000us 4.333us 3
aten::as_strided_ 0.03% 2.000us 0.03% 2.000us 2.000us 1
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 7.977ms
Quantized model without dilation:
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
forward 0.66% 24.000us 100.00% 3.658ms 3.658ms 1
quantized::conv2d_relu 62.41% 2.283ms 76.85% 2.811ms 2.811ms 1
aten::dequantize 18.92% 692.000us 18.94% 693.000us 693.000us 1
aten::contiguous 0.16% 6.000us 14.27% 522.000us 522.000us 1
aten::copy_ 13.42% 491.000us 13.48% 493.000us 493.000us 1
aten::quantize_per_tensor 3.14% 115.000us 3.14% 115.000us 115.000us 1
aten::empty_like 0.33% 12.000us 0.63% 23.000us 23.000us 1
aten::item 0.19% 7.000us 0.41% 15.000us 7.500us 2
aten::_local_scalar_dense 0.22% 8.000us 0.22% 8.000us 4.000us 2
aten::qscheme 0.16% 6.000us 0.16% 6.000us 2.000us 3
aten::_empty_affine_quantized 0.14% 5.000us 0.14% 5.000us 2.500us 2
aten::q_scale 0.11% 4.000us 0.11% 4.000us 2.000us 2
aten::q_zero_point 0.08% 3.000us 0.08% 3.000us 1.500us 2
aten::empty 0.05% 2.000us 0.05% 2.000us 1.000us 2
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 3.658ms
Fused model with dilation:
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
aten::conv2d 0.08% 9.000us 76.87% 8.417ms 8.417ms 1
aten::convolution 0.07% 8.000us 76.79% 8.408ms 8.408ms 1
aten::_convolution 0.11% 12.000us 76.72% 8.400ms 8.400ms 1
aten::mkldnn_convolution 76.53% 8.379ms 76.61% 8.388ms 8.388ms 1
aten::batch_norm 0.07% 8.000us 16.21% 1.775ms 1.775ms 1
aten::_batch_norm_impl_index 0.08% 9.000us 16.14% 1.767ms 1.767ms 1
aten::native_batch_norm 15.94% 1.745ms 16.04% 1.756ms 1.756ms 1
aten::relu_ 0.16% 18.000us 6.91% 757.000us 757.000us 1
aten::threshold_ 6.75% 739.000us 6.75% 739.000us 739.000us 1
aten::empty 0.11% 12.000us 0.11% 12.000us 2.400us 5
aten::empty_like 0.07% 8.000us 0.10% 11.000us 3.667us 3
aten::as_strided_ 0.02% 2.000us 0.02% 2.000us 2.000us 1
-------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 10.949ms
Quantized model with dilation:
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg # of Calls
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
forward 0.24% 24.000us 100.00% 9.854ms 9.854ms 1
quantized::conv2d_relu 79.02% 7.787ms 86.20% 8.494ms 8.494ms 1
aten::dequantize 12.05% 1.187ms 12.17% 1.199ms 1.199ms 1
aten::contiguous 0.07% 7.000us 7.10% 700.000us 700.000us 1
aten::copy_ 6.80% 670.000us 6.80% 670.000us 670.000us 1
aten::quantize_per_tensor 1.26% 124.000us 1.26% 124.000us 124.000us 1
aten::empty_like 0.13% 13.000us 0.23% 23.000us 23.000us 1
aten::item 0.06% 6.000us 0.13% 13.000us 6.500us 2
aten::empty 0.13% 13.000us 0.13% 13.000us 6.500us 2
aten::_local_scalar_dense 0.07% 7.000us 0.07% 7.000us 3.500us 2
aten::qscheme 0.04% 4.000us 0.04% 4.000us 1.333us 3
aten::q_zero_point 0.04% 4.000us 0.04% 4.000us 2.000us 2
aten::q_scale 0.04% 4.000us 0.04% 4.000us 2.000us 2
aten::_empty_affine_quantized 0.04% 4.000us 0.04% 4.000us 2.000us 2
--------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 9.854ms
From this data we can observe two things:
5x5 convolutions are much slower with dilation on CPU
Quantized 5x5 convolutions take an even bigger performance hit from dilation.
All of this was tested with PyTorch 1.8.1. |
st183845 | Hi,
I'm facing the exact same problem. It seems that it is depthwise convolution combined with dilation that makes quantization much slower; a normal convolution with dilation is fine.
I tested a single convolution block with in_channels = out_channels = 96, kernel size = 5 and dilation = 5 (a rough sketch of this kind of micro-benchmark is below). Results:
Convolution block before quantization: takes ~358ms;
Convolution block after quantization: takes ~64ms;
Depthwise separable convolution block before quantization: takes ~154ms;
Depthwise separable convolution block after quantization: takes ~447ms;
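A rough sketch of that kind of micro-benchmark (my own toy version; shapes, iteration counts and padding are assumptions, not the exact block used above):
import time
import torch
from torch import nn

def bench(groups):
    block = torch.quantization.QuantWrapper(nn.Sequential(
        nn.Conv2d(96, 96, 5, padding=10, dilation=5, groups=groups, bias=False),
        nn.BatchNorm2d(96),
        nn.ReLU(),
    )).eval()
    x = torch.randn(1, 96, 128, 128)

    def timed(m):
        with torch.no_grad():
            m(x)                      # warm-up
            t = time.time()
            for _ in range(10):
                m(x)
        return (time.time() - t) / 10

    fp32_t = timed(block)
    block.qconfig = torch.quantization.get_default_qconfig("fbgemm")
    prepared = torch.quantization.prepare(block)
    prepared(x)                       # calibration pass
    quantized = torch.quantization.convert(prepared)
    int8_t = timed(quantized)
    print("groups={}: fp32 {:.4f}s, int8 {:.4f}s".format(groups, fp32_t, int8_t))

bench(groups=1)    # regular convolution with dilation
bench(groups=96)   # depthwise convolution with dilation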
st183846 | Indeed, it seems it’s worse when the number of groups is larger than 1.
I have opened an issue on github Quantized conv2d with dilation and groups much slower than float32 · Issue #59730 · pytorch/pytorch · GitHub 3 |
st183847 | torch.backends.quantized.supported_engines
Out[3]: [‘qnnpack’, ‘none’]
fbgemm is not supported.
I tried to install fbgemm on my Ubuntu 20 and reinstalled pytorch 1.8. The problem is still there.
And I also tried to use qnnpack to run Quantization Aware Training on BERT. No matter whether I set the Embedding layer's qconfig to None or to float16_dynamic_qconfig, I got an error message:
Didn’t find engine for operation quantized::linear_prepack NoQEngine |
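(A minimal sketch of how the engine selection is usually done before any prepack op runs; a placeholder model rather than BERT, and whether this covers the BERT case above is an assumption on my part:)
import torch
from torch import nn

print(torch.backends.quantized.supported_engines)   # e.g. ['qnnpack', 'none']
torch.backends.quantized.engine = "qnnpack"          # select it before convert / quantize_dynamic

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(qmodel)                                        # quantized::linear_prepack now has an engine to dispatch to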
st183848 | In my test, if the input's (dtype quint8) zero point is large, for example 128, torch.nn.quantized.Conv2d gives a wrong result on Ubuntu 18.04 and Windows 10. Some output feature map points match the correct result, others do not, and the difference is much more than 1 or 2 (it is about 10 or 20).
If I set the input's zero point to a smaller value such as 75, the quantized conv2d becomes correct. However, different images have different ranges; for example, for an image whose quantized range is [-54, 81] the maximum working zero point is 75, while for [-66, 84] it is 69. There is no regular pattern.
However, on CentOS 7, the quantized Conv2d gives the correct result. The scripts and the Python, PyTorch and torchvision versions are exactly the same.
So I have no idea why. I would really appreciate it if anyone could help me.
My test CNN is ResNet50 and the data set is ImageNet ILSVRC2012.
st183849 | could you provide some test case so that we can reproduce the problem?
cc @dskhudia @Zafar |
st183850 | First, I make use of the pre-trained quantized ResNet50 model from torchvision.models.quantization.resnet
import torch
from torchvision.models.resnet import Bottleneck, BasicBlock, ResNet, model_urls
import torch.nn as nn
from torchvision.models.utils import load_state_dict_from_url
from torch.quantization import QuantStub, DeQuantStub, fuse_modules
from torch._jit_internal import Optional
__all__ = ['QuantizableResNet', 'resnet50']
quant_model_urls = {
'resnet50_fbgemm':
'https://download.pytorch.org/models/quantized/resnet50_fbgemm_bf931d71.pth',
}
def _replace_relu(module):
reassign = {}
for name, mod in module.named_children():
_replace_relu(mod)
# Checking for explicit type instead of instance
# as we only want to replace modules of the exact type
# not inherited classes
if type(mod) == nn.ReLU or type(mod) == nn.ReLU6:
reassign[name] = nn.ReLU(inplace=False)
for key, value in reassign.items():
module._modules[key] = value
def quantize_model(model, backend):
_dummy_input_data = torch.rand(1, 3, 299, 299)
if backend not in torch.backends.quantized.supported_engines:
raise RuntimeError("Quantized backend not supported ")
torch.backends.quantized.engine = backend
model.eval()
# Make sure that weight qconfig matches that of the serialized models
if backend == 'fbgemm':
model.qconfig = torch.quantization.QConfig(
activation=torch.quantization.default_observer,
weight=torch.quantization.default_per_channel_weight_observer)
elif backend == 'qnnpack':
model.qconfig = torch.quantization.QConfig(
activation=torch.quantization.default_observer,
weight=torch.quantization.default_weight_observer)
model.fuse_model()
torch.quantization.prepare(model, inplace=True)
model(_dummy_input_data)
torch.quantization.convert(model, inplace=True)
return
class QuantizableBottleneck(Bottleneck):
def __init__(self, *args, **kwargs):
super(QuantizableBottleneck, self).__init__(*args, **kwargs)
self.skip_add_relu = nn.quantized.FloatFunctional()
self.relu1 = nn.ReLU(inplace=False)
self.relu2 = nn.ReLU(inplace=False)
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out = self.skip_add_relu.add_relu(out, identity)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'relu1'],
['conv2', 'bn2', 'relu2'],
['conv3', 'bn3']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class QuantizableResNet(ResNet):
def __init__(self, *args, **kwargs):
super(QuantizableResNet, self).__init__(*args, **kwargs)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
# Ensure scriptability
# super(QuantizableResNet,self).forward(x)
# is not scriptable
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.fc(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, ['conv1', 'bn1', 'relu'], inplace=True)
for m in self.modules():
if type(m) == QuantizableBottleneck:
m.fuse_model()
def _resnet(arch, block, layers, pretrained, progress, quantize, **kwargs):
model = QuantizableResNet(block, layers, **kwargs)
_replace_relu(model)
if quantize:
# TODO use pretrained as a string to specify the backend
backend = 'fbgemm'
quantize_model(model, backend)
else:
assert pretrained in [True, False]
if pretrained:
if quantize:
model_url = quant_model_urls[arch + '_' + backend]
else:
model_url = model_urls[arch]
state_dict = load_state_dict_from_url(model_url,
progress=progress)
model.load_state_dict(state_dict)
return model
def resnet50(pretrained=False, progress=True, quantize=False, **kwargs):
r"""ResNet-50 model from
`"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _resnet('resnet50', QuantizableBottleneck, [3, 4, 6, 3], pretrained, progress,
quantize, **kwargs)
And then I do inference on an image from ImageNet:
from PIL import Image
import torch.backends.quantized
import torchvision.transforms as transforms
from qresnet50_original import resnet50
import numpy as np
model = resnet50(pretrained=True, quantize=True)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
img = Image.open("n01440764-0-tench.JPEG")
img_t = transform(img)
batch_t = torch.unsqueeze(img_t, 0)
# quantized input, and to test zero point effect, set zp 128
pre_quant = model.quant
pre_quant.zero_point = torch.tensor([128])
q_input = pre_quant(batch_t)
conv1_fp = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=True)
conv1_fp.weight = torch.nn.Parameter(model.conv1.weight().dequantize())
conv1_fp.bias = torch.nn.Parameter(model.conv1.bias())
relu = torch.nn.ReLU(inplace=True)
scale = model.conv1.scale
correct_output = np.round(relu(conv1_fp(q_input.dequantize()) / scale).detach().numpy())
output = np.round((model.conv1(q_input).detach().dequantize() / scale).numpy())
diff = np.abs(correct_output-output)
print(np.sum(diff))
If I change pre_quant.zero_point to 128 (in fact, anything larger than 75 causes a difference), the difference between the correct output and the quantized output becomes large.
The test image is |
st183851 | My PC info: OS: Ubuntu 18.04, CPU: i7-9700K, wrong output
Laptop info: OS: Windows 10, CPU: i7-8565U, wrong output
My GPU Server: OS: CentOS 7.7, CPU: Intel® Xeon® Gold 6230, Correct output
PyTorch tested: 1.3.1, 1.4, 1.5. Python tested: 3.6, 3.7 |
st183852 | The scale and zero point for input are calculated after a calibration step across a representative set of images. By default, for PyTorch quantization, these are chosen such that quantized values are in the range [0, 127] for activations and zero_point is somewhere in the same range (e.g., 57 for the model here). We restrict quantized activation values to [0, 127] instead of [0, 255] for uint8 to avoid the saturation explained here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/qconv.cpp#L633-L661 16
When you force the zero_point to 128, the quantized activations fall in the range [0, 255] and saturation can happen. BTW, any reason for forcing zero_point to a large value? Let the calibration step pick the scale and zero_points for the model you are quantizing.
It is strange that you see different behavior on the cpus you mentioned. I would expect the same behavior on all the CPUs you mentioned. |
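To make the [0, 127] restriction visible, here is a small illustration using the observer directly (my own example, not tied to the model in this thread); reduce_range=True is the flag the default fbgemm qconfig uses for its activation observer:
import torch

x = torch.randn(1000)
obs_full = torch.quantization.MinMaxObserver(dtype=torch.quint8, reduce_range=False)
obs_reduced = torch.quantization.MinMaxObserver(dtype=torch.quint8, reduce_range=True)
obs_full(x)
obs_reduced(x)
# reduce_range=True maps activations onto [0, 127] instead of [0, 255], so the
# scale roughly doubles and the 16-bit intermediate products described in the
# linked qconv.cpp comment are kept from saturating.
print(obs_full.calculate_qparams())      # (scale, zero_point) for the full uint8 range
print(obs_reduced.calculate_qparams())   # larger scale, reduced range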
st183853 | Well, I am trying to use a symmetric quantization scheme, which forces the zero point to 128, because the asymmetric quantization mode causes some other problems for our project. In this case, I supposed the range would be [-128, 127], and after ReLU it would become [0, 127], but this seems to be wrong. In fact, I would prefer to use qint8 as the feature map's data type, but qint8 is not supported.
So can I say that in fbgemm, qconv computes the unsigned integer summation first and then subtracts the zero-point terms from that summation? And that summation may cause saturation?
Also, the different behaviors confuse me… |
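For what it's worth, the 128 comes from the unsigned dtype rather than from the symmetric scheme itself; a tiny illustration (my own example) of the two dtypes under the same symmetric scale:
import torch

x = torch.randn(8)
q_s8 = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)
q_u8 = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
# qint8 can center on 0, while quint8 has to shift its zero point to the middle
# of [0, 255] to represent negative values; the represented real values are identical.
print(q_s8.q_zero_point(), q_u8.q_zero_point())                 # 0 128
print(torch.allclose(q_s8.dequantize(), q_u8.dequantize()))     # True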
st183854 | For activations (uint8), the range is [0, 127] and after ReLU it stays the same. Which summation are you talking about? accumulations during convolution? accumulations (activations times weight matrix accumulations) are done in int32. |
st183855 | Okay, so the saturation happens during multiplication, but not the accumulations during convolution. But I supposed that before convolution the uint8 range [0, 255] would have the zero point 128 subtracted from it, making the range [-128, 127]. However, that seems to be wrong.
Besides, when using nn.quantized.Conv2d but not nn.intrinsic.quantized.Conv2dReLU, the feature map output range should be [-128, 127] (symmetric quantization mode).
Actually, I am confused about why uint8 is used rather than int8.
st183856 | Okay, so the saturation happens during multiplication, but not the accumulations during convolution.
The saturation I mentioned earlier happens during multiplication. However, for a large number of channels even the accumulations into an int32 might overflow. This should be very rare and should only happen for a very large number of channels. In that case there is no saturation, but the accumulator overflows and wraps around.
Actually, I am confused about why uint8 is used rather than int8.
We are using uint8 for activations because the x64 vector instruction to multiply two 8-bit integers (vpmaddubsw) accepts uint8 for one input and int8 for another. We chose uint8 for activations and int8 for weights. int8 for activation can also be supported by a preprocessing step, i.e., by converting int8 to uint8 and adjusting zero_point for activations (add 128 to all int8 elements and new_zero_point = old_zero_point + 128 in the preprocessing step). |
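A small numeric sketch of that preprocessing step (my own illustration of the description above, not FBGEMM code):
import numpy as np

a_int8 = np.array([-128, -5, 0, 7, 127], dtype=np.int8)   # int8 activations
old_zero_point = 3
scale = 0.1

a_uint8 = (a_int8.astype(np.int16) + 128).astype(np.uint8)  # add 128 to every element
new_zero_point = old_zero_point + 128                        # and to the zero_point

# the dequantized real values are unchanged because (q - zero_point) is unchanged
print(scale * (a_int8.astype(np.int32) - old_zero_point))
print(scale * (a_uint8.astype(np.int32) - new_zero_point))   # identical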
st183857 | Has there been any update to this? There’s a “reduce_range” parameter which seems to fix the saturation issue discussed here. However, the reduce range functionality differs from standard int8 quantized convolutions, which makes predicting model performance and interfacing with other quantization libraries difficult. Thanks. |
st183858 | hi @amrmartini , just curious, what specifically would be helpful?
We do have plans to document the current behavior of reduce_range better and surface a warning if a user is using fbgemm with reduce_range set to False, to warn about potential overflow. We do not have plans at the moment to remove this restriction. |
st183859 | Hi all, hope you are fine.
I am stuck with the post-training quantization process.
I have trained the model using the fastai and timm libraries.
Currently, I am doing the following:
effb3_model=learner_effb3.model.eval()
backend = "qnnpack"
effb3_model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(effb3_model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_size_of_model(model_static_quantized)
But I am facing the following error while calling the model for inference:
RuntimeError: Could not run 'aten::thnn_conv2d_forward' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::thnn_conv2d_forward' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
This is my model:
Sequential(
(0): Sequential(
(0): Conv2dSame(3, 40, kernel_size=(3, 3), stride=(2, 2), bias=False)
(1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
(3): Sequential(
(0): Sequential(
(0): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(40, 40, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=40)
(bn1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(40, 10, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(10, 40, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(40, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
(1): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=24)
(bn1): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(24, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
)
(1): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(144, 144, kernel_size=(3, 3), stride=(2, 2), groups=144, bias=False)
(bn2): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(144, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(192, 192, kernel_size=(5, 5), stride=(2, 2), groups=192, bias=False)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(288, 288, kernel_size=(3, 3), stride=(2, 2), groups=288, bias=False)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(816, 816, kernel_size=(5, 5), stride=(2, 2), groups=816, bias=False)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(5): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(384, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(2304, 2304, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=2304)
(bn2): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(2304, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(96, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(2304, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(4): QuantizedConv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(5): QuantizedBatchNorm2d(1536, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(6): SiLU(inplace=True)
)
(1): Sequential(
(0): AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=1)
(mp): AdaptiveMaxPool2d(output_size=1)
)
(1): Flatten(full=False)
(2): BatchNorm1d(3072, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.25, inplace=False)
(4): QuantizedLinear(in_features=3072, out_features=512, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
(5): ReLU(inplace=True)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Dropout(p=0.5, inplace=False)
(8): QuantizedLinear(in_features=512, out_features=73, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
)
)
Thanks for any help. |
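In case it helps others reading this, a rough sketch (not a drop-in fix for the fastai/timm model above) of the usual eager-mode recipe: wrap the float model with Quant/DeQuant stubs and run a calibration pass between prepare() and convert(). Modules that convert cannot swap, such as the timm Conv2dSame layers shown above, stay in float, so quantized and float tensors end up mixed in the forward pass, which is the kind of backend mismatch behind the error above; those layers need to be replaced (e.g. by plain nn.Conv2d with explicit padding) or surrounded by DeQuantStub/QuantStub pairs. The calib_loader below is a placeholder:
import torch

model = torch.quantization.QuantWrapper(effb3_model).eval()   # effb3_model from above
torch.backends.quantized.engine = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")

prepared = torch.quantization.prepare(model, inplace=False)
with torch.no_grad():
    for xb, _ in calib_loader:                                 # placeholder calibration loader
        prepared(xb)
model_static_quantized = torch.quantization.convert(prepared, inplace=False)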
st183860 | Hi,
I am trying to quantize a sequential model made of convolution blocks and test its performance. When I test the quantized model on my local MacBook it performs extremely well and gets more than a 4x speed-up. However, when I run the same code on Google Colab and on another Linux system, the quantized model is only slightly faster than the non-quantized model (about 25%).
average run time for quantized model(Local PC): ~0.39s
average run time for non-quantized model(Local PC): ~3.53s
average run time for quantized model(Google Colab): ~4.25s
average run time for non-quantized model(Google Colab): ~4.97s
I tried to switch the qconfig from "fbgemm" to "qnnpack" but it does not yield better performance.
I also tried fixing all kernel sizes to 5 and all dilations to 1, and that does not help either.
Here’s the script to test model run time. Is there any possible reason for this? Many thanks.
import torch
import torch.nn as nn
import time
import numpy as np
#kernel sizes and dilations used in conv blocks
class MyConfig:
kernel_sizes = [(1, 7), (7, 1), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5), (5, 5)]
dilations = [(1, 1), (1, 1), (1, 1), (2, 1), (4, 1), (8, 1), (16, 1), (32, 1), (1, 1), (2, 2), (4, 4), (8, 8), (16, 16), (32, 32)]
config = MyConfig()
class ConvBlock(nn.Sequential):
def __init__(self, in_planes, out_planes, kernel_size=3, dilation =1 ,groups = 1,
stride=1):
pad = ((kernel_size[0] - 1) // 2 * dilation[0], (kernel_size[1] - 1) // 2 * dilation[1])
super(ConvBlock, self).__init__(nn.Conv2d(in_planes, out_planes, kernel_size, stride, pad, dilation, bias=False, groups = groups),
nn.BatchNorm2d(out_planes, momentum=0.1),
nn.ReLU(inplace=False)
)
#A sequential of convolution blocks
class Model(nn.Module):
def __init__(self,
kernel_sizes,
dilations,nf = 96):
super(Model, self).__init__()
self.Encoder = self.encoder(kernel_sizes, dilations, nf)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def encoder(self, kernel_sizes, dilations, nf=96, outf=8):
block = []
for i in range(len(kernel_sizes)):
if i == 0:
block.append(ConvBlock(2, nf, kernel_sizes[i], dilations[i]))
else:
block.append(ConvBlock(nf, nf, kernel_sizes[i], dilations[i]))
block.append(ConvBlock(nf, outf, (1, 1), (1, 1)))
return nn.Sequential(*block)
def forward(self, x):
x = self.quant(x)
x = self.Encoder(x)
x = self.dequant(x)
return x
def fuse_module(self):
for m in self.modules():
if type(m) == ConvBlock:
torch.quantization.fuse_modules(m, [['0', '1', '2']], inplace=True)
#test quantized model
net = Model(config.kernel_sizes,config.dilations)
net.eval()
net.fuse_module()
net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(net, inplace=True)
#calibrate parameter and convert
test_input = torch.randn((1,2,256,203))
net(test_input)
torch.quantization.convert(net,inplace = True)
run_times = []
net.eval()
with torch.no_grad():
for i in range(10):
test_input = torch.randn((1,2,256,203))
t = time.time()
net(test_input)
run_times.append(time.time()-t)
print("int8 model time = {} s".format(np.mean(run_times)))
#Do not quantize the model and do inference directly
net = Model(config.kernel_sizes,config.dilations)
run_times = []
net.eval()
with torch.no_grad():
for i in range(10):
test_input = torch.randn((1,2,256,203))
t = time.time()
net(test_input)
run_times.append(time.time()-t)
print("fp32 model time = {} s".format(np.mean(run_times))) |
st183861 | Hello! I have a pre-trained pruned network with 75% sparsity.
I would like to apply quantization to this network such that its sparsity is maintained during inference. I’ve opted to use symmetric quantization for this, and it’s my understanding that the zero point should be 0. However, I get zero_point=128. I place below a snippet of my code:
model.eval()
model.to('cpu')
quantization_config = torch.quantization.QConfig(activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.quint8, qscheme=torch.per_tensor_symmetric), weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
model.qconfig = quantization_config
quant_model = torch.quantization.prepare(model)
calibrate(quant_model, train_loader, batches_per_epoch)
When printing quant_model this is the output:
VGGQuant(
(features): Sequential(
(0): QuantizedConv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.07883524149656296, zero_point=128, padding=(1, 1))
(1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.05492561683058739, zero_point=128, padding=(1, 1))
(4): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(7): QuantizedConv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.05388055741786957, zero_point=128, padding=(1, 1))
(8): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(9): ReLU(inplace=True)
(10): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.03040805645287037, zero_point=128, padding=(1, 1))
(11): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(12): ReLU(inplace=True)
(13): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(14): QuantizedConv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.023659387603402138, zero_point=128, padding=(1, 1))
(15): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(16): ReLU(inplace=True)
(17): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.01725710742175579, zero_point=128, padding=(1, 1))
(18): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(19): ReLU(inplace=True)
(20): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.013385827653110027, zero_point=128, padding=(1, 1))
(21): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(22): ReLU(inplace=True)
(23): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.011628611013293266, zero_point=128, padding=(1, 1))
(24): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(25): ReLU(inplace=True)
(26): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(27): QuantizedConv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.00966070219874382, zero_point=128, padding=(1, 1))
(28): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(29): ReLU(inplace=True)
(30): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.006910551339387894, zero_point=128, padding=(1, 1))
(31): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(32): ReLU(inplace=True)
(33): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.002619387349113822, zero_point=128, padding=(1, 1))
(34): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(35): ReLU(inplace=True)
(36): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.002502179006114602, zero_point=128, padding=(1, 1))
(37): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(38): ReLU(inplace=True)
(39): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(40): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.00118942407425493, zero_point=128, padding=(1, 1))
(41): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(42): ReLU(inplace=True)
(43): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.0017956980736926198, zero_point=128, padding=(1, 1))
(44): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(45): ReLU(inplace=True)
(46): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.0021184098441153765, zero_point=128, padding=(1, 1))
(47): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(48): ReLU(inplace=True)
(49): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.0019303301814943552, zero_point=128, padding=(1, 1))
(50): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(51): ReLU(inplace=True)
(52): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(53): AvgPool2d(kernel_size=1, stride=1, padding=0)
)
(classifier): QuantizedLinear(in_features=512, out_features=10, scale=0.0953117236495018, zero_point=128, qscheme=torch.per_tensor_affine)
(quant): Quantize(scale=tensor([0.0216]), zero_point=tensor([128]), dtype=torch.quint8)
(dequant): DeQuantize()
Should I use a different quantization scheme? Is there something I’m missing? I’d like for zero point to be 0 for all layers. |
st183862 | Well, if you want 0 to map to 0, I think you want signed integers rather than unsigned ones (which are the default), so use qint8 instead of quint8. Note that this may impact the operator coverage.
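For illustration, a minimal sketch of a qconfig along those lines (eager-mode static quantization; signed symmetric activations may limit which quantized ops are available):
symmetric_int8_qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_tensor_symmetric),
    weight=torch.quantization.MinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
model.qconfig = symmetric_int8_qconfig  # `model` being the pruned fp32 model from the snippet above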
You probably know this, but just in case: for the sparsity to lead to less computation, you need special “structured sparse kernels”. I think they are being worked on, but it’s not what you get today.
Best regards
Thomas |
st183863 | This simple script will illustrate what I’m trying to do
import torch
from torch import nn
from torch.quantization.quantize_fx import prepare_fx
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
B = x.shape[0]
B = B * 1
return x
model = Model()
model.eval()
qconfig = torch.quantization.get_default_qconfig("fbgemm")
model = prepare_fx(model, {"": qconfig})
x = torch.arange(4)
print(model(x))
Right now it raises AttributeError: 'int' object has no attribute 'numel' because of the B * 1 line.
By the way, the real example for what I’m trying to quantize is here 1. |
st183864 | Solved by supriyar in post #2
Hi @Alexander_Soare, can you try this code out with the latest pytorch nightly build? I tried to reproduce the issue but wasn’t able to
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
B = x.shape[0]
… |
st183865 | Hi @Alexander_Soare, can you try this code out with the latest pytorch nightly build? I tried to reproduce the issue but wasn’t able to
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
B = x.shape[0]
B = B * 1
return x
model = Model()
model.eval()
qconfig = torch.quantization.get_default_qconfig("fbgemm")
model = prepare_fx(model, {"": qconfig})
x = torch.arange(4)
print(model(x))
print(model)
Outputs
tensor([0, 1, 2, 3])
GraphModule()
def forward(self, x):
getattr_1 = x.shape
getitem = getattr_1[0]; getattr_1 = None
mul = getitem * 1; getitem = None
return x |
st183866 | Hi,
I have this
{'model_state_dict': OrderedDict([(u'conv_3x3_32a.weight', tensor([[
[[ 0.0367, 0.0294, -0.1065],
[ 0.0918, 0.1065, -0.0331],
[-0.0147, 0.0184, -0.1028]]],
.......
[[[ 0.1249, 0.0661, -0.0257],
[ 0.0735, -0.0257, -0.1028],
[ 0.0441, -0.0698, -0.0771]]]], size=(40, 1, 3, 3),
dtype=torch.qint8, quantization_scheme=torch.per_tensor_affine,
scale=0.00367316859774, zero_point=0)), (u'conv_3x3_32a.scale', tensor(0.0031)), (u'conv_3x3_32a.zero_point', tensor(160))
I understand that the weights tensor has its own scale which is 0.00367316859774, but I have 2 questions:
What is the purpose of the layer scale and zero point? Where do I use them?
How can I find the re-quantization scale that is used after the weight-input multiplication and accumulation? I don’t know how to access it. |
st183867 | scale and zero point are the quantization parameters for the layer. They are used to quantize the weight from fp32 to int8 domain.
re-quantization scale is defined based on input, weight and output scale. It is defined as
requant_scale = input_scale_fp32 * weight_scale_fp32 / output_scale_fp32
The conversion from accumulated int32 values to fp32 happens in the quantization backends, either FBGEMM or QNNPACK and the requantization scale can be found there.
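As a rough numeric sketch (the input scale below is made up; the other two values are taken from the printout in this thread):
input_scale_fp32 = 0.05               # hypothetical input/activation scale
weight_scale_fp32 = 0.00367316859774  # scale stored inside the quantized weight tensor
output_scale_fp32 = 0.0031            # conv_3x3_32a.scale, i.e. the layer's output scale
requant_scale = input_scale_fp32 * weight_scale_fp32 / output_scale_fp32
# the backend multiplies the int32 accumulator by requant_scale (and adds the output
# zero_point) to produce the int8 output activation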
cc @dskhudia |
st183868 | Hi,
thank you so much for your answer, and please excuse me, but there are 2 scales that seem connected to the weights: the one inside the tensor parameters and the one of the layer. In my example:
dassima:
(u'conv_3x3_32a.weight', tensor([[
[[ 0.0367, 0.0294, -0.1065],
[ 0.0918, 0.1065, -0.0331],
[-0.0147, 0.0184, -0.1028]]],
.......
[[[ 0.1249, 0.0661, -0.0257],
[ 0.0735, -0.0257, -0.1028],
[ 0.0441, -0.0698, -0.0771]]]], size=(40, 1, 3, 3),
dtype=torch.qint8, quantization_scheme=torch.per_tensor_affine,
scale=0.00367316859774, zero_point=0)
and
dassima:
(u'conv_3x3_32a.scale', tensor(0.0031))
so the numbers 0.00367316859774 and 0.0031 generate different results. If 0.0031 does the actual conversion from FP32 to INT8, what is 0.00367316859774 used for? |
st183869 | (u’conv_3x3_32a.scale’, tensor(0.0031))
This scale value here refers to the output scale of the layer. Please see the docs for more details
https://pytorch.org/docs/stable/quantization.html#id12 44 |
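For example, after convert the two values can be inspected directly (a sketch, assuming the layer is reachable as model.conv_3x3_32a and uses per-tensor quantization):
w = model.conv_3x3_32a.weight()       # quantized weight tensor
print(w.q_scale(), w.q_zero_point())  # weight scale / zero point (0.00367..., 0)
print(model.conv_3x3_32a.scale, model.conv_3x3_32a.zero_point)  # output scale / zero point (0.0031, 160)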
st183870 | @dassima
I’m probably late to the thread, but what quantization scheme did you use to get zero_point=0? |
st183871 | I am trying to quantize a model that uses modules from torch librosa 1.
I can quantize it and pass batches through the quantized model, but when trying to save it, the module LogmelFilterBank throws the following error:
cannot create weak reference to 'numpy.ufunc' object
The source code for this module is the following:
class LogmelFilterBank(nn.Module):
def __init__(self, sr=22050, n_fft=2048, n_mels=64, fmin=0.0, fmax=None,
is_log=True, ref=1.0, amin=1e-10, top_db=80.0, freeze_parameters=True):
r"""Calculate logmel spectrogram using pytorch. The mel filter bank is
the pytorch implementation of as librosa.filters.mel
"""
super(LogmelFilterBank, self).__init__()
self.is_log = is_log
self.ref = ref
self.amin = amin
self.top_db = top_db
if fmax == None:
fmax = sr//2
self.melW = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels,
fmin=fmin, fmax=fmax).T
# (n_fft // 2 + 1, mel_bins)
self.melW = nn.Parameter(torch.Tensor(self.melW))
if freeze_parameters:
for param in self.parameters():
param.requires_grad = False
def forward(self, input):
r"""Calculate (log) mel spectrogram from spectrogram.
Args:
input: (*, n_fft), spectrogram
Returns:
output: (*, mel_bins), (log) mel spectrogram
"""
# Mel spectrogram
mel_spectrogram = torch.matmul(input, self.melW)
# (*, mel_bins)
# Logmel spectrogram
if self.is_log:
output = self.power_to_db(mel_spectrogram)
else:
output = mel_spectrogram
return output
def power_to_db(self, input):
r"""Power to db, this function is the pytorch implementation of
librosa.power_to_lb
"""
ref_value = self.ref
log_spec = 10.0 * torch.log10(torch.clamp(input, min=self.amin))
log_spec -= 10.0 * np.log10(np.maximum(self.amin, ref_value))
if self.top_db is not None:
if self.top_db < 0:
raise librosa.util.exceptions.ParameterError('top_db must be non-negative')
log_spec = torch.clamp(log_spec, min=log_spec.max().item() - self.top_db)
return log_spec
I cannot understand where the weak reference error is being raised or how I can prevent it. Thanks for any help you can provide. |
st183872 | Solved by AfonsoSalgadoSousa in post #3
Thanks for the reply. I fixed the problem with your suggestion. The problem was in trying to save the FP32 version of the model with torch.jit.save; using torch.save fixed the problem. |
st183873 | Thanks for the reply. I fixed the problem with your suggestion. The problem was in trying to save the FP32 version of the model with torch.jit.save; using torch.save fixed the problem. |
st183874 | Here’s the line that triggers this error during torch.quantization.quantize_fx.prepare_fx
patch_embed[:, 1:] = patch_embed[:, 1:] + self.proj(self.norm1_proj(pixel_embed).reshape(B, N - 1, -1))
Is there some way to get around this without having to avoid item assignment? Can I somehow inform that particular proxy instance that this should be okay? |
st183875 | Solved by Alexander_Soare in post #3
Thanks for your response!
Well, I kind of feel like I got away with a crime, but this didn’t throw the error:
patch_embed_rest = patch_embed[:, 1:]
patch_embed_rest += self.proj(self.norm1_proj(pixel_embed).reshape(B, N - 1, -1)) |
st183876 | You would likely want to lose the indexing on the left hand side (which makes the assignment necessarily in-place). How to do that depends on more context than shown here - as shown here, you might just keep the first column of patch_embed separately or so.
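A rough sketch of that idea, reusing the names from the snippet above:
cls_col = patch_embed[:, :1]                     # keep the first column aside
rest = patch_embed[:, 1:] + self.proj(self.norm1_proj(pixel_embed).reshape(B, N - 1, -1))
patch_embed = torch.cat([cls_col, rest], dim=1)  # rebuild the tensor out-of-place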
Best regards
Thomas |
st183877 | Thanks for your response!
Well, I kind of feel like I got away with a crime, but this didn’t throw the error:
patch_embed_rest = patch_embed[:, 1:]
patch_embed_rest += self.proj(self.norm1_proj(pixel_embed).reshape(B, N - 1, -1)) |
st183878 | Cool! Likely this is similar to “when can I use inplace without autograd complaining”… |
st183879 | Hi. I have an audio processing model that uses functions from TorchLibrosa (GitHub - qiuqiangkong/torchlibrosa) that I want to quantize and save. I get the following error:
RuntimeError:
_pad(Tensor input, int[] pad, str mode=“constant”, float value=0.) → (Tensor):
Expected a value of type ‘float’ for argument ‘value’ but instead found type ‘int’.
:
input_1 = input
getitem = input_1[(slice(None, None, None), None, slice(None, None, None))]; input_1 = None
_pad_1 = torch.nn.functional._pad(getitem, (256, 256), mode = ‘reflect’, value = 0); getitem = None
~~~~~~~~~~~~~~~~~~~~~~~~ <— HERE
base_spectrogram_extractor_stft_conv_real_input_scale_0 = self.base_spectrogram_extractor_stft_conv_real_input_scale_0
base_spectrogram_extractor_stft_conv_real_input_zero_point_0 = self.base_spectrogram_extractor_stft_conv_real_input_zero_point_0
I tried ignoring these modules using the following config for quantization in FX Graph mode:
qconfig = get_default_qconfig("fbgemm")
qconfig_dict = {"": qconfig}
prep_config_dict = {
"non_traceable_module_name": ["spectrogram_extractor", "logmel_extractor"]
}
prepared_model = prepare_fx(
model_to_quantize, qconfig_dict, prepare_custom_config_dict=prep_config_dict)
What can I do to fix this issue? I really appreciate any help you can provide. |
st183880 | Solved by AfonsoSalgadoSousa in post #7
Managed to fix the problem by not using the following line:
x = F.pad(x, pad=(self.n_fft // 2, self.n_fft // 2), mode=self.pad_mode) |
st183881 | Can you provide some additional context? Is it only the serialization that's the issue? Can you run the quantized model without saving it, etc.?
ideally a way to reproduce the error would be most helpful. |
st183882 | Thanks for the reply. It is only the serialization that is the issue. I can perform a forward pass on the quantized model.
Below is a minimal reproducible example of the issue:
import copy
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import convert_fx, prepare_fx
from torchlibrosa.stft import LogmelFilterBank, Spectrogram
class AudioModel(nn.Module):
def __init__(self, classes_num):
super(AudioModel, self).__init__()
self.spectrogram_extractor = Spectrogram(n_fft=512, hop_length=160,
win_length=512, window='hann', center=True, pad_mode='reflect',
freeze_parameters=True)
self.logmel_extractor = LogmelFilterBank(sr=16000, n_fft=512,
n_mels=64, fmin=50, fmax=8000, ref=1.0, amin=1e-10, top_db=None,
freeze_parameters=True)
self.bn0 = nn.BatchNorm2d(64)
self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(
3, 3), stride=(1, 1), padding=(1, 1), bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.fc1 = nn.Linear(64, 64, bias=True)
self.fc_audioset = nn.Linear(64, classes_num, bias=True)
def forward(self, input):
# (batch_size, 1, time_steps, freq_bins)
x = self.spectrogram_extractor(input)
x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
x = x.transpose(1, 3)
x = self.bn0(x)
x = x.transpose(1, 3)
x = F.relu_(self.bn1(self.conv1(x)))
x = F.avg_pool2d(x, kernel_size=(2, 2))
x = F.dropout(x, p=0.2, training=self.training)
x = torch.mean(x, dim=3)
(x1, _) = torch.max(x, dim=2)
x2 = torch.mean(x, dim=2)
x = x1 + x2
x = F.dropout(x, p=0.5, training=self.training)
x = F.relu_(self.fc1(x))
embedding = F.dropout(x, p=0.5, training=self.training)
clipwise_output = torch.sigmoid(self.fc_audioset(x))
output_dict = {'clipwise_output': clipwise_output,
'embedding': embedding}
return output_dict
float_model = AudioModel(classes_num=21)
model_to_quantize = copy.deepcopy(float_model)
model_to_quantize.eval()
qconfig = get_default_qconfig("fbgemm")
qconfig_dict = {"": qconfig}
prep_config_dict = {
"non_traceable_module_name": ["spectrogram_extractor", "logmel_extractor"]
}
prepared_model = prepare_fx(
model_to_quantize, qconfig_dict, prepare_custom_config_dict=prep_config_dict)
def calibrate(model, data_loader):
model.eval()
with torch.no_grad():
for X, _ in data_loader:
model(X)
dummy_input = [[torch.rand(1, 30000), 0]]
calibrate(prepared_model, dummy_input) # run calibration on sample data
quantized_model = convert_fx(prepared_model)
params = sum([np.prod(p.size()) for p in float_model.parameters()])
# print("Number of Parameters: {:.1f}M".format(params/1e6))
print(f"Number of Parameters: {params}M")
params = sum([np.prod(p.size()) for p in quantized_model.parameters()])
# print("Number of Parameters: {:.1f}M".format(params/1e6))
print(f"Number of Parameters: {params}M")
quantized_model(dummy_input[0][0])
torch.jit.save(torch.jit.script(quantized_model),
'test.pth')
loaded_quantized = torch.jit.load('test.pth')
I also tried Eager mode quantization, dequantizing for the arithmetic operations and the spectrogram modules, but the problem still occurs when trying to save. |
st183883 | Hey, I haven't had a chance to give this an in-depth look; if I knew which line was causing the issue it'd be easier, but I suspect there's an issue with the output of something being an integer while an observer is expecting a float, or something along those lines. If you try casting the output of the torchlibrosa functions to float, does that change anything? |
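For example, a sketch of that suggestion applied to AudioModel.forward from the repro above (not verified to resolve this particular error):
x = self.spectrogram_extractor(input)
x = self.logmel_extractor(x).float()  # force a plain float32 tensor before the quantized part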
st183884 | Managed to fix the problem by not using the following line:
x = F.pad(x, pad=(self.n_fft // 2, self.n_fft // 2), mode=self.pad_mode) |
st183885 | When I'm doing QAT on my network, I keep getting the in-place error (and I don't set inplace=True anywhere in my code). I set detect_anomaly to True and tried to find the bug. It reports that the conv2d layer leads to this error. However, it's hard to believe that conv2d causes it. Can anyone help?
the error message is below:
/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py:130: UserWarning: Error detected in FakeQuantizePerChannelAffineBackward. Traceback of forward call that caused the error:
File “Train_CS_OPINE_Net_plus_quan.py”, line 299, in
[x_output, loss_layers_sym, Phi, x_init] = model_fp32(batch_x)
File “/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “Train_CS_OPINE_Net_plus_quan.py”, line 218, in forward
[x_final, layer_sym] = self.fcs[0](x, PhiWeight, PhiTWeight, PhiTb) #share weight
File “/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “Train_CS_OPINE_Net_plus_quan.py”, line 160, in forward
x_conv2_backward_relu = self.convrelu2_backward(x_conv1_backward_relu)
File “/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python3.8/dist-packages/torch/nn/intrinsic/qat/modules/conv_fused.py”, line 312, in forward
self._conv_forward(input, self.weight_fake_quant(self.weight)))
File “/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py”, line 727, in _call_impl
result = self.forward(*input, **kwargs)
File “/usr/local/lib/python3.8/dist-packages/torch/quantization/fake_quantize.py”, line 99, in forward
X = torch.fake_quantize_per_channel_affine(X, self.scale, self.zero_point,
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
Variable._execution_engine.run_backward(
Traceback (most recent call last):
File “Train_CS_OPINE_Net_plus_quan.py”, line 319, in
loss_all.backward(retain_graph = True)
File “/usr/local/lib/python3.8/dist-packages/torch/tensor.py”, line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python3.8/dist-packages/torch/autograd/init.py”, line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
this is my code:
class BasicBlock(torch.nn.Module):
def __init__(self):
super(BasicBlock, self).__init__()
self.lambda_step = nn.Parameter(torch.Tensor([0.5]), requires_grad = True)
self.soft_thr = nn.Parameter(torch.Tensor([0.01]), requires_grad = True)
qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
# torch.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)
# torch.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)
self.conv_D = nn.qat.Conv2d(1, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.convrelu1_forward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv2_forward = nn.qat.Conv2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.convrelu1_backward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.convrelu2_backward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv2_backward = nn.qat.Conv2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv2_backward.weight_fake_quant = self.convrelu2_backward.weight_fake_quant # convrelu2_backward is fused with relu, but I only want the conv (without relu), and take out the weight
self.convrelu1_G = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv2_G = nn.qat.Conv2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv3_G = nn.qat.Conv2d(32, 1,(3, 3), padding=1, bias=False, qconfig = qconfig)
if ReLU == "trelu":
self.alpha = nn.Parameter(torch.Tensor([alpha_arg]), requires_grad = False)
self.skip_add = nn.quantized.FloatFunctional()
# QuantStub converts tensors from floating point to quantized
self.quant = torch.quantization.QuantStub()
# DeQuantStub converts tensors from quantized to floating point
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x, PhiWeight, PhiTWeight, PhiTb):
x_quan = self.quant(x)
PhiTb_quan = self.quant(PhiTb)
PhiWeight_quan = self.quant(PhiWeight)
PhiTWeight_quan = self.quant(PhiTWeight)
temp = F.conv2d(x_quan, PhiWeight_quan, padding=0,stride=33, bias=None)
temp1 = F.conv2d(temp, PhiTWeight_quan, padding=0, bias=None)
x1 = x_quan - self.lambda_step * torch.nn.PixelShuffle(33)(temp1)
x_input = x1 + self.lambda_step * PhiTb_quan
x_conv_D = self.conv_D(x_input)
x_conv1_forward_relu = self.convrelu1_forward(x_conv_D)
x_conv2_forward = self.conv2_forward(x_conv1_forward_relu)
x_soft = torch.mul(torch.sign(x_conv2_forward), F.relu(torch.abs(x_conv2_forward) - self.soft_thr))
x_conv1_backward_relu = self.convrelu1_backward(x_soft)
# x_conv1_backward_relu = x_soft + x_soft
x_conv2_backward_relu = self.convrelu2_backward(x_conv1_backward_relu)
# x_conv2_backward_relu = x_conv1_backward_relu + x_conv1_backward_relu
x_conv1_G_relu = self.convrelu1_G(x_conv2_backward_relu)
x_conv2_G = self.conv2_G(x_conv1_G_relu)
x_conv3_G = self.conv3_G(x_conv2_G)
x_pred_quan = x_input+x_conv3_G
# x_pred_quan = self.skip_add.add(x_input, x_conv3_G)
#######################For caculation of SSIM ######################
x2 = self.convrelu1_backward(x_conv2_forward)
# x2 = x_conv2_forward
x_D_est = self.conv2_backward(x2)
# x_D_est = x2
symloss_quan = x_D_est - x_conv_D
##################################################################
x_pred = self.dequant(x_pred_quan)
symloss = self.dequant(symloss_quan)
return [x_pred, symloss]
class OPINENetplus(torch.nn.Module):
def __init__(self, LayerNo, n_input):
super(OPINENetplus, self).__init__()
self.Phi = nn.Parameter(init.xavier_normal_(torch.Tensor(n_input, 1089)))
self.Phi_scale = nn.Parameter(torch.Tensor([0.01]))
onelayer = []
self.LayerNo = LayerNo
# for i in range(LayerNo):
for i in range(1): #share weight
onelayer.append(BasicBlock())
self.fcs = nn.ModuleList(onelayer)
def forward(self, x):
# Sampling-subnet
Phi_ = MyBinarize(self.Phi)
Phi = self.Phi_scale * Phi_
PhiWeight = Phi.contiguous().view(n_input, 1, 33, 33) #Reshape Phi in order to use non-overlapping conv.
Phix = F.conv2d(x, PhiWeight, padding=0, stride=33, bias=None) # Get measurements
# Initialization-subnet
PhiTWeight = Phi.t().contiguous().view(n_output, n_input, 1, 1)
PhiTb = F.conv2d(Phix, PhiTWeight, padding=0, bias=None)
PhiTb = torch.nn.PixelShuffle(33)(PhiTb)
x = PhiTb # Conduct initialization
x_init = x
# Recovery-subnet
layers_sym = [] # for computing symmetric loss
for i in range(self.LayerNo):
[x, layer_sym] = self.fcs[0](x, PhiWeight, PhiTWeight, PhiTb) #share weight
layers_sym.append(layer_sym)
x_final = x
return [x_final, layers_sym, Phi, x_init] |
st183886 | Solved by TYC in post #2
I solved it by myself!
Because I call the self.convrelu1_backward for two times, the weights of self.convrelu1_backward is replaced when performing back-propagation. Later, I use different naming(or objects) for the two calling, and the error is solved!
self.convrelu1_backward = nn.intrinsic.qat.C… |
st183887 | I solved it by myself!
Because I call self.convrelu1_backward twice, the weights of self.convrelu1_backward are replaced when performing back-propagation. Later, I used different names (objects) for the two calls, and the error is solved!
self.convrelu1_backward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.convrelu2_backward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
self.conv1_backward = nn.intrinsic.qat.ConvReLU2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
value1 = self.convrelu1_backward.weight_fake_quant # convrelu1_backward is fused with relu, but I only want the conv (without relu), and take out the weight
self.conv1_backward.weight_fake_quant = copy.deepcopy(value1)
self.conv2_backward =nn.qat.Conv2d(32, 32,(3, 3), padding=1, bias=False, qconfig = qconfig)
value2 = self.convrelu2_backward.weight_fake_quant # convrelu2_backward is fused with relu, but I only want the conv (without relu), and take out the weight
self.conv2_backward.weight_fake_quant = copy.deepcopy(value2) |
st183888 | I have the same issue, but my head is a shared head and it must be used many times. How should I solve this problem? Should I just not quantize this module? |
st183889 | Hi Wang-zipeng,
You can make different copies by using copy.deepcopy so that the original gradient won't be overwritten during backpropagation. Or, what do you mean by "head is shared head"? Can you paste your code here? |
st183890 | Thank you for your reply, sir. It's the rpn_head shared by the different fpn outputs in faster-rcnn. I think you know that network; I used the implementation in mmdetection. The code is as follows:
def forward_single(self, x):
"""Forward feature map of a single scale level."""
x = self.rpn_conv(x)
x = F.relu(x, inplace=True)
rpn_cls_score = self.rpn_cls(x)
rpn_bbox_pred = self.rpn_reg(x)
return rpn_cls_score, rpn_bbox_pred
and it's called in a for-loop over the different fpn layers. If I copy the layers, will I get the correct gradients for each copy? |
st183891 | Hi Wang-zipeng,
Unfortunately, I don't know faster-rcnn. However, I think each copy's gradient will be updated (they depend on the same loss function), so each copy's gradients should be the same after updating.
By the way, I suggest you use nn.ReLU instead of F.relu, because nn.* modules can be fused with the preceding layer during the QAT process.
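For example, a sketch of that change in the forward_single above (the fusion call is only illustrative; adjust the module names to your head):
# in __init__:
self.relu = nn.ReLU(inplace=False)
# in forward_single:
x = self.relu(self.rpn_conv(x))
# conv + relu can then be fused before preparing for QAT, e.g.
# torch.quantization.fuse_modules(rpn_head, [['rpn_conv', 'relu']], inplace=True)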
I hope this is helpful for you. If you have any questions, feel free to discuss them with me! |
st183892 | I am trying to leverage PyTorch's quantized ops functionality, but I notice that its accuracy tends to drop in some cases relative to other quantization frameworks. Inspecting further, I find that accuracy drops whether a MinMaxObserver has reduce_range=True or reduce_range=False. If reduce_range is False, then nn.quantized.Conv2d no longer behaves as expected: I've posted an example below where running an ImageNet sample against the first conv layer of ResNet50 differs in output compared to quantizing/dequantizing the weights/inputs and running the fp32 conv op. If reduce_range is True, this issue is no longer present, but overall accuracy still drops and the weight quantization error goes up (see script).
I know that similar issues have been brought up before, but I want to add that this reduce_range parameter seems to have been introduced to resolve these issues. Yet reducing the quantization range by one bit will reduce accuracy, so reduce_range is not an all-round fix.
Any clue what's causing the issues? Any fixes beyond reduce_range? Thanks.
import torch
from torch import nn
from torchvision import transforms
import torchvision.models as tvm
from torchvision.datasets.folder import default_loader
from torch.nn.quantized.modules.utils import _quantize_weight
input_scale = 0.01865844801068306
input_zero_point = 114
weight_max = torch.Tensor([0.6586668252944946])
weight_min = torch.Tensor([-0.494000118970871])
output_scale = 0.04476073011755943
output_zero_point = 122
x_filename = 'ILSVRC2012_val_00000293.JPEG'
imsize = 224
val_transforms = transforms.Compose([transforms.ToTensor()])
model = tvm.resnet50(pretrained=True)
cmod = model.conv1
x_raw = default_loader(x_filename)
x = val_transforms(x_raw).unsqueeze(0)
X = torch.quantize_per_tensor(x, input_scale, input_zero_point, torch.quint8)
weight_observer = torch.quantization.observer.MinMaxObserver.with_args(
qscheme=torch.per_tensor_affine,
dtype=torch.qint8,
reduce_range=False
)()
weight_observer.max_val = weight_max
weight_observer.min_val = weight_min
qmod = torch.nn.quantized.Conv2d(
in_channels=cmod.in_channels, out_channels=cmod.out_channels, kernel_size=cmod.kernel_size,
stride=cmod.stride, padding=cmod.padding, padding_mode=cmod.padding_mode, dilation=cmod.dilation, groups=cmod.groups,
bias=False
)
qweight = _quantize_weight(cmod.weight, weight_observer)
qmod.set_weight_bias(qweight, None)
qmod.scale = output_scale
qmod.zero_point = output_zero_point
y_native = qmod(X).dequantize()
y_simulated = torch.quantize_per_tensor(
torch.nn.functional.conv2d(
X.dequantize(),
qmod.weight().dequantize(),
None,
qmod.stride, qmod.padding, qmod.dilation, qmod.groups
),
qmod.scale, qmod.zero_point, torch.quint8
).dequantize()
# Bad
print((y_native[0,33,18-3:18+3,23-3:23+3]-y_simulated[0,33,18-3:18+3,23-3:23+3]).abs())
# Good
print((y_native[0,32,18-3:18+3,23-3:23+3]-y_simulated[0,32,18-3:18+3,23-3:23+3]).abs())
# Quantization error
print('Mean absolute difference', (qmod.weight().dequantize()-cmod.weight).abs().mean())
# Op error
print('Max absolute difference', (y_native-y_simulated).abs().max()) |
st183893 | amrmartini:
I know that similar issues have been brought up before. But I want to add that, it seems like this reduce_range parameter was introduced to resolve these issues. Yet, reducing the range of quantization by one bit will reduce accuracy, so reduce_range is not an all-round fix.
The fbgemm kernels require reduce_range to be True, to prevent potential overflow. One could write a different kernel without this requirement, at the cost of reduced performance.
You are right that using one less bit is detrimental to accuracy. There are a couple of strategies to improve accuracy - using moving average observers, using per-channel weight observers instead of per tensor, etc. |
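For instance, a sketch of a qconfig along those lines (the observer arguments are only an example, not the recommended settings):
import torch
qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MovingAverageMinMaxObserver.with_args(
        dtype=torch.quint8, reduce_range=True),
    weight=torch.quantization.PerChannelMinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric))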
st183894 | The fbgemm kernels require reduce_range to be True, to prevent potential overflow.
Yeah this makes sense after looking at qconv.cpp, which suggests that vpmaddsubsw instructions are used in fbgemm. Actually these saturate rather than overflow, but they saturate to 16-bit intermediates rather than 32-bit. So is reduce_range meant to prevent this?
I also modified the script a little bit and found that in torch fx, the fbgemm option doesn’t quite reduce the range by 1 bit, so overflow/saturation is still observed:
import torch
from torch import nn
from torchvision import transforms
import torchvision.models as tvm
from torchvision.datasets.folder import default_loader
from torch.nn.quantized.modules.utils import _quantize_weight
def _fuse_fx(graph_module, fuse_custom_config_dict):
return graph_module
import torch.quantization.quantize_fx
torch.quantization.quantize_fx._fuse_fx = _fuse_fx
backend_str = 'fbgemm'
x_filename = 'ILSVRC2012_val_00000293.JPEG'
imsize = 224
normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
val_transforms = transforms.Compose([
transforms.Resize(imsize + 24),
transforms.CenterCrop(imsize),
transforms.ToTensor(),
normalize
])
x_raw = default_loader(x_filename)
x_encoding = val_transforms(x_raw).unsqueeze(0)
torch.random.manual_seed(0)
x_test = x_encoding*2.0
model = tvm.resnet50(pretrained=True)
weight = model.conv1.weight.detach().clone()
import torch.quantization.quantize_fx as quantize_fx
qconfig_dict = {"": torch.quantization.get_default_qconfig(backend_str)}
model.eval()
model = quantize_fx.prepare_fx(model, qconfig_dict)
model(x_encoding)
model = quantize_fx.convert_fx(model)
qmod = model.conv1
X_test = torch.quantize_per_tensor(x_test, model.conv1_input_scale_0, model.conv1_input_zero_point_0, model.conv1_input_dtype_0)
y_native = qmod(X_test).dequantize()
y_simulated = torch.quantize_per_tensor(
torch.nn.functional.conv2d(
X_test.dequantize(),
qmod.weight().dequantize(),
None,
qmod.stride, qmod.padding, qmod.dilation, qmod.groups
),
qmod.scale, qmod.zero_point, torch.quint8
).dequantize()
if backend_str == 'fbgemm':
print(qmod.weight().q_per_channel_scales()/((weight.max(dim=1).values.max(dim=1).values.max(dim=1).values - weight.min(dim=1).values.min(dim=1).values.min(dim=1).values)/255))
else:
print(qmod.weight().q_scale()/((weight.max()-weight.min())/255))
# Bad
print((y_native[0,33,18-3:18+3,23-3:23+3]-y_simulated[0,33,18-3:18+3,23-3:23+3]).abs())
# Good
print((y_native[0,32,18-3:18+3,23-3:23+3]-y_simulated[0,32,18-3:18+3,23-3:23+3]).abs())
# Op error
print('Max absolute difference', (y_native-y_simulated).abs().max())
print(len(torch.where((y_native-y_simulated).abs() > 0.0005)[0]))
My output shows that all channels have a scale that is less than 2x the expected scale but greater than 1x the expected scale, which seems to indicate that the range is reduced by less than one bit. There are also a handful of overflow/saturation instances. On the other hand, if I manually set an observer with reduce_range=True, the scale is exactly 2x the expected scale (reduced by exactly one bit) and no overflow/saturation issues. So my question is, is the default behavior not to reduce range in torch fx? |
st183895 | Looks like it depends on the qconfig, i.e. the backend you are working on:
github.com
pytorch/pytorch/blob/master/torch/quantization/qconfig.py#L96-L105
def get_default_qconfig(backend='fbgemm'):
if backend == 'fbgemm':
qconfig = QConfig(activation=HistogramObserver.with_args(reduce_range=True),
weight=default_per_channel_weight_observer)
elif backend == 'qnnpack':
qconfig = QConfig(activation=HistogramObserver.with_args(reduce_range=False),
weight=default_weight_observer)
else:
qconfig = default_qconfig
return qconfig
in this case it's not a MinMax observer, but reduce_range is set to true |
st183896 | Makes sense and thanks for pointing this out; it’s clear that neither backend relies on MinMax observers. Yet, in both cases, one still gets overflow/saturation issues. I wonder if this is due to the use of Histogram observers. To be specific, in either case, one gets a range < 2x the min-max range. So perhaps the issue is only resolved when range is >= 2x the min-max range? |
st183897 | Yeah, I tried to dig into the observer code to see if I could spot anything. I’m not seeing any usage of reduce range in the actual histogram observer but the underlying observer class has a bit here:
github.com
pytorch/pytorch/blob/master/torch/quantization/observer.py#L204
if custom_quant_min is not None and custom_quant_max is not None:
initial_quant_min, initial_quant_max = custom_quant_min, custom_quant_max
qrange_len = initial_quant_max - initial_quant_min + 1
assert 0 < qrange_len <= 256, \
"quantization range should be positive and not exceed the maximum bit range (=256)."
if self.dtype == torch.qint8:
quant_min, quant_max = -qrange_len // 2, qrange_len // 2 - 1
else:
quant_min, quant_max = 0, qrange_len - 1
if self.reduce_range:
quant_min, quant_max = quant_min // 2, quant_max // 2
else:
# Fallback onto default 8-bit qmin and qmax calculation if dynamic range is not used.
if self.dtype == torch.qint8:
if self.reduce_range:
quant_min, quant_max = -64, 63
else:
quant_min, quant_max = -128, 127
elif self.dtype == torch.quint8:
if self.reduce_range:
None of it looks out of the ordinary, although I'm not super familiar with the observer part of the codebase. Having said that, there is:
github.com
pytorch/pytorch/blob/master/torch/quantization/observer.py#L121-L124
if reduce_range:
warnings.warn(
"Please use quant_min and quant_max to specify the range for observers. \
reduce_range will be deprecated in a future release of PyTorch."
which indicates it may be on the way out anyway. |
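Following that deprecation warning, a sketch of spelling the reduced range out explicitly instead of using reduce_range (assuming a PyTorch version whose observers accept quant_min/quant_max):
observer = torch.quantization.HistogramObserver.with_args(
    dtype=torch.quint8, quant_min=0, quant_max=127)  # same effective range as reduce_range=True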
st183898 | I applied quantisation-aware training using PyTorch Lightning on one of my architectures for faster inference. The model trained successfully, but I am facing model-loading issues during inference. I've come across a few forum threads with this same issue but couldn't find a satisfactory method to resolve it. Any help would be highly appreciated, thanks!
Below is the attached code (I have trained the QAT model using PyTorch lightning but the issue arises when I try to load it)
Training Code
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.callbacks import QuantizationAwareTraining
class Module(pl.LightningModule):
def __init__(self, model):
super().__init__()
self.save_hyperparameters()
self.model = Model(3).to(device)
self.lr = lr
def forward(self, x):
return self.model(x)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.lr)
def training_step(self, batch, batch_idx):
lr, hr = batch
sr = self(lr)
loss = F.mse_loss(sr, hr, reduction="mean")
return loss
def validation_step(self, batch, batch_idx):
lr, hr = batch
sr = self(lr)
loss = F.mse_loss(sr, hr, reduction="mean")
return loss
def test_step(self, batch, batch_idx):
lr, hr = batch
sr = self(lr)
loss = F.mse_loss(sr, hr, reduction="mean")
return loss
if __name__ == '__main__':
scale_factor = 3
batch_size = 24
epochs = 1
lr = 1e-5
input_image_path = '....'
target_image_path = '....'
val_input_path = '....'
val_target_path = '...'
prev_ckpt_path = '...'
device = 'cpu' # Device kept as CPU for Quantisation Aware Training, as it doesnt support GPU
# Define model
model = SRModel(scale_factor).to('cpu')
module = Module(model).load_from_checkpoint(prev_ckpt_path)
# Setup dataloaders
train_dataset = CustomDataset(input_image_path, target_image_path)
training_dataloader = DataLoader(
dataset=train_dataset, num_workers=4, batch_size=batch_size, shuffle=True)
val_dataset = CustomDataset(val_input_path, val_target_path)
val_dataloader = DataLoader(
dataset=val_dataset, num_workers=4, batch_size=batch_size, shuffle=False)
checkpoint_callback = ModelCheckpoint(monitor='val_loss')
trainer = pl.Trainer(max_epochs=epochs, gpus=0, auto_lr_find=True,
logger= wandb_logger, progress_bar_refresh_rate = 3,
callbacks=[QuantizationAwareTraining(observer_type='histogram', input_compatible=True), checkpoint_callback])
trainer.fit(
module,
training_dataloader,
val_dataloader
)
trainer.save_checkpoint("Quantised.pth")
trainer.save_checkpoint("Quantised.ckpt")
And this is the inference code
Inference
class Module(pl.LightningModule):
def __init__(self, model):
super().__init__()
self.save_hyperparameters()
self.model = SRModel(3).to(device)
def forward(self, x):
return self.model(x)
prev_ckpt_path = '.....ckpt'
device = 'cpu'
# Define model
model = SRModel(3).to(device)
module = Module(model).load_from_checkpoint(prev_ckpt_path, strict=False)
RunTime Error: an exception occurred : (‘Copying from quantized Tensor to non-quantized Tensor is not allowed, please use dequantize to get a float Tensor from a quantized Tensor’,). |
st183899 | This is a bit tricky at the moment (see also Can't load_from_checkpoint with QuantizationAwareTraining callback during training · Issue #6457 · PyTorchLightning/pytorch-lightning · GitHub 15), and the problem is that the model isn't quantized yet.
If your model permits it, the easiest option can be to save the scripted model (torch.jit.save(module.to_torchscript(), 'my_model.pt')), but there, too, there is the slight caveat that you need to call the quantization / dequantization yourself ( model.to_torchscript "forgets" input quantization that is automatically called for the model · Issue #7552 · PyTorchLightning/pytorch-lightning · GitHub 7 ).
Edit: to call quantization/dequantization: model.dequant(model(model.quant(inp))). This might break when PL gets fixed, but should do the trick until then.
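Putting those pieces together, a rough sketch of the save / load / infer path (file name arbitrary; whether quant/dequant are reachable like this depends on your module):
torch.jit.save(module.to_torchscript(), "my_model.pt")  # save the scripted, quantized module
scripted = torch.jit.load("my_model.pt")
out = scripted.dequant(scripted(scripted.quant(inp)))   # manual (de)quantization, as per the note above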
I hope this helps.
Best regards
Thomas |