st183900
Thanks! @tom I think I need to test using TorchScript and run inference to see if the issue still exists.
st183901
Yeah, that first link makes a lot of sense. This issue is common: the model needs to be prepared for QAT, but the prepare step happens at some arbitrary point within the PL framework (not at the start), so loading the checkpoint into the model before the prepare step is what causes the error. This is primarily a PL issue, but you should be able to circumvent it by creating a new PL module that does the quantization prepare right after model load rather than at the checkpoint hook. You could, for example, call it manually before loading (though there may be issues with preparing a model twice if the quant prepare step in PL doesn’t check for that).
st183902
So are you suggesting something like self.model = quant(model) (during inference) and then loading the model weights?
st183903
No, you’d need to do the pre-inference model preparation, i.e. the fuse and prepare steps. This is the code that PL runs to prepare the model: pytorch-lightning/quantization.py at 92cf396de2fe49e89a625a200d641bd8b6aeb328 · PyTorchLightning/pytorch-lightning · GitHub. This is what needs to be run in order to load the checkpoint, since the checkpoint is for the model after it’s been fused/prepared. Figuring out how to do this would require PL expertise that I don’t have; it may be a good idea to ask on the PL forum: https://forums.pytorchlightning.ai/
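For what it’s worth, here is a minimal sketch of that ordering outside of PL: rebuild the model, run the same prepare step, and only then load the checkpoint. TinyNet and the checkpoint path below are hypothetical placeholders, not the code PL actually runs.

import torch
import torch.nn as nn
import torch.quantization as tq

class TinyNet(nn.Module):  # hypothetical stand-in for the real network
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)

model = TinyNet()

# Re-run the same preparation that was applied before the checkpoint was saved, so the
# module hierarchy (observers / fake-quant modules) matches the keys in the state_dict.
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
# tq.fuse_modules(...)  # only if fusion was part of the original prepare step
tq.prepare_qat(model.train(), inplace=True)

# If the checkpoint was saved after torch.quantization.convert, convert first instead:
# model = tq.convert(model.eval(), inplace=False)

state_dict = torch.load("qat_checkpoint.pth", map_location="cpu")
model.load_state_dict(state_dict)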
st183904
This has already come up here but I feel it was not addressed properly. I’m facing this issue with FX tracing on timm's efficientnet_b3. I get a 10x speedup on the regular backend but a 4x slowdown on qnnpack (torch.backends.quantized.engine = 'qnnpack'). Here are the top 10 time eaters:

---------------------------  ----------  ----------  -----------  ----------  ------------  ----------
Name                         Self CPU %  Self CPU    CPU total %  CPU total   CPU time avg  # of Calls
---------------------------  ----------  ----------  -----------  ----------  ------------  ----------
model_inference              1.91%       9.592ms     100.00%      503.021ms   503.021ms     1
quantized::add               41.99%      211.224ms   42.02%       211.352ms   11.124ms      19
quantized::conv2d            20.23%      101.774ms   20.45%       102.887ms   791.436us    130
quantized::mul               11.50%      57.855ms    11.56%       58.131ms    2.236ms       26
aten::sigmoid                11.39%      57.277ms    11.40%       57.321ms    2.205ms       26
aten::dequantize             11.26%      56.655ms    11.29%       56.795ms    530.796us    107
aten::silu_                  0.05%       238.570us   0.72%        3.618ms     46.382us      78
aten::silu                   0.67%       3.379ms     0.67%        3.379ms     43.323us      78
aten::quantize_per_tensor    0.34%       1.694ms     0.34%        1.694ms     20.918us      81
aten::mean                   0.14%       719.232us   0.16%        803.762us   29.769us      27
---------------------------  ----------  ----------  -----------  ----------  ------------  ----------

It’s sad that so much time is being taken by add and mul. The average time for one add is 11 ms! The convs are taking about as long as the FP32 version. I also tried this with MobileNet V2 and got a 20x slowdown, from 41 ms to 806 ms.
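For reference, the FX graph mode flow used here looks roughly like this (a sketch, not the exact script; the functions live under torch.quantization.quantize_fx in the versions current at the time of this thread, moved to torch.ao.quantization.quantize_fx later, and newer releases also require an example_inputs argument to prepare_fx):

import torch
import torchvision
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

torch.backends.quantized.engine = "qnnpack"

# torchvision's mobilenet_v2 stands in for timm's efficientnet_b3 here.
model = torchvision.models.mobilenet_v2().eval()
qconfig_dict = {"": get_default_qconfig("qnnpack")}

prepared = prepare_fx(model, qconfig_dict)

# Calibration: run a few representative batches through the observed model.
with torch.no_grad():
    for _ in range(4):
        prepared(torch.randn(1, 3, 224, 224))

quantized = convert_fx(prepared)

with torch.no_grad():
    out = quantized(torch.randn(1, 3, 224, 224))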
st183905
Is this on mobile or on server? I know there is some work being done on the speed of add and mul here: AVX512 and Vec512 · Issue #56187 · pytorch/pytorch · GitHub. If it’s on mobile, it may be necessary to involve the mobile team as well.
st183906
Hi, I have quantized a MobileNetV3-like network with ‘qnnpack’ for use in an Android app. However, the quantized model is even slower than the original one. All layers seem to be quantized correctly and the model file size decreased to 1/4 of the original size. The model has ~2M parameters and the input resolution is 224x224. Here are some inference time numbers:
Model (without quantization) on Ryzen 3700x: ~50ms
Model (without quantization) on RTX 2070: ~6ms
Model (without quantization) on Huawei Mate 10 lite: ~1s
Model (with quantization) on Huawei Mate 10 lite: ~1.5s
I did not expect inference to take ~1s on such a model, even without quantization. Is this expected? Why would a quantized model be slower? Are there any operations/layers/architecture conventions that should absolutely be avoided? Also, the output of the quantized model is extremely noisy. What could be causing this? Here is an output example before and after model quantization: [screenshot: noise, 1500×732]
st183907
@singularity thanks for sharing. Is the entire network quantized, or are there some layers running in float? If you can reproduce the behavior on the server (using qnnpack), then you can use the autograd profiler to get an op-level breakdown and see which ops are causing the most slowdown. It might also be easier to debug the accuracy issue on the server, in case the quantization noise is reproducible there.
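For example, something along these lines on the server produces the kind of per-op table pasted later in this thread (a sketch; quantized_model is a placeholder for the converted model under test):

import torch

# Mirror the mobile setup on the server as closely as possible.
torch.backends.quantized.engine = "qnnpack"
torch.set_num_threads(1)

example = torch.randn(1, 3, 224, 224)

with torch.no_grad(), torch.autograd.profiler.profile(record_shapes=True) as prof:
    quantized_model(example)

print(prof.key_averages(group_by_input_shape=True)
          .table(sort_by="self_cpu_time_total", row_limit=15))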
st183908
Here is the output from the autograd profiler before and after quantization: Before ----------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls Input Shapes ----------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- conv2d 0.67% 906.899us 0.67% 906.899us 906.899us 1 [] convolution 0.67% 902.559us 0.67% 902.559us 902.559us 1 [] _convolution 0.67% 900.649us 0.67% 900.649us 900.649us 1 [] contiguous 0.00% 1.060us 0.00% 1.060us 1.060us 1 [] contiguous 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] contiguous 0.00% 0.120us 0.00% 0.120us 0.120us 1 [] mkldnn_convolution 0.66% 885.439us 0.66% 885.439us 885.439us 1 [] conv2d 0.53% 711.017us 0.53% 711.017us 711.017us 1 [] convolution 0.53% 710.247us 0.53% 710.247us 710.247us 1 [] _convolution 0.52% 704.057us 0.52% 704.057us 704.057us 1 [] contiguous 0.00% 0.290us 0.00% 0.290us 0.290us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] mkldnn_convolution 0.52% 698.567us 0.52% 698.567us 698.567us 1 [] conv2d 0.29% 389.754us 0.29% 389.754us 389.754us 1 [] convolution 0.29% 389.274us 0.29% 389.274us 389.274us 1 [] _convolution 0.29% 388.564us 0.29% 388.564us 388.564us 1 [] contiguous 0.00% 0.340us 0.00% 0.340us 0.340us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] mkldnn_convolution 0.29% 384.324us 0.29% 384.324us 384.324us 1 [] relu_ 0.02% 29.550us 0.02% 29.550us 29.550us 1 [] conv2d 0.34% 454.195us 0.34% 454.195us 454.195us 1 [] convolution 0.34% 453.735us 0.34% 453.735us 453.735us 1 [] _convolution 0.34% 453.145us 0.34% 453.145us 453.145us 1 [] contiguous 0.00% 0.240us 0.00% 0.240us 0.240us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] mkldnn_convolution 0.33% 448.975us 0.33% 448.975us 448.975us 1 [] relu_ 0.02% 21.830us 0.02% 21.830us 21.830us 1 [] conv2d 0.22% 291.363us 0.22% 291.363us 291.363us 1 [] convolution 0.22% 290.863us 0.22% 290.863us 290.863us 1 [] _convolution 0.22% 290.223us 0.22% 290.223us 290.223us 1 [] contiguous 0.00% 0.220us 0.00% 0.220us 0.220us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] contiguous 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] mkldnn_convolution 0.21% 280.402us 0.21% 280.402us 280.402us 1 [] adaptive_avg_pool2d 0.04% 60.060us 0.04% 60.060us 60.060us 1 [] contiguous 0.00% 0.250us 0.00% 0.250us 0.250us 1 [] view 0.00% 5.270us 0.00% 5.270us 5.270us 1 [] mean 0.03% 44.300us 0.03% 44.300us 44.300us 1 [] view 0.00% 1.870us 0.00% 1.870us 1.870us 1 [] view 0.00% 1.690us 0.00% 1.690us 1.690us 1 [] unsigned short 0.00% 5.701us 0.00% 5.701us 5.701us 1 [] matmul 0.03% 40.580us 0.03% 40.580us 40.580us 1 [] mm 0.03% 34.970us 0.03% 34.970us 34.970us 1 [] relu_ 0.00% 3.950us 0.00% 3.950us 3.950us 1 [] unsigned short 0.00% 2.340us 0.00% 2.340us 2.340us 1 [] matmul 0.00% 4.830us 0.00% 4.830us 4.830us 1 [] mm 0.00% 4.360us 0.00% 4.360us 4.360us 1 [] sigmoid 0.01% 13.000us 0.01% 13.000us 13.000us 1 [] view 0.00% 2.561us 0.00% 2.561us 2.561us 1 [] expand_as 0.00% 4.660us 0.00% 4.660us 4.660us 1 [] expand 0.00% 3.220us 0.00% 3.220us 3.220us 1 [] mul 0.02% 21.070us 0.02% 21.070us 21.070us 1 [] relu_ 0.01% 8.960us 0.01% 8.960us 8.960us 1 [] conv2d 0.21% 286.703us 0.21% 286.703us 
286.703us 1 [] convolution 0.21% 286.053us 0.21% 286.053us 286.053us 1 [] _convolution 0.21% 285.113us 0.21% 285.113us 285.113us 1 [] contiguous 0.00% 0.230us 0.00% 0.230us 0.230us 1 [] contiguous 0.01% 17.500us 0.01% 17.500us 17.500us 1 [] contiguous 0.00% 0.200us 0.00% 0.200us 0.200us 1 [] mkldnn_convolution 0.20% 263.112us 0.20% 263.112us 263.112us 1 [] conv2d 0.33% 443.374us 0.33% 443.374us 443.374us 1 [] convolution 0.33% 442.864us 0.33% 442.864us 442.864us 1 [] _convolution 0.33% 442.304us 0.33% 442.304us 442.304us 1 [] contiguous 0.00% 0.260us 0.00% 0.260us 0.260us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] mkldnn_convolution 0.33% 438.134us 0.33% 438.134us 438.134us 1 [] relu_ 0.02% 27.920us 0.02% 27.920us 27.920us 1 [] conv2d 0.23% 310.863us 0.23% 310.863us 310.863us 1 [] convolution 0.23% 310.383us 0.23% 310.383us 310.383us 1 [] _convolution 0.23% 309.743us 0.23% 309.743us 309.743us 1 [] contiguous 0.00% 0.230us 0.00% 0.230us 0.230us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] contiguous 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] mkldnn_convolution 0.23% 305.503us 0.23% 305.503us 305.503us 1 [] relu_ 0.01% 14.660us 0.01% 14.660us 14.660us 1 [] conv2d 0.19% 261.423us 0.19% 261.423us 261.423us 1 [] convolution 0.19% 260.713us 0.19% 260.713us 260.713us 1 [] _convolution 0.19% 255.493us 0.19% 255.493us 255.493us 1 [] contiguous 0.00% 0.260us 0.00% 0.260us 0.260us 1 [] contiguous 0.00% 0.120us 0.00% 0.120us 0.120us 1 [] contiguous 0.00% 0.120us 0.00% 0.120us 0.120us 1 [] mkldnn_convolution 0.19% 250.603us 0.19% 250.603us 250.603us 1 [] conv2d 0.20% 263.663us 0.20% 263.663us 263.663us 1 [] convolution 0.20% 263.183us 0.20% 263.183us 263.183us 1 [] _convolution 0.20% 262.683us 0.20% 262.683us 262.683us 1 [] contiguous 0.00% 0.280us 0.00% 0.280us 0.280us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] contiguous 0.00% 0.090us 0.00% 0.090us 0.090us 1 [] mkldnn_convolution 0.19% 258.533us 0.19% 258.533us 258.533us 1 [] relu_ 0.01% 13.400us 0.01% 13.400us 13.400us 1 [] conv2d 0.15% 196.892us 0.15% 196.892us 196.892us 1 [] convolution 0.15% 196.412us 0.15% 196.412us 196.412us 1 [] _convolution 0.15% 195.812us 0.15% 195.812us 195.812us 1 [] contiguous 0.00% 0.240us 0.00% 0.240us 0.240us 1 [] contiguous 0.00% 0.110us 0.00% 0.110us 0.110us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] ----------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- Self CPU time total: 134.621ms
st183909
hit the character limit… After --------------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls Input Shapes --------------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- item 0.01% 7.420us 0.01% 7.420us 7.420us 1 [] _local_scalar_dense 0.01% 4.550us 0.01% 4.550us 4.550us 1 [] aten::Int 0.00% 1.720us 0.00% 1.720us 1.720us 1 [] item 0.00% 0.470us 0.00% 0.470us 0.470us 1 [] _local_scalar_dense 0.00% 0.240us 0.00% 0.240us 0.240us 1 [] quantize_per_tensor 0.09% 69.811us 0.09% 69.811us 69.811us 1 [] quantized::conv2d 1.53% 1.183ms 1.53% 1.183ms 1.183ms 1 [] contiguous 0.22% 167.211us 0.22% 167.211us 167.211us 1 [] empty_like 0.01% 9.810us 0.01% 9.810us 9.810us 1 [] qscheme 0.00% 0.920us 0.00% 0.920us 0.920us 1 [] q_zero_point 0.00% 0.710us 0.00% 0.710us 0.710us 1 [] q_scale 0.00% 0.740us 0.00% 0.740us 0.740us 1 [] _empty_affine_quantized 0.00% 2.930us 0.00% 2.930us 2.930us 1 [] q_scale 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] contiguous 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] _empty_affine_quantized 0.00% 1.440us 0.00% 1.440us 1.440us 1 [] quantize_per_tensor 0.01% 5.550us 0.01% 5.550us 5.550us 1 [] _empty_affine_quantized 0.00% 1.320us 0.00% 1.320us 1.320us 1 [] q_zero_point 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] q_scale 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_zero_point 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_scale 0.00% 0.130us 0.00% 0.130us 0.130us 1 [] quantized::conv2d 0.34% 266.002us 0.34% 266.002us 266.002us 1 [] contiguous 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] q_scale 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] _empty_affine_quantized 0.00% 1.260us 0.00% 1.260us 1.260us 1 [] quantize_per_tensor 0.01% 4.290us 0.01% 4.290us 4.290us 1 [] _empty_affine_quantized 0.00% 1.180us 0.00% 1.180us 1.180us 1 [] q_zero_point 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] q_scale 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_zero_point 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] q_scale 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] quantized::conv2d_relu 1.11% 856.897us 1.11% 856.897us 856.897us 1 [] contiguous 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] q_scale 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] contiguous 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] _empty_affine_quantized 0.00% 1.260us 0.00% 1.260us 1.260us 1 [] quantize_per_tensor 0.01% 4.370us 0.01% 4.370us 4.370us 1 [] _empty_affine_quantized 0.00% 1.270us 0.00% 1.270us 1.270us 1 [] q_zero_point 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] q_scale 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_zero_point 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_scale 0.00% 0.130us 0.00% 0.130us 0.130us 1 [] quantized::conv2d_relu 0.49% 378.753us 0.49% 378.753us 378.753us 1 [] contiguous 0.00% 0.200us 0.00% 0.200us 0.200us 1 [] q_scale 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] contiguous 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] _empty_affine_quantized 0.00% 1.290us 0.00% 1.290us 1.290us 1 [] quantize_per_tensor 0.01% 4.700us 0.01% 4.700us 4.700us 1 [] _empty_affine_quantized 0.00% 1.260us 0.00% 1.260us 1.260us 1 [] q_zero_point 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] q_scale 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_zero_point 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] q_scale 0.00% 0.130us 0.00% 0.130us 0.130us 1 [] quantized::conv2d 0.18% 140.401us 
0.18% 140.401us 140.401us 1 [] contiguous 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] q_scale 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] contiguous 0.00% 0.100us 0.00% 0.100us 0.100us 1 [] _empty_affine_quantized 0.00% 1.060us 0.00% 1.060us 1.060us 1 [] quantize_per_tensor 0.01% 3.920us 0.01% 3.920us 3.920us 1 [] _empty_affine_quantized 0.00% 1.260us 0.00% 1.260us 1.260us 1 [] q_zero_point 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] q_scale 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] q_zero_point 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_scale 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] size 0.00% 0.970us 0.00% 0.970us 0.970us 1 [] size 0.00% 0.190us 0.00% 0.190us 0.190us 1 [] adaptive_avg_pool2d 0.02% 16.650us 0.02% 16.650us 16.650us 1 [] _adaptive_avg_pool2d 0.02% 14.410us 0.02% 14.410us 14.410us 1 [] view 0.01% 4.641us 0.01% 4.641us 4.641us 1 [] quantized::linear 0.02% 14.480us 0.02% 14.480us 14.480us 1 [] contiguous 0.00% 0.160us 0.00% 0.160us 0.160us 1 [] q_scale 0.00% 0.340us 0.00% 0.340us 0.340us 1 [] _empty_affine_quantized 0.00% 1.010us 0.00% 1.010us 1.010us 1 [] quantize_per_tensor 0.01% 4.510us 0.01% 4.510us 4.510us 1 [] _empty_affine_quantized 0.00% 0.930us 0.00% 0.930us 0.930us 1 [] q_scale 0.00% 0.220us 0.00% 0.220us 0.220us 1 [] q_zero_point 0.00% 0.200us 0.00% 0.200us 0.200us 1 [] relu_ 0.00% 3.630us 0.00% 3.630us 3.630us 1 [] quantized::linear 0.01% 10.470us 0.01% 10.470us 10.470us 1 [] contiguous 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] q_scale 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] _empty_affine_quantized 0.00% 0.850us 0.00% 0.850us 0.850us 1 [] quantize_per_tensor 0.01% 4.310us 0.01% 4.310us 4.310us 1 [] _empty_affine_quantized 0.00% 0.810us 0.00% 0.810us 0.810us 1 [] q_scale 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] q_zero_point 0.00% 0.150us 0.00% 0.150us 0.150us 1 [] sigmoid 0.01% 7.770us 0.01% 7.770us 7.770us 1 [] view 0.00% 1.290us 0.00% 1.290us 1.290us 1 [] expand_as 0.01% 5.330us 0.01% 5.330us 5.330us 1 [] expand 0.01% 4.190us 0.01% 4.190us 4.190us 1 [] quantized::mul 0.20% 152.521us 0.20% 152.521us 152.521us 1 [] qscheme 0.00% 0.290us 0.00% 0.290us 0.290us 1 [] qscheme 0.00% 0.180us 0.00% 0.180us 0.180us 1 [] qscheme 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] _empty_affine_quantized 0.00% 1.380us 0.00% 1.380us 1.380us 1 [] q_zero_point 0.00% 0.170us 0.00% 0.170us 0.170us 1 [] q_scale 0.00% 0.200us 0.00% 0.200us 0.200us 1 [] q_zero_point 0.00% 0.140us 0.00% 0.140us 0.140us 1 [] --------------------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------------------------- Self CPU time total: 77.276ms Most of the time is spent in Conv2d/ReLU operations, but they are quantized. So it seems that quantization is indeed working as Desktop CPU time decreases from 134ms to 77ms. However, when I run the quantized model on my mobile device (Huawei Mate 10 lite), there are no performance gains. Any ideas? Also, I found that the noise is likely caused by this qnnpack bug https://github.com/pytorch/pytorch/issues/36253 1. When I train my model only for a few epochs, I can quantize the model without any errors. However, when I fully train the model, these errors: “output scale: convolution scale 4.636909 is greater or equal to 1.0” are thrown during quantization and the model is extremely noisy after quantization. Is there a fix for this yet?
st183910
Regarding the performance, could you set the number of threads to 1 and see if it is still slower? Regarding the noise: could you try with a PyTorch nightly build? There was a fix for the scale issue, as mentioned here: https://github.com/pytorch/pytorch/issues/33466#issuecomment-627660191
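For the thread check, a quick sketch (model and the input shape are placeholders for whatever you are benchmarking):

import time
import torch

torch.set_num_threads(1)  # force single-threaded execution for the comparison
print(torch.get_num_threads())

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)  # warm-up
    t0 = time.time()
    for _ in range(50):
        model(x)
print("avg seconds per run:", (time.time() - t0) / 50)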
st183911
Upgrading to PyTorch nightly fixed the errors and the output is looking much better now! Edit: I spoke a bit too soon… there is also something wrong with the calibration. It seems that calibration is actually hurting the output quality. Here is the output of the quantized model after 1, 10, 100, and 1000 calibration images: [screenshot: calibration outputs, 1600×394] This is what it should look like: (output before quantization) I have tried setting the number of CPU threads with org.pytorch.PyTorchAndroid.setNumThreads(1); but it does not make a difference. I have also tried 1, 2, 3, and 4. Is this the correct way to set the thread count?
st183912
Hi singularity, have you solved the quantization performance issue on Android devices? I am running into a similar one with MobileNetV3: a performance gain can be obtained on a desktop PC but not on Android devices. Thanks.
st183913
Hi @supriyar , So I have similar problem with mobilenet_v3: I’m testing the time performance of my float32 and quantized model. The quantized model is significantly slower than the float32 model, both on ‘fbgemm’ and ‘qnnpack’ and both on PC and Android: PC, one thread, fbgemm: 0.011s vs 0.034s (avegare from 100 trials) PC, one thread, qnnpack: 0.012s vs 0.035s (avegare from 100 trials) What I basically did is: took Duo Li implementation of MobileNetV3 added QuantStub at the beginning and DeQuantStub at the end of the model changed all adds, muls and divs into FloatFunctional for quantized tensors support set model.qconfig to ‘fbgemm’ or ‘qnnpack’ prepared model for qat converted model to quantized model compared performance between quantized version and non-quantized Quantized model is ~4x smaller, but the inference is taking signifficantly slower. Is there something that I’m missing? to reproduce: torchvision 0.8.2 pytorch 1.7.1 Windows10 """ MIT License Copyright (c) 2019 Duo LI Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. """ import torch from tqdm import tqdm import time import torch.nn as nn import math def _make_divisible(v, divisor, min_value=None): """ This function is taken from the original tf repo. It ensures that all layers have a channel number that is divisible by 8 It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py :param v: :param divisor: :param min_value: :return: """ if min_value is None: min_value = divisor new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) # Make sure that round down does not go down by more than 10%. 
if new_v < 0.9 * v: new_v += divisor return new_v ################################################################################## # FLOAT32 ARCHITECTURE ################################################################################## class h_sigmoid(nn.Module): def __init__(self, inplace=True): super(h_sigmoid, self).__init__() self.relu = nn.ReLU6(inplace=inplace) def forward(self, x): return self.relu(x + 3) / 6 class h_swish(nn.Module): def __init__(self, inplace=True): super(h_swish, self).__init__() self.sigmoid = h_sigmoid(inplace=inplace) def forward(self, x): return x * self.sigmoid(x) class SELayer(nn.Module): def __init__(self, channel, reduction=4): super(SELayer, self).__init__() self.avg_pool = nn.AdaptiveAvgPool2d(1) self.fc = nn.Sequential( nn.Linear(channel, _make_divisible(channel // reduction, 8)), nn.ReLU(inplace=True), nn.Linear(_make_divisible(channel // reduction, 8), channel), h_sigmoid() ) def forward(self, x): b, c, _, _ = x.size() y = self.avg_pool(x).view(b, c) y = self.fc(y).view(b, c, 1, 1) return x * y def conv_3x3_bn(inp, oup, stride): return nn.Sequential( nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), h_swish() ) def conv_1x1_bn(inp, oup): return nn.Sequential( nn.Conv2d(inp, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), h_swish() ) class InvertedResidual(nn.Module): def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs): super(InvertedResidual, self).__init__() assert stride in [1, 2] self.identity = stride == 1 and inp == oup if inp == hidden_dim: self.conv = nn.Sequential( # dw nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim, bias=False), nn.BatchNorm2d(hidden_dim), h_swish() if use_hs else nn.ReLU(inplace=True), # Squeeze-and-Excite SELayer(hidden_dim) if use_se else nn.Identity(), # pw-linear nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), ) else: self.conv = nn.Sequential( # pw nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), nn.BatchNorm2d(hidden_dim), h_swish() if use_hs else nn.ReLU(inplace=True), # dw nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim, bias=False), nn.BatchNorm2d(hidden_dim), # Squeeze-and-Excite SELayer(hidden_dim) if use_se else nn.Identity(), h_swish() if use_hs else nn.ReLU(inplace=True), # pw-linear nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), ) def forward(self, x): if self.identity: return x + self.conv(x) else: return self.conv(x) class MobileNetV3(nn.Module): def __init__(self, cfgs, mode, num_classes=1, width_mult=1.): super(MobileNetV3, self).__init__() # setting of inverted residual blocks self.cfgs = cfgs assert mode in ['large', 'small'] # building first layer input_channel = _make_divisible(16 * width_mult, 8) layers = [conv_3x3_bn(3, input_channel, 2)] # building inverted residual blocks block = InvertedResidual for k, t, c, use_se, use_hs, s in self.cfgs: output_channel = _make_divisible(c * width_mult, 8) exp_size = _make_divisible(input_channel * t, 8) layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, use_hs)) input_channel = output_channel self.features = nn.Sequential(*layers) # building last several layers self.conv = conv_1x1_bn(input_channel, exp_size) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) output_channel = {'large': 1280, 'small': 1024} output_channel = _make_divisible(output_channel[mode] * width_mult, 8) if width_mult > 1.0 else output_channel[mode] self.classifier = nn.Sequential( 
nn.Linear(exp_size, output_channel), h_swish(), nn.Dropout(0.2), nn.Linear(output_channel, num_classes), ) self._initialize_weights() def forward(self, x): x = self.features(x) x = self.conv(x) x = self.avgpool(x) x = x.view(x.size(0), -1) x = self.classifier(x) return x def _initialize_weights(self): for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. / n)) if m.bias is not None: m.bias.data.zero_() elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() elif isinstance(m, nn.Linear): n = m.weight.size(1) m.weight.data.normal_(0, 0.01) m.bias.data.zero_() def mobilenetv3_small(**kwargs): """ Constructs a MobileNetV3-Small model """ return MobileNetV3(cfgs, mode='small', **kwargs) ################################################################################## # QUANTIZED ARCHITECTURE ################################################################################## class h_sigmoid_quant(nn.Module): def __init__(self, inplace=True): super(h_sigmoid_quant, self).__init__() self.relu = nn.ReLU6(inplace=inplace) self.q_add = nn.quantized.FloatFunctional() def forward(self, x): return self.q_add.mul_scalar(self.relu(self.q_add.add_scalar(x, 3.)), 1/6) # return self.relu(x) class h_swish_quant(nn.Module): def __init__(self, inplace=True): super(h_swish_quant, self).__init__() self.sigmoid = h_sigmoid_quant(inplace=inplace) self.q_mul = nn.quantized.FloatFunctional() def forward(self, x): return self.q_mul.mul(x, self.sigmoid(x)) class SELayerQuant(nn.Module): def __init__(self, channel, reduction=4): super(SELayerQuant, self).__init__() self.avg_pool = nn.AdaptiveAvgPool2d(1) self.fc = nn.Sequential( nn.Linear(channel, _make_divisible(channel // reduction, 8)), nn.ReLU(inplace=True), nn.Linear(_make_divisible(channel // reduction, 8), channel), h_sigmoid_quant() ) self.q_mul = nn.quantized.FloatFunctional() def forward(self, x): b, c, _, _ = x.size() y = self.avg_pool(x).view(b, c) y = self.fc(y).view(b, c, 1, 1) return self.q_mul.mul(x, y) def conv_3x3_bn_quant(inp, oup, stride): return nn.Sequential( nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), h_swish_quant() ) def conv_1x1_bn_quant(inp, oup): return nn.Sequential( nn.Conv2d(inp, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), h_swish_quant() ) class InvertedResidualQuant(nn.Module): def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se, use_hs): super(InvertedResidualQuant, self).__init__() assert stride in [1, 2] self.identity = stride == 1 and inp == oup self.q_add = nn.quantized.FloatFunctional() if inp == hidden_dim: self.conv = nn.Sequential( # dw nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim, bias=False), nn.BatchNorm2d(hidden_dim), h_swish_quant() if use_hs else nn.ReLU(inplace=True), # Squeeze-and-Excite SELayerQuant(hidden_dim) if use_se else nn.Identity(), # pw-linear nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), ) else: self.conv = nn.Sequential( # pw nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False), nn.BatchNorm2d(hidden_dim), h_swish_quant() if use_hs else nn.ReLU(inplace=True), # dw nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim, bias=False), nn.BatchNorm2d(hidden_dim), # Squeeze-and-Excite SELayerQuant(hidden_dim) if use_se else nn.Identity(), h_swish_quant() if use_hs else nn.ReLU(inplace=True), # pw-linear nn.Conv2d(hidden_dim, oup, 1, 1, 0, 
bias=False), nn.BatchNorm2d(oup), ) def forward(self, x): if self.identity: return self.q_add.add(x, self.conv(x)) else: return self.conv(x) class MobileNetV3_quant(nn.Module): def __init__(self, cfgs, mode, num_classes=1, width_mult=1.): super(MobileNetV3_quant, self).__init__() # setting of inverted residual blocks self.cfgs = cfgs assert mode in ['large', 'small'] self.quant = torch.quantization.QuantStub() self.dequant = torch.quantization.DeQuantStub() # building first layer input_channel = _make_divisible(16 * width_mult, 8) layers = [conv_3x3_bn_quant(3, input_channel, 2)] # building inverted residual blocks block = InvertedResidualQuant for k, t, c, use_se, use_hs, s in self.cfgs: output_channel = _make_divisible(c * width_mult, 8) exp_size = _make_divisible(input_channel * t, 8) layers.append(block(input_channel, exp_size, output_channel, k, s, use_se, use_hs)) input_channel = output_channel self.features = nn.Sequential(*layers) # building last several layers self.conv = conv_1x1_bn_quant(input_channel, exp_size) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) output_channel = {'large': 1280, 'small': 1024} output_channel = _make_divisible(output_channel[mode] * width_mult, 8) if width_mult > 1.0 else output_channel[mode] self.classifier = nn.Sequential( nn.Linear(exp_size, output_channel), h_swish_quant(), nn.Dropout(0.2), nn.Linear(output_channel, num_classes), ) self._initialize_weights() def forward(self, x): x = self.quant(x) x = self.features(x) x = self.conv(x) x = self.avgpool(x) x = x.view(x.size(0), -1) x = self.classifier(x) x = self.dequant(x) return x def _initialize_weights(self): for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. / n)) if m.bias is not None: m.bias.data.zero_() elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() elif isinstance(m, nn.Linear): n = m.weight.size(1) m.weight.data.normal_(0, 0.01) m.bias.data.zero_() def mobilenetv3_small_quant(**kwargs): """ Constructs a MobileNetV3-Small model """ return MobileNetV3_quant(cfgs, mode='small', **kwargs) ################################################################################## # RUN COMPARISION ################################################################################## def test_net(mimage, quant): if quant: model = mobilenetv3_small_quant() model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack') torch.quantization.prepare_qat(model, inplace=True) else: model = mobilenetv3_small() if quant: model = torch.quantization.convert(model) model.eval() model.to(torch.device("cpu")) t0 = time.time() with torch.no_grad(): with tqdm(total=RUNS, ncols=100) as pbar: for _ in range(RUNS): model(mimage) pbar.update() return (time.time() - t0) / RUNS if __name__ == '__main__': RUNS = 100 cfgs = [ # k, t, c, SE, HS, s [3, 1, 16, 1, 0, 2], [3, 4.5, 24, 0, 0, 2], [3, 3.67, 24, 0, 0, 1], [5, 4, 40, 1, 1, 2], [5, 6, 40, 1, 1, 1], [5, 6, 40, 1, 1, 1], [5, 3, 48, 1, 1, 1], [5, 3, 48, 1, 1, 1], [5, 6, 96, 1, 1, 2], [5, 6, 96, 1, 1, 1], [5, 6, 96, 1, 1, 1], ] torch.set_num_threads(1) image = torch.rand(1, 3, 224, 224) print(f"time float32: {test_net(image, False)}") print(f"time quant: {test_net(image, True)}")
st183914
Having an operator-level profile might help to narrow down whether certain ops are causing the slowdown. From the model code, it seems like there are some 1x1 convs in the network, and my understanding is that these may not be as efficient on fbgemm. cc @dskhudia in case anything else stands out that may be causing slower inference on fbgemm.
st183915
So I’ve changed every 1x1 conv to a 3x3 conv (with padding=1 to preserve output shapes), and there is a significant slowdown in the float32 architecture (from 0.011s to 0.019s) and a small slowdown in the quantized architecture (from 0.034s to 0.037s), but it’s still 2x slower than float32. Any other ideas?
st183916
@supriyar 1x1 convs should be performant using the fbgemm backend. Also, in the reply by singularity I do see that quantization improves the time:
singularity: “So it seems that quantization is indeed working as Desktop CPU time decreases from 134ms to 77ms.”
st183917
Sadly no, I’m facing the same issue with a UNet architecture. What I’ve found is that when passing a smaller input like (1, 3, 32, 32), the quantized model performs similarly to fp32. When passing an even smaller input like (1, 3, 16, 16), the quantized model performs slightly better (a 6% speedup on fbgemm). Nonetheless, an input of size 16x16 is quite an extreme scenario. @supriyar @dskhudia maybe this is helpful for tracing what is wrong?
st183918
I’m facing the same issue with FX tracing on timm's efficientnet_b3. I get a 10x speedup on the regular backend but a 4x slowdown on qnnpack (torch.backends.quantized.engine = 'qnnpack'). Here are the top 10 time eaters:

---------------------------  ----------  ----------  -----------  ----------  ------------  ----------
Name                         Self CPU %  Self CPU    CPU total %  CPU total   CPU time avg  # of Calls
---------------------------  ----------  ----------  -----------  ----------  ------------  ----------
model_inference              1.91%       9.592ms     100.00%      503.021ms   503.021ms     1
quantized::add               41.99%      211.224ms   42.02%       211.352ms   11.124ms      19
quantized::conv2d            20.23%      101.774ms   20.45%       102.887ms   791.436us    130
quantized::mul               11.50%      57.855ms    11.56%       58.131ms    2.236ms       26
aten::sigmoid                11.39%      57.277ms    11.40%       57.321ms    2.205ms       26
aten::dequantize             11.26%      56.655ms    11.29%       56.795ms    530.796us    107
aten::silu_                  0.05%       238.570us   0.72%        3.618ms     46.382us      78
aten::silu                   0.67%       3.379ms     0.67%        3.379ms     43.323us      78
aten::quantize_per_tensor    0.34%       1.694ms     0.34%        1.694ms     20.918us      81
aten::mean                   0.14%       719.232us   0.16%        803.762us   29.769us      27
---------------------------  ----------  ----------  -----------  ----------  ------------  ----------

It’s sad that so much time is being taken by add and mul. The average time for one add is 11 ms! The convs are taking about as long as the FP32 version. Why is this thread getting so little attention? Serious question, as I’m new to putting AI on edge devices and I’m starting to wonder if I’ve gone down some rarely followed path.
st183919
Hello I’m trying to do QAT -> Torchscript but am getting an error. My model is Click here import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.quantization import QuantStub, DeQuantStub def SeperableConv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, activation_fn=nn.ReLU): """Replace Conv2d with a depthwise Conv2d and Pointwise Conv2d. """ return nn.Sequential( nn.Conv2d(in_channels=in_channels, out_channels=in_channels, kernel_size=kernel_size, groups=in_channels, stride=stride, padding=padding), activation_fn(), nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1) ) class SSD(nn.Module): def __init__(self, num_classes: int, is_test=False, config=None, device=None, activation_fn=nn.ReLU): """Compose a SSD model using the given components. """ super(SSD, self).__init__() self.base_channel = 16 self.num_classes = num_classes self.is_test = is_test if device: self.device = device else: self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if config: self.center_variance = torch.tensor([config['center_variance']], device=device) self.size_variance = torch.tensor([config['size_variance']], device=device) self.priors = config['priors'].to(self.device) else: self.center_variance = torch.tensor([0.1], device=device) self.size_variance = torch.tensor([0.2], device=device) self.extras = nn.Sequential( nn.Conv2d(in_channels=self.base_channel * 16, out_channels=self.base_channel * 4, kernel_size=1), activation_fn(), SeperableConv2d(in_channels=self.base_channel * 4, out_channels=self.base_channel * 16, kernel_size=3, stride=2, padding=1, activation_fn=activation_fn), activation_fn() ) self.regression_headers0 = SeperableConv2d(in_channels=self.base_channel * 4, out_channels=3 * 4, kernel_size=3, padding=1, activation_fn=activation_fn) self.regression_headers1 = SeperableConv2d(in_channels=self.base_channel * 8, out_channels=2 * 4, kernel_size=3, padding=1, activation_fn=activation_fn) self.regression_headers2 = SeperableConv2d(in_channels=self.base_channel * 16, out_channels=2 * 4, kernel_size=3, padding=1, activation_fn=activation_fn) self.regression_headers3 = nn.Conv2d(in_channels=self.base_channel * 16, out_channels=3 * 4, kernel_size=3, padding=1) self.classification_headers0 = SeperableConv2d(in_channels=self.base_channel * 4, out_channels=3 * num_classes, kernel_size=3, padding=1, activation_fn=activation_fn) self.classification_headers1 = SeperableConv2d(in_channels=self.base_channel * 8, out_channels=2 * num_classes, kernel_size=3, padding=1, activation_fn=activation_fn) self.classification_headers2 = SeperableConv2d(in_channels=self.base_channel * 16, out_channels=2 * num_classes, kernel_size=3, padding=1, activation_fn=activation_fn) self.classification_headers3 = nn.Conv2d(in_channels=self.base_channel * 16, out_channels=3 * num_classes, kernel_size=3, padding=1) def conv_bn(inp, oup, stride): return nn.Sequential( nn.Conv2d(inp, oup, 3, stride, 1, bias=False), nn.BatchNorm2d(oup), activation_fn() ) def conv_dw(inp, oup, stride): return nn.Sequential( nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), nn.BatchNorm2d(inp), activation_fn(), nn.Conv2d(inp, oup, 1, 1, 0, bias=False), nn.BatchNorm2d(oup), activation_fn(), ) self.backbone_chunk1 = nn.Sequential( conv_bn(3, self.base_channel, 2), # 160*120 conv_dw(self.base_channel, self.base_channel * 2, 1), conv_dw(self.base_channel * 2, self.base_channel * 2, 2), # 80*60 conv_dw(self.base_channel * 2, self.base_channel * 2, 
1), conv_dw(self.base_channel * 2, self.base_channel * 4, 2), # 40*30 conv_dw(self.base_channel * 4, self.base_channel * 4, 1), conv_dw(self.base_channel * 4, self.base_channel * 4, 1), # BasicRFB(self.base_channel * 4, self.base_channel * 4, stride=1, scale=1.0, activation_fn=activation_fn) ) self.backbone_chunk2 = nn.Sequential( conv_dw(self.base_channel * 4, self.base_channel * 8, 2), # 20*15 conv_dw(self.base_channel * 8, self.base_channel * 8, 1), conv_dw(self.base_channel * 8, self.base_channel * 8, 1), ) self.backbone_chunk3 = nn.Sequential( conv_dw(self.base_channel * 8, self.base_channel * 16, 2), # 10*8 conv_dw(self.base_channel * 16, self.base_channel * 16, 1) ) self.quant0 = QuantStub() self.quant1 = QuantStub() self.quant2 = QuantStub() self.quant3 = QuantStub() self.dequant = DeQuantStub() def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: confidences = [] locations = [] x = self.quant0(x) for layer in self.backbone_chunk1: x = layer(x) x = self.dequant(x) confidence = self.classification_headers0(x) confidence = confidence.permute(0, 2, 3, 1).contiguous() confidence = confidence.view(confidence.size(0), -1, self.num_classes) location = self.regression_headers0(x) location = location.permute(0, 2, 3, 1).contiguous() location = location.view(location.size(0), -1, 4) confidences.append(confidence) locations.append(location) # x = self.quant1(x) for layer in self.backbone_chunk2: x = layer(x) # x = self.dequant(x) confidence = self.classification_headers1(x) confidence = confidence.permute(0, 2, 3, 1).contiguous() confidence = confidence.view(confidence.size(0), -1, self.num_classes) location = self.regression_headers1(x) location = location.permute(0, 2, 3, 1).contiguous() location = location.view(location.size(0), -1, 4) confidences.append(confidence) locations.append(location) # x = self.quant2(x) for layer in self.backbone_chunk3: x = layer(x) # x = self.dequant(x) confidence = self.classification_headers2(x) confidence = confidence.permute(0, 2, 3, 1).contiguous() confidence = confidence.view(confidence.size(0), -1, self.num_classes) location = self.regression_headers2(x) location = location.permute(0, 2, 3, 1).contiguous() location = location.view(location.size(0), -1, 4) confidences.append(confidence) locations.append(location) # x = self.quant3(x) x = self.extras(x) # x = self.dequant(x) confidence = self.classification_headers3(x) confidence = confidence.permute(0, 2, 3, 1).contiguous() confidence = confidence.view(confidence.size(0), -1, self.num_classes) location = self.regression_headers3(x) location = location.permute(0, 2, 3, 1).contiguous() location = location.view(location.size(0), -1, 4) confidences.append(confidence) locations.append(location) confidences = torch.cat(confidences, 1) locations = torch.cat(locations, 1) return confidences, locations def load(self, model): self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage)) def save(self, model_path): torch.save(self.state_dict(), model_path) I do QAT -> torchscript and test it by running the code: from ssd import SSD ... net = SSD(num_classes=2, device=device, config=config) ... net.load(trained_model_path) net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(net, inplace=True) for epoch in range(num_epochs): train() etc... 
net.eval() net.cpu() # convert to quantised quant_net = deepcopy(net) quant_net = torch.quantization.convert(quant_net, inplace=False) quant_net.save(os.path.join(args.checkpoint_folder, f"quantised-net.pth")) m = torch.jit.script(quant_net) m.cpu() dummy = torch.randn(1, 3, 480, 640).cpu().float() a = m.forward(dummy) # test to see if scripted module works torch.jit.save(m, os.path.join(args.checkpoint_folder, f"jit-net.pt")) net.to(DEVICE) and I get the error on the line a = m.forward(dummy) : Traceback (most recent call last): File "train_testt.py", line 360, in <module> a = m.forward(dummy) RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): File "/home/joel/Desktop/Ultra-Light-Fast-Generic-Face-Detector-1MB/minimod.py", line 124, in forward x = layer(x) x = self.dequant(x) confidence = self.classification_headers0(x) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE confidence = confidence.permute(0, 2, 3, 1).contiguous() confidence = confidence.view(confidence.size(0), -1, self.num_classes) File "/home/joel/anaconda3/envs/nightlytorch/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward def forward(self, input): for module in self: input = module(input) ~~~~~~ <--- HERE return input File "/home/joel/anaconda3/envs/nightlytorch/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward def forward(self, input): for module in self: input = module(input) ~~~~~~ <--- HERE return input File "/home/joel/anaconda3/envs/nightlytorch/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py", line 326, in forward if len(input.shape) != 4: raise ValueError("Input shape must be `(N, C, H, W)`!") return ops.quantized.conv2d( ~~~~~~~~~~~~~~~~~~~~ <--- HERE input, self._packed_params, self.scale, self.zero_point) RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, Autograd, Profiler, Tracer, Autocast, Batched]. QuantizedCPU: registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/native/quantized/cpu/qconv.cpp:736 [kernel] BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] Named: registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] Autograd: fallthrough registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/core/VariableFallbackKernel.cpp:31 [backend fallback] Profiler: registered at /opt/conda/conda-bld/pytorch_1594145889316/work/torch/csrc/autograd/profiler.cpp:677 [backend fallback] Tracer: fallthrough registered at /opt/conda/conda-bld/pytorch_1594145889316/work/torch/csrc/jit/frontend/tracer.cpp:960 [backend fallback] Autocast: fallthrough registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/autocast_mode.cpp:375 [backend fallback] Batched: registered at /opt/conda/conda-bld/pytorch_1594145889316/work/aten/src/ATen/BatchingRegistrations.cpp:149 [backend fallback] This error does not occur if I remove all QAT/quantization lines and just jit.script the original model. The same error still occurs if I remove all Quant/DeQuantStubs in the model. Does anyone know why this error occurs? Could I also ask whether the commented out QuantStubs/DeStubs in the model are correctly placed? Thank you!
st183920
Solved by kekpirat in post #13: Updating to torch nightly from torch 1.5.1 fixed the issue! (did not try 1.6)
st183921
This means the QuantStub/DeQuantStub is not placed correctly in the model and the input of quantized::conv2d is not quantized yet. You can look at the model and check whether you have a missing QuantStub before a conv2d module.
st183922
kekpirat:
for layer in self.backbone_chunk1:
    x = layer(x)

Looking at the code, most likely it’s here:

x = self.quant0(x)
for layer in self.backbone_chunk1:
    x = layer(x)

Only the first x is quantized in this case. Instead, you should have a quantized x for each activation in the loop:

x = self.quant0(x)
for i, layer in enumerate(self.backbone_chunk1):
    x = layer(x)
    x = self.quants[i](x)

and define a list of QuantStub instances with the same length as self.backbone_chunk1 in __init__.
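A sketch of what that structure could look like, with a hypothetical two-layer backbone standing in for the real one (only the ModuleList-of-QuantStubs idea is the point here, not the layers themselves):

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class BackboneSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # stand-in for the real backbone_chunk1
        self.backbone_chunk1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.Conv2d(16, 16, 3, padding=1),
        )
        self.quant0 = QuantStub()
        # one QuantStub per layer of backbone_chunk1
        self.quants = nn.ModuleList(QuantStub() for _ in self.backbone_chunk1)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant0(x)
        for i, layer in enumerate(self.backbone_chunk1):
            x = layer(x)
            x = self.quants[i](x)
        return self.dequant(x)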
st183923
Hello. Thanks for the reply! I tried what you suggested but still get a slightly different error RuntimeError: Could not run 'quantized::conv2d' with arguments from the 'CPUTensorId' backend. 'quantized::conv2d' is only available for these backends: [QuantizedCPUTensorId]. I’m wondering perhaps how to solve the problem on a simpler model which has the same error/issue import torch from torch import nn, optim from torch.quantization import QuantStub, DeQuantStub from copy import deepcopy device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class Model(nn.Module): def __init__(self): super().__init__() self.backbone0 = nn.Sequential( nn.Conv2d(3, 1, 1, bias=False), nn.BatchNorm2d(1), nn.ReLU(), ) self.backbone1 = nn.Sequential( nn.Conv2d(1, 2, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(2), nn.AvgPool2d(14), nn.Sigmoid(), ) self.quant = QuantStub() self.dequant = DeQuantStub() def forward(self, x): x = self.quant(x) x = self.backbone0(x) x = self.dequant(x) x = self.backbone1(x) return x model = Model() model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(model, inplace=True) optimizer = optim.Adam(model.parameters(), lr=1) model.to(device) print(model) criterion = nn.BCELoss() for epoch in range(10): model.train() inputs = torch.rand(2, 3, 28, 28) labels = torch.FloatTensor([[1, 1], [0, 0]]) inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) loss = criterion(outputs.view(2, 2), labels) optimizer.zero_grad() loss.backward() optimizer.step() if epoch >= 2: model.apply(torch.quantization.disable_observer) if epoch >= 3: model.apply(torch.nn.intrinsic.qat.freeze_bn_stats) quant_model = deepcopy(model) quant_model = torch.quantization.convert(quant_model.eval().cpu(), inplace=False) with torch.no_grad(): out = quant_model(torch.rand(1, 3, 28, 28)) I tried to prepare_qat only backbone0 as well but got the same error: model.backbone0.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(model.backbone0, inplace=True) What is the correct way to place QuantStubs in the forward() such that only backbone0 is quantised?
st183924
Managed to get it to look correct (from looking at print(quant_model)) and not error out by only preparing QAT on backbone0 and inserting the QuantStub/DeQuantStub into the nn.Sequential itself:

...
self.backbone0 = nn.Sequential(
    QuantStub(),
    nn.Conv2d(3, 1, 1, bias=False),
    nn.BatchNorm2d(1),
    nn.ReLU(),
    DeQuantStub(),
)
...
model.backbone0.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model.backbone0, inplace=True)

I notice that when I forward zeros through it, there is a random chance of getting either the same output as the non-quant-converted model or drastically different outputs. Is this the wrong way to get backbone0 quantised?
st183925
I notice that when I forward zeros through it, it has a random chance of having either the same output as the non quant-converted model or drastically different outputs.

Is this after QAT?
st183926
Yes The code now import torch from torch import nn, optim from torch.quantization import QuantStub, DeQuantStub from copy import deepcopy device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class Model(nn.Module): def __init__(self): super().__init__() self.backbone0 = nn.Sequential( QuantStub(), nn.Conv2d(3, 1, 1, bias=False), nn.BatchNorm2d(1), nn.ReLU(), DeQuantStub(), ) self.backbone1 = nn.Sequential( nn.Conv2d(1, 2, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(2), nn.AvgPool2d(14), nn.Sigmoid(), ) def forward(self, x): x = self.backbone0(x) x = self.backbone1(x) return x model = Model() model.backbone0.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(model.backbone0, inplace=True) optimizer = optim.Adam(model.parameters(), lr=1) model.to(device) print(model) criterion = nn.BCELoss() for epoch in range(10): print('EPOCH', epoch) model.train() inputs = torch.rand(2, 3, 28, 28) labels = torch.FloatTensor([[1, 1], [0, 0]]) inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) loss = criterion(outputs.view(2, 2), labels) optimizer.zero_grad() loss.backward() optimizer.step() if epoch >= 2: model.apply(torch.quantization.disable_observer) pass if epoch >= 3: model.apply(torch.nn.intrinsic.qat.freeze_bn_stats) quant_model = deepcopy(model) quant_model = torch.quantization.convert(quant_model.eval().cpu(), inplace=False) with torch.no_grad(): inp = torch.zeros([1, 3, 28, 28], device='cpu') model.eval().cpu() quant_model.eval().cpu() qout = quant_model.forward(inp) out = model.forward(inp) print(qout.view(2).tolist()) print(out.view(2).tolist()) model.to(device) I printed the state_dicts of the not converted and converted models if the output from forwarding zeros diverged/stayed the same: diverged # Forward zeros output [0.28445518016815186, 0.3933817744255066] #not converted [0.07719799876213074, 0.6743956208229065] #converted <================ NON-CONVERTED MODEL ====================> OrderedDict([('backbone0.0.activation_post_process.scale', tensor([0.0077])), ('backbone0.0.activation_post_process.zero_point', tensor([0])), ('backbone0.0.activation_post_process.activation_post_process.min_val', tensor(0.0006)), ('backbone0.0.activation_post_process.activation_post_process.max_val', tensor(0.9801)), ('backbone0.1.weight', tensor([[[[-4.4932]], [[ 3.9661]], [[-4.1858]]]])), ('backbone0.1.activation_post_process.scale', tensor([0.0039])), ('backbone0.1.activation_post_process.zero_point', tensor([127])), ('backbone0.1.activation_post_process.activation_post_process.min_val', tensor(-0.4932)), ('backbone0.1.activation_post_process.activation_post_process.max_val', tensor(0.0016)), ('backbone0.1.weight_fake_quant.scale', tensor([0.0028])), ('backbone0.1.weight_fake_quant.zero_point', tensor([0])), ('backbone0.1.weight_fake_quant.activation_post_process.min_vals', tensor([-0.3583])), ('backbone0.1.weight_fake_quant.activation_post_process.max_vals', tensor([0.0458])), ('backbone0.2.weight', tensor([4.2045])), ('backbone0.2.bias', tensor([-0.3767])), ('backbone0.2.running_mean', tensor([-0.1198])), ('backbone0.2.running_var', tensor([0.3677])), ('backbone0.2.num_batches_tracked', tensor(10)), ('backbone0.2.activation_post_process.scale', tensor([0.0341])), ('backbone0.2.activation_post_process.zero_point', tensor([59])), ('backbone0.2.activation_post_process.activation_post_process.min_val', tensor(-2.0061)), ('backbone0.2.activation_post_process.activation_post_process.max_val', tensor(2.3278)), 
('backbone0.3.activation_post_process.scale', tensor([0.0184])), ('backbone0.3.activation_post_process.zero_point', tensor([0])), ('backbone0.3.activation_post_process.activation_post_process.min_val', tensor(0.0400)), ('backbone0.3.activation_post_process.activation_post_process.max_val', tensor(2.3427)), ('backbone1.0.weight', tensor([[[[-3.8434, -4.1207, 4.9514], [ 5.1071, -4.7582, -3.8411], [-3.6406, -4.1518, -1.5902]]], [[[-4.2797, -4.4012, 1.1095], [-5.2368, 5.8240, 0.5995], [-3.5678, -3.8644, 1.4833]]]])), ('backbone1.1.weight', tensor([-2.8446, 2.8394])), ('backbone1.1.bias', tensor([ 0.2494, -0.2510])), ('backbone1.1.running_mean', tensor([-12.0386, -4.0614])), ('backbone1.1.running_var', tensor([157.3076, 138.7101])), ('backbone1.1.num_batches_tracked', tensor(10))]) <============== QUANT CONVERTED MODEL ===============> OrderedDict([('backbone0.0.scale', tensor([0.0077])), ('backbone0.0.zero_point', tensor([0])), ('backbone0.1.weight', tensor([[[[-0.3597]], [[ 0.3569]], [[-0.3597]]]], size=(1, 3, 1, 1), dtype=torch.qint8, quantization_scheme=torch.per_channel_affine, scale=tensor([0.0028], dtype=torch.float64), zero_point=tensor([0]), axis=0)), ('backbone0.1.scale', tensor(0.0039)), ('backbone0.1.zero_point', tensor(127)), ('backbone0.1.bias', None), ('backbone0.2.weight', tensor([1.])), ('backbone0.2.bias', tensor([0.])), ('backbone0.2.running_mean', tensor([0.])), ('backbone0.2.running_var', tensor([1.])), ('backbone0.2.num_batches_tracked', tensor(0)), ('backbone1.0.weight', tensor([[[[-3.8434, -4.1207, 4.9514], [ 5.1071, -4.7582, -3.8411], [-3.6406, -4.1518, -1.5902]]], [[[-4.2797, -4.4012, 1.1095], [-5.2368, 5.8240, 0.5995], [-3.5678, -3.8644, 1.4833]]]])), ('backbone1.1.weight', tensor([-2.8446, 2.8394])), ('backbone1.1.bias', tensor([ 0.2494, -0.2510])), ('backbone1.1.running_mean', tensor([-12.0386, -4.0614])), ('backbone1.1.running_var', tensor([157.3076, 138.7101])), ('backbone1.1.num_batches_tracked', tensor(10))]) still the same # Forward zeros output [0.4619605243206024, 0.3693790137767792] #not converted [0.4619605243206024, 0.3693790137767792] #converted <================ NON-CONVERTED MODEL ====================> OrderedDict([('backbone0.0.activation_post_process.scale', tensor([0.0077])), ('backbone0.0.activation_post_process.zero_point', tensor([0])), ('backbone0.0.activation_post_process.activation_post_process.min_val', tensor(8.7249e-05)), ('backbone0.0.activation_post_process.activation_post_process.max_val', tensor(0.9802)), ('backbone0.1.weight', tensor([[[[4.5793]], [[3.9695]], [[4.1871]]]])), ('backbone0.1.activation_post_process.scale', tensor([0.0046])), ('backbone0.1.activation_post_process.zero_point', tensor([42])), ('backbone0.1.activation_post_process.activation_post_process.min_val', tensor(-0.1939)), ('backbone0.1.activation_post_process.activation_post_process.max_val', tensor(0.3904)), ('backbone0.1.weight_fake_quant.scale', tensor([0.0035])), ('backbone0.1.weight_fake_quant.zero_point', tensor([0])), ('backbone0.1.weight_fake_quant.activation_post_process.min_vals', tensor([-0.1654])), ('backbone0.1.weight_fake_quant.activation_post_process.max_vals', tensor([0.4444])), ('backbone0.2.weight', tensor([3.7733])), ('backbone0.2.bias', tensor([-3.2874])), ('backbone0.2.running_mean', tensor([0.4043])), ('backbone0.2.running_var', tensor([0.3758])), ('backbone0.2.num_batches_tracked', tensor(10)), ('backbone0.2.activation_post_process.scale', tensor([0.0358])), ('backbone0.2.activation_post_process.zero_point', tensor([65])), 
('backbone0.2.activation_post_process.activation_post_process.min_val', tensor(-2.3070)), ('backbone0.2.activation_post_process.activation_post_process.max_val', tensor(2.2333)), ('backbone0.3.activation_post_process.scale', tensor([0.0179])), ('backbone0.3.activation_post_process.zero_point', tensor([0])), ('backbone0.3.activation_post_process.activation_post_process.min_val', tensor(0.)), ('backbone0.3.activation_post_process.activation_post_process.max_val', tensor(2.2680)), ('backbone1.0.weight', tensor([[[[ 4.3533, -3.5816, 5.0651], [-4.1010, -3.6161, -4.3417], [ 4.0427, -4.3517, 4.1981]]], [[[ 2.5564, -3.3695, 4.0380], [-3.9976, -7.2543, -4.0428], [ 3.7343, -3.5447, 2.7283]]]])), ('backbone1.1.weight', tensor([-0.5695, -2.9817])), ('backbone1.1.bias', tensor([-0.1784, -0.1795])), ('backbone1.1.running_mean', tensor([ 0.1879, -0.5091])), ('backbone1.1.running_var', tensor([16.9781, 18.2449])), ('backbone1.1.num_batches_tracked', tensor(10))]) <============== QUANT CONVERTED MODEL ===============> OrderedDict([('backbone0.0.scale', tensor([0.0077])), ('backbone0.0.zero_point', tensor([0])), ('backbone0.1.weight', tensor([[[[0.4426]], [[0.4426]], [[0.4426]]]], size=(1, 3, 1, 1), dtype=torch.qint8, quantization_scheme=torch.per_channel_affine, scale=tensor([0.0035], dtype=torch.float64), zero_point=tensor([0]), axis=0)), ('backbone0.1.scale', tensor(0.0046)), ('backbone0.1.zero_point', tensor(42)), ('backbone0.1.bias', None), ('backbone0.2.weight', tensor([1.])), ('backbone0.2.bias', tensor([0.])), ('backbone0.2.running_mean', tensor([0.])), ('backbone0.2.running_var', tensor([1.])), ('backbone0.2.num_batches_tracked', tensor(0)), ('backbone1.0.weight', tensor([[[[ 4.3533, -3.5816, 5.0651], [-4.1010, -3.6161, -4.3417], [ 4.0427, -4.3517, 4.1981]]], [[[ 2.5564, -3.3695, 4.0380], [-3.9976, -7.2543, -4.0428], [ 3.7343, -3.5447, 2.7283]]]])), ('backbone1.1.weight', tensor([-0.5695, -2.9817])), ('backbone1.1.bias', tensor([-0.1784, -0.1795])), ('backbone1.1.running_mean', tensor([ 0.1879, -0.5091])), ('backbone1.1.running_var', tensor([16.9781, 18.2449])), ('backbone1.1.num_batches_tracked', tensor(10))])
st183927
Hello, Here’s the current code where I turn on QAT only near the end of training, and still has the same issue: import torch from torch import nn, optim from torch.quantization import QuantStub, DeQuantStub from copy import deepcopy print(torch.__version__) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class Model(nn.Module): def __init__(self): super().__init__() self.backbone0 = nn.Sequential( QuantStub(), nn.Conv2d(3, 1, 1, bias=False), nn.BatchNorm2d(1), nn.ReLU(), DeQuantStub(), ) self.backbone1 = nn.Sequential( nn.Conv2d(1, 2, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(2), nn.MaxPool2d(14), nn.Sigmoid(), ) def forward(self, x): x = self.backbone0(x) x = self.backbone1(x) return x model = Model() # torch.quantization.fuse_modules(model, [['1', '2', '3'], ['4', '5']], inplace=True) # model.backbone0.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') # torch.quantization.prepare_qat(model.backbone0, inplace=True) optimizer = optim.Adam(model.parameters(), lr=1) model.to(device) criterion = nn.BCELoss() for epoch in range(1000): # print('EPOCH', epoch) model.train() inputs = torch.rand(2, 3, 28, 28) labels = torch.FloatTensor([[1, 1], [0, 0]]) inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) loss = criterion(outputs.view(2, 2), labels) optimizer.zero_grad() loss.backward() optimizer.step() if epoch == 945: # turn on qat model.to('cpu') model.backbone0.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(model.backbone0, inplace=True) model.to(device) if epoch >= 950: model.apply(torch.quantization.disable_observer) pass if epoch >= 950: model.apply(torch.nn.intrinsic.qat.freeze_bn_stats) if epoch == 999: # print('MODEL', model) quant_model = deepcopy(model) quant_model = torch.quantization.convert(quant_model.eval().cpu(), inplace=False) with torch.no_grad(): inp = torch.zeros([1, 3, 28, 28], device='cpu') model.eval().cpu() quant_model.eval().cpu() qout = quant_model.forward(inp) out = model.forward(inp) print(f"<============== EPOCH {epoch} ===============>") print(out.view(2).tolist(), "#not converted") print(qout.view(2).tolist(), "#quant converted") print(f"<============== NOT CONVERTED MODEL ===============>") print(model.state_dict()) print(f"<============== QUANT CONVERTED MODEL ===============>") print(quant_model.state_dict()) model.to(device)
st183928
Looks like in the case where they diverged the state_dicts still match? Did you enable fake quantization when you compared the results?
st183929
Yes, it was enabled, although I'm wondering whether enabling it only on a sequential container inside the module, as I did, is incorrect?
st183930
I have quantized a model from 32-bit float to int8. I want to test the quantization performance, such as latency. Is there any way to run inference on the model with 8-bit fixed point?
st183931
Hi @0Chen , have you tried the autograd profiler (example: PyTorch Profiler — PyTorch Tutorials 1.8.1+cu102 documentation 7)?
st183932
Thanks! That can measure time and memory usage. However, I want to test with low-bit compute, such as 8-bit or 6-bit. Can PyTorch just allocate an 8-bit tensor?
st183933
@0Chen I am facing the same issue too! Does anyone know how to perform inference after quantizing the model? When I try to get the output, I get this error:
Error(s) in loading state_dict for Module: ('Copying from quantized Tensor to non-quantized Tensor is not allowed, please use dequantize to get a float Tensor from a quantized Tensor',)
st183934
If you have a quantized model, then it is doing low-bit computation depending on what format it was quantized to. Otherwise, I’m not sure I understand what you are trying to do if your quantized model isn’t already doing what you want.
st183935
I don't think this is related to OP's issue, but it looks like you are trying to load a saved quantized model into a non-quantized model. The state_dict, to my knowledge, is just a record of the parameters for the different modules. Since quantized modules have different parameters than non-quantized ones, you won't be able to load one into the other. I believe that when you create the model before loading the state dict, you have to fuse/prepare/convert it, at which point you should be able to load the state dict, per How do I save and load quantization model 4. If this is not helpful, can you make a separate post within the #quantization category so as not to hijack this thread? I am the oncall for quantization so I am responding to any new posts.
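For reference, a minimal eager-mode sketch of that order (MyModel and the checkpoint path are placeholders for your own model class and file, and any fuse_modules calls used before saving would have to be repeated here too):

import torch

model = MyModel()
model.eval()
# repeat the exact same fuse_modules(...) calls that were used before saving, if any
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)   # inserts observers
torch.quantization.convert(model, inplace=True)   # swaps in quantized modules
# only now does the module hierarchy match the saved quantized checkpoint
model.load_state_dict(torch.load('quantized_checkpoint.pth'))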
st183936
While going through the code in this tutorial here 1, I'm not able to understand how this step

# Calibrate with the training set
evaluate(myModel, criterion, data_loader, neval_batches=num_calibration_batches)

affects the model, because in the evaluate function all we do is inference. What am I missing here? Thank you!
st183938
Theoretically, all you need to do quantization is to just round all the weights and set your model to round all the activations and you would obtain a quantized model. The issue is that we have to choose what numbers to round to. In real life, we round to integers, but that won’t necessarily work well for ML. If we are doing int8 quantization, we only have 256 possible values, forcing us to choose where on the number line to place those rounding points. We want to choose the spacing and range of these 256 possible values to define our rounding process in a way that minimizes the error induced by quantization. At a high level, the quantization framework ‘observes’ the activations and inputs that come into the model to get a better understanding of the distribution of values and then chooses the spacing and min/max (called scale and zero_point) based on this. In practice the different observer types take different types of statistics and use that to decide what scale and zero_point to use. This is similar to how a batchnorm operates.
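As a rough standalone illustration (not necessarily the exact observer the tutorial uses), a MinMaxObserver simply tracks the running min/max of whatever tensors pass through it and derives scale and zero_point from that range:

import torch
from torch.quantization import MinMaxObserver

obs = MinMaxObserver(dtype=torch.quint8)
for _ in range(5):
    obs(torch.randn(8, 16))            # "observing" some fake activations
print(obs.min_val, obs.max_val)        # running range seen so far
print(obs.calculate_qparams())         # (scale, zero_point) chosen from that range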
st183939
@HDCharles Thank you for the explanation. This is exactly what I'm trying to understand in the code. I see the evaluate function

def evaluate(model, criterion, data_loader, neval_batches):
    model.eval()
    top1 = AverageMeter('Acc@1', ':6.2f')
    top5 = AverageMeter('Acc@5', ':6.2f')
    cnt = 0
    with torch.no_grad():
        for image, target in data_loader:
            output = model(image)
            loss = criterion(output, target)
            cnt += 1
            acc1, acc5 = accuracy(output, target, topk=(1, 5))
            print('.', end = '')
            top1.update(acc1[0], image.size(0))
            top5.update(acc5[0], image.size(0))
            if cnt >= neval_batches:
                 return top1, top5

    return top1, top5

which is here https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#helper-functions 1, and I can't relate the code to what you explained.
st183940
Those details are abstracted away in practice. num_calibration_batches = 32 myModel = load_model(saved_model_dir + float_model_file).to('cpu') myModel.eval() # Fuse Conv, bn and relu myModel.fuse_model() # Specify quantization configuration # Start with simple min/max range estimation and per-tensor quantization of weights myModel.qconfig = torch.quantization.default_qconfig print(myModel.qconfig) torch.quantization.prepare(myModel, inplace=True) ^After this point, the model has been fused and prepared. This means that observers have been inserted into the model which passively record the activations and the values of the weights of each layer. You can see this change from the original model if you print the model. You will see a bunch of observers have been added. Over time they refine the scale and zero_point parameters that define the quantization process, updating each time the the model does a forward pass. This is similar to how the batchnorm module operates, i.e. it passively records incoming data and refines the parameters of the module depending on these observations. For this reason if you take any quantized net in the prepare phase (or net with a batchnorm module) and call the model twice on the same exact input, you will get different results. The documentation for the observers can be found here: https://pytorch.org/docs/stable/torch.quantization.html#torch.quantization.MinMaxObserver 1 there are a few different types depending on your quantization methodology.
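One way to see this concretely, using the tutorial's own names (myModel, evaluate, data_loader): grab any observer out of the prepared model and print it before and after calibration. The exact observer class and its printed fields depend on the qconfig, so treat this as a sketch rather than exact output.

for name, mod in myModel.named_modules():
    if hasattr(mod, 'activation_post_process'):
        first_observer = mod.activation_post_process
        break
print(first_observer)   # before calibration: no statistics recorded yet
evaluate(myModel, criterion, data_loader, neval_batches=num_calibration_batches)
print(first_observer)   # after calibration: the recorded range reflects the data that flowed through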
st183941
Hi All, I was trying to quantize a resnet model. I thought of starting with some toy models, so, instead of coding one myself (also, I am slightly scared with the big model ) I did the following class CustomResNet(nn.Module): def __init__(self,num_classes,feature_extract): super(CustomResNet, self).__init__() self.resnet = models.resnet18(pretrained=False) self.resnet.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(7,7), stride=(2,2)) #I am working only with b/w images self.resnet.num_classes = num_classes #This is specific to my test case set_parameter_requires_grad(self.resnet, feature_extract) self.quant = torch.quantization.QuantStub() self.dequant = torch.quantization.DeQuantStub() def forward(self, x): x = self.quant(x) x = self.resnet(x) x = self.dequant(x) return x Then I quantized it and after printing the model, I could see that all the modules have been quantized. However, when I try to evaluate it using a sample dataset, then, I am getting the following error: RuntimeError: Could not run 'aten::add_.Tensor' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::add_.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. I have assumed that this is because the modules have been quantized, but the skip connections in the model and the corresponding operation have not been done. Can anyone kindly review and let me know, whether this is the correct assumption? Regards,Arpan
st183943
Hi Arpan,
The error you are getting is more or less from here:
github.com
pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L358-L365 1

- func: add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  variants: method
  structured_delegate: add.out
  dispatch:
    SparseCPU, SparseCUDA: add_sparse_
    SparseCsrCPU: add_sparse_csr_
    MkldnnCPU: mkldnn_add_

i.e. that op seems to be for sparse tensors, not quantized ones. It's complaining because it doesn't list QuantizedCPU in the dispatch section of that op. I'm not sure exactly what you are doing for the system to be giving you that error without a full repro, since the error doesn't include the full stack trace. Your code shows the model but is missing the actual quantization process you are using. Otherwise, I'd recommend taking another look at the quantization docs/tutorials to make sure you're doing things in the right order; it can be quite finicky. That tutorial may be a better place to start, and once you understand that, you can apply those techniques to your image classification problem.
tutorials
https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 13
You can find the other tutorials under Model Optimization in the sidebar within the PyTorch tutorials pages.
st183944
For tensor operations, you will need to use FloatFunctionals to ensure that quantization works correctly. See for example here: vision/resnet.py at master · pytorch/vision · GitHub 23
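A hedged sketch of what that looks like in a custom module (a made-up residual block, not the torchvision code): instead of calling + directly on tensors, route the add through an nn.quantized.FloatFunctional member so it can be swapped for a quantized add during convert.

import torch
import torch.nn as nn

class ResidualBlockSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        # instead of `return self.conv(x) + x`
        return self.skip_add.add(self.conv(x), x)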
st183945
The .pth was trained with mixed precision. How do I make the model FP32 again?
st183946
Generally my understanding is that training with mixed precision occurs as follows (pseudocode):

class Model:
    <model code>

def train(net):
    with torch.cuda.amp.autocast():
        <train loop>

main:
    net = Model()
    train(net)
    save(net)

in which case your model should be saved in fp32, since the autocast part just alters the calculations in there, not the model itself. If you did something different, I'm not sure if it's possible to revert. I would double check the documentation though to make sure you are doing things as expected:
https://pytorch.org/docs/stable/amp.html 1
https://pytorch.org/docs/stable/notes/amp_examples.html 1
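A slightly more concrete (but still schematic) version, assuming a CUDA device is available; note the parameters are created, kept, and saved in fp32 throughout:

import torch
import torch.nn as nn

model = nn.Linear(10, 2).cuda()              # parameters are fp32
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

for _ in range(3):                           # stand-in for the real train loop
    x, y = torch.randn(4, 10).cuda(), torch.randint(0, 2, (4,)).cuda()
    opt.zero_grad()
    with torch.cuda.amp.autocast():          # only the computation runs in reduced precision
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()

print(next(model.parameters()).dtype)        # torch.float32
torch.save(model.state_dict(), 'model.pth')  # the saved weights are fp32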
st183947
Similar to fp16 inference in the PyTorch framework (see Training With Mixed Precision :: NVIDIA Deep Learning Performance Documentation 2), is there any framework for fp8 inference in PyTorch? Thank you for your time.
st183948
To my knowledge, PyTorch’s mixed precision support (Automatic Mixed Precision package - torch.cuda.amp — PyTorch 1.8.1 documentation 11) does not handle fp8 either. For 8 bit precision you’d need to look towards quantization to integers or fake quants but that doesn’t really fall under the umbrella of mixed precision, though I’m not sure if that’s core to your request or just ancillary. For info about quantization, you can see this: Quantization — PyTorch 1.8.1 documentation 24
st183949
You can reproduce it with the code below.

import torch
import torch.nn.quantized as nnq

### 1 channel
# input act
x = torch.Tensor([[[[3,9,30],[14,4,22],[11,7,5]]]])
xq = torch.quantize_per_tensor(x, scale = 1.0, zero_point = 0, dtype=torch.quint8)
xq.int_repr()

c = nnq.Conv2d(1,1,3)

# weights
weight = torch.Tensor([[[[17,21,-59],[-4,-10,-31],[2,-3,59]]]])
qweight = torch.quantize_per_channel(weight, scales=torch.Tensor([1.0]).to(torch.double), zero_points = torch.Tensor([0]).to(torch.int64), axis=0, dtype=torch.qint8)
c.set_weight_bias(qweight, torch.Tensor([0.0]))
c.scale = 32.0
c.zero_point = 0

out = c(xq)
out

The manual calculation result is -2016 (-63*32), but the result of the posted code is 0.

tensor([[[[0.]]]], size=(1, 1, 1, 1), dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine, scale=32.0, zero_point=0)

Maybe this is due to data overflow, but I cannot figure it out (because -2016 > -64*32).
st183951
Thanks for reporting. Actual result should be the following: import numpy as np x = np.array([3, 9, 30, 14, 4, 22, 11, 7, 5], dtype=np.int32) w = np.array([17, 21, -59, -4, -10, -31, 2, -3, 59], dtype=np.int32) dotP = np.dot(x, w) result = round(dotP / c.scale) print(dotP) print(result) -2012 -63 Will check if it’s a bug. I don’t see any kind of overflow or saturation issue with these xq and Wq.
st183952
Yes, the result is -2012 if no rounding is done; -2016 is the result after rounding. This question has been bothering me for a long time. Looking forward to your reply.
st183953
@SoufSilence, the actual result of the operation is -63, and since the output tensor is of type torch.quint8, the result needs to be in the range [0, 255]. Therefore, -63 gets converted to the nearest number in this range, which is 0. If you pick the following numbers (I changed -59 to 59 in w), you get the result 48.

x = np.array([3, 9, 30, 14, 4, 22, 11, 7, 5], dtype=np.int32)
w = np.array([17, 21, 59, -4, -10, -31, 2, -3, 59], dtype=np.int32)

The actual result of this operation is 47.75, which gets rounded to 48. Since 48 is in the range [0, 255], it is returned as is.
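Putting the whole requantization step into one small script (roughly; the real kernel uses fixed-point arithmetic, but the effect on these numbers is the same):

import numpy as np

x = np.array([3, 9, 30, 14, 4, 22, 11, 7, 5], dtype=np.int32)
w = np.array([17, 21, -59, -4, -10, -31, 2, -3, 59], dtype=np.int32)

acc = int(np.dot(x, w))                 # int32 accumulator: -2012
requant = round(acc / 32.0) + 0         # divide by output scale, add output zero_point: -63
clamped = min(max(requant, 0), 255)     # clamp to the quint8 output range: 0
print(acc, requant, clamped)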
st183954
@dskhudia, Thanks for your help. This is really the cause of the problem. I didn’t realize that the output activation values had the same quantization type as the input activation values. This was a really careless mistake.
st183955
The tutorial here 4 provides an example of per-channel quantization training. In my case I need to perform per-tensor quantization since the downstream mobile-device inference library (e.g. TNN 1) does not support per-channel quantized models. I think the problem here is how to set up per-tensor quantization around: model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm") Currently this part is not extensively documented and I cannot find many resources. So could someone give an example configuration for per-tensor quantization?
st183956
Hi @kaizhao ,
The qnnpack backend has default settings with per-tensor observers. You can create a config with this setting like this: model.qconfig = torch.quantization.get_default_qat_qconfig("qnnpack") If you'd like to customize the qconfig manually, you could take a look here: pytorch/qconfig.py at master · pytorch/pytorch · GitHub 3 You can change just the per-channel setting with something like this (imports added for completeness):

from torch.quantization import QConfig, FakeQuantize, MovingAverageMinMaxObserver, default_weight_fake_quant

qconfig = QConfig(
    activation=FakeQuantize.with_args(observer=MovingAverageMinMaxObserver,
                                      quant_min=0,
                                      quant_max=255,
                                      reduce_range=True),
    weight=default_weight_fake_quant)  # instead of default_per_channel_weight_fake_quant
st183957
Solved by Vasiliy_Kuznetsov in post #2 Hi @YZW-explorer , you can find it here: pytorch/conv.py at master · pytorch/pytorch · GitHub
st183958
Hi @YZW-explorer , you can find it here: pytorch/conv.py at master · pytorch/pytorch · GitHub 3
st183959
Hi, with a quantized model it's necessary to set the correct backend (fbgemm or qnnpack) for inference. But in quantization aware training, does this backend affect the training? For instance, can I train the quantized model using the fbgemm backend and then use it with qnnpack in the inference phase?
st183961
Hi @eefahd ,
There are a couple of things to keep in mind:
1. Default qconfigs have different settings for qnnpack and fbgemm. One setting in particular, reduce_range, if set to False, only works correctly in qnnpack and leads to potential overflow in fbgemm.
2. When weights are packed, the global backend setting is used to determine whether to pack for fbgemm or for qnnpack.
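A small sketch of keeping the two settings in sync (using only stock PyTorch names):

import torch

qat_qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')  # qnnpack defaults (e.g. reduce_range off)
print(qat_qconfig)

# after convert, and before running the quantized model on the target:
if 'qnnpack' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'qnnpack'   # controls how weights get packed / which kernels run
print(torch.backends.quantized.engine)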
st183962
Hi, I am working on quantizing a FasterRCNN Model from pre-trained weights, and I was running into a couple issues regarding the FeaturePyramidNetwork layer. When trying to run my model, I am getting the following error since I am trying to work with quantized images on what I assume is a non quantized layer. File “/home/maria/anaconda3/envs/engie/lib/python3.8/site-packages/torchvision/ops/feature_pyramid_network.py”, line 131, in forward last_inner = inner_lateral + inner_top_down RuntimeError: Could not run ‘aten::add.Tensor’ with arguments from the ‘QuantizedCPU’ backend. ‘aten::add.Tensor’ is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, Named, Autograd, Profiler, Tracer]. Any thoughts on how I can fix this issue and run this quantized model successfully on quantized Tensor images?
st183963
Hi @Maria_Vazhaeparambil , we have a quick help page about this type of error: Quantization — PyTorch master documentation 13
In this case, it looks like a quantized tensor is being passed to a floating point kernel. You could fix it in a couple of ways:
1. convert the tensor to fp32 before passing it to the layer (by passing it through torch.quantization.DeQuantStub()), or
2. if you are quantizing the network, use torch.quantization.FloatFunctional for the adds.
st183964
Thank you so much for your reply. I was able to use your fix to get my code up and running, but I realized that this method of statically quantizing my model means that all the computations will be run on the CPU. I read online that Quantization Aware Training allows for me to still train on the GPU, so I was hoping to try and use that instead. I quantized my model using the following code: class faster_countor_NN(Countor_NN): def __init__(self): super(faster_countor_NN, self).__init__() self.conv = torch.nn.Conv2d(1, 1, 1) self.relu = torch.nn.ReLU() def forward(self, images, boxes, boxes_ids, ROI_images): det_boxes, det_scores, det_labels, tck_boxes, tck_scores, tck_labels, tck_ids = super().forward(images, boxes, boxes_ids, ROI_images) return det_boxes, det_scores, det_labels, tck_boxes, tck_scores, tck_labels, tck_ids class QuantizedRCNN(torch.nn.Module): def __init__(self, model_fp32): super(QuantizedRCNN, self).__init__() self.model_fp32 = model_fp32 def forward(self, images, boxes, boxes_ids, ROI_images): det_boxes, det_scores, det_labels, tck_boxes, tck_scores, tck_labels, tck_ids = self.model_fp32(images, boxes, boxes_ids, ROI_images) return det_boxes, det_scores, det_labels, tck_boxes, tck_scores, tck_labels, tck_ids init_net = faster_countor_NN() init_net.load_state_dict(torch.load(paths['path_faster_RCNN_weigths'],map_location="cuda"), strict=False) init_net.cpu() init_net_fused = copy.deepcopy(init_net) init_net.train() init_net_fused.train() init_net = torch.quantization.fuse_modules(init_net,[['conv', 'relu']]) init_net.eval() init_net_fused.eval() countor_net = QuantizedRCNN(model_fp32=init_net_fused) quantization_config = torch.quantization.get_default_qconfig("fbgemm") countor_net.qconfig = quantization_config torch.quantization.prepare_qat(countor_net, inplace=True) countor_net.train() countor_net.to('cpu') countor_net = torch.quantization.convert(countor_net, inplace=True) countor_net.cuda() countor_net.eval() However, I am still getting the following error in my code. File "/home/maria/anaconda3/envs/engie/lib/python3.8/site-packages/torch/nn/quantized/modules/conv.py", line 331, in forward return ops.quantized.conv2d( RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CUDA' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode]. 
CPU: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/build/aten/src/ATen/CPUType.cpp:2127 [kernel] BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] Named: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] AutogradOther: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradCPU: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradCUDA: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradXLA: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradPrivateUse1: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradPrivateUse2: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] AutogradPrivateUse3: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel] Tracer: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/torch/csrc/autograd/generated/TraceType_2.cpp:9654 [kernel] Autocast: fallthrough registered at /opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/autocast_mode.cpp:254 [backend fallback] Batched: registered at /opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback] VmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] Even when I try to resolve this issue by sending my images back to the CPU (which I don’t want), I am getting a segmentation fault since my model is on the GPU. Do you know any way that I can resolve this issue to successfully run my model on the GPU? Is this possible, or am I not understanding the Quantizd Aware Training correctly?
st183965
Maria_Vazhaeparambil: countor_net = torch.quantization.convert(countor_net, inplace=True) countor_net.cuda() countor_net.eval() Hi @Maria_Vazhaeparambil , this snippet is the part which is not supported. When you do torch.quantization.convert, the fp32 kernels get swapped to int8 kernels. There is currently no support to run int8 kernels on the GPU. If you’d like to evaluate a model with int8 kernels, it has to be done on CPU, so you would need to move your converted model to CPU.
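A toy end-to-end sketch of that hand-off (the tiny model here just stands in for the real detector):

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert

model = nn.Sequential(QuantStub(), nn.Conv2d(3, 4, 3), nn.ReLU(), DeQuantStub())
model.qconfig = get_default_qat_qconfig('fbgemm')
prepare_qat(model.train(), inplace=True)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(3):                               # QAT fine-tuning can run on GPU (fp32 weights + fake-quant)
    loss = model(torch.randn(2, 3, 32, 32, device=device)).sum()
    opt.zero_grad(); loss.backward(); opt.step()

model.to('cpu').eval()                           # move back to CPU *before* convert
quantized = convert(model)
with torch.no_grad():
    out = quantized(torch.randn(1, 3, 32, 32))   # int8 inference runs on CPU only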
st183966
I read this on the Pytorch website (Introduction to Quantization on PyTorch | PyTorch 2): “However, quantization aware training occurs in full floating point and can run on either GPU or CPU. Quantization aware training is typically only used in CNN models when post training static or dynamic quantization doesn’t yield sufficient accuracy.” Would this not be possible?
st183967
This means that QAT can run training on the GPU. Inference on a converted model is only supported on CPU.
st183968
Hi @Vasiliy_Kuznetsov. I too want to quantize a pretrained FasterRCNN model with a MobileNetV3 backbone, but I don't know where to start. Do I have to make a copy of and modify the original source code here? Let's say I modified the MobileNetV3 backbone and added the necessary "quant" lines, what changes do I need to make in the object detector part (RoiHeads, RPN, etc.)? Thank you.
st183969
Thank you @jerryzh168. So, I tried to quantize the mobilenetv3 backbone only. I modified the mobilenet_backbone() function to obtain the quantized version of the backbone as follows: #backbone = mobilenet.__dict__[backbone_name](pretrained=pretrained, norm_layer=norm_layer).features backbone = torchvision.models.quantization.mobilenet_v3_large(pretrained=False, quantize=False).features Then, I followed this tutorial 3 to create and train a FasterRCNN model. Before training, I fused the model backbone as follows: # Create quantized model fused_model = get_quantized_object_detection_model(num_classes, pretrained=True) fused_model.to(cpu_device) fused_model.train() # Fuse layers for m in fused_model.backbone.modules(): if type(m) == ConvBNActivation: modules_to_fuse = ['0', '1'] if type(m[2]) == nn.ReLU: modules_to_fuse.append('2') fuse_modules(m, modules_to_fuse, inplace=True) elif type(m) == QuantizableSqueezeExcitation: fuse_modules(m, ['fc1', 'relu'], inplace=True) elif type(m) == QuantizableInvertedResidual: for idx in range(len(m.block)): if type(m.block[idx]) == nn.Conv2d: fuse_modules(m.block, [str(idx), str(idx + 1)], inplace=True) Then, prepared the model for quantization aware training: backend = 'fbgemm' fused_model.qconfig = torch.quantization.get_default_qat_qconfig(backend) torch.quantization.prepare_qat(fused_model.backbone, inplace=True) fused_model.to(cuda_device) # ... creating optimizer and training fused_model.to(cpu_device) torch.quantization.convert(fused_model.backbone, inplace=True) fused_model.eval() # ... save model During preparation I did not get any error, but after quantization the model size did not decrease (74MB) and inference speed and accuracy decreased. Obviously something is not right. Any thoughts?
st183970
goksinan: fused_model.to(cuda_device)
Could you print the final quantized model? Does it contain any quantized modules?
st183971
Here is the output of print(fused_model): FasterRCNN( (transform): GeneralizedRCNNTransform( Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) Resize(min_size=(320,), max_size=640, mode='bilinear') ) (backbone): BackboneWithFPN( (body): IntermediateLayerGetter( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False) (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (2): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False) (1): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(24, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (3): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(72, 72, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=72, bias=False) (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(24, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (4): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(72, 72, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=72, bias=False) (1): BatchNorm2d(72, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 
1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(24, 72, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (5): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False) (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (6): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (1): ConvBNActivation( (0): ConvBnReLU2d( (0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120, bias=False) (1): BatchNorm2d(120, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) (2): ReLU() ) (1): Identity() (2): Identity() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(120, 32, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(32, 120, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(40, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (7): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(240, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=240, bias=False) (1): BatchNorm2d(240, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): 
FloatFunctional( (activation_post_process): Identity() ) ) (8): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(80, 200, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(200, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(200, 200, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=200, bias=False) (1): BatchNorm2d(200, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(200, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (9): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False) (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (10): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False) (1): BatchNorm2d(184, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(80, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (11): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(480, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False) (1): BatchNorm2d(480, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(480, 120, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(120, 480, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(112, eps=0.001, 
momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (12): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(672, 672, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=672, bias=False) (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(112, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (13): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=672, bias=False) (1): BatchNorm2d(672, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(672, 168, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(168, 672, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(672, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (14): QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False) (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (15): 
QuantizableInvertedResidual( (block): Sequential( (0): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (1): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=960, bias=False) (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) (2): QuantizableSqueezeExcitation( (fc1): ConvReLU2d( (0): Conv2d(960, 240, kernel_size=(1, 1), stride=(1, 1)) (1): ReLU() ) (relu): Identity() (fc2): Conv2d(240, 960, kernel_size=(1, 1), stride=(1, 1)) (skip_mul): FloatFunctional( (activation_post_process): Identity() ) ) (3): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(160, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (16): ConvBNActivation( (0): ConvBn2d( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=0.001, momentum=0.01, affine=True, track_running_stats=True) ) (1): Identity() (2): Hardswish() ) ) (fpn): FeaturePyramidNetwork( (inner_blocks): ModuleList( (0): Conv2d(160, 256, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(960, 256, kernel_size=(1, 1), stride=(1, 1)) ) (layer_blocks): ModuleList( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (extra_blocks): LastLevelMaxPool() ) ) (rpn): RegionProposalNetwork( (anchor_generator): AnchorGenerator() (head): RPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (cls_logits): Conv2d(256, 15, kernel_size=(1, 1), stride=(1, 1)) (bbox_pred): Conv2d(256, 60, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): RoIHeads( (box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2) (box_head): TwoMLPHead( (fc6): Linear(in_features=12544, out_features=1024, bias=True) (fc7): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNPredictor( (cls_score): Linear(in_features=1024, out_features=2, bias=True) (bbox_pred): Linear(in_features=1024, out_features=8, bias=True) ) ) )
st183972
goksinan: modules_to_fuse.append('2')
Looks like there are no quantized modules in the backbone; maybe something went wrong in the previous steps.
st183973
I am trying to quantize my pretrained model from the timm library, but it does not work. The question is: why doesn't it work, and how can timm models be quantized?

import timm
model = timm.create_model('mobilenetv2_120d', pretrained=True)

model_int8 = torch.quantization.quantize_dynamic(
    model,  # the original model
    {torch.nn.Linear, torch.nn.Conv2d, torch.nn.ReLU, torch.nn.BatchNorm2d},
    dtype=torch.qint8)

print_model_size(model)
print_model_size(model_int8)

23.74 MB
23.74 MB
st183974
We currently only support dynamic quantization of Linear operations from the list you’ve specified. Can you print the quantized model to check how many layers were actually quantized?
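For example, with a toy model you can see that only the Linear gets swapped (this is just an illustration; the Conv2d, ReLU and BatchNorm2d entries in the set are silently ignored by dynamic quantization):

import torch
import torch.nn as nn

m = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 30 * 30, 5))
mq = torch.quantization.quantize_dynamic(
    m, {nn.Linear, nn.Conv2d, nn.ReLU, nn.BatchNorm2d}, dtype=torch.qint8)
print(mq)   # only the Linear shows up as DynamicQuantizedLinear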
st183975
Yes, it truly works only for Linear:

(conv_head): Conv2d(384, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn2): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(act2): ReLU6(inplace=True)
(global_pool): SelectAdaptivePool2d (pool_type=avg, flatten=True)
(classifier): DynamicQuantizedLinear(in_features=1280, out_features=5, dtype=torch.qint8, qscheme=torch.per_tensor_affine)

But my goal is to measure the evaluation time of the quantized model and compare it with the float32 model. For that I tried static quantization:

model_sigmoid.qconfig = torch.quantization.get_default_qconfig('qnnpack')
# insert observers
torch.quantization.prepare(model_sigmoid, inplace=True)
# Calibrate the model and collect statistics
# convert to quantized version
torch.quantization.convert(model_sigmoid, inplace=True)

This code quantizes all the layers, but I can't run the quantized model because of this:

start_time = time.time()
with torch.no_grad():
    # with torch.autograd.set_detect_anomaly(True):
    pred = model_sigmoid(torch_img)
print('Time = ', time.time() - start_time)

RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].

I understand that this op does not support the plain CPU and CUDA backends, so the question is: is it possible to run this statically quantized model on Windows 10 (x64)? It would also be nice if the RuntimeError listed, for each backend, the device on which it can evaluate.
st183976
For static quantization, in addition to using the qconfig you also need to add Quant/Dequant Stubs around the modules you want quantized. The tutorial (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.8.1+cu102 documentation 6 has more details on how to do so.
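A minimal self-contained sketch of that pattern (the small Sequential here is a stand-in for the timm model, and in practice you would calibrate with representative images rather than random tensors):

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub, get_default_qconfig, prepare, convert

class StaticQuantWrapper(nn.Module):
    def __init__(self, float_model):
        super().__init__()
        self.quant = QuantStub()
        self.model = float_model
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.model(self.quant(x)))

float_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
m = StaticQuantWrapper(float_model).eval()
m.qconfig = get_default_qconfig('fbgemm')
prepare(m, inplace=True)
m(torch.randn(4, 3, 32, 32))            # calibration pass(es)
convert(m, inplace=True)
out = m(torch.randn(1, 3, 32, 32))      # now runs the quantized CPU kernels

PyTorch also ships a torch.quantization.QuantWrapper helper that does essentially this wrapping for you.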
st183977
I have the following model definition, which I know for a fact quantized without any problems circa November 2020: class UNet(nn.Module): def __init__(self): super().__init__() self.q_1 = torch.quantization.QuantStub() self.conv_1_1 = nn.Conv2d(3, 64, 3) torch.nn.init.kaiming_normal_(self.conv_1_1.weight) self.relu_1_2 = nn.ReLU() self.norm_1_3 = nn.BatchNorm2d(64) self.conv_1_4 = nn.Conv2d(64, 64, 3) torch.nn.init.kaiming_normal_(self.conv_1_4.weight) self.relu_1_5 = nn.ReLU() self.norm_1_6 = nn.BatchNorm2d(64) self.pool_1_7 = nn.MaxPool2d(2) self.conv_2_1 = nn.Conv2d(64, 128, 3) torch.nn.init.kaiming_normal_(self.conv_2_1.weight) self.relu_2_2 = nn.ReLU() self.norm_2_3 = nn.BatchNorm2d(128) self.conv_2_4 = nn.Conv2d(128, 128, 3) torch.nn.init.kaiming_normal_(self.conv_2_4.weight) self.relu_2_5 = nn.ReLU() self.norm_2_6 = nn.BatchNorm2d(128) self.pool_2_7 = nn.MaxPool2d(2) self.conv_3_1 = nn.Conv2d(128, 256, 3) torch.nn.init.kaiming_normal_(self.conv_3_1.weight) self.relu_3_2 = nn.ReLU() self.norm_3_3 = nn.BatchNorm2d(256) self.conv_3_4 = nn.Conv2d(256, 256, 3) torch.nn.init.kaiming_normal_(self.conv_3_4.weight) self.relu_3_5 = nn.ReLU() self.norm_3_6 = nn.BatchNorm2d(256) self.pool_3_7 = nn.MaxPool2d(2) self.conv_4_1 = nn.Conv2d(256, 512, 3) torch.nn.init.kaiming_normal_(self.conv_4_1.weight) self.relu_4_2 = nn.ReLU() self.norm_4_3 = nn.BatchNorm2d(512) self.conv_4_4 = nn.Conv2d(512, 512, 3) torch.nn.init.kaiming_normal_(self.conv_4_4.weight) self.relu_4_5 = nn.ReLU() self.norm_4_6 = nn.BatchNorm2d(512) self.dq_1 = torch.quantization.DeQuantStub() # deconv is the '2D transposed convolution operator' self.deconv_5_1 = nn.ConvTranspose2d(512, 256, (2, 2), 2) # 61x61 -> 48x48 crop self.c_crop_5_2 = lambda x: x[:, :, 6:54, 6:54] self.concat_5_3 = lambda x, y: torch.cat((x, y), dim=1) self.q_2 = torch.quantization.QuantStub() self.conv_5_4 = nn.Conv2d(512, 256, 3) torch.nn.init.kaiming_normal_(self.conv_5_4.weight) self.relu_5_5 = nn.ReLU() self.norm_5_6 = nn.BatchNorm2d(256) self.conv_5_7 = nn.Conv2d(256, 256, 3) torch.nn.init.kaiming_normal_(self.conv_5_7.weight) self.relu_5_8 = nn.ReLU() self.norm_5_9 = nn.BatchNorm2d(256) self.dq_2 = torch.quantization.DeQuantStub() self.deconv_6_1 = nn.ConvTranspose2d(256, 128, (2, 2), 2) # 121x121 -> 88x88 crop self.c_crop_6_2 = lambda x: x[:, :, 17:105, 17:105] self.concat_6_3 = lambda x, y: torch.cat((x, y), dim=1) self.q_3 = torch.quantization.QuantStub() self.conv_6_4 = nn.Conv2d(256, 128, 3) torch.nn.init.kaiming_normal_(self.conv_6_4.weight) self.relu_6_5 = nn.ReLU() self.norm_6_6 = nn.BatchNorm2d(128) self.conv_6_7 = nn.Conv2d(128, 128, 3) torch.nn.init.kaiming_normal_(self.conv_6_7.weight) self.relu_6_8 = nn.ReLU() self.norm_6_9 = nn.BatchNorm2d(128) self.dq_3 = torch.quantization.DeQuantStub() self.deconv_7_1 = nn.ConvTranspose2d(128, 64, (2, 2), 2) # 252x252 -> 168x168 crop self.c_crop_7_2 = lambda x: x[:, :, 44:212, 44:212] self.concat_7_3 = lambda x, y: torch.cat((x, y), dim=1) self.q_4 = torch.quantization.QuantStub() self.conv_7_4 = nn.Conv2d(128, 64, 3) torch.nn.init.kaiming_normal_(self.conv_7_4.weight) self.relu_7_5 = nn.ReLU() self.norm_7_6 = nn.BatchNorm2d(64) self.conv_7_7 = nn.Conv2d(64, 64, 3) torch.nn.init.kaiming_normal_(self.conv_7_7.weight) self.relu_7_8 = nn.ReLU() self.norm_7_9 = nn.BatchNorm2d(64) # 1x1 conv ~= fc; n_classes = 9 self.conv_8_1 = nn.Conv2d(64, 9, 1) self.dq_4 = torch.quantization.DeQuantStub() # residual connections need to be dequantized seperately self.dq_resid_1 = torch.quantization.DeQuantStub() 
self.dq_resid_2 = torch.quantization.DeQuantStub() self.dq_resid_3 = torch.quantization.DeQuantStub() def forward(self, x): x = self.q_1(x) x = self.conv_1_1(x) x = self.relu_1_2(x) x = self.norm_1_3(x) x = self.conv_1_4(x) x = self.relu_1_5(x) x_resid_1_quantized = self.norm_1_6(x) x = self.pool_1_7(x_resid_1_quantized) x_resid_1 = self.dq_resid_1(x_resid_1_quantized) x = self.conv_2_1(x) x = self.relu_2_2(x) x = self.norm_2_3(x) x = self.conv_2_4(x) x = self.relu_2_5(x) x_resid_2_quantized = self.norm_2_6(x) x = self.pool_2_7(x_resid_2_quantized) x_resid_2 = self.dq_resid_2(x_resid_2_quantized) x = self.conv_3_1(x) x = self.relu_3_2(x) x = self.norm_3_3(x) x = self.conv_3_4(x) x = self.relu_3_5(x) x_resid_3_quantized = self.norm_3_6(x) x = self.pool_3_7(x_resid_3_quantized) x_resid_3 = self.dq_resid_3(x_resid_3_quantized) x = self.conv_4_1(x) x = self.relu_4_2(x) x = self.norm_4_3(x) x = self.conv_4_4(x) x = self.relu_4_5(x) x = self.norm_4_6(x) x = self.dq_1(x) x = self.deconv_5_1(x) x = self.concat_5_3(self.c_crop_5_2(x_resid_3), x) x = self.q_2(x) x = self.conv_5_4(x) x = self.relu_5_5(x) x = self.norm_5_6(x) x = self.conv_5_7(x) x = self.relu_5_8(x) x = self.norm_5_9(x) x = self.dq_2(x) x = self.deconv_6_1(x) x = self.concat_6_3(self.c_crop_6_2(x_resid_2), x) x = self.q_3(x) x = self.conv_6_4(x) x = self.relu_6_5(x) x = self.norm_6_6(x) x = self.conv_6_7(x) x = self.relu_6_8(x) x = self.norm_6_9(x) x = self.dq_3(x) x = self.deconv_7_1(x) x = self.concat_7_3(self.c_crop_7_2(x_resid_1), x) x = self.q_4(x) x = self.conv_7_4(x) x = self.relu_7_5(x) x = self.norm_7_6(x) x = self.conv_7_7(x) x = self.relu_7_8(x) x = self.norm_7_9(x) x = self.conv_8_1(x) x = self.dq_4(x) return x When I attempt to quantize this model using latest PyTorch (1.8.1): def get_model(): model = UNet() model.qconfig = torch.quantization.get_default_qconfig('fbgemm') checkpoints_dir = '/mnt/checkpoints' model.load_state_dict( torch.load(f"{checkpoints_dir}/model_50.pth", map_location=torch.device('cpu')) ) model.eval() # NOTE(aleksey): we could potentially speed this up even more by switching from # conv->relu->batchnorm order to conv->batchnorm->relu order. PyTorch curently supports # conv->batchnorm->relu fusion *only*. # # Which placement of the relu layer is optimal is a subject of academic debate. The order # that the model *currently* uses seems to be the more popular option. I am not swapping the # order of the operations out of laziness -- but you can probably speed things up a little # bit more by going ahead and making that more invasive change. model = torch.quantization.fuse_modules( model, [ ['conv_1_1', 'relu_1_2'], ['conv_1_4', 'relu_1_5'], ['conv_2_1', 'relu_2_2'], ['conv_2_4', 'relu_2_5'], ['conv_3_1', 'relu_3_2'], ['conv_3_4', 'relu_3_5'], ['conv_4_1', 'relu_4_2'], ['conv_4_4', 'relu_4_5'], ] ) model = torch.quantization.prepare(model) print(f"Quantizing the model...") start_time = time.time() dataloader = get_dataloader() for i, (batch, segmap) in enumerate(dataloader): model(batch) model = torch.quantization.convert(model) print(f"Quantization done in {str(time.time() - start_time)} seconds.") model.eval() return model I recieve the following error: AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d. This error occurs because of the ConvTranspose2d layers in the model, which is not currently supported by the model quantization API. 
However, I've traced through the model by hand, and every place where ConvTranspose2d appears I've carefully wrapped it with QuantStub and DeQuantStub. Can anyone else spot where my error with this network definition is? Or is this perhaps a bad assert, e.g. a recently introduced bug in PyTorch?
st183979
I think the error is due to the fact that we ignore the placement of quant-dequant nodes in the model in the logic that raises this error. Since the global qconfig is used, it raises the error as it expects the ConvTranspose to be quantized using the same qconfig. I have filed ConvTranspose config error in prepare API · Issue #57420 · pytorch/pytorch · GitHub 10 to track this. To avoid this error, can you try setting the qconfig of all the ConvTranspose modules to None? Hopefully, that should fix it.
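In code, the workaround looks roughly like this (run it after setting model.qconfig but before the prepare call, using the UNet/get_model code from above):

for m in model.modules():
    if isinstance(m, torch.nn.ConvTranspose2d):
        m.qconfig = None   # leave the transposed convolutions in fp32
# then continue with torch.quantization.prepare(...) / convert(...) as before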
st183980
supriyar: To avoid this error can you try setting the qconfig of all the ConvTranspose modules to None? Hopefully, that should fix it.
Yes, +1, this is the recommended solution.
st183981
If I have a PyTorch scripted model with an fp32 datatype, I want to measure the quantized performance on mobile with qnnpack (i.e. use this fp32 model but choose int8 as the inference datatype). I just want to know, for the same net architecture, the performance difference between fp32 and int8. Does PyTorch have this kind of tool, like TensorRT's trtexec --int8 with an fp32 model?
st183982
This thread might be useful for you Speed benchmarking on android? 3 Please reach out to the mobile team if this script doesn’t work as expected.
st183983
I am new to PyTorch quantization and confused about two points:
1. PyTorch has two quantization ways: QAT and PTQ. Do they use the same operations in the framework during the inference phase when deployed on a mobile device? Will they have the same performance?
2. Why are activations quantized to uint8 while weights are quantized to int8? Is it because int8 weights don't need to subtract the zero-point?
st183984
Yes, the operators used during inference for both QAT and PTQ flows will remain the same. On mobile, some of the kernels use QNNPACK for inference, so the performance may differ on mobile compared to server. This is due to the requirements of the underlying kernels that perform the GEMM operation, i.e. FBGEMM on server and QNNPACK on mobile.
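As a quick sanity check of the first point, you can run a toy module through both flows and compare the module types after convert; both end up with the same quantized modules. A sketch (using 'fbgemm' here; the same idea applies with 'qnnpack' for mobile):

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

# Post-training static quantization (PTQ): prepare -> calibrate -> convert
ptq = Toy().eval()
ptq.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(ptq, inplace=True)
ptq(torch.randn(1, 3, 16, 16))   # calibration pass
torch.quantization.convert(ptq, inplace=True)

# Quantization-aware training (QAT): prepare_qat -> (train) -> convert
qat = Toy().train()
qat.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(qat, inplace=True)
qat(torch.randn(1, 3, 16, 16))   # stand-in for the fine-tuning loop
torch.quantization.convert(qat.eval(), inplace=True)

print(type(ptq.conv))  # torch.nn.quantized.Conv2d in both cases
print(type(qat.conv))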
st183985
Thanks!! About point 2, what's the data type for activations and weights in QNNPACK? The code shows that activations are uint8 but weights are void*:

github.com/pytorch/QNNPACK/blob/master/src/q8dwconv/up8x9-neon.c#L18

#include <arm_neon.h>

#include <qnnpack/q8dwconv.h>

void q8dwconv_ukernel_up8x9__neon(
    size_t channels,
    size_t output_width,
    const uint8_t** input,
    const void* weights,
    uint8_t* output,
    size_t input_stride,
    size_t output_increment,
    const union qnnp_conv_quantization_params quantization_params[restrict static 1]) {
  const uint8x8_t vkernel_zero_point = vld1_dup_u8((const uint8_t*) &quantization_params->neon.kernel_zero_point);
  const int32x4_t vmultiplier = vld1q_dup_s32(&quantization_params->neon.multiplier);
  const int32x4_t vright_shift = vld1q_dup_s32(&quantization_params->neon.right_shift);
  const int16x8_t voutput_zero_point = vld1q_dup_s16(&quantization_params->neon.output_zero_point);
  const uint8x8_t voutput_min = vld1_dup_u8(&quantization_params->neon.output_min);
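For reference, the convention on the PyTorch side is easy to check: with the eager-mode qconfigs, weights come out as torch.qint8 and activation tensors as torch.quint8 (how QNNPACK stores the packed weights internally, hence the void* in that kernel, is a backend detail). A small sketch:

import torch
import torch.nn as nn

m = nn.Sequential(torch.quantization.QuantStub(),
                  nn.Conv2d(3, 8, 3),
                  torch.quantization.DeQuantStub()).eval()
m.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 3, 16, 16))              # calibration
torch.quantization.convert(m, inplace=True)

print(m[1].weight().dtype)                # torch.qint8: weights are signed
x = torch.quantize_per_tensor(torch.randn(1, 3, 16, 16),
                              scale=0.1, zero_point=128, dtype=torch.quint8)
print(x.dtype)                            # torch.quint8: activations are unsigned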
st183986
Hi: I am trying to run quantization on a model. The model I am using is the pretrained restnet18: I am using the quantization aware training. I run into a similar problem like https://discuss.pytorch.org/t/runtimeerror-could-not-run-aten-add-tensor-with-arguments-from-the-quantizedcpu-backend/110039 2. The code is running on GPU first. Before quantization, I transfer it from GPU to CPU and quantize it. It seems like the quantization is working. The problem arises when the quantized model is called later in the code to run the tester. However, I tried the solution by adding the dequant() and quant() around that ‘aten::add_.Tensor’ as image704×814 176 KB I still got errors Could not run ‘aten::add_.Tensor’ with arguments from the ‘QuantizedCPU’ backend image1702×874 701 KB I even tried to use, however, I still has this error out = torch.nn.quantized.modules.FloatFunctional().add(out,identity) Here is my model: class Resnet18_ONE(nn.Module): def __init__(self): super(Resnet18_ONE,self).__init__() #self.loss = loss self.quant = torch.quantization.QuantStub() resnet18_tmp = resnet.resnet18(pretrained=True) #set_parameter_requires_grad(resnet18,True) num_ftrs = resnet18_tmp.fc.in_features #num_classes = superclasses self.base= nn.Sequential(*list(resnet18_tmp.children())[:-1]) #print(self.base) self.linear_sub = nn.Linear(num_ftrs, superclasses) self.linear_bird = nn.Linear(num_ftrs, classes_bird) self.linear_boat = nn.Linear(num_ftrs, classes_boat) self.linear_car = nn.Linear(num_ftrs, classes_car) self.linear_cat = nn.Linear(num_ftrs, classes_cat) self.linear_fungus = nn.Linear(num_ftrs, classes_fungus) self.linear_insect = nn.Linear(num_ftrs, classes_insect) self.linear_monkey = nn.Linear(num_ftrs, classes_monkey) self.linear_truck = nn.Linear(num_ftrs, classes_truck) self.linear_dog = nn.Linear(num_ftrs, classes_dog) self.linear_fruit = nn.Linear(num_ftrs, classes_fruit) self.dequant = torch.quantization.DeQuantStub() def forward(self,x): x = self.quant(x) x = self.base(x) x = self.dequant(x) x = torch.flatten(x, 1) if task == 'SUB': x = self.linear_sub(x) elif task == 'BIRD': #print("I am in bird") x = self.linear_bird(x) elif task == 'BOAT': x = self.linear_boat(x) elif task == 'CAR': x = self.linear_car(x) elif task == 'CAT': x = self.linear_cat(x) elif task == 'FUNGUS': x = self.linear_fungus(x) elif task == 'INSECT': x = self.linear_insect(x) elif task == 'MONKEY': x = self.linear_monkey(x) elif task == 'TRUCK': x = self.linear_truck(x) elif task == 'DOG': x = self.linear_dog(x) else: #print("enter fruit") x = self.linear_fruit(x) return x Here is the setting for quantization aware training model_one.train() model_one.qconfig = torch.quantization.get_default_qat_qconfig(‘fbgemm’) model_one_fused = torch.quantization.fuse_modules(model_one,[[‘base.0’,‘base.1’],[‘base.4.0.conv1’,‘base.4.0.bn1’],[‘base.4.0.conv2’,‘base.4.0.bn2’],[‘base.4.1.conv1’,‘base.4.1.bn1’],[‘base.4.1.conv2’,‘base.4.1.bn2’],[‘base.5.0.conv1’,‘base.5.0.bn1’],[‘base.5.0.conv2’,‘base.5.0.bn2’],[‘base.5.1.conv1’,‘base.5.1.bn1’],[‘base.5.1.conv2’,‘base.5.1.bn2’],[‘base.6.0.conv1’,‘base.6.0.bn1’],[‘base.6.0.conv2’,‘base.6.0.bn2’],[‘base.6.1.conv1’,‘base.6.1.bn1’],[‘base.6.1.conv2’,‘base.6.1.bn2’],[‘base.7.0.conv1’,‘base.7.0.bn1’],[‘base.7.0.conv2’,‘base.7.0.bn2’],[‘base.7.1.conv1’,‘base.7.1.bn1’],[‘base.7.1.conv2’,‘base.7.1.bn2’]]) model_one_prepared = torch.quantization.prepare(model_one_fused) torch.quantization.prepare_qat(model_one_prepared,inplace = True) Here is where I convert my model and test it: 
model_one_prepared.to(‘cpu’) model_one_int8 = torch.quantization.convert(model_one_prepared.eval(),inplace=False) acc = test_cpu(testLoader_mix10,model_one_int8)
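For what it's worth, the usual fix here is not to wrap the add with dequant()/quant(); and a FloatFunctional constructed on the fly inside forward never gets converted, so its add still dispatches to the float kernel and fails on quantized tensors. The functional module has to be created in __init__ so that convert() can swap it for its quantized counterpart. A sketch of the pattern (a made-up block, since the actual add lives inside torchvision's BasicBlock):

import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Created once in __init__: convert() replaces this with a quantized
        # QFunctional whose add() knows the output scale/zero_point.
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        identity = x
        out = self.conv(x)
        out = self.skip_add.add(out, identity)
        return out

Since you are wrapping torchvision's resnet18, the simpler route is probably torchvision.models.quantization.resnet18(), whose BasicBlock already routes the skip connection through a FloatFunctional.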
st183987
Hi: I am trying to use quantization aware training for my CNN network. What I want to do is I load a pretrained RestNet18 and finetune it with other dataset. I run into some problem to quantitize the network during training. I followed the tutorial (pytorch.org/docs/stable/quantization.html 4). However, I run into an NotImplementedError: Cannot fuse modules: in fusemodule.py: Could you help me understand this error? Many thanks Shixian Wen Error Messages Traceback (most recent call last): File “attetion_delta_quantization.py”, line 468, in model_one_fused = torch.quantization.fuse_modules(model_one,[[‘base.0.weight’,‘base.1.weight’,‘base.1.bias’,‘base.4.0.conv1.weight’,‘base.4.0.bn1.weight’,‘base.4.0.bn1.bias’,‘base.4.0.conv2.weight’,‘base.4.0.bn2.weight’,‘base.4.0.bn2.bias’,‘base.4.1.conv1.weight’,‘base.4.1.bn1.weight’,‘base.4.1.bn1.bias’,‘base.4.1.conv2.weight’,‘base.4.1.bn2.weight’,‘base.4.1.bn2.bias’,‘base.5.0.conv1.weight’,‘base.5.0.bn1.weight’,‘base.5.0.bn1.bias’,‘base.5.0.conv2.weight’,‘base.5.0.bn2.weight’,‘base.5.0.bn2.bias’,‘base.5.0.downsample.0.weight’,‘base.5.0.downsample.1.weight’,‘base.5.0.downsample.1.bias’,‘base.5.1.conv1.weight’,‘base.5.1.bn1.weight’,‘base.5.1.bn1.bias’,‘base.5.1.conv2.weight’,‘base.5.1.bn2.weight’,‘base.5.1.bn2.bias’,‘base.6.0.conv1.weight’,‘base.6.0.bn1.weight’,‘base.6.0.bn1.bias’,‘base.6.0.conv2.weight’,‘base.6.0.bn2.weight’,‘base.6.0.bn2.bias’,‘base.6.0.downsample.0.weight’,‘base.6.0.downsample.1.weight’,‘base.6.0.downsample.1.bias’,‘base.6.1.conv1.weight’,‘base.6.1.bn1.weight’,‘base.6.1.bn1.bias’,‘base.6.1.conv2.weight’,‘base.6.1.bn2.weight’,‘base.6.1.bn2.bias’,‘base.7.0.conv1.weight’,‘base.7.0.bn1.weight’,‘base.7.0.bn1.bias’,‘base.7.0.conv2.weight’,‘base.7.0.bn2.weight’,‘base.7.0.bn2.bias’,‘base.7.0.downsample.0.weight’,‘base.7.0.downsample.1.weight’,‘base.7.0.downsample.1.bias’,‘base.7.1.conv1.weight’,‘base.7.1.bn1.weight’,‘base.7.1.bn1.bias’,‘base.7.1.conv2.weight’,‘base.7.1.bn2.weight’,‘base.7.1.bn2.bias’,‘linear_sub’,‘linear_bird’,‘linear_boat’,‘linear_car’,‘linear_cat’,‘linear_fungus’,‘linear_insect’,‘linear_monkey’,‘linear_truck’,‘linear_dog’,‘linear_fruit’]]) File “/home/shixian/anaconda3/envs/pytorchnew/lib/python3.7/site-packages/torch/quantization/fuse_modules.py”, line 198, in fuse_modules _fuse_modules(model, module_list, fuser_func) File “/home/shixian/anaconda3/envs/pytorchnew/lib/python3.7/site-packages/torch/quantization/fuse_modules.py”, line 141, in _fuse_modules new_mod_list = fuser_func(mod_list) File “/home/shixian/anaconda3/envs/pytorchnew/lib/python3.7/site-packages/torch/quantization/fuse_modules.py”, line 124, in fuse_known_modules raise NotImplementedError(“Cannot fuse modules: {}”.format(types)) NotImplementedError: Cannot fuse modules: (<class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, 
<class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.parameter.Parameter’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>, <class ‘torch.nn.modules.linear.Linear’>) Here is my CNN model: class Resnet18_ONE(nn.Module): def __init__(self): super(Resnet18_ONE,self).__init__() self.quant = torch.quantization.QuantStub() resnet18 = torchvision.models.resnet18(pretrained=True) num_ftrs = resnet18.fc.in_features self.base= nn.Sequential(*list(resnet18.children())[:-1]) self.linear_sub = nn.Linear(num_ftrs, superclasses) self.linear_bird = nn.Linear(num_ftrs, classes_bird) self.linear_boat = nn.Linear(num_ftrs, classes_boat) self.linear_car = nn.Linear(num_ftrs, classes_car) self.linear_cat = nn.Linear(num_ftrs, classes_cat) self.linear_fungus = nn.Linear(num_ftrs, classes_fungus) self.linear_insect = nn.Linear(num_ftrs, classes_insect) self.linear_monkey = nn.Linear(num_ftrs, classes_monkey) self.linear_truck = nn.Linear(num_ftrs, classes_truck) self.linear_dog = nn.Linear(num_ftrs, classes_dog) self.linear_fruit = nn.Linear(num_ftrs, classes_fruit) self.dequant = torch.quantization.DeQuantStub() def forward(self,x): x = self.quant(x) x = self.base(x) x = torch.flatten(x, 1) if task == 'SUB': x = self.linear_sub(x) elif task == 'BIRD': x = self.linear_bird(x) elif task == 'BOAT': x = self.linear_boat(x) elif task == 'CAR': x = self.linear_car(x) elif task == 'CAT': x = self.linear_cat(x) elif task == 'FUNGUS': x = self.linear_fungus(x) elif task == 'INSECT': x = self.linear_insect(x) elif task == 'MONKEY': x = self.linear_monkey(x) elif task == 'TRUCK': x = self.linear_truck(x) elif task == 'DOG': x = self.linear_dog(x) else: x = 
self.linear_fruit(x) x = self.dequant(x) return x Here is the model settings according to the tutorial: model_one = Resnet18_ONE() model_one.train() model_one.qconfig = torch.quantization.get_default_qat_qconfig(‘fbgemm’) model_one_fused = torch.quantization.fuse_modules(model_one,[[‘base.0.weight’,‘base.1.weight’,‘base.1.bias’,‘base.4.0.conv1.weight’,‘base.4.0.bn1.weight’,‘base.4.0.bn1.bias’,‘base.4.0.conv2.weight’,‘base.4.0.bn2.weight’,‘base.4.0.bn2.bias’,‘base.4.1.conv1.weight’,‘base.4.1.bn1.weight’,‘base.4.1.bn1.bias’,‘base.4.1.conv2.weight’,‘base.4.1.bn2.weight’,‘base.4.1.bn2.bias’,‘base.5.0.conv1.weight’,‘base.5.0.bn1.weight’,‘base.5.0.bn1.bias’,‘base.5.0.conv2.weight’,‘base.5.0.bn2.weight’,‘base.5.0.bn2.bias’,‘base.5.0.downsample.0.weight’,‘base.5.0.downsample.1.weight’,‘base.5.0.downsample.1.bias’,‘base.5.1.conv1.weight’,‘base.5.1.bn1.weight’,‘base.5.1.bn1.bias’,‘base.5.1.conv2.weight’,‘base.5.1.bn2.weight’,‘base.5.1.bn2.bias’,‘base.6.0.conv1.weight’,‘base.6.0.bn1.weight’,‘base.6.0.bn1.bias’,‘base.6.0.conv2.weight’,‘base.6.0.bn2.weight’,‘base.6.0.bn2.bias’,‘base.6.0.downsample.0.weight’,‘base.6.0.downsample.1.weight’,‘base.6.0.downsample.1.bias’,‘base.6.1.conv1.weight’,‘base.6.1.bn1.weight’,‘base.6.1.bn1.bias’,‘base.6.1.conv2.weight’,‘base.6.1.bn2.weight’,‘base.6.1.bn2.bias’,‘base.7.0.conv1.weight’,‘base.7.0.bn1.weight’,‘base.7.0.bn1.bias’,‘base.7.0.conv2.weight’,‘base.7.0.bn2.weight’,‘base.7.0.bn2.bias’,‘base.7.0.downsample.0.weight’,‘base.7.0.downsample.1.weight’,‘base.7.0.downsample.1.bias’,‘base.7.1.conv1.weight’,‘base.7.1.bn1.weight’,‘base.7.1.bn1.bias’,‘base.7.1.conv2.weight’,‘base.7.1.bn2.weight’,‘base.7.1.bn2.bias’,‘linear_sub’,‘linear_bird’,‘linear_boat’,‘linear_car’,‘linear_cat’,‘linear_fungus’,‘linear_insect’,‘linear_monkey’,‘linear_truck’,‘linear_dog’,‘linear_fruit’]]) model_one_prepared = torch.quantization.prepare(model_one_fused)
st183988
Solved by jerryzh168 in post #4.
st183989
Fusion should be applied on modules, not parameters; it looks like you are applying fusion on parameters (weights and biases)?
st183990
Hi Jerry: Thank you for your reply. I am using the nightly versions pytorch '1.9.0.dev20210422+cu111' and torchvision '0.10.0.dev20210422+cu111'. I tested your idea. When I set

model_one_fused = torch.quantization.fuse_modules(model_one, [['base.0', 'base.1']])

the model works fine. However, as soon as I add more modules to it, e.g.

model_one_fused = torch.quantization.fuse_modules(model_one, [['base.0', 'base.1', 'base.4.0.conv1', 'base.4.0.bn1', 'base.4.0.conv2', 'base.4.0.bn2']])

the system tells me:

assert fuser_method is not None, "did not find fuser method for: {} ".format(op_list)
AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv2d'>, <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, <class 'torch.nn.modules.conv.Conv2d'>, <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, <class 'torch.nn.modules.conv.Conv2d'>, <class 'torch.nn.modules.batchnorm.BatchNorm2d'>)

Many thanks, Shixian
st183991
Oh, please specify them in a list of lists. What are you fusing? We support conv + bn and conv + bn + relu fusion. For example, if we have 0.conv, 0.bn, 0.relu and 1.conv, 1.bn, we will call:

fuse_modules(model, [["0.conv", "0.bn", "0.relu"], ["1.conv", "1.bn"]])
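Applied to the base backbone from the question, a sketch of that list-of-lists (assuming base is nn.Sequential(*list(resnet18.children())[:-1]), so base.2 is the stem ReLU; only the stem and the first block are spelled out, the remaining layers follow the same pattern):

model_one_fused = torch.quantization.fuse_modules(
    model_one,
    [
        ['base.0', 'base.1', 'base.2'],        # stem conv + bn + relu
        ['base.4.0.conv1', 'base.4.0.bn1'],
        ['base.4.0.conv2', 'base.4.0.bn2'],
        ['base.4.1.conv1', 'base.4.1.bn1'],
        ['base.4.1.conv2', 'base.4.1.bn2'],
        # ... same conv+bn pairs for base.5, base.6 and base.7, plus the downsample
        # pairs, e.g. ['base.5.0.downsample.0', 'base.5.0.downsample.1']
    ]
)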
st183992
Hello! Previously I had problems with the quantizable version of a segmentation pipeline, but installing torch from the master branch surprisingly made it work. Now I am trying to perform QAT with a regression model which has almost the same encoder architecture but a different detection decoder (stacked linears, dropouts, relus, cat operations and a sigmoid on top). I am doing everything according to the manual, but nothing is working: after some steps my model stops training well (compared with the regular model). I tried these hypotheses to find the problem:

- Different torch versions: 1.7.1, 1.8.0, 1.8.1, built from master;
- Wrapped the .cat operation in the decoder with FloatFunctional;
- Tried training with both fused and unfused modules;
- Put QuantStub and DeQuantStub at the same hierarchy level in the model.

Do you maybe have other hypotheses I should test? Or is there any way I can debug quantization mode to find the reason? Thanks in advance!
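One thing worth trying is the usual QAT recipe of letting the observers settle and then freezing them, plus freezing the batch-norm statistics partway through training; unstable observers and BN stats are a common reason a prepared model suddenly trains worse than the float one. A sketch (train_one_epoch, optimizer and train_loader stand in for your own loop, and the epoch thresholds are arbitrary):

import torch

# model has already been prepared with torch.quantization.prepare_qat(...)
num_epochs = 20
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, train_loader)   # your existing training step

    if epoch == 3:
        # Stop updating activation ranges once they have settled.
        model.apply(torch.quantization.disable_observer)
    if epoch == 6:
        # Freeze BN statistics so fused conv+bn layers stop drifting.
        model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)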
st183993
Hi, I have created a small layer to make my networks smaller, as a drop-in replacement for a Linear layer:

class FactorizedLinear(nn.Module):
    def __init__(self, or_linear, dim_ratio=1.0, random_init=False):
        super().__init__()
        self.bias = nn.parameter.Parameter(or_linear.bias.data)
        if random_init:
            u, vh = self.random_init(or_linear.weight.data, dim_ratio=dim_ratio)
            print(f'Doing zero init of tensor {or_linear.weight.shape}, U: {u.shape}, Vh: {vh.shape}')
        else:
            u, vh = self.spectral_init(or_linear.weight.data, dim_ratio=dim_ratio)
            print(f'Doing SVD of tensor {or_linear.weight.shape}, U: {u.shape}, Vh: {vh.shape}')
        self.u = nn.parameter.Parameter(u)
        self.vh = nn.parameter.Parameter(vh)
        self.dim_ratio = dim_ratio
        self.in_features = u.size(0)
        self.out_features = vh.size(1)

    @staticmethod
    @torch.jit.ignore
    def spectral_init(m, dim_ratio=1):
        u, s, vh = torch.linalg.svd(m, full_matrices=False)
        u = u @ torch.diag(torch.sqrt(s))
        vh = torch.diag(torch.sqrt(s)) @ vh
        if dim_ratio < 1:
            dims = int(u.size(1) * dim_ratio)
            u = u[:, :dims]
            vh = vh[:dims, :]
            s_share = s[:dims].sum() / s.sum() * 100
            print(f'SVD eigenvalue share {s_share:.2f}%')
        return u, vh

    @staticmethod
    @torch.jit.ignore
    def random_init(m, dim_ratio=1):
        bottleneck = int(m.size(1) * dim_ratio)
        u = torch.zeros(m.size(0), bottleneck)
        vh = torch.zeros(bottleneck, m.size(1))
        return u, vh

    def extra_repr(self) -> str:
        return (f'in_features={self.in_features}, '
                f'out_features={self.out_features}, '
                f'bias=True, dim_ratio={self.dim_ratio}')

    def forward(self, x):
        return x @ (self.u @ self.vh).transpose(0, 1) + self.bias

In practice, when I use dim_ratio=0.25, I can achieve a 50% smaller network with slightly worse performance (~10%). I took a look at this module, and this line more or less tells me that you cannot easily extend this class with my logic:

def forward(self, x: torch.Tensor) -> torch.Tensor:
    return torch.ops.quantized.linear(
        x, self._packed_params._packed_params, self.scale, self.zero_point)

I can of course use this

self._packed_params = torch.ops.quantized.linear_prepack(weight, bias)

to distribute the "small" package, pre-calculate the self.u @ self.vh part, and essentially create the quantized network on the fly, but this would make model packaging / distribution very complicated compared to just loading a jit file.

All of this is very experimental, and so far I have played only with dim_ratio=0.25, which makes the network itself 2x smaller. But quantization (in my case) usually also provides about a 2-3x total module size reduction, so I would need to compress with dim_ratio=0.1 or lower to produce really small networks without quantization. Maybe I am still missing an elephant in the room, idk. Hope someone will find this useful or have any ideas. Combining quantization (up to 4x smaller networks) with factorization (2-4x smaller) looks like a killer feature to me.
st183994
Solved by snakers41 in post #3.
st183995
Could you precompute self.u @ self.vh and then transform FactorizedLinear to Linear before quantization, so that it can be quantized to QuantizedLinear and get the size reduction?
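A sketch of that suggestion (the helper name here is just illustrative): collapse u @ vh back into a dense weight, rebuild an ordinary nn.Linear, and quantize that. You lose the factorization savings in the converted model, but the quantization flow stays standard:

import torch
import torch.nn as nn

def defactorize(f_linear):
    # (u @ vh) has the same shape as the original Linear weight: (out_features, in_features).
    weight = (f_linear.u @ f_linear.vh).detach()
    linear = nn.Linear(weight.size(1), weight.size(0), bias=True)
    linear.weight.data.copy_(weight)
    linear.bias.data.copy_(f_linear.bias.detach())
    return linear

# Replace every FactorizedLinear before running prepare()/convert() or quantize_dynamic, e.g.:
# for name, module in model.named_children():
#     if isinstance(module, FactorizedLinear):
#         setattr(model, name, defactorize(module))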
st183996
Yeah, I thought about this. But in our particular case this would make model distribution much more complicated (or it would negate the effects of factorization). But I found an elegant solution: I forgot that bias=True is not mandatory in PyTorch. You can just do this and it solves the problem; you pass the above class to the class below and it will just quantize:

class FactorizedQLinear(nn.Module):
    def __init__(self, f_linear):
        super().__init__()
        self.in_features = f_linear.in_features
        self.out_features = f_linear.out_features
        self.dim_ratio = f_linear.dim_ratio
        self.u_linear = nn.Linear(in_features=f_linear.u.data.size(0),
                                  out_features=f_linear.u.data.size(1),
                                  bias=True)
        self.vh_linear = nn.Linear(in_features=f_linear.vh.data.size(0),
                                   out_features=f_linear.vh.data.size(1),
                                   bias=False)
        self.u_linear.weight.data = f_linear.u.data
        self.u_linear.bias.data = f_linear.bias
        self.vh_linear.weight.data = f_linear.vh.data

    def extra_repr(self) -> str:
        return (f'in_features={self.in_features}, '
                f'out_features={self.out_features}, '
                f'bias=True, dim_ratio={self.dim_ratio}')

    def forward(self, x):
        return self.u_linear(self.vh_linear(x))
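A possible way to use it (the replacement loop and the quantize_dynamic call are just an illustration, with model standing in for your own network): swap each FactorizedLinear for its FactorizedQLinear counterpart and let dynamic quantization pick up the two inner nn.Linear modules:

import torch
import torch.nn as nn

# Swap in the quantization-friendly wrapper (only top-level children shown;
# a real model may need a recursive walk over named_modules()).
for name, module in model.named_children():
    if isinstance(module, FactorizedLinear):
        setattr(model, name, FactorizedQLinear(module))

# Dynamic quantization turns u_linear / vh_linear into quantized Linear modules,
# stacking the factorization savings on top of the int8 weight storage.
model_q = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)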
st183997
Also, this method really works. There are some sacrifices in quality (we do s2s), but in plain classification it will probably work just fine. Quantization used together with factorization can really reduce your model size ~10x, which is nice! The xsmall_q model here, Quality Benchmarks · snakers4/silero-models Wiki (github.com), is 30x smaller than the large models, and it is created by a combination of quantization / minification / factorization.
st183998
Hey all. I’ve taken a look at quantization recently for my final university project. I’ve seen that PyTorch apparently supports at most 8-bit quantization. Is there any way to quantize my neural network to a lower precision (e.g. 4-bit or 2-bit), or is it impossible? Please let me know.
st183999
Solved by jerryzh168 in post #2: it’s not supported yet; responded here: Quantization aware training lower than 8-bits?
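Native sub-8-bit kernels are not available, but if the goal is to study accuracy, one common workaround is to simulate lower bit-widths during QAT by restricting the fake-quantization range. A sketch (this only emulates 4-bit behaviour while the tensors are still stored as quint8/qint8; it does not produce real 4-bit kernels or a smaller model):

import torch
from torch.quantization import FakeQuantize, MovingAverageMinMaxObserver, QConfig

# 4-bit emulation: 2**4 levels, so quant_min/quant_max span 0..15 for activations
# and -8..7 for weights.
act_fake_quant = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=0, quant_max=15,
    dtype=torch.quint8, qscheme=torch.per_tensor_affine)

weight_fake_quant = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=-8, quant_max=7,
    dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)

four_bit_qconfig = QConfig(activation=act_fake_quant, weight=weight_fake_quant)

# model.qconfig = four_bit_qconfig
# torch.quantization.prepare_qat(model.train(), inplace=True)
# ... fine-tune, then evaluate with the fake-quant modules enabled ...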