st184900 | did you use FloatFunctional(https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/functional_modules.py 37) to replace the call to torch.cat? |
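A minimal sketch of that replacement (the module and tensor names here are illustrative, not taken from the original model): a FloatFunctional member is added to the module and its cat method is called instead of torch.cat, so the concatenation output gets its own observer and quantization parameters.
import torch
import torch.nn as nn
import torch.nn.quantized as nnq

class CatBlock(nn.Module):
    def __init__(self):
        super(CatBlock, self).__init__()
        # FloatFunctional observes the output of cat so it can be quantized after convert()
        self.cat_op = nnq.FloatFunctional()

    def forward(self, x, y):
        # replaces: torch.cat([x, y], dim=1)
        return self.cat_op.cat([x, y], dim=1)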
st184901 | When I use a pretrained QAT model (my task is face detection), the face landmarks and the bbox are almost the same as in float training, but the classification score from the softmax result is wrong: it can be greater than 1. I think it is a bug in the scale and zero point from softmax? |
st184902 | Hi,
I have a CNN+linear model that I want to quantize. To make the most of PyTorch’s quantization, is there any way to perform static quantization on the CNN and dynamic on linear?
Thank you. |
st184903 | felicitywang:
Thank you.
if you are doing static quantization, why not do static quantization on both conv and linear?
If you really need to do this, you can first apply static quantization, and then dynamic quantization. |
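A rough sketch of that ordering (assuming a hypothetical model whose conv part is exposed as model.features, already wrapped with QuantStub/DeQuantStub, and whose linear head is left in float): static quantization is applied to the conv part first, then the remaining float nn.Linear layers are quantized dynamically.
import torch
import torch.nn as nn

# `model` is a hypothetical float CNN + linear model
model.eval()
# static quantization of the conv feature extractor only
model.features.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# ... feed a few calibration batches through model(...) here ...
torch.quantization.convert(model, inplace=True)

# then dynamic quantization of the remaining float nn.Linear layers
model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)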
st184904 | When I run QAT, training is normal, but when I want to evaluate the QAT model, the error "length of scales must equal to channel" confuses me.
I use pytorch 1.4.0, and my code is
# Training is normal
net = MyQATNet()
net.fuse_model()
net.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(net, inplace=True)
net_eval = net
net.apply(torch.quantization.enable_observer)
net.apply(torch.quantization.enable_fake_quant)
# Evaluate
qat_model = copy.deepcopy(net_eval)
qat_model.to(torch.device('cpu'))
torch.quantization.convert(qat_model, inplace=True) # Error is there
qat_model.eval()
@ptrblck Can you have a look? |
st184905 | Solved by xieydd in post #10
I think somebody has the same error: Issue with Quantization
I think I know the answer:
# pytorch 1.4
#save
torch.save(net.state_dict(),'xx') # fp32 model
#load
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model.fuse_model()
torch.quantization.prepare_qat(net, inplace=True… |
st184906 | can you paste your network definition and the input you use to run the model? it might be a problem with your input |
st184907 | I think I found where the error is: when I use DataParallel, I get this error, but when I use a single GPU, there is no error. |
st184908 | How can I save the QAT-trained model? Whether I save torch.save(qat_model.state_dict(),'qat_model.pth') or directly save the training model with torch.save(net, 'net.pth'), when I want to load the pretrained QAT model the keys do not match: for qat_model the key is like conv1.0.activation_post_process.scale, while for net there is no conv1.0.activation_post_process.scale, but the expected key is conv1.0.0.activation_post_process.scale, so a KeyError happens. When I look at the model definition, the expected key is right. |
st184909 | I think somebody has the same question as me: https://github.com/pytorch/pytorch/issues/32691 |
st184910 | xieydd:
I think I found where the error is: when I use DataParallel, I get this error, but when I use a single GPU, there is no error.
We recently fixed a bug with QAT and DataParallel, please try with pytorch nightly to see if the issue still persists. cc @Vasiliy_Kuznetsov |
st184911 | You mean the load_state_dict KeyError is also solved in the newest version of pytorch? Thanks |
st184912 | I think somebody has the same error: Issue with Quantization
I think I know the answer:
# pytorch 1.4
# save
torch.save(net.state_dict(), 'xx')  # fp32 model
# load
net.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
net.fuse_model()
torch.quantization.prepare_qat(net, inplace=True)
state_dict = torch.load('xx', map_location=torch.device('cpu'))
# remove the "module." prefix (added by DataParallel) from the keys if present
net.load_state_dict(state_dict)
#torch.quantization.convert(net, inplace=True)  # convert it to an int8 model
x = torch.randn((2, 3, 300, 300))
y = net(x)
print(y)
print('Success') |
st184913 | Yes, https://github.com/pytorch/pytorch/pull/37032 13 fixes an error for DataParallel. @xieydd you can try the nightly to verify if this fixes your problem. |
st184914 | Hi,
I have this model that I have statically quantized in two manners: 1) feature extractor only and 2) feature extractor + classifier. I get adequate results with the former, but I get rubbish with the latter.
I do not know where to start looking, does anyone have any idea what could be going wrong?
It seems when I quantize the classifier, there is very little variation in the classification output.
Validation accuracy before quantization is good in both cases, but it drops to almost 0 when I quantize the classifier.
Thanks! |
st184915 | I am getting all kind of errors like:
RuntimeError: Could not run ‘aten::empty.memory_format’ with arguments from the ‘QuantizedCPUTensorId’ backend. ‘aten::empty.memory_format’ is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId].
when using hacky solutions like nn.quantized.FloatFunctional().mul(x,0)
What is the correct way to quantize this operation? |
st184916 | Sorry, it seems that the nn.quantized.FloatFunctional().mul_scalar(x,0) hacky solution works in pytorch 1.5 (I was using 1.4). |
st184917 | Is there any tutorial or instructions about how to deploy our quantized model to C++ frontend? I can’t find any on the Internet. |
st184918 | maybe https://pytorch.org/tutorials/advanced/cpp_export.html 18? there is nothing specific to quantized models, it should work the same way as other models |
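The Python side of that export is the same as for a float model; a minimal sketch (variable and file names are placeholders): the already-converted int8 model is scripted (or traced) and saved, and the resulting file is what torch::jit::load reads in C++.
import torch

quantized_model.eval()
scripted = torch.jit.script(quantized_model)   # or torch.jit.trace(quantized_model, example_input)
scripted.save("quantized_model.pt")
# in C++: auto module = torch::jit::load("quantized_model.pt");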
st184919 | Hi,
After converting my program into mixed-precision using amp, the forward time gets shorter while the backward time gets longer when I record running time by “import time”.
Then I use “torch.profiler” to record the running time.
However, it seems that the result of torch.profiler meets my expectation: the CUDA time becomes around 1/4 of the fp32 one.
My questions are: what does the CUDA time mean, and why does the backpropagation time become longer in mixed precision?
mixed-precision:
import time :
Entire Epoch: [1] Train: [0] SUM: 1828.379 DT: 13.270 FW: 71.987 BK: 1711.248 CM: 31.873
torch profiler:
Self CPU time total: 56.883s
CUDA time total: 893.112s
FP32:
import time :
Entire Epoch: [3] Train: [0] SUM: 1584.260 DT: 15.714 FW: 368.550 BK: 1119.231 CM: 80.766
torch profiler:
Self CPU time total: 105.049s
CUDA time total: 3021.355s
Detailed Logs of my program are listed below:
http://49.234.107.127:81/index.php/s/qa3Yjo8WJwNZjCS (mixed precision)
http://49.234.107.127:81/index.php/s/y8SpyfiM3d5SZp7
My model uses conv3d and I run my code on Tesla V100.
Many thanks. |
st184920 | I built a pytorch model based on conv1d. I have gone through quantization and implemented some cases as well, but all of those work on conv2d, bn, relu, whereas my model is built on conv1d and PReLU. Is this quantization valid for these network layers? When I did quantization, only the layers which are included in the mapping were quantized. Let me show you those layers for which quantization is valid (i.e. which are included in the mapping).
Please find the list of modules it supports (according to the source code I went through):
(Actual layer : quantized layer)
nn.Linear: nnq.Linear,
nn.ReLU: nnq.ReLU,
nn.ReLU6: nnq.ReLU6,
nn.Conv2d: nnq.Conv2d,
nn.Conv3d: nnq.Conv3d,
nn.BatchNorm2d: nnq.BatchNorm2d,
nn.BatchNorm3d: nnq.BatchNorm3d,
QuantStub: nnq.Quantize,
DeQuantStub: nnq.DeQuantize,
Wrapper Modules:
nnq.FloatFunctional: nnq.QFunctional,
Intrinsic modules:
nni.ConvReLU2d: nniq.ConvReLU2d,
nni.ConvReLU3d: nniq.ConvReLU3d,
nni.LinearReLU: nniq.LinearReLU,
nniqat.ConvReLU2d: nniq.ConvReLU2d,
nniqat.LinearReLU: nniq.LinearReLU,
nniqat.ConvBn2d: nnq.Conv2d,
nniqat.ConvBnReLU2d: nniq.ConvReLU2d,
QAT modules:
nnqat.Linear: nnq.Linear,
nnqat.Conv2d: nnq.Conv2d,
Does it mean that quantization can't be done on Conv1d and PReLU? |
st184921 | Solved by supriyar in post #5
Will the quantized model only work if all the layers are quantized? Or do we need to dequantize again before passing through a non-quantized layer?
You can insert QuantStub, DequantStub blocks around the code that can be quantized. Please see https://pytorch… |
st184922 | We are in the process of implementing the Conv1d module and ConvReLU1d fused module. The PR list is here https://github.com/pytorch/pytorch/pull/38438 39. Feel free to try out your model with the changes in this PR. The quantization flow should convert it.
We don’t currently support fusion with PReLU and LayerNorm, so they will have to be executed separately. |
st184923 | Hi @supriyar
Fusing is optional in quantization, if I'm not wrong. We need our modules to be quantized, i.e. each layer we implemented, in order to get our quantized parameters to pass through them. Will the quantized model only work if all the layers are quantized? Or do we need to dequantize again before passing through a non-quantized layer?
Thank you @supriyar |
st184924 | Hi @supriyar
Hey, you gave me that PR link above in the last comment for support of quantized conv1d. I decided to give it a try, but the torch module is not importing; it shows an error that 'torch.version' is not there. So I copied 'version.py' from my earlier version, but after this torch still fails to import, throwing another error: 'torch._C import default_generators' failed to import.
May I know whether the changes in that PR are applicable to the torch CPU version or not?
Thanks @supriyar ,
Aravind |
st184925 | Only the quantized model will work if all the layers were quantized is it right? Or else we need to dequantize the parameters again before it passes through not quantized layer is it so?
You can insert QuantStub, DequantStub blocks around the code that can be quantized. Please see https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 21 for an example of this.
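A minimal sketch of that pattern (the layer choices here are made up for illustration): only the region between QuantStub and DeQuantStub runs as int8 after convert(); anything outside, such as a PReLU that has no quantized implementation, keeps running in float.
import torch
import torch.nn as nn

class PartiallyQuantizedNet(nn.Module):
    def __init__(self):
        super(PartiallyQuantizedNet, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv1d(16, 16, 3)        # quantizable part
        self.dequant = torch.quantization.DeQuantStub()
        self.prelu = nn.PReLU()                 # stays in float

    def forward(self, x):
        x = self.quant(x)     # float -> quantized
        x = self.conv(x)      # runs as a quantized conv after convert()
        x = self.dequant(x)   # quantized -> float
        x = self.prelu(x)     # executed in float precision
        return x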
May I know whether the changes in that PR are applicable to the torch CPU version or not?
The change is applicable for CPU, to get these changes you can either build pytorch from source or install from pytorch nightly. |
st184926 | Hi @supriyar,
Thank you. Now I'm able to work with the changes in the new PR; as you suggested, I tried the nightly.
Thanks @supriyar,
Aravind. |
st184927 | Hi @supriyar
I have finished quantizing my model. Now I tried to save it with 'jit.save(jit.script(model))', but it raises the error ''aten::slice.t(t[] l, int start, int end=9223372036854775807, int step=1) -> (t[]): could not match type tensor to list[t] in argument 'l': cannot match list[t] to tensor.'' along with two more very similar errors. I googled this error, and in some discussions I found that it concerns the slicing operator (:, :, :), saying that 'jit.script does not support scripting a directly used slicing operator'. Is the error actually pointing to that? I used the slicing operator in the middle of my model too, and the error is thrown at exactly that line.
This is one thing; as another alternative to save the quantized model I went with state_dict(). I'm able to save the model this way, but when I want to perform inference I have to initialize the model with the params which I saved earlier with state_dict(). Now the params are quantized, but my model is defined for float. So the error pops up: 'exception occurred: copying from quantized tensor to non-quantized tensor is not allowed, please use dequantize to get a float tensor from a quantized tensor'. The issue is that I can't save my model with jit, and if I save with state_dict() I can't initialize my model for inference.
Can you suggest any alternative?
Thanks @supriyar
Aravind. |
st184928 | Could you give a small repro for the error with aten::slice not working with jit?
Regarding loading the quantized state_dict - you will first have to convert the model to quantized model before loading the state dict as well. You can call prepare and convert APIs to do this (no need to calibrate). This way the model state dict will match the saved quantized state dict and you should be able to load it. |
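A sketch of that loading recipe (MyModel, fuse_model() and the file name are placeholders for whatever was used when the checkpoint was saved):
import torch

model = MyModel()                     # recreate the float architecture
model.eval()
model.fuse_model()                    # apply the same fusion as before saving, if any
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)   # no calibration needed just to load

# the module hierarchy now matches the saved quantized checkpoint
model.load_state_dict(torch.load('quantized_model.pth'))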
st184929 | Hi @supriyar
Thanks for the suggestion.
supriyar:
call prepare and convert APIs
I tried with this one and its done.
Hey regarding
supriyar:
aten::slice not working with jit
This worked because JIT apparently only accepts a direct integer for slicing, as in "w[:,:,:-16]", whereas I had initially written it as "w = w[:,:,:-2*2**3]". So I tried that and it worked.
Something I would like to add: JIT is not able to find the attribute ".new_tensor", whereas ".clone().detach()" is recognized.
Thanks @supriyar,
Aravind |
st184930 | Hi @supriyar,
I have some doubt about operations on quantized variables. Here I’m quoting it
z = self.dequant_tcnet(z)
w = self.dequant_tcnet(w)
v = self.dequant_tcnet(v)
x = self.dequant_tcnet(x)
z = z + w
x = x + v
x = self.quant_tcnet2(x)
Here, in order to perform the two operations z = z + w and x = x + v, I need to dequantize the variables involved and then perform those operations. Can we perform those operations without dequantizing, i.e. on quantized variables? If I run this without dequantizing, the error "RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, MkldnnCPU, SparseCPU, Autograd, Profile]." interrupts the run.
Can we perform "addition" on two quantized tensor variables without dequantizing them, through any other alternative?
Thanks @supriyar,
Aravind. |
st184931 | Hi @supriyar,
z = self.dequant_tcnet(z)
w = self.dequant_tcnet(w)
v = self.dequant_tcnet(v)
x = self.dequant_tcnet(x)
z = z + w
x = x + v
x = self.quant_tcnet2(x)
Can we perform “addition” on 2 quantized tensor variables? without dequantizing them in any other alternative.
For this I tried QFunctional & FloatFunctional, but the output is not up to the mark with these, whereas placing quantstubs works well.
I have another concern here. My float model takes 0.2 - 0.3 seconds (~300ms) to process a single input, whereas after I quantize my model with int8 quantization the time taken increases from 0.2 - 0.3 s (float precision) to 0.4 - 0.5 s (int8).
Here i show you the exact float model block and quantized model block
***** float block ******
*****Round 1********
y = self.conv1x12(x)
y = self.prelu2(y)
y = self.norm2(y)
w = self.depthwise_conv12(y)
#w = w[:,:,:-2*2**2]
w = w[:,:,:-8]
y = self.depthwise_conv2(y)
y = y[:,:,:-8]
y = self.prelu22(y)
y = self.norm22(y)
v = self.pointwise_conv2(y)
z = z + w
x = x + v
This is float model block. This block/computation will be repeated 13 more times (total 14 blocks). This is taking 0.2 - 0.3 seconds
Quantized block
round 1**
y = self.conv1x12(x)
y = self.prelu2(y)
y = self.norm2(y)
w = self.depthwise_conv12(y)
w = w[:,:,:-8]
y = self.depthwise_conv2(y)
y = y[:,:,:-8]
y = self.prelu22(y)
y = self.norm22(y)
v = self.pointwise_conv2(y)
w = self.dequant_tcnet(w)
z = self.dequant_tcnet(z)
v = self.dequant_tcnet(v)
x = self.dequant_tcnet(x)
z = z + w
x = x + v
x = self.quant_tcnet3(x)
This is the quantized model block, where quant/dequant stubs are placed around those arithmetic operations and all the remaining layers are quantized. This quantized model takes 0.4 - 0.5 seconds.
So after quantizing my model, the model size is optimized but the computation time is not. Could you tell me if there is any flaw? I cross-checked and the output is also good, but the computation is not reduced.
Thanks @supriyar,
Aravind. |
st184932 | Hi @supriyar @raghuramank100,
Referring to above mentioned issue I want to make it clear about for which layers i have done quantization.
Conv1d (from nightly)
LayerNorm (from nightly)
ReLU
Linear
additional layers:
quantstub / dequantstubs
QFunctional / FloatFunctional
All these layers are quantized, and I fused ReLU and Conv1d as well (since the beginning I have been referring to this documentation: Static Quantization with eager mode in pytorch).
If I use FloatFunctional, I’m not using Quant/Dequantstubs in my model where arithmetic operations are triggered between quantized layers.
After quantizing successfully, my model's CPU computation is still not reduced; instead, the computation has increased after quantization!
Could you tell me in which cases this might happen?
Thanks,
Aravind |
st184933 | For add you could use torch.nn.quantized.FloatFunctional; the extra dequant and quant ops in the network could be slowing things down.
Regarding performance you can try running the torch.autograd.profiler on your model for some iterations to see which ops take up most time. It will give you an op level breakdown with runtime so you can compare float vs quantized model. |
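A small sketch of such a profiling run (the input shape and iteration count are arbitrary); running it once for the float model and once for the quantized model gives the per-op breakdown mentioned above.
import torch

x = torch.randn(1, 256, 100)   # placeholder input shape
with torch.no_grad():
    with torch.autograd.profiler.profile() as prof:
        for _ in range(10):
            model(x)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=15))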
st184934 | Hi @supriyar,
Yeah, I did that replacement of quantstubs/dequantstubs with FloatFunctional. I'll drop it here:
y = self.conv1x11(x)
y = self.prelu1(y)
y = self.norm1(y)
w = self.depthwise_conv11(y)
#w = w[:,:,:-2*2**1]
w = w[:,:,:-4]
y = self.depthwise_conv1(y)
#y = y[:,:,:-2*2**1]
y = y[:,:,:-4]
y = self.prelu11(y)
y = self.norm11(y)
v = self.pointwise_conv1(y)
#w = self.pointwise_conv_skp1(y)
#z = self.dequant_tcnet(z)
#w = self.dequant_tcnet(w)
#v = self.dequant_tcnet(v)
#x = self.dequant_tcnet(x)
#z = z + w
#x = x + v
z = self.Qf_s.add(z,w)
x = self.Qf_s.add(x,v)
#x = self.quant_tcnet2(x)
#z = self.quant(z)
As you can see, I have now removed the stubs and am using FloatFunctional, but the usage is still not reduced.
Thanks @supriyar,
Aravind |
st184935 | Hi @supriyar @raghuramank100,
Regarding CPU usage i debug the usage with
supriyar:
torch.autograd.profiler
Here I’m listing the output:
Usage of Float model:
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
slow_conv_dilated2d 25.34% 127.183ms 98.72% 495.549ms 42.081us 11776
size 15.79% 79.242ms 15.79% 79.242ms 0.463us 171097
_cat 9.76% 49.007ms 11.61% 58.285ms 2.534ms 23
mkldnn_convolution 9.67% 48.534ms 19.48% 97.764ms 1.397ms 70
threshold 6.45% 32.364ms 6.52% 32.726ms 1.091ms 30
slice 5.26% 26.391ms 9.76% 49.012ms 2.065us 23737
native_layer_norm 3.79% 19.017ms 7.74% 38.849ms 669.817us 58
convolution 3.71% 18.632ms 86.18% 432.610ms 7.459ms 58
empty 2.81% 14.117ms 2.82% 14.143ms 1.179us 11991
select 2.65% 13.278ms 9.45% 47.425ms 4.027us 11778
as_strided 2.52% 12.638ms 2.52% 12.638ms 0.530us 23837
fill 2.37% 11.903ms 2.38% 11.933ms 2.026us 5889
add 2.23% 11.208ms 4.53% 22.760ms 421.489us 54
total time : 502 ms
Cpu usage of Quantized model without fusing:
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
quantized::conv1d 81.39% 454.615ms 81.39% 454.615ms 7.838ms 58
quantized::add 9.25% 51.657ms 9.25% 51.657ms 1.987ms 26
quantized::layer_norm 7.62% 42.582ms 7.62% 42.582ms 1.468ms 29
relu 0.63% 3.513ms 1.31% 7.296ms 121.595us 60
quantized::mul 0.59% 3.322ms 0.59% 3.322ms 3.322ms 1
quantized::linear 0.18% 1.019ms 0.18% 1.019ms 1.019ms 1
total time : 558ms
CPU usage of quantized model after fusing:
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
quantized::conv1d 41.77% 239.086ms 41.77% 239.086ms 7.970ms 30
quantized::conv1d_relu 39.94% 228.633ms 39.94% 228.633ms 8.165ms 28
quantized::add 9.35% 53.523ms 9.35% 53.523ms 2.059ms 26
quantized::layer_norm 7.67% 43.932ms 7.67% 43.932ms 1.515ms 29
quantized::mul 0.62% 3.564ms 0.62% 3.564ms 3.564ms 1
index_add_ 0.27% 1.542ms 0.54% 3.100ms 1.550ms 2
quantized::linear 0.18% 1.017ms 0.18% 1.017ms 1.017ms 1
relu 0.06% 370.631us 0.13% 762.027us 190.507us 4
total time : 572 ms
If you look at these three entries, conv1d takes the most time after quantization:
quantized::conv1d 239.086ms
quantized::conv1d_relu 228.633ms
slow_conv_dilated2d 127.183ms
Could anyone share a view on why this happened, i.e. why quantized conv1d increased the CPU usage?
Thanks @supriyar @raghuramank100,
Aravind. |
st184936 | Hi @raghuramank100 @supriyar
BOLLOJU_ARAVIND:
quantized::conv1d 239.086ms
quantized::conv1d_relu 228.633ms
slow_conv_dilated2d 127.183ms
Do you have any idea why this quantized Conv1d takes more time?
Thanks @raghuramank100 @supriyar,
Aravind |
st184937 | We are currently implementing the operator using quantized::conv2d under the hood after unsqueezing the activation and weight tensors. That might be proving sub-optimal for certain input shapes.
Could you give us details about the input and weight dimensions and the parameters (kernel, stride, pad, dilation) to conv1d in your case? |
st184938 | Hi @supriyar,
Thanks for the reply.
Here I’m attaching some layers in my model
These are the layers in float model
Thanks @supriyar,
Aravind. |
st184939 | I am working on static quantization and have found that quantizing the model with module fusion gives higher accuracy than quantizing without fusion.
What is the meaning of ConvReLU2d? Is the batch normalization part fused into the convolution weights, or is batch normalization removed?
Please do help me understand this concept. |
st184940 | Hi @Midhilesh,
What I understand from ConvReLU2d() is that it just converts from
Conv2d()-> ReLU() to something like ReLU(Conv2d()) .
However, there is no BatchNorm2d here, if you want to fuse even the BatchNorm layer, you could use this ConvBnReLU2d() .
You will find more info from the quantization doc https://pytorch.org/docs/stable/quantization.html#torch.nn.intrinsic.ConvReLU2d 103 |
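A small sketch of what fusion does in eager mode (the toy Sequential below is just for illustration): fused in eval mode, the BatchNorm statistics are folded into the conv weights and the ReLU is attached to the conv, so the fused entry prints as a ConvReLU2d followed by Identity placeholders.
import torch
import torch.nn as nn

m = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
m.eval()
fused = torch.quantization.fuse_modules(m, [['0', '1', '2']])
print(fused)
# (0): ConvReLU2d(...)  <- BN folded into the conv weights, ReLU fused
# (1): Identity()
# (2): Identity()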
st184941 | Hi, I am new to Deep Learning and Pytorch. I am interested in quantization and have gone through the Transfer learning 2 and Post Training Static Quantization 15 tutorial. However, there are a few questions that i hope to get some idea from the community.
For Transfer learning 2:
I noticed that the quantized model implements a custom head for fine-tuning purposes. However, the model is restructured by the following function:
from torch import nn

def create_combined_model(model_fe):
    # Step 1. Isolate the feature extractor.
    model_fe_features = nn.Sequential(
        model_fe.quant,  # Quantize the input
        model_fe.conv1,
        model_fe.bn1,
        model_fe.relu,
        model_fe.maxpool,
        model_fe.layer1,
        model_fe.layer2,
        model_fe.layer3,
        model_fe.layer4,
        model_fe.avgpool,
        model_fe.dequant,  # Dequantize the output
    )

    # Step 2. Create a new "head"
    new_head = nn.Sequential(
        nn.Dropout(p=0.5),
        nn.Linear(num_ftrs, 2),
    )

    # Step 3. Combine, and don't forget the quant stubs.
    new_model = nn.Sequential(
        model_fe_features,
        nn.Flatten(1),
        new_head,
    )
    return new_model
Why is there no new forward() defined for it? Can the forward() recognize the new network layout for the model automatically?
For Post Training Static Quantization 15:
I noticed that the tutorial transforms the pretrained model into a quantized model by merging the intermediate operations such as nn.Conv2d(), nn.ReLU() and nn.BatchNorm2d() into ConvBnReLU2d(). I know that the guideline in Quantization suggests performing operation fusion whenever quantizing a model. But is it possible to implement each quantized module independently, without fusing them into one? I believe I have seen the quantization implementations of nn.quantized.Conv2d() and nn.quantized.ReLU() (although there is no nn.quantized.BatchNorm2d() yet).
The reason I am asking is that I am interested in extracting the intermediate outputs. I would like to inspect the intermediate outputs, such as the output from nn.quantized.Conv2d() and nn.quantized.ReLU(), independently. I believe that if I fuse the module using ConvBnReLU2d, it would only yield the final output that has gone through BatchNorm2d() and ReLU(), instead of the intermediate output of each intermediate operation, right?
I am new to this community and this is my first post. If this post does not follow the community guideline, please let me know. Thank you. |
st184942 | For Post Training Static Quantization
I think you can leave Conv2d and ReLU separated, but it will impact the performance. It could work for debugging purpose. For batchnorm you have to fuse it with Conv since there’s no quantized batchnorm.
cc @Zafar for question on transfer learning. |
st184943 | @JC_DL we do support quantized conv, quantized relu and quantized batchnorm operators. So it should be possible to execute these operators standalone as well. |
st184944 | Hi, may I know where I can find out more about quantized batchnorm? I did not see BatchNorm2d() listed under torch.nn.quantized on the Quantization page. Thanks. |
st184945 | I believe the docs haven’t been updated. Will do so.
Here is the code for quantized batchorm - https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/batchnorm.py#L10 28 |
st184946 | Hi,
Can you please explain why there is a performance impact if we don’t fuse the layers?
What exactly is the meaning of fusing layers?
What does ConvReLU2d mean? What has happened to the batch normalization layer?
Please help me to understand! |
st184947 | Hi, I have an issue in quantizing yolov3 model. I am working on this implementation: https://github.com/eriklindernoren/PyTorch-YOLOv3 1
I followed the dynamic quantization tutorial and the quantization code as following: loading the model, loading pretrained weights from Darknet and using the dynamic quantization.
model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
if opt.weights_path.endswith(".weights"):
# Load darknet weights
model.load_darknet_weights(opt.weights_path)
else:
# Load checkpoint weights
model.load_state_dict(torch.load(opt.weights_path))
qmodel = torch.quantization.quantize_dynamic(model, dtype=torch.qint8)
But I got this error message:
“For purely script modules use my_script_module.save() instead.”)
_pickle.PickleError: ScriptModules cannot be deepcopied using copy.deepcopy or saved using torch.save. Mixed serialization of script and non-script modules is not supported. For purely script modules use my_script_module.save() instead.
Do you have any suggestions guys? |
st184948 | Solved by huoge in post #2
It looks like you are trying to quantize the scripted net.
The correct order seems like first quantize your net then script it! |
st184949 | It looks like you are trying to quantize the scripted net.
The correct order seems like first quantize your net then script it! |
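A sketch of that order using the code from the question (whether this particular Darknet model scripts cleanly is not guaranteed; the point is only that quantize_dynamic runs on the plain eager-mode model before any scripting):
import torch

model = Darknet(opt.model_def, img_size=opt.img_size)   # plain float model, not scripted
model.load_darknet_weights(opt.weights_path)
model.eval()

qmodel = torch.quantization.quantize_dynamic(model, dtype=torch.qint8)   # quantize first

scripted = torch.jit.script(qmodel)    # then script the quantized model
scripted.save("yolov3_quantized.pt")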
st184950 | Here is a snippet of my test code:
data = torch.tensor([[[1.], [2.], [3.], [4.]]]) # N=1 * L=4 * C_in=1
data = data.permute(0, 2, 1)
conv = nn.Conv1d(1, 1, 2, bias=None)
print(conv.weight)
output = conv(data)
print("output : {}".format(output))
dweight = conv.weight.transpose(0, 1).flip(2, ) # dweight = conv.weight.transpose(0, 1)
print("dweight : {}".format(dweight))
dconv = F.conv_transpose1d(input=output, weight=dweight, bias=None)
print(dconv) # != data
The result I expected: input -> conv1d(input) -> conv_transpose1d(conv1d(input)), where conv_transpose1d(conv1d(input)) should be equal to input.
But they are not equal, whether I flip the temporal axis or not; they are supposed to be the same, right?
I’m really confused and frustrated here, could anyone figure it out? |
st184951 | Solved by 111324 in post #3
I think I misunderstand the “tied weight” concept.
I wrote the conv_transposed1d in doubly block circulant matrix form and I find that one don’t need to flip the temporal axis actually.
Suppose the conv1d’s matrix is [image] and the corresponding conv_transpose1d’s matrix is [image].
The square … |
st184952 | I think I misunderstand the “tied weight” concept.
I wrote the conv_transposed1d in doubly block circulant matrix form and I find that one don’t need to flip the temporal axis actually.
Suppose the conv1d's matrix is K and the corresponding conv_transpose1d's matrix is K^T.
The square matrix K^T K is apparently not always the identity matrix. So the result need not be identical to the input. |
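A small numerical sketch of that argument: building the convolution matrix K explicitly shows that conv_transpose1d with the same (unflipped) weight applies K^T, and that K^T K is not the identity, so conv_transpose1d(conv1d(x)) need not recover x.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
k = torch.randn(1, 1, 2)                    # Conv1d weight: out=1, in=1, kernel_size=2
x = torch.tensor([[[1., 2., 3., 4.]]])

# explicit convolution matrix K (3 x 4) for a length-4 input, stride 1, no padding
K = torch.zeros(3, 4)
for i in range(3):
    K[i, i:i + 2] = k.flatten()

y = F.conv1d(x, k)
print(torch.allclose(y.flatten(), K @ x.flatten()))       # True: conv1d applies K

z = F.conv_transpose1d(y, k)
print(torch.allclose(z.flatten(), K.t() @ y.flatten()))   # True: conv_transpose1d applies K^T (no flip)

print(K.t() @ K)   # not the identity, so z = K^T K x is generally != x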
st184953 | I have trained a model in pytorch with float data type. I want to improve my inference time by converting this model to quantized model. I have used torch.quantization.convert and torch.quantization.quantize_dynamic api to convert my model’s weight to uint8 data type. However, when I use this model for inference, I do not get any performance improvement. Am I doing something wrong here ?
The Unet Model code:
def gen_initialization(m):
if type(m) == nn.Conv2d:
sh = m.weight.shape
nn.init.normal_(m.weight, std=math.sqrt(2.0 / (sh[0]*sh[2]*sh[3])))
nn.init.constant_(m.bias, 0)
elif type(m) == nn.BatchNorm2d:
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
class TripleConv(nn.Module):
def __init__(self, in_ch, out_ch):
super(TripleConv, self).__init__()
mid_ch = (in_ch + out_ch) // 2
self.conv = nn.Sequential(
nn.Conv2d(in_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=mid_ch),
nn.LeakyReLU(negative_slope=0.1),
nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=mid_ch),
nn.LeakyReLU(negative_slope=0.1),
nn.Conv2d(mid_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=out_ch),
nn.LeakyReLU(negative_slope=0.1)
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Down(nn.Module):
def __init__(self, in_ch, out_ch):
super(Down, self).__init__()
self.triple_conv = TripleConv(in_ch, out_ch)
self.avg_pool_conv = nn.AvgPool2d(2, 2)
self.in_ch = in_ch
self.out_ch = out_ch
def forward(self, x):
self.cache = self.triple_conv(x)
pad = torch.zeros(x.shape[0], self.out_ch - self.in_ch, x.shape[2], x.shape[3], device=x.device)
x = torch.cat((x, pad), dim=1)
self.cache += x
return self.avg_pool_conv(self.cache)
class Center(nn.Module):
def __init__(self, in_ch, out_ch):
super(Center, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=out_ch),
nn.LeakyReLU(negative_slope=0.1, inplace=True)
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Up(nn.Module):
def __init__(self, in_ch, out_ch):
super(Up, self).__init__()
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear',
align_corners=True)
self.triple_conv = TripleConv(in_ch, out_ch)
def forward(self, x, cache):
x = self.upsample(x)
x = torch.cat((x, cache), dim=1)
x = self.triple_conv(x)
return x
class UNet(nn.Module):
def __init__(self, in_ch, first_ch=None):
super(UNet, self).__init__()
if not first_ch:
first_ch = 32
self.down1 = Down(in_ch, first_ch)
self.down2 = Down(first_ch, first_ch*2)
self.down3 = Down(first_ch*2, first_ch*4)
self.down4 = Down(first_ch*4, first_ch*8)
self.center = Center(first_ch*8, first_ch*8)
self.up4 = Up(first_ch*8*2, first_ch*4)
self.up3 = Up(first_ch*4*2, first_ch*2)
self.up2 = Up(first_ch*2*2, first_ch)
self.up1 = Up(first_ch*2, first_ch)
self.output = nn.Conv2d(first_ch, in_ch, kernel_size=3, stride=1,
padding=1, bias=True)
self.output.apply(gen_initialization)
def forward(self, x):
x = self.down1(x)
x = self.down2(x)
x = self.down3(x)
x = self.down4(x)
x = self.center(x)
x = self.up4(x, self.down4.cache)
x = self.up3(x, self.down3.cache)
x = self.up2(x, self.down2.cache)
x = self.up1(x, self.down1.cache)
return self.output(x)
The inference code:
from tqdm import tqdm
import os
import numpy as np
import torch
import gan_network
import torch.nn.parallel
from torch.utils.data import DataLoader
import torch.utils.data as data
import random
import glob
import scipy.io
import time
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
class DataFolder(data.Dataset):
def __init__(self, file):
super(DataFolder, self).__init__()
self.image_names = []
fid = file
for line in fid:
# line = line[:-1]
if line == '':
continue
# print(line)
self.image_names.append(line)
random.shuffle(self.image_names)
self.image_names = self.image_names[0:]
def __len__(self):
return len(self.image_names)
def __getitem__(self, index):
path = self.image_names[index]
img = np.load(path)
img = np.rollaxis(img, 2, 0)
img = torch.from_numpy(img[:, :, :])
return img, path
if __name__ == '__main__':
batch_size = 1
image_size = 2048
channels = 6
model_path = 'D:/WorkProjects/Network_Training_Aqusens/FullFovReconst/network/network_epoch9.pth'
test_data = glob.glob('D:/save/temp/*.npy')
dest_dir = 'D:/save/temp/results/'
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = gan_network.UNet(6, 32)
if torch.cuda.device_count() > 1:
net = torch.nn.DataParallel(net)
net.to(device)
net.load_state_dict(torch.load(model_path))
quantized_model = torch.quantization.quantize_dynamic(net, {torch.nn.Conv2d, torch.nn.BatchNorm2d}, inplace=False)
dataset = DataFolder(file=test_data)
print(f'{len(dataset)}')
data_loader = DataLoader(dataset=dataset, num_workers=4,
batch_size=batch_size, shuffle=False,
drop_last=False, pin_memory=True)
input = torch.Tensor(batch_size, channels, image_size, image_size).to(device)
t0 = time.time()
with torch.no_grad():
for i, batch in enumerate(tqdm(data_loader)):
input.copy_(batch[0])
output = net(input).cpu().clone().numpy()
np.array(output)
output = np.rollaxis(output, 1, 4)
for num in range(batch_size):
arr = output[num, :, :, :]
file_name = os.path.basename(batch[1][num])
save_name = os.path.join(dest_dir, file_name)
save_name = save_name.replace(".npy", "")
scipy.io.savemat(save_name+'.mat', {'output': arr})
t1 = time.time()
print(f'Elapsed time = {t1-t0}')
For both net and quantized_model, I get an elapsed time of around 30 seconds for 12 images passed through them. |
st184954 | I can no longer find the requisite doc link, but Pytorch dynamic quantization is currently (v1.5) only provided for Linear and LSTM layers. A model that doesn’t have a high proportion of those will not benefit from dynamic quantization.
(To confirm, print() a quantized model to see which layers have been replaced.) |
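A quick sketch of that check for the model above (only the module types that dynamic quantization actually supports are listed; everything else, including Conv2d and BatchNorm2d, is left as float):
import torch
import torch.nn as nn

quantized_model = torch.quantization.quantize_dynamic(net, {nn.Linear, nn.LSTM}, dtype=torch.qint8)
print(quantized_model)   # only nn.Linear / nn.LSTM children appear as dynamic quantized modules;
                         # the Conv2d and BatchNorm2d layers are unchanged, hence no speedup here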
st184955 | Hi @dnaik
In the quoted code I don't see you quantizing the model with the convert() API. If that is the case, quantize the model with the convert() API.
Refer to this doc to quantize the model with the prepare() & convert() APIs: Static quantization
quantize_dynamic() is not applicable to your model because as of now it only supports quantizing LSTM and Linear layers. |
st184956 | Hi,
The PyTorch quantized operations are great but they return the result after it has been quantized back to 8-bit integer. Is there a simple way of accessing the accumulated result at int32? Are there other libraries, such as Caffe or a different quantization backend, that have this API? |
st184957 | Re-quantization (conversion from int32 to int8) is performed in the quantized operator libraries - FBGEMM and QNNPACK.
Currently there is no easy way to access this value from pytorch API. |
st184958 | Hi all,
I have been experimenting with the post static quantization feature on VGG-16.
I know that torch.quantization.convert() will automatically remap every layer in the model to its quantized implementation.
However, i noticed that, a few types of layer is not converted, which is:
nn.MaxPool2d() , nn.AdaptiveAvgPool2d() and nn.Dropout()
I believe nn.Dropout() should not be an issue, whether its quantized or not.
However, i am not sure if nn.MaxPool2d() and nn.AdaptiveAvgPool2d() would do any difference if it is not quantized.
I have seen nn.quantized.MaxPool2d() being mentioned here and tried to remap my layer to this module. But it seems like it still refers to nn.modules.pooling.MaxPool2d() when I check the layer type after reassigning.
I have also seen nn.quantized.functional.MaxPool2d() and nn.quantized.functional.AdaptiveAvgPool2d() being mentioned in the Quantization documentation. But I have read on the forum that it is not conventional to call the functional directly; instead, its module or wrapper class should be called.
So, i would like to ask, is there any effect to my quantized model performance if i don’t change the nn.MaxPool2d() and nn.AdaptiveAvgPool2d() to their quantized version?
Should i just leave nn.MaxPool2d() and nn.AdaptiveAvgPool2d() as it is?
Or, if i should change to their quantized implementation, how should i do it?
Thanks. |
st184959 | You do not need to change MaxPool2d() and adaptiveAvgPool2d() from nn to nn.quantized. These operations do not require calibration and are automatically converted to quantized operations when convert is called. Under the hood, these modules call the appropriate function when quantized values are passed as input. |
st184960 | Trying to quantize the object detection model of EdgeNet2 but failing, using the pretrained model from here (MS COCO 300x300).
qconfig = torch.quantization.get_default_qconfig('qnnpack')
print(qconfig)
model.eval()
model.qconfig = qconfig
torch.quantization.prepare(model, inplace=True)
predictor = BoxPredictor(cfg=cfg, device=device)
#works fine if I do not run the main_images
main_images(predictor=predictor, model=model, object_names=object_names,
in_dir=args.im_dir, out_dir=args.save_dir, device=device)
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
torch.save(model.state_dict(), "espnet2v_300.pt")
print("save as quantize model")
def main_images(predictor, model, object_names, in_dir, out_dir, device='cuda'):
png_file_names = glob.glob(in_dir + os.sep + '*.png')
jpg_file_names = glob.glob(in_dir + os.sep + '*.jpg')
file_names = png_file_names + jpg_file_names
if len(file_names) == 0:
print_error_message('No image files in the folder')
# model in eval mode
model.eval()
with torch.no_grad():
for img_name in file_names:
image = cv2.imread(img_name)
predictor.predict(model, image, is_scaling=False)
If I do not run the main_images function, I can quantize the model; if I run the main_images function, I receive an error message:
The expanded size of the tensor (243076) must match the existing size (262144) at non-singleton dimension 0. Target sizes: [243076]. Tensor sizes: [262144]
The main_images function works fine if I do not quantize the model. Any idea how I should fix this? Or does anyone know a PyTorch object detection project suited for quantization? Thanks
PS: I tried with the tflite model, but the accuracy is not that good. |
st184961 | Based on the error message it seems you might be passing the wrong input shape.
Is this error specific to the quantization module, i.e. is it running fine without quantization?
Could you try to resize the image array to the expected shape? |
st184962 | ptrblck:
is it running fine without quantization?
Yes, totally fine.
ptrblck:
Could you try to resize the image array to the expected shape?
Already do that, it works before quantize |
st184963 | I am interested in using PyTorch for 1-bit neural network training. It seems to me that pytorch now only supports dtype=qint8. I am wondering if there is an good guide for PyTorch dtype system and how to expanding it.
Thanks. |
st184964 | cc @raghuramank100 has a diff out, but it’s not landed yet: https://github.com/pytorch/pytorch/pull/33743 129 |
st184965 | https://github.com/pytorch/pytorch/pull/33743 gives a nice touch on sub-8-bit quantization. However, I want to do 1-bit quantization, which quantizes the feature maps and weight matrices into {-1, 1}. This may require further changes in the qscheme. I am guessing that will require me to add some PyTorch intrinsics in ATen? Or is there a better way to accommodate that need?
In any case, I am looking forward to seeing 33743 land soon. |
st184966 | lijunzh:
https://github.com/pytorch/pytorch/pull/33743 give a nice touch on the sub-8-bit quantization. However, if I want to do some 1-bit quantization which quantizes the feature map and weight matrices into {-1, 1}. This may requires further changes in the qscheme . I am guessing that will require me add some PyTorch intrinsics in ATen? Or there is a better way to accomendate that need?
right, if it is {-1, 1} it is probably not affine quantization, what would you quantize 0 into?
I think you’ll probably need to extend qscheme and implement a new quantizer(https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/quantized/Quantizer.h.html 14) to support this. |
st184967 | if it is {-1, 1} it is not probably not affine quantization, what would you quantize 0 into?
I am trying to follow the XNOR-Net 6 paper and its variant which quantizes the filter weights W into B such that W approximates aB where a is a real valued scalar. Thus, by the equation (4) in that paper, Bi = +1 if Wi >= 0 and Bi = -1 if Wi < 0.
I am looking at the ATen code; it seems that if we want to add support for such a binary quantization scheme, we will have to recompile PyTorch locally for the new ATen library to take effect. Is there any way I can do it using the official PyTorch distribution without recompilation? |
st184968 | yeah, that is correct, you’ll need to implement a new quantization scheme, and probably adding a new quantize function https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L3809 12 etc. we don’t have a good way to simulate this in current codebase I think. since this level of support is targeted at production.
However, if you just want to simulate the accuracy of the model in training, you might get away with less complete support in core, for example, just add a new quantization scheme and add support for it in the fake quantize module: https://github.com/pytorch/pytorch/blob/master/torch/quantization/fake_quantize.py#L92 17.
But anyways you’ll probably need to add a new quantization scheme https://github.com/pytorch/pytorch/blob/master/c10/core/QScheme.h 11 |
st184969 | Thanks for the detailed explanation. This is very helpful.
I mainly target evaluating models at 1-bit precision with QAT, so I guess the second approach is good enough (for now). It seems that I will still need to touch the ATen/c10 code and recompile it locally. If so, is this something the PyTorch team would be interested in merging upstream in the future (assuming my code meets the quality requirements)?
On a related note, Nvidia’s new A100 architecture 11 will support binary (1-bit) precision.
Acceleration for all data types, including FP16, BF16, TF32, FP64, INT8, INT4, and Binary
This is not too far away from the production. Details can also be found in their white paper 12. It feel like an interesting direction for PyTorch community to explore and will be meaningful if we can support in the long-run. |
st184970 | yeah low precision quantization is definitely something we want to pursue, but it may not need extra quantization schemes to support them, although we’ll need new data type support. for example, we can have per tensor affine quantization(existing quantization scheme) with Int4(new data type).
In the case of 1-bit precision to {1, -1}, we also need a new quantization scheme since it is not affine quantization. if the integer values are consecutive, e.g. {-1, 0, 1}, {0, 1}, I think we should be able to represent it with per tensor affine quantization and a new INT1/INT2 data type. |
st184971 | I agree with the comment of sub-8-bit quantization. We should be able to support 2-7 bit using the existing infrastructure with some new data types INT2-7.
In the case of 1-bit (binary), you can represent {-1, 1} in {0, 1} by assigning -1 to 0. In fact, that's what will be implemented in hardware. However, that means you will replace multiplication by XNOR, and this results in a separate set of operators/functionals/modules needing to be overloaded for binary networks. From a math point of view, I would like to see BNN implemented in this way (which exactly matches the hardware). However, it is a lot of work and hard to maintain (separately from all other NN modules). Frankly, an engineer would argue that there is no significant benefit in doing so. I feel like a new data type BINT1 for {-1, 1} (to be different from INT1 for {0, 1}) is a better choice.
I will try to experiment with this idea and submit issue/PR in the coming months. |
st184972 | Hi,
I’ve trained a custom transformer model and followed this 15 to save a quantized model.
However when I try to load the model using
model.load_state_dict(torch.load('path'))
I receive the following error:
Missing key(s) in state_dict: “xxxxx.weight”,
Unexpected key(s) in state_dict: “xxxx.scale, xxxx.zero_point, …”
It looks like the names of the original parameters of the model have been changed. Can anyone help with how I can resolve this error? |
st184973 | Solved by supriyar in post #2
While loading the model is the model now a quantized model? If you convert the model to quantized model and then load the quantized state_dict it should work. |
st184974 | While loading the model is the model now a quantized model? If you convert the model to quantized model and then load the quantized state_dict it should work. |
st184975 | I’m not sure that I understand, assuming class A inherits from nn.Module and corresponds to the architecture of my dnn.
model = A()
is essentially all I do. Do I need to do anything to quantize it? |
st184976 | Ok I think I get it now, I have to do something like this after
model = A()
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
) |
st184977 | Hello! I have a deep learning model which I want to transfer to C++ and run multi-threaded inference with. My use case requires each thread to have its own model replica, and each thread must execute the model on one core.
Here is python script
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
import tqdm
import argparse
import torch
import torch.nn.quantized
import torch.quantization
def make_fused_linear(in_features: int, out_features: int):
return torch.quantization.fuse_modules(
torch.nn.Sequential(
torch.nn.Linear(in_features=in_features, out_features=out_features),
torch.nn.ReLU(inplace=True)
),
modules_to_fuse=['0', '1']
)
class FeedforwardModel(torch.nn.Module):
def __init__(self, features):
super(FeedforwardModel, self).__init__()
self._net = torch.nn.Sequential(
make_fused_linear(features, 90),
make_fused_linear(90, 90),
make_fused_linear(90, 90),
make_fused_linear(90, 90),
make_fused_linear(90, 90),
make_fused_linear(90, 90),
)
self._final = torch.nn.Linear(90, 50)
self._quant = torch.quantization.QuantStub()
self._dequant = torch.quantization.DeQuantStub()
def forward(self, x: torch.Tensor):
x = self._quant(x)
x = self._final(self._net(x))
x = self._dequant(x)
return x
def timeit_model(model, *inputs):
for _ in tqdm.trange(10000000000000):
with torch.no_grad():
model(*inputs)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-q", action="store_true", help="Use quantized model")
parser.add_argument("-b", default=1, help="Batch size", type=int)
torch.set_num_interop_threads(1)
torch.set_num_threads(1)
args = parser.parse_args()
use_quantized = args.q
batch_size = args.b
in_features = 40 * 64 # new user model with 40 queries
inputs = torch.rand(batch_size, in_features)
with torch.no_grad():
if not use_quantized:
model = FeedforwardModel(in_features)
model.eval()
traced_script_module = torch.jit.trace(model, inputs)
traced_script_module.save("model.torch")
timeit_model(traced_script_module, inputs)
else:
model = FeedforwardModel(in_features)
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
model(inputs)
torch.quantization.convert(model, inplace=True)
traced_script_module = torch.jit.trace(model, inputs)
traced_script_module.save("quantized_model.torch")
timeit_model(traced_script_module, inputs)
And corresponding C++ code
#include <iostream>
#include <future>
#include <torch/all.h>
#include <torch/script.h> // One-stop header.
int main(int argc, const char* argv[]) {
// WARNING! this does not work for quantized model! For quantized, only setting
// MKL_NUM_THREADS=1 and OMP_NUM_THREADS=1 work! We are investigating this issue with torch guys
torch::set_num_threads(1);
at::set_num_interop_threads(1);
auto model_path = argv[1];
auto num_threads = std::stoi( argv[2] );
torch::Tensor inputs = torch::zeros({1, 40 * 64}, torch::kFloat32);
std::vector<std::future<void>> futures;
for (auto i = 0; i < num_threads; i++) {
futures.emplace_back(std::move(std::async(std::launch::async, [model_path, inputs]() {
torch::NoGradGuard torch_guard;
auto model = torch::jit::load(model_path);
model.eval();
auto thread_inputs = inputs.clone();
while (true) {
model.forward({thread_inputs});
}
})));
}
for (auto &f : futures) { f.get(); }
return 0;
}
The first issue is written in C++ comments: setting num threads programmatically does not work with quantized backend.
The second, more severe issue: when I launch the code in 40 threads on a 40-core machine, the floating-point model parallelizes perfectly (as it should), while the quantized model gets stuck on some mutex. This is easily seen both in htop (CPU cores spend time in kernel syscalls) and in strace: strace says that the quantized model calls futex very frequently, while the floating-point model makes no syscalls after all threads have started.
Can you help me get rid of this lock? Maybe I'm doing something wrong? |
st184978 | I think quantized backend uses fbgemm which uses MKL_NUM_THREADS=1 and OMP_NUM_THREADS=1 to control the threading. cc @raghuramank100 @jianyuhuang for the second issue. |
st184979 | Are you using a recent version of PyTorch (and compiled with C++14?). We made a recent improvement in the quantized backend library 3 (fbgemm) to make use of read-write lock from C++14 and these are not supported on MacOS due to issues. Are you running it on MacOS?
github.com
pytorch/FBGEMM/blob/master/src/CodeCache.h#L12-L16 1
#if __cplusplus >= 201402L && !defined(__APPLE__)
// For C++14, use shared_timed_mutex.
// some macOS C++14 compilers don't support shared_timed_mutex.
#define FBGEMM_USE_SHARED_TIMED_MUTEX
#endif |
st184980 | Thank you!
I am running the code on a Linux server with a 46-core processor. Unfortunately, getting the latest libtorch did not help.
I will try to profile the code and figure out which part causes the issue. |
st184981 | I use ResNet-18 to test post-training static quantization, and in step 2 I need to fuse modules like conv,bn,relu or conv,bn and so on. The code is written as follows:
def fuse_model(self):
    modules_names = [m for m in self.named_modules()]
    modules_list = [m for m in self.modules()]
    for ind, m in enumerate(modules_list):
        if type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.BatchNorm2d and type(modules_list[ind+2]) == nn.ReLU:
            print("Find ConvBNReLu: ", modules_names[ind][0], '-->', modules_names[ind+1][0], '-->', modules_names[ind+2][0])
            torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0], modules_names[ind+2][0]], inplace=True)
        elif type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.BatchNorm2d:
            print("Find ConvBN: ", modules_names[ind][0], '-->', modules_names[ind+1][0])
            torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0]], inplace=True)
        elif type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.ReLU:
            print("Find ConvReLU: ", modules_names[ind][0], '-->', modules_names[ind+1][0])
            torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0]], inplace=True)
And I print the layer to fuse as follows:
Find ConvBN: conv1 --> bn1
Find ConvBN: layer1.0.conv1 --> layer1.0.bn1
Find ConvBNReLu: layer1.0.conv2 --> layer1.0.bn2 --> layer1.0.relu
Find ConvBN: layer1.1.conv1 --> layer1.1.bn1
Find ConvBNReLu: layer1.1.conv2 --> layer1.1.bn2 --> layer1.1.relu
Find ConvBN: layer2.0.conv1 --> layer2.0.bn1
Find ConvBNReLu: layer2.0.conv2 --> layer2.0.bn2 --> layer2.0.relu
Find ConvBN: layer2.0.shortcut.0 --> layer2.0.shortcut.1
Find ConvBN: layer2.1.conv1 --> layer2.1.bn1
Find ConvBNReLu: layer2.1.conv2 --> layer2.1.bn2 --> layer2.1.relu
Find ConvBN: layer3.0.conv1 --> layer3.0.bn1
Find ConvBNReLu: layer3.0.conv2 --> layer3.0.bn2 --> layer3.0.relu
Find ConvBN: layer3.0.shortcut.0 --> layer3.0.shortcut.1
Find ConvBN: layer3.1.conv1 --> layer3.1.bn1
Find ConvBNReLu: layer3.1.conv2 --> layer3.1.bn2 --> layer3.1.relu
Find ConvBN: layer4.0.conv1 --> layer4.0.bn1
Find ConvBNReLu: layer4.0.conv2 --> layer4.0.bn2 --> layer4.0.relu
Find ConvBN: layer4.0.shortcut.0 --> layer4.0.shortcut.1
Find ConvBN: layer4.1.conv1 --> layer4.1.bn1
Find ConvBNReLu: layer4.1.conv2 --> layer4.1.bn2 --> layer4.1.relu
And it seems everything is OK, but when I run eval with the pretrained model, the accuracy is 1% for CIFAR100, while the original accuracy is about 73%. So I started to debug, and I found that when I remove the ConvBnReLU fusion, the result is good.
def fuse_model(self):
    modules_names = [m for m in self.named_modules()]
    modules_list = [m for m in self.modules()]
    for ind, m in enumerate(modules_list):
        #if type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.BatchNorm2d and type(modules_list[ind+2]) == nn.ReLU:
        #    print("Find ConvBNReLu: ", modules_names[ind][0], '-->', modules_names[ind+1][0], '-->', modules_names[ind+2][0])
        #    torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0], modules_names[ind+2][0]], inplace=True)
        if type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.BatchNorm2d:
            print("Find ConvBN: ", modules_names[ind][0], '-->', modules_names[ind+1][0])
            torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0]], inplace=True)
        elif type(m) == nn.Conv2d and type(modules_list[ind+1]) == nn.ReLU:
            print("Find ConvReLU: ", modules_names[ind][0], '-->', modules_names[ind+1][0])
            torch.quantization.fuse_modules(self, [modules_names[ind][0], modules_names[ind+1][0]], inplace=True)
So I am wondering whether the ConvBnReLU fused module has a problem. I tested it on pytorch 1.5 and pytorch 1.4, and I have this problem on both.
So please help me, thanks. |
st184982 | dingyongchao:
Can you share more details? Are you calling fusion after the model is set to eval? Are you quantizing the model? |
st184983 | Hello Raghuraman,
I bumped into this question when I was searching for details about module fusion.
In your quantization tutorial, it was explicitly mentioned that module fusion helps make the model faster by saving on memory access while also improving numerical accuracy.
I am curious what exactly PyTorch does when fusing the modules. I can understand that it may save memory access, but why does the fusion improve the numerical accuracy?
Thanks a lot, |
st184984 | Hello,
I’ve just started to dive into quantization tools that were introduced in version 1.3.
For now I am trying to train a network with existing model-generation code. The code has certain subtleties; one of these is _forward_pre_hooks in several submodules (see code below).
Here is the problem: prepare_qat with the default config changes submodules into ones with fake quantization, and the hooks disappear. Is it possible to prevent hooks from disappearing during prepare_qat (and submodule.fuse() too)?
There is an intermediate code:
...
print('pre qat: ', model.backbone.bottom_up.blocks[2][3].conv_pwl)
print('pre qat: ', model.backbone.bottom_up.blocks[2][3].conv_pwl._forward_pre_hooks.values())
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
torch.quantization.prepare_qat(model, inplace=True)
print('post qat: ', model.backbone.bottom_up.blocks[2][3].conv_pwl)
print('post qat: ', model.backbone.bottom_up.blocks[2][3].conv_pwl._forward_pre_hooks.values())
...
And the output is:
pre qat: Conv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
pre qat: odict_values([functools.partial(<bound method FeatureHooks._collect_output_hook of <timm.models.feature_hooks.FeatureHooks object at 0x7fd3b5bcf5d0>>, 'blocks.2.3.conv_pwl')])
post qat: Conv2d(
120, 40, kernel_size=(1, 1), stride=(1, 1), bias=False
(activation_post_process): FakeQuantize(
fake_quant_enabled=True, observer_enabled=True, scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAverageMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
(weight_fake_quant): FakeQuantize(
fake_quant_enabled=True, observer_enabled=True, scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAverageMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
)
post qat: odict_values([]) |
st184985 | As far as I can see, to prevent hooks from disappearing, the relevant code would need to go somewhere here (during prepare_qat):
github.com
pytorch/pytorch/blob/master/torch/quantization/quantize.py#L335 1
mod: input module
mapping: a dictionary that maps from nn module to nnq module
Return:
The corresponding quantized module of `mod`
"""
new_mod = mod
# Always replace dequantstub with dequantize
if hasattr(mod, 'qconfig') and mod.qconfig is not None or type(mod) == DeQuantStub:
if type(mod) in mapping:
new_mod = mapping[type(mod)].from_float(mod)
return new_mod
def get_observer_dict(mod, target_dict, prefix=""):
r"""Traverse the modules and save all observers into dict.
This is mainly used for quantization accuracy debug
Args:
mod: the top module we want to save all observers
prefix: the prefix for the current module
target_dict: the dictionary used to save all the observers
"""
But during fusion it is not that obvious because we map up to three modules to one and each of them could have hooks. I think it is possible to work around with torch.quantization.fuse_modules(...fuser_func=<func>...) |
st184986 | Yeah feel free to submit a PR for preserving the pre hooks in swapping.
For fusion I think we probably need to error out if you have a pre-hook on an intermediate module like BatchNorm, because when we fuse batchnorm into conv, batchnorm is gone. |
st184987 | I’ll try!
So Jerry could you explain please for what purpose that was done?
github.com
pytorch/pytorch/blob/master/torch/quantization/quantize.py#L99 6
if type(child) == nnq.FloatFunctional:
if hasattr(child, 'qconfig') and child.qconfig is not None:
child.activation_post_process = child.qconfig.activation()
else:
add_observer_(child)
# Insert observers only for leaf nodes, note that this observer is for
# the output of the module, for input QuantStub will observe them
if hasattr(module, 'qconfig') and module.qconfig is not None and \
len(module._modules) == 0 and not isinstance(module, torch.nn.Sequential):
# observer and hook will be gone after we swap the module
module.add_module('activation_post_process', module.qconfig.activation())
module.register_forward_hook(_observer_forward_hook)
def add_quant_dequant(module):
r"""Wrap the leaf child module in QuantWrapper if it has a valid qconfig
Note that this function will modify the children of module inplace and it
can return a new module which wraps the input module as well.
Args:
module: input module with qconfig attributes for all the leaf modules
After prepare() is called, convert() will remove all hooks.
I think there's a reason to create such a hook and then remove it; should we preserve all pre-forward / post-forward hooks except this one? |
st184988 | Yeah I think we should preserve it, but we need to consider each case carefully, since this is interfering with the quantization. That is, when do we run pre forward hook and post forward hook, do we run it before/after observation/quantization? |
st184989 | Hello, again @jerryzh168,
Have a look at this example of pre forward hook run. Here’s EfficientNet implementation that assumes it can be integrated as backbone to FPN (https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/efficientnet.py#L471 4)
So here we can see that during forward blocks are called sequentially than we collect input features from specific layers.
Assume that we have block1 who outputs some data, block2 who has pre forward hook and directly obtains data from first one and block3 who waits for the same data but obtains it using hook of second one.
In this particular example (the EfficientNet implementation), during preparation the pre-forward hook on block2 should be called after observation and therefore after quantization (because we collected statistics for that). If block2 happens to be the first in the row that works with quantized data, it is very likely that block3 works with it too. Anyway, we can place a dequant before block3.
As for post-forward hooks, we do not modify the input of the module, so we run them after observation and quantization.
Please leave your thoughts when you have time |
st184990 | zetyquickly:
Assume that we have block1 who outputs some data, block2 who has pre forward hook and directly obtains data from first one and block3 who waits for the same data but obtains it using hook of second one.
I think this requires that the hooks do some meaningful computation for both quantized and unquantized data. But as long as we define the semantics clearly it should be OK. So after quantization, the pre_forward hooks will work with quantized input from the previous layer, and the forward hooks will work with quantized output from the current layer, right? |
st184991 | zetyquickly:
Assume that we have block1 who outputs some data, block2 who has pre forward hook and directly obtains data from first one and block3 who waits for the same data but obtains it using hook of second one.
actually we use hooks to implement observe and fake quantize as well, so please make sure that works. |
st184992 | Yes, I think so.
pre_forward hooks work with quantized data from previous layer, forward hooks work with quantized output from current layer |
st184993 | You are right, we should check whether we are trying to preserve the observer or the right hook.
In my PR I have handled that case. Without it the provided test set fails; with it, everything works well.
I think I should extend the test set to cover the new functionality. Would you mind guiding me on that? |
st184994 | Also, anticipating possible questions:
I've introduced changes to fuse_modules.py.
I propose that we preserve pre and post hooks on fused modules where possible. It is hard to define how to preserve hooks for the second and third module in a sequence (because the input data changes and three modules collapse into one atomic module), but we can easily preserve the pre_forward hook of the base module.
What other cases can we handle? |
st184995 | Here it is
github.com/pytorch/pytorch
Quantization: preserving pre and post forward hooks 17
pytorch:master ← zetyquickly:zetyquickly/preserve-hooks
opened
Apr 24, 2020
zetyquickly
+9
-0 |
st184996 | Dear pytorch community,
Given an integer representation and the affine transform, how do I create a corresponding torch.tensor object? The python tensor constructor seems to assume no quantization.
Thanks a lot for your help,
moritz |
st184997 | Solved by jerryzh168 in post #3
we do have some non-public API to do this: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L3862 and https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L3868
but they we might change the API when we officially release … |
st184998 | For posterity, since I was looking here for the solution: if your main interest is an approximation of the original weights, one solution is to approximate them manually:
restweights = (quant_ints - q_zero_point)*q_scale
This ‘hacky’ approach works for me since I was going to call dequantize anyway. |
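A concrete sketch of that approximation using the public accessors on a quantized tensor (the values below are arbitrary):
import torch

x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)

ints = qx.int_repr()                             # stored integer representation
approx = (ints.float() - qx.q_zero_point()) * qx.q_scale()
print(torch.allclose(approx, qx.dequantize()))   # True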
st184999 | we do have some non-public API to do this: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L3862 9 and https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L3868 4
but we might change the API when we officially release quantization as a stable feature. |
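For reference, the non-public function behind those links can be called from Python as sketched below; since it is not part of the stable API, the name and signature may change between releases.
import torch

ints = torch.tensor([0, 50, 100, 255], dtype=torch.uint8)
# _make_per_tensor_quantized_tensor(int_tensor, scale, zero_point) -> quantized tensor
qx = torch._make_per_tensor_quantized_tensor(ints, 0.1, 10)
print(qx.dequantize())   # (ints - 10) * 0.1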