st184700 | We are not working on ONNX conversions; feel free to submit PRs to add the support. |
st184701 | Hello all,
I am trying to quantize nn.TransformerEncoder, but get errors during inference.
The problem is with nn.MultiheadAttention, which is basically a set of nn.Linear operations and should work OK after quantization.
Minimal example:
import torch
mlth = torch.nn.MultiheadAttention(512, 8)
possible_input = torch.rand((10, 10, 512))
quatized = torch.quantization.quantize_dynamic(mlth)
quatized(possible_input, possible_input, possible_input)
It fails with:
/opt/miniconda/lib/python3.7/site-packages/torch/nn/functional.py in multi_head_attention_forward(query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training, key_padding_mask, need_weights, attn_mask, use_separate_proj_weight, q_proj_weight, k_proj_weight, v_proj_weight, static_k, static_v)
3946 assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
3947 attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
-> 3948 attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
3949
3950 if need_weights:
/opt/miniconda/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1610 ret = torch.addmm(bias, input, weight.t())
1611 else:
-> 1612 output = input.matmul(weight.t())
1613 if bias is not None:
1614 output += bias
AttributeError: 'function' object has no attribute 't'
That's because `.weight` is no longer a parameter, but a method (for components of the quantized module).
You can check it like:
mlth.out_proj.weight
Parameter containing:
tensor([[-0.0280, 0.0016, 0.0163, ..., 0.0375, 0.0153, -0.0435],
[-0.0168, 0.0310, -0.0211, ..., -0.0258, 0.0043, -0.0094],
[ 0.0412, -0.0078, 0.0262, ..., 0.0328, 0.0439, 0.0066],
...,
[-0.0278, 0.0337, 0.0189, ..., -0.0402, 0.0193, -0.0163],
[ 0.0034, -0.0364, -0.0418, ..., -0.0248, -0.0375, -0.0236],
[-0.0312, 0.0236, 0.0404, ..., 0.0266, 0.0255, 0.0265]],
requires_grad=True)
while
quatized.out_proj.weight
<bound method Linear.weight of DynamicQuantizedLinear(in_features=512, out_features=512, qscheme=torch.per_tensor_affine)>
Can you please guide me on this? Is it expected behavior? Should I report it to the PyTorch GitHub issues?
It looks like quantization breaks every module that uses .weight inside.
Thanks in advance |
st184702 | Solved by Vasiliy_Kuznetsov in post #2
hi @skurzhanskyi, I am able to run your example without issues on the nightly. What version of PyTorch are you using? Can you check if using a more recent version / a nightly build fixes your issue? |
st184703 | hi @skurzhanskyi, I am able to run your example without issues on the nightly. What version of PyTorch are you using? Can you check if using a more recent version / a nightly build fixes your issue? |
st184704 | Hi @Vasiliy_Kuznetsov
Thanks for the reply. Indeed, there's no error in the nightly version. At the same time, nn.MultiheadAttention doesn't get compressed, even though it's just a set of Linear operations.
Is there any information regarding adding quantization support for this layer (or, for instance, nn.Embedding)? |
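For reference, a minimal sketch of how one could check which submodules quantize_dynamic actually converted (assuming the same dynamic quantization call as above); any submodule still listed with its float type was left unquantized:
import torch
import torch.nn as nn
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
model = nn.TransformerEncoder(encoder_layer, num_layers=2)
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
# print every submodule whose type lives in a torch.nn.quantized namespace;
# MultiheadAttention internals that remain plain nn.Linear / Parameter were not converted
for name, module in quantized.named_modules():
    if "quantized" in type(module).__module__:
        print(name, type(module).__name__)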
st184705 | yes, currently nn.MultiheadAttention is not supported yet in eager mode quantization. There are folks working on adding support for both this and embeddings quantization. |
st184706 | Good to hear. Is there any public information on when it will be released (at least approximately)? |
st184707 | hi @skurzhanskyi, one other thing you could try is https://pytorch.org/blog/pytorch-1.6-released/#graph-mode-quantization, which we just released today in v1.6. It might be easier to make MultiheadAttention work in graph mode.
As far as first class quantization for nn.MultiheadAttention and nn.EmbeddingBag / nn.Embedding - we don’t have a specific timeline we can share, but it should be on the order of months (not weeks or years) - we have folks actively working on this. |
st184708 | I have a quantized version of a resnet18 network. When doing inference, I fed a single image from a sequentially sampled data loader.
I've tried to print the image tensor and the input to the first convolution layer, and got some mismatches.
To print the input image fed from the data loader, I used:
img = image.detach().numpy()
img = np.transpose(img,(2,3,1,0))
To get the input to the first conv layer:
layer_input={}
def get_input(name):
def hook(model, input, output):
layer_input[name] = input
return hook
model.conv1.register_forward_hook(get_input('conv1'))
qdm = torch.nn.quantized.DeQuantize()
deqout = qdm( val )
deqout = deqout.numpy()
deqout = np.transpose( deqout, (2, 3, 1, 0) )
Image data:
tensor([[[[-0.5082, -0.3883, -0.4226, …, 0.9303, 0.3823, 0.6392],
[-0.6281, -0.6965, -0.4397, …, 0.8104, 0.5878, 0.2111],
[-0.5767, -0.1486, 0.0741, …, 0.7419, 0.8961, 0.2282],
Input to conv layer:
-0.52449334,-0.5619572,-0.7492762,-0.3746381,-0.41210192,-0.5619572,-0.41210192,-0.03746381,0.07492762,0.0,-0.26224667,-0.59942096,-0.18731906,-0.41210192,-0.7118124 ,-0.7118124 |
st184709 | These should be close to each other. Just to confirm, your “img” input is after the normalize transformation mentioned here: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]), |
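As a sanity check, the two tensors can also be compared programmatically rather than by eye; a small sketch, assuming the image tensor and the layer_input hook dict from the question above:
import numpy as np
# the forward hook stores a tuple of inputs; take the first element
qinp = layer_input['conv1'][0]
deq = qinp.dequantize() if qinp.is_quantized else qinp
# dequantized values are rounded to the quantization grid, so allow a
# tolerance on the order of the quantization scale instead of exact equality
atol = qinp.q_scale() if qinp.is_quantized else 1e-6
print(np.allclose(image.detach().numpy(), deq.numpy(), atol=atol))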
st184710 | After I used torch.quantization.quantize_dynamic() to quantize the original model, I saved and loaded the quantized model. But when I ran inference, it returned this error. The original model still ran inference fine; I don't know why.
Traceback (most recent call last):
File "inference.py", line 81, in <module>
output = infer(args.text, model)
File "inference.py", line 30, in infer
mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)
File "/media/tma/DATA/Khai-folder/Tacotron2-PyTorch/model/model.py", line 542, in inference
encoder_outputs = self.encoder.inference(embedded_inputs)
File "/media/tma/DATA/Khai-folder/Tacotron2-PyTorch/model/model.py", line 219, in inference
self.lstm.flatten_parameters()
File "/media/tma/DATA/miniconda3/envs/ttsv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 576, in __get
attr__
type(self).__name__, name))
AttributeError: 'LSTM' object has no attribute 'flatten_parameters' |
st184711 | Solved by khaidoan25 in post #5
Actually, when you load your quantized model, you need to quantize your initial model first.
quantized_model = torch.quantization.quantize_dynamic(
model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)  # Do something like this first, before loading your quantized model |
st184712 | Which PyTorch version are you using? I think this is a known problem and should go away with the 1.5 release.
Could you wait for that or re-try with the nightly build and see if this issue goes away? |
st184713 | @khaidoan25, I am facing the same issue with Pytorch version - 1.5.1 . Have you been able to solve it? |
st184714 | Actually, when you load your quantized model, you need to quantize your initial model first.
quantized_model = torch.quantization.quantize_dynamic(
model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)  # Do something like this first, before loading your quantized model |
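In other words, the module structure has to be converted before load_state_dict so that it matches the saved quantized weights; a minimal sketch, assuming the checkpoint was saved with torch.save(quantized_model.state_dict(), 'quantized_model.pth') and that build_model() rebuilds the original float architecture (both names are placeholders):
import torch
import torch.nn as nn
model = build_model()  # float model with the original architecture (placeholder)
# re-apply the same dynamic quantization config that was used before saving
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
# only now does the module layout match the quantized state_dict
quantized_model.load_state_dict(torch.load('quantized_model.pth'))
quantized_model.eval()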
st184715 | Hi,
I have a bottleneck in the data loading during training. I ran cProfile and these are the results:
1 0.012 0.012 1820.534 1820.534 models.py:15(fit)
56 0.001 0.000 1808.163 32.289 dataloader.py:775(__next__)
52 0.001 0.000 1807.264 34.755 dataloader.py:742(_get_data)
392 0.016 0.000 1807.263 4.610 dataloader.py:711(_try_get_data)
392 0.006 0.000 1807.178 4.610 queues.py:91(get)
392 0.002 0.000 1806.842 4.609 connection.py:253(poll)
392 0.002 0.000 1806.840 4.609 connection.py:413(_poll)
392 0.009 0.000 1806.837 4.609 connection.py:906(wait)
392 0.004 0.000 1806.810 4.609 selectors.py:402(select)
392 1806.805 4.609 1806.805 4.609 {method 'poll' of 'select.poll' objects}
4 0.000 0.000 6.452 1.613 dataloader.py:274(__iter__)
4 0.016 0.004 6.452 1.613 dataloader.py:635(__init__)
128 0.007 0.000 5.553 0.043 process.py:101(start)
128 0.001 0.000 5.531 0.043 context.py:221(_Popen)
128 0.003 0.000 5.530 0.043 context.py:274(_Popen)
I am using 32 workers (I have 40 CPUs available).
Do you know what is causing the data loading to be slow? Do you know what the files queues.py and connection.py are? Functions there seem to be taking up a great part of the time.
Cheers |
st184716 | Solved by ptrblck in post #2
Have a look at this post, which explains some potential bottlenecks and workarounds. |
st184717 | Have a look at this post, which explains some potential bottlenecks and workarounds. |
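For context with this kind of poll-dominated profile, the usual knobs to try are the worker count, pinned memory, and keeping workers alive between epochs; a sketch, assuming dataset is the Dataset from the question (persistent_workers and prefetch_factor require PyTorch 1.7+, and the best num_workers is workload dependent):
from torch.utils.data import DataLoader
loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=8,            # fewer workers can be faster if loading is I/O bound
    pin_memory=True,          # speeds up host-to-GPU copies
    persistent_workers=True,  # avoid re-forking workers every epoch (1.7+)
    prefetch_factor=4,        # batches prefetched per worker (1.7+)
)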
st184718 | I’m tinkering with post-training static quantization in PyTorch by trying out different activation functions on the same model, then I try to quantize it and run inference ( I want to see what are the activations that are supported). For example, I replaced ReLU with leakyReLU on ResNet50 then applied quantization. The inference ran just fine ( it was a bit slower with a 3% accuracy drop but this does not matter as I’m only experimenting). After that, I tried the Mish activation function, the conversion was successful, however, I got the following error during inference:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
7 helper.print_size_of_model(resnet)
8
----> 9 top1, top5, time_elapsed= helper.evaluate(resnet, criterion, testloader, neval_batches=num_eval_batches)
10 print('Evaluation accuracy on %d images, top5: %2.2f, top1: %2.2f'%(num_eval_batches * eval_batch_size, top5.avg,top1.avg))
11 print('time slapsed: %s' % str(datetime.timedelta(seconds=time_elapsed)))
d:\github\PyTorch_CIFAR10\helper.py in evaluate(model, criterion, data_loader, neval_batches, device)
30 for image, target in data_loader:
31 image.to(device)
---> 32 output = model(image)
33 loss = criterion(output, target)
34 cnt += 1
~\anaconda3\envs\PFE_env\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
d:\github\PyTorch_CIFAR10\cifar10_models\resnetQ.py in forward(self, x)
227 x = self.conv1(x)
228 x = self.bn1(x)
--> 229 x = self.relu(x)
230 x = self.maxpool(x)
231 x = self.layer1(x)
~\anaconda3\envs\PFE_env\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
d:\github\PyTorch_CIFAR10\cifar10_models\resnetQ.py in forward(self, x)
24
25 def forward(self, x):
---> 26 x = x * (torch.tanh(torch.nn.functional.softplus(x)))
27 return x
28
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CPUTensorId, CUDATensorId, MkldnnCPUTensorId, SparseCPUTensorId, SparseCUDATensorId, BackendSelect, VariableTensorId].
The Mish layer is defined by:
class Mish(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = x * (torch.tanh(torch.nn.functional.softplus(x)))
return x
Any help in this matter would be greatly appreciated because ultimately, I want to apply quantization on YOLOv4 which relies on Mish as an activation function. |
st184719 | Solved by Vasiliy_Kuznetsov in post #2
Softplus currently does not have a quantized implementation. This error also usually means that a quantized tensor is being passed to a non-quantized function.
For a quick fix, you could add a torch.quantization.DeQuantStub() and torch.quantization.QuantStub() around the areas of the network which… |
st184720 | Softplus currently does not have a quantized implementation. This error also usually means that a quantized tensor is being passed to a non-quantized function.
For a quick fix, you could add a torch.quantization.DeQuantStub() and torch.quantization.QuantStub() around the areas of the network which cannot be quantized such as Softplus.
The longer term fix would be to add quantization support for Softplus to PyTorch. |
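Something along these lines, assuming the stubs are placed inside the Mish module itself so the rest of the network stays quantized (a sketch, not a drop-in YOLOv4 change):
import torch
class Mish(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # dequantize before the unsupported ops, re-quantize afterwards
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()
    def forward(self, x):
        x = self.dequant(x)                                   # back to float
        x = x * torch.tanh(torch.nn.functional.softplus(x))   # float Mish
        return self.quant(x)                                  # back to quantized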
st184721 | The quick fix seems reasonable enough. Hopefully it won’t have a significant impact on inference time. I guess I’ll just have to try it out and see. Thank you for your help. |
st184722 | Hi, following the static quantization tutorial, I am trying to extract the parameters of a quantized and jitted model. It seems that after jitting, parameters are packed in a way that I don't understand. For example, if I run the snippet below after the tutorial script, I get the output below.
import numpy as np
import torch
input_size = (1, 3, 224, 224)
inp = np.random.randn(*input_size).astype("float32")
trace = torch.jit.trace(per_channel_quantized_model, torch.from_numpy(inp))
state_dict = trace.state_dict()
for (k, v) in state_dict.items():
print(k, v.size())
features.0.0._packed_params torch.Size([128])
features.1.conv.0.0._packed_params torch.Size([128])
features.1.conv.1._packed_params torch.Size([128])
features.2.conv.0.0._packed_params torch.Size([128])
features.2.conv.1.0._packed_params torch.Size([128])
features.2.conv.2._packed_params torch.Size([128])
features.3.conv.0.0._packed_params torch.Size([128])
features.3.conv.1.0._packed_params torch.Size([128])
features.3.conv.2._packed_params torch.Size([128])
features.4.conv.0.0._packed_params torch.Size([128])
features.4.conv.1.0._packed_params torch.Size([128])
features.4.conv.2._packed_params torch.Size([128])
features.5.conv.0.0._packed_params torch.Size([128])
features.5.conv.1.0._packed_params torch.Size([128])
features.5.conv.2._packed_params torch.Size([128])
features.6.conv.0.0._packed_params torch.Size([128])
features.6.conv.1.0._packed_params torch.Size([128])
features.6.conv.2._packed_params torch.Size([128])
features.7.conv.0.0._packed_params torch.Size([128])
features.7.conv.1.0._packed_params torch.Size([128])
features.7.conv.2._packed_params torch.Size([128])
features.8.conv.0.0._packed_params torch.Size([128])
features.8.conv.1.0._packed_params torch.Size([128])
features.8.conv.2._packed_params torch.Size([128])
features.9.conv.0.0._packed_params torch.Size([128])
features.9.conv.1.0._packed_params torch.Size([128])
features.9.conv.2._packed_params torch.Size([128])
features.10.conv.0.0._packed_params torch.Size([128])
features.10.conv.1.0._packed_params torch.Size([128])
features.10.conv.2._packed_params torch.Size([128])
features.11.conv.0.0._packed_params torch.Size([128])
features.11.conv.1.0._packed_params torch.Size([128])
features.11.conv.2._packed_params torch.Size([128])
features.12.conv.0.0._packed_params torch.Size([128])
features.12.conv.1.0._packed_params torch.Size([128])
features.12.conv.2._packed_params torch.Size([128])
features.13.conv.0.0._packed_params torch.Size([128])
features.13.conv.1.0._packed_params torch.Size([128])
features.13.conv.2._packed_params torch.Size([128])
features.14.conv.0.0._packed_params torch.Size([128])
features.14.conv.1.0._packed_params torch.Size([128])
features.14.conv.2._packed_params torch.Size([128])
features.15.conv.0.0._packed_params torch.Size([128])
features.15.conv.1.0._packed_params torch.Size([128])
features.15.conv.2._packed_params torch.Size([128])
features.16.conv.0.0._packed_params torch.Size([128])
features.16.conv.1.0._packed_params torch.Size([128])
features.16.conv.2._packed_params torch.Size([128])
features.17.conv.0.0._packed_params torch.Size([128])
features.17.conv.1.0._packed_params torch.Size([128])
features.17.conv.2._packed_params torch.Size([128])
features.18.0._packed_params torch.Size([128])
quant.scale torch.Size([1])
quant.zero_point torch.Size([1])
classifier.1._packed_params._packed_params torch.Size([104])
I have no idea what is going on in this format and I have many questions. But for now let me ask you these:
Is there documentation of the packing format?
How can I extract the original floating point tensors along with scale and zero point? I confirmed that they are available before tracing.
Or even better, is there a way to prevent packing?
During tracing, where in the code base does this packing happen?
I’m trying to translate jitted, quantized PyTorch model to TVM IR. For that I need floating point tensors with scale and zero point. That is the reason I’m asking here.
cc @raghuramank100 @jerryzh168 |
st184723 | Solved by masahi in post #2
ok torch.ops.quantized.conv2d_unpack did the job. |
st184724 | Hello, I ran into the same problem. Could you show me the details of torch.ops.quantized.conv2d_unpack? And how should I deal with classifier.1._packed_params?
Thanks! |
st184725 | See the implementation in TVM I added:
github.com
apache/incubator-tvm/blob/06e9542ee0bfd014bd06a4dd4fdb3af9d2d29eb0/python/tvm/relay/frontend/qnn_torch.py#L50-L100
def _unpack_quant_params(param_name, packed_params, unpack_func):
# Torch stores quantized params in a custom packed format,
# need to unpack and retrieve them as numpy arrays
qweight, bias = unpack_func(packed_params)
weight_np = qweight.dequantize().numpy()
import torch
if qweight.qscheme() == torch.per_tensor_affine:
param = QNNParam(weight_np, bias, qweight.q_scale(),
int(qweight.q_zero_point()), param_name)
else:
scales = qweight.q_per_channel_scales().numpy()
zero_points = qweight.q_per_channel_zero_points().numpy()
# This is an assumption posed by QNN
msg = "The values of zero points should be all zero for per channel"
assert np.all(zero_points == 0), msg
param = QNNParam(weight_np, bias, scales, 0, param_name)
return param
From the name classifier.1._packed_params I guess it comes from nn.Linear. In that case, you need to use torch.ops.quantized.linear_unpack. |
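A minimal sketch of that for the Linear case, assuming trace is the traced quantized model from the listing above (the conv case works the same way with torch.ops.quantized.conv2d_unpack):
import torch
packed = trace.state_dict()['classifier.1._packed_params._packed_params']
# unpack into the quantized weight tensor and the float bias
qweight, bias = torch.ops.quantized.linear_unpack(packed)
weight_fp32 = qweight.dequantize().numpy()
if qweight.qscheme() == torch.per_tensor_affine:
    scale, zero_point = qweight.q_scale(), qweight.q_zero_point()
else:  # per-channel quantized weights
    scale, zero_point = qweight.q_per_channel_scales(), qweight.q_per_channel_zero_points()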
st184726 | Hi, I am working with a quantized model in C++. I wonder if I can parse the jitted model parameters like this in C++? I could not find any unpacking functions on torch::jit::script::Module. I have trained and quantized my model in Python and loaded it in C++. I am using version 1.6.0+. |
st184727 | Hello everyone.
This is a follow-up question concerning this one.
The issue is that everything goes just fine, except that at some point in time this weird error occurs when running this specific block:
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction),
nn.PReLU(),
# nn.ReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid()
)
self.fc1 = self.fc[0]
self.prelu = self.fc[1]
self.fc2 = self.fc[2]
self.sigmoid = self.fc[3]
self.prelu_q = PReLU_Quantized(self.prelu)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
# y = self.fc(y).view(b, c, 1, 1)
y = self.fc1(y)
y = self.prelu_q(y)
y = self.fc2(y)
y = self.sigmoid(y).view(b, c, 1, 1)
# out = x*y
out = self.mult_xy.mul(x, y)
return out
It runs fine several times, but at some point it fails with the following error message:
Traceback (most recent call last):
File "d:\Codes\org\python\Quantization\quantizer.py", line 248, in <module>
quantize_test()
File "d:\Codes\org\python\Quantization\quantizer.py", line 230, in quantize_test
evaluate(model, dtloader, neval_batches=num_calibration_batches)
File "d:\Codes\org\python\Quantization\quantizer.py", line 145, in evaluate
features = model(image.unsqueeze(0))
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 473, in forward
x = self.layer3(x)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\container.py", line 100, in forward
input = module(input)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 387, in forward
out = self.se(out)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 345, in forward
y = self.prelu_q(y)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 221, in forward
inputs = self.quantized_op.add(tmax, weight_min_res).unsqueeze(0)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\quantized\modules\functional_modules.py", line 43, in add
r = self.activation_post_process(r)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 833, in forward
self.bins)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 789, in _combine_histograms
histogram_with_output_range = torch.zeros((Nbins * downsample_rate), device=orig_hist.device)
RuntimeError: Trying to create tensor with negative dimension -4398046511104: [-4398046511104]
What am I missing here?
Any help is greatly appreciated. |
st184728 | Hi,
Can you check what the inputs to the add operation at "d:\codes\org\python\FV\quantized_models.py", line 221 are?
It looks like it is not handling these properly.
If you could give us a set of inputs that reproduces this issue so that we can reproduce on our side, that would be very helpful! |
st184729 | Hi,
This is the latest error I get after updating the PReLU_Quantized (a link to the implementation is here, by the way):
The inputs are included in the log below (as X). The error only happens in the SEBlock module, whose definition is also given below; all other modules that use the PReLU_Quantized module run fine, except SEBlock:
Size (MB): 89.297826
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
Post Training Quantization Prepare: Inserting Observers
Inverted Residual Block:After observer insertion
Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
<inside se forward:>
X: tensor([[-1.5691, -0.7516, -0.7360, -0.6458]])
--------------------------
<inside se forward:>
X: tensor([[ 3.6605e-01, 3.3855e+00, -5.0032e-19, -9.0280e-19]])
Traceback (most recent call last):
File "d:\Codes\org\python\Quantization\quantizer.py", line 266, in <module>
quantize_test()
File "d:\Codes\org\python\Quantization\quantizer.py", line 248, in quantize_test
evaluate(model, dtloader, neval_batches=num_calibration_batches)
File "d:\Codes\org\python\Quantization\quantizer.py", line 152, in evaluate
features = model(image.unsqueeze(0))
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 576, in forward
x = self.layer1(x)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\container.py", line 100, in forward
input = module(input)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 489, in forward
out = self.se(out)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 447, in forward
y = self.prelu_q(y)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 322, in forward
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\quantized\modules\functional_modules.py", line 43, in add
r = self.activation_post_process(r)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 833, in forward
self.bins)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 789, in _combine_histograms
histogram_with_output_range = torch.zeros((Nbins * downsample_rate), device=orig_hist.device)
RuntimeError: Trying to create tensor with negative dimension -4398046511104: [-4398046511104]
and this is how the SE block looks like :
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction),
nn.PReLU(),
# nn.ReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid()
)
self.fc1 = self.fc[0]
self.prelu = self.fc[1]
self.fc2 = self.fc[2]
self.sigmoid = self.fc[3]
self.prelu_q = PReLU_Quantized(self.prelu)
def forward(self, x):
print(f'<inside se forward:>')
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
# y = self.fc(y).view(b, c, 1, 1)
y = self.fc1(y)
print(f'X: {y}')
y = self.prelu_q(y)
y = self.fc2(y)
y = self.sigmoid(y).view(b, c, 1, 1)
print('--------------------------')
# out = x*y
out = self.mult_xy.mul(x, y)
return out
and this is the output when I use PReLU instead of PReLU_Quantized in the SE block only (all other instances of PReLU are replaced with PReLU_Quantized in the other modules of ResNet):
Summary
Size (MB): 89.29209
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
Post Training Quantization Prepare: Inserting Observers
Inverted Residual Block:After observer insertion
Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
<inside se forward:>
X: tensor([[-1.5691, -0.7516, -0.7360, -0.6458]])
--------------------------
<inside se forward:>
X: tensor([[ 3.6605e-01, 3.3855e+00, -5.0032e-19, -9.0280e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0513, -0.0656, -0.4529, 0.0653, -0.4762, -0.6304, -1.5043, -0.9484]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8730, 1.6650, -0.5135, -0.6811, -0.0392, -0.4689, -0.1496, 0.0717]])
--------------------------
<inside se forward:>
X: tensor([[-1.8759, -0.8886, -1.3295, -0.5375, 0.7598, -0.8526, -1.9066, 0.0985,
-0.1461, -0.5857, 0.1513, -0.3050, 0.1955, -0.8470, 0.4528, 0.9689]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6184e+00, -2.2714e-18, 2.8052e+00, 1.0378e+01, 4.6361e-05,
1.0644e+01, 1.4302e-02, 2.6143e-02, 2.4926e-05, 6.2237e+00,
8.8411e-05, 6.4360e+00, 3.3530e+00, 3.9302e-05, 8.1652e+00,
8.7950e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1687e+00, 3.1469e+00, -1.1788e+01, 4.9410e-02, 1.7272e+00,
-3.0913e+00, 1.1572e+00, -6.7104e+00, 1.1371e+01, 4.8926e+00,
-1.3102e+00, -4.9774e+00, -4.1444e+00, -6.3367e-01, -1.5672e+00,
4.2629e+00, 3.2491e+00, -4.6632e+00, 5.9241e-01, -2.4883e+00,
5.2599e+00, -7.1710e+00, 4.7197e+00, 7.2724e+00, -2.3363e+00,
-2.2564e+00, 5.4431e+00, -2.2832e-12, 1.9732e+00, 1.1682e+00,
6.1555e+00, 6.3574e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2785e-01, 1.1057e+00, 3.1581e-07, 9.7595e-01, 9.7386e-03,
8.4260e-07, 2.4243e-01, 2.1749e+00, 4.5704e-01, 2.9307e+00,
3.2384e+00, 2.6099e+00, 1.7640e-01, 4.3206e-04, 9.9380e-18,
1.3450e-11, 1.5721e-09, 2.7632e-07, 3.6721e-04, 2.1237e-07,
1.8839e-10, 1.8423e-02, 1.8514e-13, 4.3584e+00, 1.0972e-01,
7.5909e-03, 4.3828e-02, 2.9285e-02, 8.3840e-07, -2.6420e-19,
3.6933e-01, 1.0561e+00]])
--------------------------
0-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5517, -0.8007, -0.7286, -0.6478]])
--------------------------
<inside se forward:>
X: tensor([[ 5.0945e-01, 3.2514e+00, -5.2950e-19, -9.1256e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0556, -0.1015, -0.4792, 0.0956, -0.4782, -0.6346, -1.4946, -0.9745]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8254, 1.6459, -0.4613, -0.6462, -0.0376, -0.4217, -0.0865, 0.0773]])
--------------------------
<inside se forward:>
X: tensor([[-1.8807, -0.8899, -1.3275, -0.5305, 0.7527, -0.8557, -1.9068, 0.1042,
-0.1444, -0.5798, 0.1493, -0.3055, 0.1952, -0.8383, 0.4532, 0.9664]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6193e+00, -2.2732e-18, 2.8069e+00, 1.0384e+01, 4.6389e-05,
1.0650e+01, 1.4310e-02, 2.6159e-02, 2.4941e-05, 6.2275e+00,
8.8464e-05, 6.4398e+00, 3.3551e+00, 3.9326e-05, 8.1701e+00,
8.8003e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1444e+00, 3.1584e+00, -1.1794e+01, 4.9510e-02, 1.7366e+00,
-3.0976e+00, 1.1594e+00, -6.7127e+00, 1.1380e+01, 4.9035e+00,
-1.3231e+00, -4.9740e+00, -4.1439e+00, -6.3774e-01, -1.5777e+00,
4.2655e+00, 3.2341e+00, -4.6753e+00, 6.1677e-01, -2.4898e+00,
5.2556e+00, -7.1508e+00, 4.7271e+00, 7.2643e+00, -2.3301e+00,
-2.2546e+00, 5.4412e+00, -2.2872e-12, 1.9668e+00, 1.1764e+00,
6.1590e+00, 6.3575e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2778e-01, 1.1051e+00, 3.1564e-07, 9.7544e-01, 9.7335e-03,
8.4216e-07, 2.4230e-01, 2.1737e+00, 4.5681e-01, 2.9292e+00,
3.2367e+00, 2.6086e+00, 1.7631e-01, 4.3183e-04, 9.9393e-18,
1.3443e-11, 1.5713e-09, 2.7617e-07, 3.6702e-04, 2.1226e-07,
1.8829e-10, 1.8414e-02, 1.8504e-13, 4.3561e+00, 1.0967e-01,
7.5869e-03, 4.3805e-02, 2.9270e-02, 8.3797e-07, -2.6259e-19,
3.6914e-01, 1.0555e+00]])
--------------------------
1-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.6008, -0.7627, -0.7418, -0.6562]])
--------------------------
<inside se forward:>
X: tensor([[ 4.6180e-01, 3.2969e+00, -5.1091e-19, -8.5673e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0860, -0.0888, -0.4410, 0.0515, -0.4853, -0.6203, -1.4854, -0.9521]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8713, 1.6702, -0.5249, -0.6848, -0.0393, -0.4817, -0.1603, 0.0686]])
--------------------------
<inside se forward:>
X: tensor([[-1.8888, -0.8991, -1.3308, -0.5351, 0.7626, -0.8547, -1.9075, 0.1075,
-0.1457, -0.5770, 0.1518, -0.3068, 0.2023, -0.8418, 0.4610, 0.9654]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6180e+00, -2.2720e-18, 2.8046e+00, 1.0376e+01, 4.6351e-05,
1.0642e+01, 1.4299e-02, 2.6138e-02, 2.4921e-05, 6.2225e+00,
8.8393e-05, 6.4347e+00, 3.3524e+00, 3.9294e-05, 8.1636e+00,
8.7932e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1925e+00, 3.1589e+00, -1.1792e+01, 4.9472e-02, 1.7246e+00,
-3.0884e+00, 1.1586e+00, -6.7112e+00, 1.1375e+01, 4.8954e+00,
-1.3047e+00, -4.9715e+00, -4.1392e+00, -6.4653e-01, -1.5772e+00,
4.2795e+00, 3.2537e+00, -4.6607e+00, 5.9939e-01, -2.4853e+00,
5.2615e+00, -7.1921e+00, 4.7311e+00, 7.2626e+00, -2.3221e+00,
-2.2574e+00, 5.4390e+00, -2.2799e-12, 1.9636e+00, 1.1820e+00,
6.1593e+00, 6.3554e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2775e-01, 1.1048e+00, 3.1557e-07, 9.7521e-01, 9.7312e-03,
8.4196e-07, 2.4225e-01, 2.1732e+00, 4.5670e-01, 2.9285e+00,
3.2360e+00, 2.6079e+00, 1.7627e-01, 4.3173e-04, 9.9377e-18,
1.3440e-11, 1.5710e-09, 2.7611e-07, 3.6693e-04, 2.1221e-07,
1.8825e-10, 1.8409e-02, 1.8500e-13, 4.3551e+00, 1.0964e-01,
7.5851e-03, 4.3795e-02, 2.9263e-02, 8.3777e-07, -2.6075e-19,
3.6905e-01, 1.0553e+00]])
--------------------------
2-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5790, -0.8100, -0.7292, -0.6440]])
--------------------------
<inside se forward:>
X: tensor([[ 5.0116e-01, 3.2659e+00, -5.2126e-19, -8.5920e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0427, -0.0929, -0.4953, 0.0674, -0.4784, -0.6115, -1.4972, -0.9645]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8374, 1.6551, -0.4788, -0.6555, -0.0380, -0.4393, -0.1045, 0.0742]])
--------------------------
<inside se forward:>
X: tensor([[-1.8727, -0.8932, -1.3280, -0.5371, 0.7591, -0.8533, -1.8998, 0.1003,
-0.1452, -0.5813, 0.1475, -0.3055, 0.2016, -0.8411, 0.4535, 0.9559]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6189e+00, -2.2717e-18, 2.8060e+00, 1.0381e+01, 4.6375e-05,
1.0647e+01, 1.4306e-02, 2.6151e-02, 2.4933e-05, 6.2256e+00,
8.8438e-05, 6.4379e+00, 3.3541e+00, 3.9314e-05, 8.1676e+00,
8.7976e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1427e+00, 3.1480e+00, -1.1763e+01, 4.9449e-02, 1.7342e+00,
-3.0890e+00, 1.1581e+00, -6.7127e+00, 1.1348e+01, 4.8951e+00,
-1.3154e+00, -4.9691e+00, -4.1414e+00, -6.4151e-01, -1.5783e+00,
4.2688e+00, 3.2439e+00, -4.6649e+00, 6.0231e-01, -2.4855e+00,
5.2647e+00, -7.1494e+00, 4.7290e+00, 7.2520e+00, -2.3288e+00,
-2.2466e+00, 5.4410e+00, -2.2847e-12, 1.9777e+00, 1.1817e+00,
6.1588e+00, 6.3552e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2778e-01, 1.1050e+00, 3.1563e-07, 9.7541e-01, 9.7331e-03,
8.4213e-07, 2.4229e-01, 2.1736e+00, 4.5679e-01, 2.9291e+00,
3.2366e+00, 2.6084e+00, 1.7631e-01, 4.3181e-04, 9.9368e-18,
1.3443e-11, 1.5713e-09, 2.7616e-07, 3.6700e-04, 2.1225e-07,
1.8828e-10, 1.8413e-02, 1.8503e-13, 4.3559e+00, 1.0966e-01,
7.5866e-03, 4.3804e-02, 2.9269e-02, 8.3793e-07, -2.6231e-19,
3.6912e-01, 1.0555e+00]])
--------------------------
3-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.6226, -0.7605, -0.6854, -0.5836]])
--------------------------
<inside se forward:>
X: tensor([[ 2.6039e-01, 3.4835e+00, -4.8167e-19, -8.5980e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0699, -0.0526, -0.4319, -0.0069, -0.4890, -0.6087, -1.4835, -0.9184]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8828, 1.6724, -0.5539, -0.7054, -0.0402, -0.5061, -0.2002, 0.0661]])
--------------------------
<inside se forward:>
X: tensor([[-1.8790, -0.8969, -1.3365, -0.5384, 0.7664, -0.8571, -1.9043, 0.1059,
-0.1459, -0.5847, 0.1542, -0.3094, 0.2076, -0.8439, 0.4567, 0.9642]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6174e+00, -2.2710e-18, 2.8035e+00, 1.0371e+01, 4.6333e-05,
1.0638e+01, 1.4293e-02, 2.6128e-02, 2.4911e-05, 6.2200e+00,
8.8358e-05, 6.4321e+00, 3.3510e+00, 3.9279e-05, 8.1603e+00,
8.7897e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1583e+00, 3.1523e+00, -1.1765e+01, 4.9511e-02, 1.7292e+00,
-3.0851e+00, 1.1595e+00, -6.7154e+00, 1.1350e+01, 4.9005e+00,
-1.3040e+00, -4.9675e+00, -4.1433e+00, -6.3643e-01, -1.5745e+00,
4.2669e+00, 3.2492e+00, -4.6569e+00, 6.0002e-01, -2.4789e+00,
5.2519e+00, -7.1619e+00, 4.7275e+00, 7.2465e+00, -2.3229e+00,
-2.2525e+00, 5.4448e+00, -2.2806e-12, 1.9732e+00, 1.1739e+00,
6.1550e+00, 6.3576e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2778e-01, 1.1050e+00, 3.1563e-07, 9.7540e-01, 9.7331e-03,
8.4212e-07, 2.4229e-01, 2.1736e+00, 4.5679e-01, 2.9291e+00,
3.2366e+00, 2.6084e+00, 1.7630e-01, 4.3181e-04, 9.9369e-18,
1.3443e-11, 1.5712e-09, 2.7616e-07, 3.6700e-04, 2.1225e-07,
1.8828e-10, 1.8413e-02, 1.8503e-13, 4.3559e+00, 1.0966e-01,
7.5866e-03, 4.3803e-02, 2.9269e-02, 8.3793e-07, -2.6177e-19,
3.6912e-01, 1.0555e+00]])
--------------------------
4-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5559, -0.7016, -0.7545, -0.6793]])
--------------------------
<inside se forward:>
X: tensor([[ 4.6992e-01, 3.2951e+00, -5.1868e-19, -8.9299e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0106, -0.0831, -0.5151, 0.0650, -0.4869, -0.6094, -1.5116, -0.9355]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8588, 1.6723, -0.4774, -0.6520, -0.0379, -0.4428, -0.0917, 0.0721]])
--------------------------
<inside se forward:>
X: tensor([[-1.8655, -0.8893, -1.3313, -0.5367, 0.7590, -0.8533, -1.9023, 0.1008,
-0.1428, -0.5834, 0.1448, -0.3016, 0.2040, -0.8361, 0.4534, 0.9494]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6194e+00, -2.2728e-18, 2.8070e+00, 1.0384e+01, 4.6391e-05,
1.0651e+01, 1.4311e-02, 2.6160e-02, 2.4942e-05, 6.2277e+00,
8.8468e-05, 6.4401e+00, 3.3552e+00, 3.9328e-05, 8.1704e+00,
8.8006e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1170e+00, 3.1500e+00, -1.1769e+01, 4.9446e-02, 1.7362e+00,
-3.0951e+00, 1.1581e+00, -6.7183e+00, 1.1354e+01, 4.8964e+00,
-1.3110e+00, -4.9689e+00, -4.1461e+00, -6.4890e-01, -1.5875e+00,
4.2782e+00, 3.2361e+00, -4.6685e+00, 6.0150e-01, -2.4799e+00,
5.2726e+00, -7.1287e+00, 4.7384e+00, 7.2532e+00, -2.3235e+00,
-2.2367e+00, 5.4463e+00, -2.2915e-12, 1.9780e+00, 1.1893e+00,
6.1668e+00, 6.3629e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2774e-01, 1.1047e+00, 3.1554e-07, 9.7513e-01, 9.7304e-03,
8.4189e-07, 2.4223e-01, 2.1730e+00, 4.5666e-01, 2.9283e+00,
3.2357e+00, 2.6077e+00, 1.7626e-01, 4.3169e-04, 9.9353e-18,
1.3439e-11, 1.5708e-09, 2.7609e-07, 3.6690e-04, 2.1219e-07,
1.8823e-10, 1.8408e-02, 1.8498e-13, 4.3547e+00, 1.0963e-01,
7.5845e-03, 4.3792e-02, 2.9261e-02, 8.3770e-07, -2.6081e-19,
3.6902e-01, 1.0552e+00]])
--------------------------
5-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5922, -0.7833, -0.8099, -0.7581]])
--------------------------
<inside se forward:>
X: tensor([[ 6.0425e-01, 3.1537e+00, -5.2917e-19, -8.2412e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0295, -0.1079, -0.5239, 0.1099, -0.4906, -0.6187, -1.5178, -0.9515]])
--------------------------
<inside se forward:>
X: tensor([[ 4.9047, 1.7059, -0.4654, -0.6338, -0.0371, -0.4419, -0.0531, 0.0689]])
--------------------------
<inside se forward:>
X: tensor([[-1.8792, -0.8972, -1.3274, -0.5352, 0.7649, -0.8542, -1.9078, 0.1055,
-0.1455, -0.5737, 0.1437, -0.3026, 0.2050, -0.8408, 0.4609, 0.9527]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6192e+00, -2.2734e-18, 2.8065e+00, 1.0383e+01, 4.6383e-05,
1.0649e+01, 1.4309e-02, 2.6156e-02, 2.4938e-05, 6.2268e+00,
8.8454e-05, 6.4391e+00, 3.3547e+00, 3.9321e-05, 8.1692e+00,
8.7993e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1462e+00, 3.1594e+00, -1.1777e+01, 4.9445e-02, 1.7250e+00,
-3.0903e+00, 1.1580e+00, -6.6971e+00, 1.1362e+01, 4.8978e+00,
-1.3202e+00, -4.9701e+00, -4.1377e+00, -6.3982e-01, -1.5717e+00,
4.2688e+00, 3.2314e+00, -4.6666e+00, 6.1283e-01, -2.4762e+00,
5.2739e+00, -7.1517e+00, 4.7211e+00, 7.2673e+00, -2.3338e+00,
-2.2474e+00, 5.4291e+00, -2.2837e-12, 1.9676e+00, 1.1787e+00,
6.1559e+00, 6.3495e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2772e-01, 1.1046e+00, 3.1549e-07, 9.7498e-01, 9.7289e-03,
8.4176e-07, 2.4219e-01, 2.1727e+00, 4.5659e-01, 2.9278e+00,
3.2352e+00, 2.6073e+00, 1.7623e-01, 4.3163e-04, 9.9370e-18,
1.3437e-11, 1.5706e-09, 2.7604e-07, 3.6684e-04, 2.1216e-07,
1.8820e-10, 1.8405e-02, 1.8495e-13, 4.3541e+00, 1.0962e-01,
7.5833e-03, 4.3785e-02, 2.9256e-02, 8.3757e-07, -2.6052e-19,
3.6896e-01, 1.0550e+00]])
--------------------------
6-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5156, -0.5839, -0.7718, -0.6881]])
--------------------------
<inside se forward:>
X: tensor([[ 6.3789e-01, 3.1470e+00, -5.4607e-19, -8.8140e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0068, -0.1239, -0.5419, 0.1311, -0.4739, -0.6220, -1.5159, -1.0039]])
--------------------------
<inside se forward:>
X: tensor([[ 4.7764, 1.6289, -0.3860, -0.5940, -0.0352, -0.3554, 0.0103, 0.0848]])
--------------------------
<inside se forward:>
X: tensor([[-1.8759, -0.8883, -1.3219, -0.5339, 0.7527, -0.8555, -1.9051, 0.0963,
-0.1418, -0.5765, 0.1501, -0.2970, 0.1911, -0.8370, 0.4527, 0.9548]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6203e+00, -2.2739e-18, 2.8086e+00, 1.0390e+01, 4.6417e-05,
1.0657e+01, 1.4319e-02, 2.6175e-02, 2.4956e-05, 6.2312e+00,
8.8518e-05, 6.4437e+00, 3.3571e+00, 3.9350e-05, 8.1750e+00,
8.8056e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1541e+00, 3.1531e+00, -1.1772e+01, 4.9404e-02, 1.7326e+00,
-3.0931e+00, 1.1571e+00, -6.6943e+00, 1.1357e+01, 4.8937e+00,
-1.3274e+00, -4.9758e+00, -4.1305e+00, -6.4647e-01, -1.5764e+00,
4.2726e+00, 3.2396e+00, -4.6719e+00, 6.0704e-01, -2.4865e+00,
5.2721e+00, -7.1595e+00, 4.7218e+00, 7.2695e+00, -2.3445e+00,
-2.2482e+00, 5.4221e+00, -2.2827e-12, 1.9751e+00, 1.1886e+00,
6.1566e+00, 6.3400e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2782e-01, 1.1054e+00, 3.1574e-07, 9.7576e-01, 9.7366e-03,
8.4243e-07, 2.4238e-01, 2.1744e+00, 4.5695e-01, 2.9301e+00,
3.2378e+00, 2.6094e+00, 1.7637e-01, 4.3197e-04, 9.9450e-18,
1.3448e-11, 1.5718e-09, 2.7626e-07, 3.6714e-04, 2.1232e-07,
1.8835e-10, 1.8419e-02, 1.8510e-13, 4.3575e+00, 1.0970e-01,
7.5893e-03, 4.3819e-02, 2.9279e-02, 8.3823e-07, -2.6201e-19,
3.6925e-01, 1.0558e+00]])
--------------------------
7-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5567, -0.7524, -0.7620, -0.6805]])
--------------------------
<inside se forward:>
X: tensor([[ 5.3279e-01, 3.2445e+00, -5.2411e-19, -8.5973e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0248, -0.1011, -0.5172, 0.0823, -0.4737, -0.6192, -1.4961, -0.9762]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8410, 1.6705, -0.4254, -0.6104, -0.0360, -0.4001, -0.0183, 0.0759]])
--------------------------
<inside se forward:>
X: tensor([[-1.8740, -0.8943, -1.3243, -0.5337, 0.7550, -0.8610, -1.9063, 0.1108,
-0.1408, -0.5770, 0.1506, -0.3089, 0.1984, -0.8347, 0.4544, 0.9591]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6191e+00, -2.2732e-18, 2.8065e+00, 1.0383e+01, 4.6383e-05,
1.0649e+01, 1.4309e-02, 2.6156e-02, 2.4938e-05, 6.2267e+00,
8.8453e-05, 6.4390e+00, 3.3546e+00, 3.9321e-05, 8.1691e+00,
8.7992e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1553e+00, 3.1582e+00, -1.1776e+01, 4.9516e-02, 1.7335e+00,
-3.0909e+00, 1.1595e+00, -6.7080e+00, 1.1362e+01, 4.9002e+00,
-1.3237e+00, -4.9679e+00, -4.1376e+00, -6.4026e-01, -1.5758e+00,
4.2652e+00, 3.2360e+00, -4.6691e+00, 6.1957e-01, -2.4899e+00,
5.2536e+00, -7.1605e+00, 4.7257e+00, 7.2488e+00, -2.3271e+00,
-2.2548e+00, 5.4335e+00, -2.2811e-12, 1.9611e+00, 1.1809e+00,
6.1551e+00, 6.3494e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2783e-01, 1.1055e+00, 3.1575e-07, 9.7577e-01, 9.7367e-03,
8.4244e-07, 2.4238e-01, 2.1744e+00, 4.5696e-01, 2.9302e+00,
3.2378e+00, 2.6094e+00, 1.7637e-01, 4.3197e-04, 9.9456e-18,
1.3448e-11, 1.5718e-09, 2.7626e-07, 3.6714e-04, 2.1233e-07,
1.8835e-10, 1.8420e-02, 1.8510e-13, 4.3576e+00, 1.0970e-01,
7.5894e-03, 4.3820e-02, 2.9280e-02, 8.3824e-07, -2.6233e-19,
3.6926e-01, 1.0559e+00]])
--------------------------
8-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.6060, -0.9100, -0.7711, -0.7195]])
--------------------------
<inside se forward:>
X: tensor([[ 5.6481e-01, 3.2033e+00, -5.2471e-19, -8.0308e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0948, -0.1106, -0.4654, 0.0768, -0.5028, -0.6202, -1.4778, -0.9581]])
--------------------------
<inside se forward:>
X: tensor([[ 4.9064, 1.6963, -0.5052, -0.6644, -0.0385, -0.4721, -0.1160, 0.0672]])
--------------------------
<inside se forward:>
X: tensor([[-1.8868, -0.8981, -1.3322, -0.5298, 0.7566, -0.8556, -1.9039, 0.1134,
-0.1447, -0.5744, 0.1480, -0.3113, 0.2017, -0.8359, 0.4564, 0.9658]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6177e+00, -2.2718e-18, 2.8040e+00, 1.0373e+01, 4.6342e-05,
1.0640e+01, 1.4296e-02, 2.6133e-02, 2.4916e-05, 6.2212e+00,
8.8375e-05, 6.4333e+00, 3.3517e+00, 3.9286e-05, 8.1619e+00,
8.7914e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1471e+00, 3.1531e+00, -1.1779e+01, 4.9447e-02, 1.7370e+00,
-3.0912e+00, 1.1580e+00, -6.7101e+00, 1.1363e+01, 4.9010e+00,
-1.3083e+00, -4.9699e+00, -4.1370e+00, -6.3986e-01, -1.5794e+00,
4.2680e+00, 3.2415e+00, -4.6646e+00, 6.0562e-01, -2.4862e+00,
5.2591e+00, -7.1519e+00, 4.7275e+00, 7.2529e+00, -2.3203e+00,
-2.2537e+00, 5.4380e+00, -2.2843e-12, 1.9685e+00, 1.1793e+00,
6.1543e+00, 6.3497e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2776e-01, 1.1049e+00, 3.1558e-07, 9.7525e-01, 9.7316e-03,
8.4199e-07, 2.4226e-01, 2.1733e+00, 4.5672e-01, 2.9286e+00,
3.2361e+00, 2.6080e+00, 1.7628e-01, 4.3175e-04, 9.9388e-18,
1.3441e-11, 1.5710e-09, 2.7612e-07, 3.6695e-04, 2.1221e-07,
1.8825e-10, 1.8410e-02, 1.8500e-13, 4.3553e+00, 1.0965e-01,
7.5854e-03, 4.3797e-02, 2.9264e-02, 8.3780e-07, -2.6109e-19,
3.6906e-01, 1.0553e+00]])
--------------------------
9-feature dims: torch.Size([1, 512])
Post Training Quantization: Calibration done
C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py:845: UserWarning: must run observer before calling calculate_qparams.
Returning default scale and zero point
Returning default scale and zero point
Post Training Quantization: Convert done
Inverted Residual Block: After fusion and quantization, note fused modules:
QuantizedConv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.011990774422883987, zero_point=80)
Size of model after quantization
Size (MB): 24.397458 |
st184730 | Which version of PyTorch are you currently using? We recently fixed a bug in the histogram observer that should be available in 1.6. You can also use nightlies to see if it fixes your issue. |
st184731 | Updated to the latest nightly (1.7.0.dev20200714+cpu and torchvision-0.8.0.dev20200714+cpu) just now; it got a bit further, but ultimately crashed with the same error:
Size (MB): 89.322487
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
Post Training Quantization Prepare: Inserting Observers
Inverted Residual Block:After observer insertion
Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
<inside se forward:>
X: tensor([[-1.5691, -0.7516, -0.7360, -0.6458]])
--------------------------
<inside se forward:>
X: tensor([[ 3.6604e-01, 3.3855e+00, -5.0032e-19, -9.0280e-19]])
--------------------------
<inside se forward:>
X: tensor([[-1.0513, -0.0656, -0.4529, 0.0653, -0.4762, -0.6304, -1.5043, -0.9484]])
--------------------------
<inside se forward:>
X: tensor([[ 4.8730, 1.6650, -0.5135, -0.6811, -0.0392, -0.4689, -0.1496, 0.0717]])
--------------------------
<inside se forward:>
X: tensor([[-1.8759, -0.8886, -1.3295, -0.5375, 0.7598, -0.8526, -1.9066, 0.0985,
-0.1461, -0.5857, 0.1513, -0.3050, 0.1955, -0.8470, 0.4528, 0.9689]])
--------------------------
<inside se forward:>
X: tensor([[ 1.6184e+00, -2.2714e-18, 2.8052e+00, 1.0378e+01, 4.6361e-05,
1.0644e+01, 1.4302e-02, 2.6143e-02, 2.4926e-05, 6.2237e+00,
8.8411e-05, 6.4360e+00, 3.3530e+00, 3.9302e-05, 8.1652e+00,
8.7950e-07]])
--------------------------
<inside se forward:>
X: tensor([[ 9.1687e+00, 3.1469e+00, -1.1788e+01, 4.9410e-02, 1.7272e+00,
-3.0913e+00, 1.1572e+00, -6.7104e+00, 1.1371e+01, 4.8926e+00,
-1.3102e+00, -4.9773e+00, -4.1444e+00, -6.3367e-01, -1.5672e+00,
4.2629e+00, 3.2491e+00, -4.6632e+00, 5.9241e-01, -2.4883e+00,
5.2599e+00, -7.1710e+00, 4.7197e+00, 7.2724e+00, -2.3363e+00,
-2.2564e+00, 5.4431e+00, -2.2832e-12, 1.9732e+00, 1.1682e+00,
6.1555e+00, 6.3574e+00]])
--------------------------
<inside se forward:>
X: tensor([[ 1.2785e-01, 1.1057e+00, 3.1581e-07, 9.7595e-01, 9.7386e-03,
8.4260e-07, 2.4243e-01, 2.1749e+00, 4.5704e-01, 2.9307e+00,
3.2384e+00, 2.6099e+00, 1.7640e-01, 4.3206e-04, 9.9380e-18,
1.3450e-11, 1.5721e-09, 2.7632e-07, 3.6721e-04, 2.1237e-07,
1.8839e-10, 1.8423e-02, 1.8514e-13, 4.3584e+00, 1.0972e-01,
7.5909e-03, 4.3828e-02, 2.9285e-02, 8.3840e-07, -2.6420e-19,
3.6933e-01, 1.0561e+00]])
--------------------------
0-feature dims: torch.Size([1, 512])
<inside se forward:>
X: tensor([[-1.5517, -0.8007, -0.7286, -0.6478]])
--------------------------
<inside se forward:>
X: tensor([[ 5.0945e-01, 3.2514e+00, -5.2950e-19, -9.1256e-19]])
Traceback (most recent call last):
File "d:\Codes\org\python\Quantization\quantizer.py", line 266, in <module>
quantize_test()
File "d:\Codes\org\python\Quantization\quantizer.py", line 248, in quantize_test
evaluate(model, dtloader, neval_batches=num_calibration_batches)
File "d:\Codes\org\python\Quantization\quantizer.py", line 152, in evaluate
features = model(image.unsqueeze(0))
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 576, in forward
x = self.layer1(x)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 489, in forward
out = self.se(out)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 447, in forward
y = self.prelu_q(y)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 322, in forward
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\quantized\modules\functional_modules.py", line 46, in add
r = self.activation_post_process(r)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 862, in forward
self.bins)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\quantization\observer.py", line 813, in _combine_histograms
histogram_with_output_range = torch.zeros((Nbins * downsample_rate), device=orig_hist.device)
RuntimeError: Trying to create tensor with negative dimension -4398046511104: [-4398046511104] |
st184732 | The initial error was due to the histogram observer getting a tensor with all identical values or all zero values. But since that was fixed, I am not quite sure of the cause of this error.
Could you provide a small repro for us to take a look at, along with the input tensor data for which this error shows up? Thanks! |
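For what it's worth, a minimal sketch of the scenario described above (a HistogramObserver that only ever sees an all-zero / constant tensor), assuming that is indeed the failing pattern here:
import torch
from torch.quantization import HistogramObserver
obs = HistogramObserver(reduce_range=True)
obs(torch.zeros(1, 4))  # observer only ever sees identical values
obs(torch.zeros(1, 4))  # second call exercises the histogram-combining path
print(obs.calculate_qparams())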
st184733 | It seems updating to 1.7 solved this issue! However, the unimplemented-type and native batch-norm related issues are still present.
I created a minimal self-contained example with ResNet18 and a simple 2-layer network, going from quantizing the model to testing it using fake data.
By setting the two variables at the top (use_relu, disable_single_bns) you can see different behaviors (most of the code is boilerplate and ResNet18 definitions).
You are free to test this with either the ResNet18 or the SimpleNetwork:
import os
from os.path import abspath, dirname, join
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
import torchvision.transforms as transforms
from torch.quantization import fuse_modules
use_relu = False
disable_single_bns = False
class PReLU_Quantized(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
self.quantized_op = nn.quantized.FloatFunctional()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, inputs):
# inputs = max(0, inputs) + alpha * min(0, inputs)
# this is how we do it
# pos = torch.relu(inputs)
# neg = -alpha * torch.relu(-inputs)
# res3 = pos + neg
self.weight = self.quant(self.weight)
weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
inputs = self.dequant(inputs)
self.weight = self.dequant(self.weight)
return inputs
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super().__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.add_relu = torch.nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.add_relu.add_relu(out, residual)
return out
def fuse_model(self):
torch.quantization.fuse_modules(self, [['conv1', 'bn1', 'relu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu1 = nn.ReLU(inplace=False)
self.relu2 = nn.ReLU(inplace=False)
self.downsample = downsample
self.stride = stride
self.skip_add_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.skip_add_relu.add_relu(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'relu1'],
['conv2', 'bn2', 'relu2'],
['conv3', 'bn3']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(nn.Linear(channel, channel // reduction),
nn.PReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid())
self.fc1 = self.fc[0]
self.prelu = self.fc[1]
self.fc2 = self.fc[2]
self.sigmoid = self.fc[3]
self.prelu_q = PReLU_Quantized(self.prelu)
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
def forward(self, x):
# print(f'<inside se forward:>')
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
# y = self.fc(y).view(b, c, 1, 1)
y = self.fc1(y)
y = self.prelu_q_or_relu(y)
y = self.fc2(y)
y = self.sigmoid(y).view(b, c, 1, 1)
# print('--------------------------')
# out = x*y
out = self.mult_xy.mul(x, y)
return out
class IRBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):
super().__init__()
self.bn0 = nn.BatchNorm2d(inplanes)
if disable_single_bns:
self.bn0_or_identity = torch.nn.Identity()
else:
self.bn0_or_identity = self.bn0
self.conv1 = conv3x3(inplanes, inplanes)
self.bn1 = nn.BatchNorm2d(inplanes)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
self.conv2 = conv3x3(inplanes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.use_se = use_se
# if self.use_se:
self.se = SEBlock(planes)
self.add_residual = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
# TODO:
# this needs to be quantized as well!
out = self.bn0_or_identity(x)
out = self.conv1(out)
out = self.bn1(out)
# out = self.prelu(out)
out = self.prelu_q_or_relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.use_se:
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.prelu(out)
out = self.prelu_q_or_relu(out)
# we may need to change prelu into relu and instead of add, use add_relu here
out = self.add_residual.add(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [# ['bn0'],
['conv1', 'bn1'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class ResNet(nn.Module):
def __init__(self, block, layers, use_se=True):
self.inplanes = 64
self.use_se = use_se
super().__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
# This is to only get rid of the unimplemented CPUQuantization type error
# when we use PReLU_Quantized during test time
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.bn2 = nn.BatchNorm2d(512)
# This is to get around the single BatchNorms not getting fused and thus causing
# a RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPU' backend.
# 'aten::native_batch_norm' is only available for these backends: [CPU, MkldnnCPU, BackendSelect, Named, Autograd, Profiler, Tracer, Autocast, Batched].
# during test time
if disable_single_bns:
self.bn2_or_identity = torch.nn.Identity()
else:
self.bn2_or_identity = self.bn2
self.dropout = nn.Dropout()
self.fc = nn.Linear(512 * 7 * 7, 512)
self.bn3 = nn.BatchNorm1d(512)
if disable_single_bns:
self.bn3_or_identity = torch.nn.Identity()
else:
self.bn3_or_identity = self.bn3
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))
self.inplanes = planes
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, use_se=self.use_se))
return nn.Sequential(*layers)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
# TODO: single bn needs to be fused
x = self.bn1(x)
# x = self.prelu(x)
x = self.prelu_q_or_relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2_or_identity(x)
x = self.dropout(x)
# x = x.view(x.size(0), -1)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
# TODO: single bn needs to be fused
x = self.bn3_or_identity(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, ['conv1', 'bn1'], inplace=True)
for m in self.modules():
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
class SimpleNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(10)
self.relu1 = nn.ReLU()
self.prelu_q = PReLU_Quantized(nn.PReLU())
self.bn = nn.BatchNorm2d(10)
self.prelu_q_or_relu = torch.relu if use_relu else self.prelu_q
self.bn_or_identity = nn.Identity() if disable_single_bns else self.bn
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.prelu_q_or_relu(x)
x = self.bn_or_identity(x)
x = self.dequant(x)
return x
def resnet18(use_se=True, **kwargs):
return ResNet(IRBlock, [2, 2, 2, 2], use_se=use_se, **kwargs)
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p")/1e6)
os.remove('temp.p')
def evaluate(model, data_loader, eval_batches):
model.eval()
with torch.no_grad():
for i, (image, target) in enumerate(data_loader):
features = model(image)
print(f'{i})feature dims: {features.shape}')
if i >= eval_batches:
return
def load_quantized(model, quantized_checkpoint_file_path):
model.eval()
if type(model) == ResNet:
model.fuse_model()
# Specify quantization configuration
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
checkpoint = torch.load(quantized_checkpoint_file_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint)
print_size_of_model(model)
return model
def test_the_model(model, dtloader):
current_dir = abspath(dirname(__file__))
model = load_quantized(model, join(current_dir, 'data', 'model_quantized_jit.pth'))
model.eval()
img, _ = next(iter(dtloader))
embd1 = model(img)
def quantize_model(model, dtloader):
calibration_batches = 10
saved_model_dir = 'data'
scripted_quantized_model_file = 'model_quantized_jit.pth'
# model = resnet18()
model.eval()
if type(model) == ResNet:
model.fuse_model()
print_size_of_model(model)
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
print(f'Model after fusion(prepared): {model}')
# Calibrate first
print('Post Training Quantization Prepare: Inserting Observers')
print('\n Inverted Residual Block:After observer insertion \n\n', model.conv1)
# Calibrate with the training set
evaluate(model, dtloader, eval_batches=calibration_batches)
print('Post Training Quantization: Calibration done')
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
print('Post Training Quantization: Convert done')
print('\n Inverted Residual Block: After fusion and quantization, note fused modules: \n\n', model.conv1)
print("Size of model after quantization")
print_size_of_model(model)
script = torch.jit.script(model)
path_tosave = join(dirname(abspath(__file__)), saved_model_dir, scripted_quantized_model_file)
print(f'path to save: {path_tosave}')
with open(path_tosave, 'wb') as f:
torch.save(model.state_dict(), f)
print(f'model after quantization (prepared and converted:) {model}')
# torch.jit.save(script, path_tosave)
dataset = FakeData(1000, image_size=(3, 112, 112), num_classes=5, transform=transforms.ToTensor())
data_loader = DataLoader(dataset, batch_size=1)
# quantize the model
model = resnet18()
# model = SimpleNetwork()
quantize_model(model, data_loader)
# and load and test the quantized model
model = resnet18()
# model = SimpleNetwork()
test_the_model(model, data_loader) |
st184734 | I try to run quantization benchmark:
GitHub: z-a-f/quantization_benchmarks (PyTorch quantization benchmarks)
But I didn’t see any speed up with quantized model.
For example:
googlenet:
Train time:
q: 192.940
f: 192.940
Test time:
q: 193.114
f: 193.114
On the third model I got:
Downloading: “https://download.pytorch.org/models/mobilenet_v2-b0353104.pth” to
…/.cache\torch\hub\checkpoints\mobilenet_v2-b0353104.pth
100%|█████████████████████████████████████| 13.6M/13.6M [00:01<00:00, 8.20MB/s]
File “…Anaconda3\envs\torch1.5\lib\site-packages\torchvision\models\quantization\utils.py”, line 22, in quantize_model
raise RuntimeError("Quantized backend not supported ")
RuntimeError: Quantized backend not supported
Why does the quantized model not provide performance improvements?
How can I activate quantized backend?
My environment:
conda 4.8.3
torch 1.7.0.dev20200716
torchvision 0.8.0.dev20200716
Python 3.6.10
CPU: i5-4670 |
st184735 | fel88:
Quantized backend not supported
one thing to try would be:
torch.backends.quantized.engine = 'fbgemm'
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') |
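In case it helps others hitting the same "Quantized backend not supported" error, here is a minimal sketch of how I would wire this up (the model variable is a placeholder): torch.backends.quantized.supported_engines shows which engines your build was compiled with, and the engine has to be set to one of them before prepare/convert.
import torch
print(torch.backends.quantized.supported_engines)   # e.g. ['none', 'fbgemm'] on x86 builds
torch.backends.quantized.engine = 'fbgemm'           # or 'qnnpack' on ARM builds
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')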
st184736 | It still doesn’t work, but
https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 15
works well. It is enough for my purposes. |
st184737 | I am trying to export a quantized int8 PyTorch model to ONNX from the following tutorial.
https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 144
However, PyTorch to ONNX conversion of quantized models is not supported. Various types of quantized models will either explicitly say their conversion is not supported or they will throw an attribute error.
My question is — how do we do the conversion manually? Specifically, how do we define a custom mapping of ONNX operations for PyTorch classes? I assume the logic is the same for non-quantized layers, whose conversion needed to be defined until it was built-in, but I am having trouble finding an example. |
st184738 | We currently only support conversion to ONNX for Caffe2 backend. This thread has additional context on what we currently support - ONNX export of quantized model 340
If you would like to add custom conversion logic to onnx operators for quantized pytorch ops you can follow the code in https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_caffe2.py 96 which adds the mapping for the Caffe2 ops in ONNX. |
st184739 | @Joseph_Konan Hello, can you now convert the quantized model to ONNX? Thank you! |
st184740 | Hi PyTorch community,
TLDR; DistilBert’s nn.quantized.Linear encounters KeyError when loading from state_dict. Saving from state_dict uses version 3 format, but loading evaluates local_metadata.get('version', None) == None which defaults to using version 1 format.
I have a problem with loading DistilBert classifier. I would load it from a pre-trained model, fine-tune it, quantize it, then save its state_dict. The issue happens when saving and reloading this quantized version. When DynamicQuantizedLinear generates keys, it uses this format:
_distilbert.transformer.layer.0.attention.q_lin._packed_params.weight
# Version 3
# self
# |--- _packed_params : (Tensor, Tensor) representing (weight, bias)
# of LinearPackedParams
# |--- dtype : torch.dtype
Printing the state_dict in that key:
# print(state_dict['_distilbert.transformer.layer.0.attention.q_lin._packed_params.weight'])
tensor([[ 0.0357, 0.0365, 0.0119, ..., -0.0230, 0.0199, 0.0397],
[ 0.0119, -0.0349, 0.0048, ..., 0.0294, -0.0127, -0.0119],
[ 0.0540, 0.0159, -0.0032, ..., 0.0008, -0.0183, -0.0016],
...,
[ 0.0064, -0.0079, 0.0302, ..., -0.0199, 0.0008, -0.0095],
[ 0.0024, -0.0056, 0.0183, ..., 0.0008, 0.0175, 0.0270],
[-0.0024, -0.0119, -0.0238, ..., 0.0294, 0.0199, 0.0175]],
size=(768, 768), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.0007942558731883764,
zero_point=0)
However, when loading the model using the same Python environment and on the same machine, the de-serialization fails with the following error:
File "/home/.venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 827, in load
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
File "/home/.venv/lib/python3.6/site-packages/torch/nn/quantized/modules/linear.py", line 207, in _load_from_state_dict
weight = state_dict.pop(prefix + 'weight')
KeyError: '_distilbert.transformer.layer.0.attention.q_lin.weight'
Here’s what the de-serialization method that fails looks like:
# file: torch/nn/quantized/modules/linear.py
# ===== Deserialization methods =====
# Counterpart to the serialization methods, we must pack the serialized QTensor
# weight into its packed format for use by the FBGEMM ops.
def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
missing_keys, unexpected_keys, error_msgs):
self.scale = float(state_dict[prefix + 'scale'])
state_dict.pop(prefix + 'scale')
self.zero_point = int(state_dict[prefix + 'zero_point'])
state_dict.pop(prefix + 'zero_point')
version = local_metadata.get('version', None)
if version is None or version == 1:
# We moved the parameters into a LinearPackedParameters submodule
weight = state_dict.pop(prefix + 'weight')
bias = state_dict.pop(prefix + 'bias')
state_dict.update({prefix + '_packed_params.weight': weight,
prefix + '_packed_params.bias': bias})
super(Linear, self)._load_from_state_dict(state_dict, prefix, local_metadata, False,
missing_keys, unexpected_keys, error_msgs)
The issue seems to be that the backward compatibility if-statement defaults to assuming the model was serialized using an earlier version if version is None. This fails in my case. Changing if version is None or version == 1: to if version == 1: fixes the issue for me but I’d like a more sustainable solution.
How do I make sure my model’s version evaluates to the correct value?
Thanks in advance for any help! |
st184741 | hi @salimmj,
if you are seeing this on a recent version of PyTorch (v1.5 or nightlies), would you mind filing a github issue?
for a quick local fix, you can also modify the checkpoint data. Here is a code snippet (for a different case) which is doing something similar:
def adjust_convbn_metadata(mod, prefix, old_state_dict):
for name, child in mod._modules.items():
new_prefix = prefix + '.' + name if prefix != '' else name
if isinstance(child, torch.nn.intrinsic.qat.ConvBn2d):
old_state_dict._metadata[new_prefix]['version'] = 2
adjust_convbn_metadata(child, new_prefix, old_state_dict)
adjust_convbn_metadata(model, '', checkpoint['model']) |
st184742 | Hi @Vasiliy_Kuznetsov,
Thanks for your help, I was actually just working on this. I ended up fixing it like this:
# This is a temporary fix for https://discuss.pytorch.org/t/loading-quantized-model-from-state-dict-with-version-none/89042
model_checkpoint['state_dict'] = OrderedDict(model_checkpoint['state_dict'])
if not hasattr(model_checkpoint['state_dict'], '_metadata'):
setattr(model_checkpoint['state_dict'], '_metadata', OrderedDict({'version': 2}))
Your code seems more specific, I wonder if mine could break. For now I only really need to change the version for the Linear layer so I don’t know if doing it like this is going to break something else.
I will file a Github issue! |
st184743 | Hello,
I am very new to this topic but I am trying to prune the model I am working with. For reference, I am using this page. The model is quite big, containing different encoders, ResNet modules, and decoders. So, I’m guessing that I have to prune each network individually (I couldn’t find a reference where the whole model is being pruned together, but please attach some links where it’s being done). The list of different modules is like:
module.model_enc1.1.weight
module.model_enc1.1.bias
module.model_enc1.2.weight
module.model_enc1.2.bias
module.model_enc1.4.weight
module.model_enc1.4.bias
module.model_enc1.5.weight
.
.
.
So I’m only taking the module.model_enc1.1.weight using the following code:
test = netM.module.model_enc1
where netM contains the model weights (<class 'torch.nn.parallel.data_parallel.DataParallel'> ).
So test contains the following model:
Sequential(
(0): ReflectionPad2d((3, 3, 3, 3))
(1): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1))
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU(inplace=True)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(5): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace=True)
)
And when I run the pruning method by pytorch
prune.random_unstructured(test, name='1.weight', amount=0.3), I get the following error:
AttributeError Traceback (most recent call last)
in
----> 1 prune.random_unstructured(test, name=‘1.weight’, amount=0.3)
/usr/local/lib/python3.6/dist-packages/torch/nn/utils/prune.py in random_unstructured(module, name, amount)
851
852 “”"
–> 853 RandomUnstructured.apply(module, name, amount)
854 return module
855
/usr/local/lib/python3.6/dist-packages/torch/nn/utils/prune.py in apply(cls, module, name, amount)
473 “”"
474 return super(RandomUnstructured, cls).apply(
–> 475 module, name, amount=amount
476 )
477
/usr/local/lib/python3.6/dist-packages/torch/nn/utils/prune.py in apply(cls, module, name, *args, **kwargs)
155 # starting from the state it is found in prior to this iteration of
156 # pruning
–> 157 orig = getattr(module, name)
158
159 # If this is the first time pruning is applied, take care of moving
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("’{}’ object has no attribute ‘{}’".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: ‘Sequential’ object has no attribute ‘1.weight’
How do I fix this? Is there any better way to prune these networks? |
st184744 | Solved by ani0075 in post #5
Your first Conv layer in the Sequential module is at index 1. Try prune.random_unstructured(test[1], name=‘weight’, amount=0.3). I think that should work. Let me know if it doesn’t. |
st184745 | Flock1:
del is quite big, containing different encoders, ResNet modules, and decoders. So, I’m guessing that I have to prune each network individually (I couldn’t find a reference where the whole model is being pruned together, but please attach some links where it’s being done). The list of different modules are like:
print out test and look at the structure of the module. You’ll need to index test and use name=‘weight’. ‘1.weight’ is not an acceptable parameter name.
Try prune.random_unstructured(test[0], name=‘weight’, amount=0.3) or any other index in place of 0. |
st184746 | Flock1:
I am very new to this topic but I am trying to prune the model I am working with. For reference, I am using this page. The model is quite big, containing different encoders, ResNet modules, and decoders. So, I’m guessing that I have to prune each network individually (I couldn’t find a reference where the whole model is being pruned together, but please attach some links where it’s being done). The list of different modules are like:
@Michela could you take a look? |
st184747 | Hi. I got 1.weight by running this print(list(netM.module.model_enc1.named_parameters())). The output is:
[('1.weight', Parameter containing:
tensor([[[[ 2.5521e-02, 5.2238e-02, 4.7848e-04, ..., 5.6985e-02,
5.1901e-02, 5.1235e-02],
.
.
.
.
And then we have values for 1.bias, 2.weight, 2.bias as mentioned above.
This is the error I got for the code you mentioned:
AttributeError: ‘ReflectionPad2d’ object has no attribute ‘weight’ |
st184748 | Your first Conv layer in the Sequential module is at index 1. Try prune.random_unstructured(test[1], name=‘weight’, amount=0.3). I think that should work. Let me know if it doesn’t. |
st184749 | Hi. This worked. Thank you so much. Can you also tell me if there’s some way I can prune such a big network in one go? Or do I need to iterate through every layer and prune it individually? |
st184750 | Try this https://pytorch.org/tutorials/intermediate/pruning_tutorial.html#global-pruning 4 |
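To make the global option concrete, here is a minimal sketch (it assumes netM from above and prunes Conv2d/Linear weights only); all listed parameters are pooled together and the lowest-magnitude 30% across the whole model are zeroed at once.
import torch
import torch.nn.utils.prune as prune

parameters_to_prune = [
    (m, 'weight') for m in netM.modules()
    if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.3,  # fraction of all pooled weights to prune
)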
st184751 | @Flock1 the issue was related to how you were accessing the parameter. Either do dict(netM.module.model_enc1.named_parameters())['1.weight'] or do netM.module.model_enc1[1].weight.
@ani0075’s solution is correct (and related to the second option in the previous line of this answer) for when you want to refer to that module/parameter combination for the sake of pruning.
Beyond that, what do you mean by pruning “in one go”? Global pruning will allow you to prune the network by pooling all parameters together and comparing them with each other while deciding which ones to prune. That’s not the same thing as pruning each layer individually, but in an efficient way, without mixing weights across layers. Which one of the two were you interested in? |
st184752 | Hi @Michela. Firstly, big fan of your work when it comes to network pruning and ML applications in physics. I work on ML applications for quantum computing and astrophysics. I am so glad you replied.
I think I am looking for Global pruning. I was kinda thinking that pruning each layer might not be an effective way compared to global pruning. I was trying one layer just to see how to go about it since it was my first time. But I want to prune the whole network. Moreover, do you recommend pruning the BatchNorm layer? |
st184753 | Global pruning is generally more flexible and has empirically been shown to have better performance – though be careful not to let it prune entire layers thus disconnecting your network!
Re: batch norm – that’s a more complicated issue, it depends what you want to achieve. Pruning batch norm params won’t really help you significantly reduce the number of params in the network. But if you prune an entire output, does it make any sense to keep its corresponding batch norm parameters? Btw, on the other hand, some even directly use batch norm to figure out which channels to prune in the respective layer. I’d recommend checking out the literature for this. |
st184754 | Thank you. I have one question. Can you elaborate on “entire output”? Do you mean the final layer output or something else. Please let me know. |
st184755 | I mean an entire output dimension in any of the hidden layers. Batch norm layers compute y = γx + β with parameters γ,β for each normalized x at that layer.
If you pruned the previous layer such that a specific x is now always 0, does it make sense to keep its corresponding γ,β around? |
st184756 | Hi,
I would like to extend QAT to support two below cases. Does anyone know how I can achieve these?
mixed-precision: being able to set precision for each layer separately (manually)
lower precisions: being able to fake-quantize to lower than 8-bit (using a QConfig?)
Thanks! |
st184757 | Raghu is adding support for sub 8 bit qat right now cc @raghuramank100
I think mixed precision is supported as long as you can have sub-8-bit observers; in eager mode quantization you’ll need to set the qconfig manually for each child module. |
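A rough sketch of the per-child-module part (the submodule names here are hypothetical): in eager mode you just assign a qconfig attribute on each child before prepare_qat, and a child whose qconfig is None is left in floating point.
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model.backbone.qconfig = my_low_bit_qconfig   # hypothetical custom qconfig for this block
model.head.qconfig = None                     # this child stays float
torch.quantization.prepare_qat(model, inplace=True)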
st184758 | Thanks @jerryzh168 and @raghuramank100.
Is there a way to have fake 8-bit observers (scale factors and zero point) in current implementation? |
st184759 | you’ll need to implement your own observer module (https://github.com/pytorch/pytorch/blob/master/torch/quantization/observer.py 6) and fake quantize module(https://github.com/pytorch/pytorch/blob/master/torch/quantization/fake_quantize.py 7) to support 8-bit scale and zero_point |
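For what it’s worth, a minimal sketch of one way to emulate 4-bit fake quantization by narrowing quant_min/quant_max on the stock FakeQuantize and observers rather than writing everything from scratch (treat this as an assumption-laden example, not the official sub-8-bit support):
import torch
from torch.quantization import (QConfig, FakeQuantize,
                                MovingAverageMinMaxObserver,
                                MovingAveragePerChannelMinMaxObserver)

# 16 quantization levels instead of 256 -> 4-bit emulation
act_fq = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=0, quant_max=15,
    dtype=torch.quint8, qscheme=torch.per_tensor_affine)
wt_fq = FakeQuantize.with_args(
    observer=MovingAveragePerChannelMinMaxObserver,
    quant_min=-8, quant_max=7,
    dtype=torch.qint8, qscheme=torch.per_channel_symmetric)

four_bit_qconfig = QConfig(activation=act_fq, weight=wt_fq)
# then e.g. model.some_layer.qconfig = four_bit_qconfig before prepare_qat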
st184760 | @jerryzh168 @raghuramank100
It seems that after I serialize and load back a quantized model, the output type of quantized operators, QUInt8, is lost and instead it is replaced by float Tensor type. See below for a module with a single quantized conv layer.
Before torch.jit.save
graph(%self.1 : __torch__.AnnotatedConvModel,
%X : Float(2, 3, 10, 10)):
...
%input : QUInt8(2, 3, 10, 10) = aten::quantize_per_tensor(%X, %67, %68, %69), scope: __module.quant # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:43:0
...
%Xq : QUInt8(2, 3, 8, 8) = quantized::conv2d(%input, %71, %74, %77, %80, %81, %82, %83), scope: __module.conv # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/conv.py:215:0
%85 : Float(2, 3, 8, 8) = aten::dequantize(%Xq), scope: __module.dequant # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:74:0
return (%85)
After torch.jit.load
graph(%self.1 : __torch__.AnnotatedConvModel,
%X.1 : Tensor):
...
%input.1 : Tensor = aten::quantize_per_tensor(%X.1, %9, %10, %11) # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:43:0
%Xq.1 : Tensor = quantized::conv2d(%input.1, %15, %17, %18, %19, %16, %20, %21) # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/conv.py:215:0
...
%24 : Tensor = aten::dequantize(%Xq.1) # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:74:0
return (%24)
The PyTorch frontend in TVM uses this tensor type information to decide if a torch op is invoked on a quantized tensor. See for example the case of converting adaptive avg pooling, which requires special care for quantized case, but in the Torch IR the same op aten::adaptive_avg_pool2d appears for both float and quantized input.
github.com
apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L600-L601
if input_types[0] == "quint8":
return qnn_torch.apply_with_upcast(data, func)
Without correct typing, we cannot convert serialized quantized PyTorch models. What happens right now is since Torch tells TVM that input tensor is float type, TVM incorrectly converts some quantized ops into float ops.
A repro script, tested on v1.5
import torch
from torch.quantization import QuantStub, DeQuantStub, default_qconfig
class AnnotatedConvModel(torch.nn.Module):
def __init__(self):
super(AnnotatedConvModel, self).__init__()
self.qconfig = default_qconfig
self.conv = torch.nn.Conv2d(3, 3, 3, bias=False).to(dtype=torch.float)
self.quant = QuantStub()
self.dequant = DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
x = self.dequant(x)
return x
def quantize_model(model, inp):
model.qconfig = default_qconfig
torch.quantization.prepare(model, inplace=True)
model(inp)
torch.quantization.convert(model, inplace=True)
def test_conv():
inp = torch.rand(2, 3, 10, 10)
annotated_conv_model = AnnotatedConvModel()
quantize_model(annotated_conv_model, inp)
trace = torch.jit.trace(annotated_conv_model, inp)
torch._C._jit_pass_inline(trace.graph)
print(trace.graph)
torch.jit.save(trace, "trace.pt")
trace = torch.jit.load("trace.pt")
print(trace.graph)
test_conv() |
st184761 | masahi:
%input : QUInt8(2, 3, 10, 10) = aten::quantize_per_tensor(%X, %67, %68, %69), scope: __module.quant # /home/masa/anaconda3/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:43:0
You can use the type, but don’t rely on the shape since it will probably change every time you run the model with input of different shape |
st184762 | def test_conv():
inp = torch.rand(2, 3, 10, 10)
annotated_conv_model = AnnotatedConvModel()
quantize_model(annotated_conv_model, inp)
trace = torch.jit.trace(annotated_conv_model, inp)
torch._C._jit_pass_inline(trace.graph)
print(trace.graph)
torch.jit.save(trace, "trace.pt")
loaded = torch.jit.load("trace.pt")
for i in range(5):
out = loaded(torch.rand(2, 3, 10, 10))
print(loaded.graph)
Tried running a loaded graph with some inputs, it still says
%Xq.1 : Tensor = quantized::conv2d(...) |
st184763 | @jerryzh168 Correct me if I am wrong, but I think that’s what JIT does irrespective of it being quantized or not. I believe we should talk to the JIT team to somehow allow the dtype to be exposed. |
st184764 | Hi all,
Working on static quantizing a few models and hitting this error on a basic Resnet18 - any insight into what is missing to complete the quantization?
Did not ‘fuse’ the BN but unclear if that is the core issue?
Any assistance would be appreciated - very hard to find much documentation.
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
1921 return torch.batch_norm(
1922 input, weight, bias, running_mean, running_var,
-> 1923 training, momentum, eps, torch.backends.cudnn.enabled
1924 )
1925
RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::native_batch_norm' is only available for these backends: [CPUTensorId, CUDATensorId, MkldnnCPUTensorId, VariableTensorId].
Thanks |
st184765 | Solved by hx89 in post #2
I think you need to fuse BN since there’s no quantized BN layer. |
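For a torchvision-style ResNet18 (module names assumed to follow that layout), the fusion pass before prepare/convert might look roughly like this:
import torch
from torchvision.models.resnet import BasicBlock

model.eval()
# top-level conv1 -> bn1 -> relu is only used once, so the triple fuse is safe here
torch.quantization.fuse_modules(model, [['conv1', 'bn1', 'relu']], inplace=True)
for m in model.modules():
    if isinstance(m, BasicBlock):
        # fuse conv+bn pairs inside each block (the block's relu is reused, so leave it out)
        torch.quantization.fuse_modules(m, [['conv1', 'bn1'], ['conv2', 'bn2']], inplace=True)
        if m.downsample is not None:
            torch.quantization.fuse_modules(m.downsample, [['0', '1']], inplace=True)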
st184766 | Thanks very much - will try to fuse! The docs implied it was more to boost accuracy vs a requirement, but it makes sense that it won’t otherwise quantize itself, so to speak. |
st184767 | There is no convolutional layer in front of this BatchNorm; I want to know how to merge the BN layer in that case. Thanks!
class MobileFaceNet(Module):
    def __init__(self, embedding_size):
        super(MobileFaceNet, self).__init__()
        self.conv1 = Conv_Block(3, 64, kernel=3, stride=2, padding=1)
        self.conv2 = Conv_Block(64, 64, kernel=3, stride=1, padding=1, groups=64)
        self.conv3 = InvertedResidual(64, 64, kernel=3, stride=2, padding=1, groups=128)
        self.conv4 = MakeBlocks(64, num_block=4, kernel=3, stride=1, padding=1, groups=128)
        self.conv5 = InvertedResidual(64, 128, kernel=3, stride=2, padding=1, groups=256)
        self.conv6 = MakeBlocks(128, num_block=6, kernel=3, stride=1, padding=1, groups=256)
        self.conv7 = InvertedResidual(128, 128, kernel=3, stride=2, padding=1, groups=512)
        self.conv8 = MakeBlocks(128, num_block=2, kernel=3, stride=1, padding=1, groups=256)
        self.conv9 = Conv_Block(128, 512, kernel=1, stride=1, padding=0)
        self.conv10 = Conv_Block(512, 512, kernel=7, stride=1, padding=0, groups=512, is_linear=True)
        self.ft = Flatten()
        self.ln = Linear(512, embedding_size, bias=False)
        self.bn = BatchNorm1d(embedding_size)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()
        for m in self.modules():
            if isinstance(m, Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
    def forward(self, x):
        x = self.quant(x)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ft(x)
        x = self.ln(x)
        x = self.bn(x)
        x = self.dequant(x)
        return x
    def fuse_model(self):
        for m in self.modules():
            if type(m) == Conv_Block:
                m.fuse_model() |
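Since the BatchNorm1d with no conv in front of it sits right after a Linear (as bn does after ln here), one option is to fold it into the Linear by hand before quantization. A small sketch of that folding, with a made-up helper name:
import torch

def fold_bn1d_into_linear(linear, bn):
    # BN(Wx + b) = scale * (Wx + b - mean) + beta, with scale = gamma / sqrt(var + eps)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    linear.weight.data.mul_(scale.unsqueeze(1))
    old_bias = linear.bias.data if linear.bias is not None else torch.zeros_like(bn.running_mean)
    new_bias = (old_bias - bn.running_mean) * scale + bn.bias
    if linear.bias is None:
        linear.bias = torch.nn.Parameter(new_bias)
    else:
        linear.bias.data.copy_(new_bias)
    return torch.nn.Identity()  # put this in place of the folded BatchNorm1d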
st184768 | Given that it usually isn’t practical to feed a deep learning model all of the training data at once (especially if the training set has 1000+ samples), due to the available RAM and other factors, I really want to know the effect of running backpropagation once per epoch, after feeding all batches of data to the model, versus running backpropagation after every batch.
Thanks to anyone who answers this. |
st184769 | Generally larger batches will provide a more accurate estimate of the gradient, while smaller batches will introduce more noise (which is often seen as beneficial up to a certain degree).
Chapter 8.1.3 - DeepLearningBook 1 discusses these effects in more detail. |
st184770 | Thank you for your reply, but I just wanted to know if backward propagation should be carried out after each batch within one epoch or after the entire batch has been fed the model (per epoch) |
st184771 | Usually, you would update the model after each batch, but that’s not a hard rule as explained.
Depending on your use case, you might want to accumulate the gradients and update the model after a couple of batches (or all of them). |
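A minimal sketch of that accumulation pattern (accum_steps and the loader/criterion names are placeholders):
accum_steps = 4  # effective batch = 4 * loader batch size
optimizer.zero_grad()
for i, (inputs, targets) in enumerate(train_loader):
    loss = criterion(model(inputs), targets) / accum_steps
    loss.backward()                      # gradients keep accumulating
    if (i + 1) % accum_steps == 0:
        optimizer.step()                 # one update per accum_steps batches
        optimizer.zero_grad()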
st184772 | Hello, what are the different ways I can build prediction models, define objective functions, and run optimization using PyTorch? Please suggest. |
st184773 | I’m not sure I understand the question correctly, but it seems you are looking for the complete support of PyTorch methods?
If that’s the case, the docs and tutorials 1 might be a good starter. |
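As a bare-bones starting point (sizes and the data loader are made up), the usual recipe is an nn.Module for the prediction model, a loss from torch.nn as the objective function, and an optimizer from torch.optim:
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()                                   # objective function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimization

for x, y in data_loader:   # your torch.utils.data.DataLoader
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()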
st184774 | Hello everyone.
This is a follow-up question concerning this page. The issue is that in the ResNet model I’m dealing with, I can’t replace PReLU with ReLU, as doing so drastically affects the network performance.
So my question is, what are my options here? what should I be doing in this case?
Would doing sth like this suffice?
class PReLU_Quantized(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.weight = prelu_object.weight
self.quantized_op = nn.quantized.FloatFunctional()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, inputs):
# inputs = torch.max(0, inputs) + self.weight * torch.min(0, inputs)
self.weight = self.quant(self.weight)
weight_min_res = self.quantized_op.mul(self.weight, torch.min(inputs)[0])
inputs = self.quantized_op.add(torch.max(inputs)[0], weight_min_res).unsqueeze(0)
self.weight = self.dequant(self.weight)
return inputs
and for the replacement :
class model(nn.Module):
def __init__(self):
super().__init__()
....
self.prelu = PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
....
Thanks a lot in advance |
st184775 | Solved by Shisho_Sama in post #4
Thanks to dear God its done!
Here is the final solution!:
class PReLU_2(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
def forward(self, inputs):
pos = torch.relu(inpu… |
st184776 | for some reason, the error between the actual PReLU and my implementation is very large!
here are sample diffs in different layers:
diff : 1.1562038660049438
diff : 0.02868632599711418
diff : 0.3653906583786011
diff : 1.6100226640701294
diff : 0.8999372720718384
diff : 0.03773299604654312
diff : -0.5090572834014893
diff : 0.1654307246208191
diff : 1.161868691444397
diff : 0.026089997962117195
diff : 0.4205571115016937
diff : 1.5337920188903809
diff : 0.8799554705619812
diff : 0.03827812895178795
diff : -0.40296515822410583
diff : 0.15618863701820374
and the diff is calculated like this in the forward pass:
def forward(self, x):
residual = x
out = self.bn0(x)
out = self.conv1(out)
out = self.bn1(out)
out = self.prelu(out)
out2 = self.prelu2(out)
print(f'diff : {( out - out2).mean().item()}')
out = self.conv2(out)
This is the normal implementation which I used on ordinary model (i.e. not quantized!) to assess whether it produces correct result and then move on to quantized version:
class PReLU_2(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
def forward(self, inputs):
x = self.weight
tmin, _ = torch.min(inputs,dim=0)
tmax, _ = torch.max(inputs,dim=0)
weight_min_res = torch.mul(x, tmin)
inputs = torch.add(tmax, weight_min_res)
inputs = inputs.unsqueeze(0)
return inputs
what am I missing here? |
st184777 | OK, I figured it out! I made a huge mistake at the very beginning. I needed to calculate
PReLU(x) = max(0, x) + a * min(0, x)
and not the actual min or max of the whole tensor, which doesn’t make sense!
now, can anyone do me a favor and tell me how I can vectorize this ? I’m kind of lost at the moment! |
st184778 | Thanks to dear God its done!
Here is the final solution!:
class PReLU_2(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
def forward(self, inputs):
pos = torch.relu(inputs)
neg = -self.weight * torch.relu(-inputs)
inputs = pos + neg
return inputs
and this is the quantized version:
class PReLU_Quantized(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
self.quantized_op = nn.quantized.FloatFunctional()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, inputs):
# inputs = max(0, inputs) + alpha * min(0, inputs)
self.weight = self.quant(self.weight)
weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
inputs = self.dequant(inputs)
self.weight = self.dequant(self.weight)
return inputs |
st184779 | Hello,
I’m kind of confused by the use of Quant/DeQuantStubs for Quantization Aware Training.
From my understanding only layers in between the Quant/DequantStubs are supposed to be quantised (is that correct?) but for my model when I place quantstubs around just the backbone:
x = self.quant0(x)
x = self.backbone0(x)
x = self.dequant(x)
confidence = self.classification_headers0(x)
and I look at the layers in classification_headers before and after preparation:
print(model.classification_headers0)
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)
print(model.classification_headers0)
I get
Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
(1): ReLU()
(2): Conv2d(64, 6, kernel_size=(1, 1), stride=(1, 1))
)
Sequential(
(0): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64
(activation_post_process): FakeQuantize(
fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAverageMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
(weight_fake_quant): FakeQuantize(
fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
)
(1): ReLU(
(activation_post_process): FakeQuantize(
fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAverageMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
)
(2): Conv2d(
64, 6, kernel_size=(1, 1), stride=(1, 1)
(activation_post_process): FakeQuantize(
fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAverageMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
(weight_fake_quant): FakeQuantize(
fake_quant_enabled=tensor([1], dtype=torch.uint8), observer_enabled=tensor([1], dtype=torch.uint8), scale=tensor([1.]), zero_point=tensor([0])
(activation_post_process): MovingAveragePerChannelMinMaxObserver(min_val=tensor([]), max_val=tensor([]))
)
)
)
Why are the layers in classification_headers0 prepared for quantisation too? |
st184780 | Solved by Vasiliy_Kuznetsov in post #2
Which layers to quantize is controlled by the qconfig, when we do model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') it applies the defaults settings of which modules to swap to the entire model. |
st184781 | kekpirat:
From my understanding only layers in between the Quant/DequantStubs are supposed to be quantised (is that correct?)
Which layers to quantize is controlled by the qconfig, when we do model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') it applies the defaults settings of which modules to swap to the entire model. |
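A hedged illustration of that, using the module name from the question: to keep classification_headers0 in floating point, clear its qconfig after setting the global one and before prepare_qat.
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model.classification_headers0.qconfig = None   # children of this head keep float modules
torch.quantization.prepare_qat(model, inplace=True)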
st184782 | the mapping argument to prepare_qat (https://github.com/pytorch/pytorch/blob/733b8c23c436d906125c20f0a64692bf57bce040/torch/quantization/quantize.py#L289 10) can be used to customize which layer’s you’d like to quantize |
st184783 | I have a quantized model and I want to load it in PyTorch, but I am not able to do it.
After quantization the model definition changes, because the BatchNorm layers get fused.
But when I load the model, I only have the previous definition, which does not contain the fused layers, although the other layers (like the quant and dequant stubs) are there.
Is there a way to load a quantized model in PyTorch? |
st184784 | Solved by Giang_Dang in post #6
Hi mohit7,
Make sure you create the net using previous definition, and let the net go through process that was applied during quantization before (prepare_model, fuse_model, and convert), without rerun the calibration process.
After that you can load the quantized state_dict in. Hope it helps. |
st184785 | Hi Mohit,
Can you provide more details/code? You can load/save quantized models by saving a state_dict(). When you perform fusion, make sure you set inplace=True. |
st184786 | Hey @raghuramank100, I have saved the model correctly, but to use it in PyTorch we must know the model definition before we can load the state_dict from the saved file.
What I have is the definition of the model without the layer fusion, and that is where the definition changes, so I can’t load the model.
So do I need to change the model definition according to the fused layers? |
st184787 | I think the expectation is to have the original model and go through the whole eager mode quantization flow again, and then load from the saved state_dict. |
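A minimal sketch of that flow (the class and file names are placeholders): rebuild the float model, repeat the exact fuse/prepare/convert steps so the module definitions match the quantized checkpoint, then load the state_dict.
model = MyModel()                      # original float definition
model.eval()
model.fuse_model()                     # same fusion as before saving
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)   # no calibration needed just to load
model.load_state_dict(torch.load('quantized_checkpoint.pth'))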
st184788 | Hi mohit7,
Make sure you create the net using previous definition, and let the net go through process that was applied during quantization before (prepare_model, fuse_model, and convert), without rerun the calibration process.
After that you can load the quantized state_dict in. Hope it helps. |
st184789 | When I modify the amount of calibration data, the accuracy of the int8-quantized model actually ends up higher than the original model accuracy.
def get_imagenet(dataset_dir=’…/dataset/CIFAR10’, batch_size=32):
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_transform = transforms.Compose([
# transforms.Resize(256),
transforms.RandomResizedCrop(32),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
])
test_transform = transforms.Compose([
transforms.ToTensor(),
normalize,
])
train_dataset = datasets.CIFAR10(root=dataset_dir, train=True, transform=train_transform, download=True)
test_dataset = datasets.CIFAR10(root=dataset_dir, train=False, transform=test_transform, download=True)
trainloader = DataLoader(train_dataset, batch_size=batch_size, num_workers=NUM_WORKERS,
pin_memory=True, shuffle=False)
testloader = DataLoader(test_dataset, batch_size=batch_size, num_workers=NUM_WORKERS,
pin_memory=True, shuffle=False)
return trainloader, testloader
class quantizeModel(object):
def __init__(self):
super(quantizeModel, self).__init__()
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self.train_loader, self.test_loader = get_imagenet()
self.quant()
def quant(self):
model = self.load_model()
model.eval()
self.print_size_of_model(model)
self.validate(model, "original_resnet18", self.test_loader)
model.fuse_model()
self.print_size_of_model(model)
self.quantize(model)
def load_model(self):
model = resnet18()
state_dict = torch.load("CIFAR10_resnet18.pth", map_location=self.device)
model.load_state_dict(state_dict)
model.to(self.device)
return model
def print_size_of_model(self, model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p") / 1e6)
os.remove('temp.p')
def validate(self, model, name, data_loader):
with torch.no_grad():
correct = 0
total = 0
acc = 0
for data in data_loader:
images, labels = data
images, labels = images.to(self.device), labels.to(self.device)
output = model(images)
_, predicted = torch.max(output, dim=1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
if total == 1024: #calibration data
break
acc = round(100 * correct / total, 3)
print('{{"metric": "{}_val_accuracy", "value": {}%}}'.format(name, acc))
return acc
def quantize(self, model):
#model.qconfig = torch.quantization.default_qconfig
#model.qconfig = torch.quantization.default_per_channel_qconfig
model.qconfig = torch.quantization.QConfig(
activation=torch.quantization.observer.MinMaxObserver.with_args(reduce_range=True),
weight=torch.quantization.observer.PerChannelMinMaxObserver.with_args(dtype=torch.qint8,
qscheme=torch.per_channel_affine))
pmodel = torch.quantization.prepare(model)
#calibration
self.validate(pmodel, "quntize_per_channel_resent18_train", self.train_loader)
qmodel = torch.quantization.convert(pmodel)
self.validate(qmodel, "quntize_per_chaannel_resent18_test", self.test_loader)
self.print_size_of_model(qmodel)
torch.jit.save(torch.jit.script(qmodel), "quantization_per_channel_model18.pth")
Original model accuracy:71.76%
First quantification:batch_size:32 calibration data: 2048
Quantified model accuracy:71.51%
Second quantification:batch_size:32 calibration data: 1024
Quantified model accuracy:71.85%
Why the accuracy becomes higher after quantization?
In addition, I think that the total number of calibration data remains unchanged, the maximum and minimum range of activation should be fixed, and the quantization accuracy should also be fixed. However, it is found that the total number of calibration data remains unchanged, and if batch_size is modified, the accuracy after quantization will change. What is the reason? |
st184790 | blueskywwc:
However, it is found that the total number of calibration data remains unchanged, and if batch_size is modified, the accuracy after quantization will change. What is the reason?
Is there any randomness in which specific dataset slice is getting used for calibration? Can you reproduce the accuracy changes if you set torch.manual_seed(0)? |
st184791 | torch.manual_seed(191009)
train_dataset = datasets.CIFAR10(root=dataset_dir, train=True, transform=train_transform, download=True)
test_dataset = datasets.CIFAR10(root=dataset_dir, train=False, transform=test_transform, download=True)
trainloader = DataLoader(train_dataset, batch_size=batch_size, num_workers=NUM_WORKERS,
pin_memory=True, shuffle=False)
testloader = DataLoader(test_dataset, batch_size=batch_size, num_workers=NUM_WORKERS,
pin_memory=True, shuffle=False)
if batch_size is modified, the accuracy after quantization will change,no modification, the accuracy rate will not change.
Why the accuracy becomes higher after quantization? |
st184792 | blueskywwc:
Why the accuracy becomes higher after quantization?
We don’t expect accuracy to increase due to quantization, this is likely random variation. To test this theory, you could run evaluation on various slices of data unseen in training. I would expect the mean difference of accuracy on a set of slices would be a slight drop for the quantized model compared to the floating point model. |
st184793 | blueskywwc:
if batch_size is modified, the accuracy after quantization will change,no modification, the accuracy rate will not change.
If MinMax observers are used, we do not expect the ordering or batch size of the calibration data to matter, as long as the same dataset gets seen via calibration. One thing to look into would be whether the way the evaluation score is measured depends on batch size, and if you are feeding exactly the same set of images through. |
st184794 | Thank you very much for your reply
model.qconfig = torch.quantization.QConfig(
activation=torch.quantization.observer.MinMaxObserver.with_args(dtype=torch.quint8,
qscheme=torch.per_channel_affine,
reduce_range=True),
weight=torch.quantization.observer.PerChannelMinMaxObserver.with_args(dtype=torch.qint8,
qscheme=torch.per_channel_affine,
reduce_range=False))
The total number of calibration data sets remains unchanged, and different batch_sizes (8, 16, 32, 64) are tested, and the quantized accuracy rate will still slightly fluctuate.
batch_size has little effect on the quantization result, I can choose a group with the highest accuracy as the final quantization result. |
st184795 | Does pytorch support multi-GPU in quantization awareness training?
In this script https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py#L73 31, it seems that it has the logic of multi-GPU. |
st184796 | Hi @robotcator123,
Multi gpu training is orthogonal to quantization aware training. Code written with Pytorch’s quantization aware training modules will work whether you are using a single gpu or using Data parallel on multiple gpus. Hope this helps! |
st184797 | Hi, @Mazhar_Shaikh,
Actually, I train the mobilenet using this command in cluster.
python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --data-path=./imagenet_1k , it seems that the code works fine.
But I change the training script to
python -m torch.distributed.launch --nproc_per_node=8 --use_env train_quantization.py --data-path=./imagenet_1k
it will raise error like this after print lost of unknown data:
`Namespace(backend=‘qnnpack’, batch_size=32, cache_dataset=False, data_path=’~/test/imagenet_1k’, device=‘cuda’, dist_backend=‘nccl’, dist_url=‘env://’, distributed=True, epochs=90, eval_batch_size=128, gpu=0, lr=0.0001, lr_gamma=0.1, lr_step_size=30, model=‘mobilenet_v2’, momentum=0.9, num_batch_norm_update_epochs=3, num_calibration_batches=32, num_observer_update_epochs=4, output_dir=’.’, post_training_quantize=False, print_freq=10, rank=0, resume=’’, start_epoch=0, test_only=False, weight_decay=0.0001, workers=16, world_size=8)
Loading data
Loading data
Loading training data
Took 0.27007627487182617
Loading validation data
Creating data loaders
Creating model mobilenet_v2
Traceback (most recent call last):
File “train_quantization.py”, line 258, in
main(args)
File “train_quantization.py”, line 77, in main
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
File “xxx/.conda/envs/pytorch1.3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py”, line 298, in init
self.broadcast_bucket_size)
File “xxx/.conda/envs/pytorch1.3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py”, line 480, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
TypeError: _broadcast_coalesced(): incompatible function arguments. The following argument types are supported:
1. (process_group: torch.distributed.ProcessGroup, tensors: List[at::Tensor], buffer_size: int) -> None
Invoked with: <torch.distributed.ProcessGroupNCCL object at 0x7f943f78dd18>, [tensor([[[[ 1.3185e-02, -4.3213e-03, 1.4823e-02],
…
…
…
subprocess.CalledProcessError: Command ‘[’/xxxx/pytorch1.3/bin/python’, ‘-u’, ‘train_quantization.py’, ‘–data-path=./imagenet_1k’]’ returned non-zero exit status 1.`
sorry for hiding some personal information. |
st184798 | It seems that broadcast cannot support None tensors; does anybody know about this problem? |
st184799 | Hi @robotcator123, If you believe broadcast of None doesn’t work as expected, please open an issue against PyTorch with a minimal reproducible example. |