st184400 | I discovered this issue while trying to understand why my quantized version of EfficientNet-b4 runs slower than the float one.
I use torch 1.6.0 with the default fbgemm config (torch.quantization.get_default_qconfig('fbgemm')).
The torchprof library shows that the _depthwise_conv layers in blocks 2, 6, 10 and 22 are the key problem: they are 3-10 times slower than in the float model!
These layers differ from the others only in having stride (2, 2) instead of (1, 1); the kernel sizes, (3, 3) and (5, 5), are the same as elsewhere. I do not show the full list of hassle-free layers in the screenshot, but I have checked them.
Is this the expected behavior? What could be the reason?
[screenshot: stride_issue, 1201×1130, 143 KB] |
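(A minimal sketch, not from the original post, of the kind of per-op CPU profiling that surfaces such a regression, using torch.autograd.profiler rather than torchprof. The stand-in model, channel count and input size below are arbitrary placeholders; in practice you would profile the real float EfficientNet-b4 and its quantized copy the same way.)
import copy
import torch
import torch.nn as nn
import torch.autograd.profiler as profiler

# Stand-in: a single depthwise conv between quant/dequant stubs.
float_model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(64, 64, kernel_size=5, stride=2, padding=0, groups=64, bias=False),
    torch.quantization.DeQuantStub(),
).eval()

x = torch.randn(1, 64, 56, 56)

# Statically quantize a copy of the float model.
quant_model = copy.deepcopy(float_model)
quant_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(quant_model, inplace=True)
quant_model(x)                                   # one calibration pass
torch.quantization.convert(quant_model, inplace=True)

for name, m in [('float', float_model), ('quantized', quant_model)]:
    with profiler.profile(record_shapes=True) as prof:
        with torch.no_grad():
            m(x)
    print(name)
    # Ops sorted by CPU time; slow depthwise convs show up near the top.
    print(prof.key_averages().table(sort_by='self_cpu_time_total'))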
st184401 | Solved by dskhudia in post #4
It works much better if you use equal padding for your depthwise convs in blocks 2, 6, 10 and 22. For equal padding, the depthwise conv goes through a fast path.
Equal padding for a (3, 3) kernel would be (1, 1), and for (5, 5) it would be (2, 2). (Similar to the padding you have for the (3, 3) and (5, 5) kernel sizes for other … |
st184402 | How do you do the quantization: manually or through graph quantization?
Does graph quantization give you the same issues? |
st184403 | It works much better if you use equal padding for your depthwise convs in blocks 2, 6, 10 and 22. For equal padding, the depthwise conv goes through a fast path.
Equal padding for a (3, 3) kernel would be (1, 1), and for (5, 5) it would be (2, 2). (Similar to the padding you have for the (3, 3) and (5, 5) kernel sizes for the other convolutions.) |
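To make the fast-path condition concrete, here is a minimal benchmark sketch (not from the thread; channel count, input size and iteration count are arbitrary choices): two quantized 5×5 depthwise convs with stride (2, 2), one with padding=0 (padding done outside the conv, as EfficientNet's same-padding wrapper effectively does) and one with the equal padding (2, 2) recommended above.
import time
import torch
import torch.nn as nn

class DWBlock(nn.Module):
    """A single depthwise conv wrapped with quant/dequant stubs for static quantization."""
    def __init__(self, padding):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        # groups == channels -> depthwise; channel count and input size are arbitrary here
        self.dw = nn.Conv2d(128, 128, kernel_size=5, stride=2,
                            padding=padding, groups=128, bias=False)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.dw(self.quant(x)))

def quantize(m, x):
    m.eval()
    m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    torch.quantization.prepare(m, inplace=True)
    m(x)                                        # one calibration pass
    torch.quantization.convert(m, inplace=True)
    return m

def bench(m, x, iters=50):
    with torch.no_grad():
        for _ in range(5):                      # warm-up
            m(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - t0) / iters

x = torch.randn(1, 128, 56, 56)
zero_pad = quantize(DWBlock(padding=0), x)      # padding handled outside the conv
equal_pad = quantize(DWBlock(padding=2), x)     # "equal" padding for a (5, 5) kernel
print('padding=0 per call:', bench(zero_pad, x))
print('padding=2 per call:', bench(equal_pad, x))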
st184404 | Unfortunately, I do not understand what you mean by “manually or through graph quantization”.
If you mean whether I profile the TorchScript model or the original Python one: I have done both, and both show bad performance. Layer-by-layer profiling via torchprof does not work with TorchScript models, but I expect the “slow” layers to stay slow in both the original model and the TorchScript one, even if timing the original model is not as precise.
Quantization is done the same way as in the MobileNet tutorial.
Here is the code snippet:
def try_config(qconfig, one_thread_inference=True, calibration_max_batches=None, metrics_max_batches=None):
# Fuse modules (my own implementation for EfficientNet; do you need it?)
q_model = fuse_modules()
# apply config (this looks complex because I do not quantize _conv_stem (the 1st convolution))
for block in q_model.feature_extractor._blocks:
block.qconfig = qconfig
q_model.quant = QuantStub(qconfig)
print(qconfig)
torch.quantization.prepare(q_model, inplace=True,
white_list=(
torch.nn.Conv2d,
torch.nn.BatchNorm2d,
torch.quantization.stubs.QuantStub,
torch.nn.quantized.modules.functional_modules.FloatFunctional
))
q_model.eval()
print('Post Training Quantization Prepare: Inserting Observers')
if one_thread_inference:
torch.set_num_threads(multiprocessing.cpu_count())
inference(q_model, dev_loader, max_batches=calibration_max_batches) # custom func just for model inference
print('Post Training Quantization: Calibration done')
# Convert to quantized model
torch.quantization.convert(q_model, inplace=True)
print('Post Training Quantization: Convert done') |
st184405 | I have a quantized model which is basically a ResNet18. The quantization seems to go just fine until I try to load the quantized model from disk using something like this:
def load_quantized(quantized_checkpoint_file_path):
model = fvmodels.resnet18(pretrained=False, use_se=True)
model.eval()
model.fuse_model()
# print(f'model: {model}')
# Specify quantization configuration
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
checkpoint = torch.load(quantized_checkpoint_file_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint, strict=False)
# model = torch.jit.load(quantized_checkpoint_file_path, map_location=torch.device('cpu'))
fvmodels.print_size_of_model(model)
return model
and while trying to use it:
model = load_quantized('path to model')
model.eval()
with torch.no_grad():
for img, lbl in dtloader:
features = model(img.unsqueeze(0))
I face the following error:
RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::native_batch_norm' is only available for these backends: [CPUTensorId, MkldnnCPUTensorId, VariableTensorId].
This seems to be caused by the fact that the batchnorm layer is not fused, and the issue is that I don't know how to fuse it. To be more specific, here is the ResNet model I have at hand:
class ResNet(nn.Module):
def __init__(self, block, layers, use_se=True):
self.inplanes = 64
self.use_se = use_se
super().__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
# self.prelu = nn.PReLU()
self.prelu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.bn2 = nn.BatchNorm2d(512)
self.dropout = nn.Dropout()
self.fc = nn.Linear(512 * 7 * 7, 512)
self.bn3 = nn.BatchNorm1d(512)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))
self.inplanes = planes
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, use_se=self.use_se))
return nn.Sequential(*layers)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.prelu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2(x)
x = self.dropout(x)
# x = x.view(x.size(0), -1)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
x = self.bn3(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, [['conv1', 'bn1', 'prelu'],
['bn2'],
['bn3']], inplace=True)
for m in self.modules():
# print(m)
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
As you can see, in the forward pass we have:
...
x = self.bn2(x)
x = self.dropout(x)
which is followed by a dropout and, unlike the previous ones, comes with neither a conv nor a relu!
The same goes for bn3 a couple of lines later:
...
x = self.fc(x)
x = self.bn3(x)
x = self.dequant(x)
....
So I'm not sure how I'm supposed to get around this; obviously the way I'm fusing is wrong:
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'prelu'],
['bn2'],
['bn3']], inplace=True)
for m in self.modules():
# print(m)
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
For the sake of completeness, here are the whole models:
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super().__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.add_relu = torch.nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.add_relu.add_relu(out, residual)
return out
def fuse_model(self):
torch.quantization.fuse_modules(self, [['conv1', 'bn1', 'relu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu1 = nn.ReLU(inplace=False)
self.relu2 = nn.ReLU(inplace=False)
self.downsample = downsample
self.stride = stride
self.skip_add_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.skip_add_relu.add_relu(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'relu1'],
['conv2', 'bn2', 'relu2'],
['conv3', 'bn3']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction),
# nn.PReLU(),
nn.ReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid()
)
def forward(self, x):
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
y = self.fc(y).view(b, c, 1, 1)
# out = x*y
out = self.mult_xy.mul(x, y)
return out
class IRBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):
super().__init__()
self.bn0 = nn.BatchNorm2d(inplanes)
self.conv1 = conv3x3(inplanes, inplanes)
self.bn1 = nn.BatchNorm2d(inplanes)
# self.prelu = nn.PReLU()
self.prelu = nn.ReLU()
self.conv2 = conv3x3(inplanes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.use_se = use_se
if self.use_se:
self.se = SEBlock(planes)
self.add_residual_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.bn0(x)
out = self.conv1(out)
out = self.bn1(out)
out = self.prelu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.use_se:
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.prelu(out)
# we may need to change prelu into relu and this, instead of add, use add_relu here
out = self.add_residual_relu.add_relu(out, residual)
# out = self.prelu(out)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'prelu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class ResNet(nn.Module):
def __init__(self, block, layers, use_se=True):
self.inplanes = 64
self.use_se = use_se
super().__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
# self.prelu = nn.PReLU()
self.prelu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.bn2 = nn.BatchNorm2d(512)
self.dropout = nn.Dropout()
self.fc = nn.Linear(512 * 7 * 7, 512)
self.bn3 = nn.BatchNorm1d(512)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))
self.inplanes = planes
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, use_se=self.use_se))
return nn.Sequential(*layers)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.prelu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2(x)
x = self.dropout(x)
# x = x.view(x.size(0), -1)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
x = self.bn3(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, [['conv1', 'bn1', 'prelu'],
['bn2'],
['bn3']], inplace=True)
for m in self.modules():
# print(m)
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
def resnet18(pretrained, use_se, **kwargs):
model = ResNet(IRBlock, [2, 2, 2, 2], use_se=use_se, **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
return model
Side note:
Also, the actual model (ResNet18) can be found from this link, in case someone might need it.
Config:
I'm using PyTorch 1.5.0+cpu on Windows 10 x64 v1803.
Any help is greatly appreciated |
st184406 | fuse_modules(self, [['conv1', 'bn1', 'prelu'],
['bn2'],
['bn3']], inplace=True)
change into:
fuse_modules(self, ['conv1', 'bn1', 'prelu'], inplace=True) |
st184407 | Thanks, but that was my initial attempt, and it results in the mentioned error as well.
Traceback (most recent call last):
File "d:\Codes\org\python\Quantization\quantizer.py", line 265, in <module>
features = model(img.unsqueeze(0))
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 418, in forward
x = self.bn3(x)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\batchnorm.py", line 106, in forward
exponential_average_factor, self.eps)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\functional.py", line 1923, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::native_batch_norm' is only available for these backends: [CPUTensorId, MkldnnCPUTensorId, VariableTensorId]. |
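One possible way around this particular error, sketched below under the assumption that the stand-alone bn3 is simply kept as a float module rather than quantized (the Head class and shapes are illustrative, following the model quoted earlier): move the dequant in front of bn3 so the float BatchNorm1d receives a float tensor, and exclude bn3 from quantization.
import torch
import torch.nn as nn

class Head(nn.Module):
    """Sketch of the tail of the model above: fc -> bn3, with dequant moved before bn3."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(512 * 7 * 7, 512)
        self.bn3 = nn.BatchNorm1d(512)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        x = self.dequant(x)   # back to float *before* the un-fused BatchNorm1d
        x = self.bn3(x)       # float batch norm now receives a float tensor
        return x

m = Head().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
m.bn3.qconfig = None          # keep bn3 as a float module
torch.quantization.prepare(m, inplace=True)
m(torch.randn(8, 512 * 7 * 7))                 # calibration
torch.quantization.convert(m, inplace=True)
out = m(torch.randn(8, 512 * 7 * 7))           # no 'aten::native_batch_norm' backend error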
st184408 | Can you try printing the quantized model after prepare and convert? We do support quantized batch_norm, so the nn.BatchNorm2d module should get replaced with a quantized one. |
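For reference, a minimal sketch of that check (the model and bn2 attribute names assume the ResNet posted above, and the calibration step is elided):
# Inspect the model at both stages and spot-check a stand-alone batch norm.
prepared = torch.quantization.prepare(model, inplace=False)
# ... run a few calibration batches through `prepared` here ...
converted = torch.quantization.convert(prepared, inplace=False)
print(prepared)               # observers should appear after each quantizable module
print(converted)              # quantized modules should appear here
print(type(converted.bn2))    # expected: torch.nn.quantized.BatchNorm2d if it was converted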
st184409 | Hi, here it is:
This was run using the latest nightly 1.7.0.dev20200714+cpu and torchvision-0.8.0.dev20200714+cpu.
Size (MB): 87.218199
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
Model after being fused-prepared: ResNet(
(conv1): Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(fc_q): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
(1): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(fc_q): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
)
(layer2): Sequential(
(0): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(conv2): Conv2d(
64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(downsample): Sequential(
(0): Conv2d(
64, 128, kernel_size=(1, 1), stride=(2, 2)
(activation_post_process): HistogramObserver()
)
(1): Identity()
)
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(fc_q): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
(1): IRBlock(
(bn0): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(fc_q): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
)
(layer3): Sequential(
(0): IRBlock(
(bn0): BatchNorm2d(
128, eps=1e-05, momentum=0.
and, for the sake of completeness, here are the whole modules used:
class PReLU_Quantized(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
self.quantized_op = nn.quantized.FloatFunctional()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, inputs):
# inputs = max(0, inputs) + alpha * min(0, inputs)
# this is how we do it
# pos = torch.relu(inputs)
# neg = -alpha * torch.relu(-inputs)
# res3 = pos + neg
self.weight = self.quant(self.weight)
weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
inputs = self.dequant(inputs)
self.weight = self.dequant(self.weight)
return inputs
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super().__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.add_relu = torch.nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.add_relu.add_relu(out, residual)
return out
def fuse_model(self):
torch.quantization.fuse_modules(self, [['conv1', 'bn1', 'relu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu1 = nn.ReLU(inplace=False)
self.relu2 = nn.ReLU(inplace=False)
self.downsample = downsample
self.stride = stride
self.skip_add_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.skip_add_relu.add_relu(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'relu1'],
['conv2', 'bn2', 'relu2'],
['conv3', 'bn3']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(
nn.Linear(channel, channel // reduction),
nn.PReLU(),
# nn.ReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid()
)
self.fc1 = self.fc[0]
self.prelu = self.fc[1]
self.fc2 = self.fc[2]
self.sigmoid = self.fc[3]
self.prelu_q = PReLU_Quantized(self.prelu)
def forward(self, x):
print(f'<inside se forward:>')
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
# y = self.fc(y).view(b, c, 1, 1)
y = self.fc1(y)
print(f'X: {y}')
y = self.prelu_q(y)
y = self.fc2(y)
y = self.sigmoid(y).view(b, c, 1, 1)
print('--------------------------')
# out = x*y
out = self.mult_xy.mul(x, y)
return out
class IRBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):
super().__init__()
self.bn0 = nn.BatchNorm2d(inplanes)
self.conv1 = conv3x3(inplanes, inplanes)
self.bn1 = nn.BatchNorm2d(inplanes)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
# self.prelu = nn.ReLU()
self.conv2 = conv3x3(inplanes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.use_se = use_se
if self.use_se:
self.se = SEBlock(planes)
self.add_residual_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.bn0(x)
out = self.conv1(out)
out = self.bn1(out)
# out = self.prelu(out)
out = self.prelu_q(out)
out = self.conv2(out)
out = self.bn2(out)
if self.use_se:
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.prelu(out)
# we may need to change prelu into relu and this, instead of add, use add_relu here
out = self.add_residual_relu.add_relu(out, residual)
# out = self.prelu(out)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1'],# 'prelu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class ResNet(nn.Module):
def __init__(self, block, layers, use_se=True):
self.inplanes = 64
self.use_se = use_se
super().__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
# self.prelu = nn.ReLU()
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.bn2 = nn.BatchNorm2d(512)
self.dropout = nn.Dropout()
self.fc = nn.Linear(512 * 7 * 7, 512)
self.bn3 = nn.BatchNorm1d(512)
# self.bn2_q = BatchNorm2d_Quantized(self.bn2)
# self.bn3_q = BatchNorm1d_Quantized(self.bn3)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))
self.inplanes = planes
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, use_se=self.use_se))
return nn.Sequential(*layers)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
# x = self.prelu(x)
x = self.prelu_q(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2(x)
# x = self.bn2_q(x)
x = self.dropout(x)
# x = x.view(x.size(0), -1)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
x = self.bn3(x)
# x = self.bn3_q(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, [['conv1', 'bn1'],# 'prelu'],
# ['bn2'], ['bn3']
], inplace=True)
for m in self.modules():
# print(m)
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
def resnet18(pretrained, use_se, **kwargs):
model = ResNet(IRBlock, [2, 2, 2, 2], use_se=use_se, **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
return model |
st184410 | Here is another sample; this is the model output when I removed all PReLUs and used ReLUs instead (in case they were a hindrance):
Size (MB): 87.205847
QConfig(activation=functools.partial(<class 'torch.quantization.observer.HistogramObserver'>, reduce_range=True), weight=functools.partial(<class 'torch.quantization.observer.PerChannelMinMaxObserver'>, dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
Model after quantization(converted-prepared): ResNet(
(conv1): ConvReLU2d(
(0): Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(reluooo): Identity()
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): ConvReLU2d(
(0): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(reluooo): Identity()
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
(1): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): ConvReLU2d(
(0): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(reluooo): Identity()
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=64, out_features=4, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=4, out_features=64, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
)
(layer2): Sequential(
(0): IRBlock(
(bn0): BatchNorm2d(
64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): ConvReLU2d(
(0): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(reluooo): Identity()
(conv2): Conv2d(
64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(downsample): Sequential(
(0): Conv2d(
64, 128, kernel_size=(1, 1), stride=(2, 2)
(activation_post_process): HistogramObserver()
)
(1): Identity()
)
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
(1): IRBlock(
(bn0): BatchNorm2d(
128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): HistogramObserver()
)
(conv1): ConvReLU2d(
(0): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
(bn1): Identity()
(prelu): PReLU(num_parameters=1)
(prelu_q): PReLU_Quantized(
(quantized_op): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(quant): QuantStub(
(activation_post_process): HistogramObserver()
)
(dequant): DeQuantStub()
)
(reluooo): Identity()
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
(activation_post_process): HistogramObserver()
)
(bn2): Identity()
(se): SEBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(mult_xy): FloatFunctional(
(activation_post_process): HistogramObserver()
)
(fc): Sequential(
(0): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(1): PReLU(num_parameters=1)
(2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(3): Sigmoid()
)
(fc1): Linear(
in_features=128, out_features=8, bias=True
(activation_post_process): HistogramObserver()
)
(prelu): PReLU(num_parameters=1)
(fc2): Linear(
in_features=8, out_features=128, bias=True
(activation_post_process): HistogramObserver()
)
(sigmoid): Sigmoid()
)
(add_residual_relu): FloatFunctional(
(activation_post_process): HistogramObserver()
)
)
)
(layer3): Sequential(
(0)
Post Training Quantization Prepare: Inserting Observers
Inverted Residual Block:After observer insertion
ConvReLU2d(
(0): Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
(1): ReLU(
(activation_post_process): HistogramObserver()
)
)
This is the error I get when using this model (above):
--------------------------
Traceback (most recent call last):
File "d:\Codes\org\python\Quantization\quantizer.py", line 270, in <module>
test_the_model(True)
File "d:\Codes\org\python\Quantization\quantizer.py", line 218, in test_the_model
check_and_tell(model, pic1, pic2)
File "d:\Codes\org\python\Quantization\quantizer.py", line 203, in check_and_tell
embd1 = model(img1.unsqueeze(0))
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "d:\codes\org\python\FV\quantized_models.py", line 599, in forward
x = self.bn3(x)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 726, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\modules\batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "C:\Users\User\Anaconda3\Lib\site-packages\torch\nn\functional.py", line 2039, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPU' backend. 'aten::native_batch_norm' is only available for these backends: [CPU, MkldnnCPU, BackendSelect, Named, Autograd, Profiler, Tracer, Autocast, Batched].
CPU: registered at aten\src\ATen\CPUType.cpp:1594 [kernel]
MkldnnCPU: registered at aten\src\ATen\MkldnnCPUType.cpp:139 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Autograd: registered at ..\torch\csrc\autograd\generated\VariableType_0.cpp:7879 [kernel]
Profiler: registered at ..\torch\csrc\autograd\generated\ProfiledType_0.cpp:2050 [kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_0.cpp:8256 [kernel]
Autocast: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:375 [backend fallback]
Batched: registered at ..\aten\src\ATen\BatchingRegistrations.cpp:149 [backend fallback] |
st184411 | I tried commenting out self.bn(x), and the code ran through. Is there any other solution to this problem? |
st184412 | Thanks, but that's not a solution for me: removing bn drastically affects the performance, and aside from that, @supriyar says PyTorch has a quantized version of BatchNorm in place and it should have been converted in the first place!
So it's not clear what is missing or what else needs to be done. |
st184413 | You can replace the Linear with a 1×1 convolutional layer and then merge the convolutional layer and the bn layer. I have tried this and it solved my problem. |
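A hedged sketch of that idea applied to the fc + bn3 pair in the model above (the FcAsConv name and shapes are illustrative only, and the trained fc/bn weights would still have to be copied into the new modules): reshape the flattened features to a 1×1 spatial map so the pair becomes Conv2d + BatchNorm2d, which fuse_modules can fold.
import torch
import torch.nn as nn
from torch.quantization import fuse_modules

class FcAsConv(nn.Module):
    """Linear(512*7*7, 512) + BatchNorm1d(512) re-expressed as 1x1 Conv2d + BatchNorm2d."""
    def __init__(self):
        super().__init__()
        self.conv_fc = nn.Conv2d(512 * 7 * 7, 512, kernel_size=1)
        self.bn = nn.BatchNorm2d(512)

    def forward(self, x):                      # x: (N, 512*7*7) flattened features
        x = x.reshape(x.size(0), -1, 1, 1)     # treat the feature vector as a 1x1 "image"
        x = self.bn(self.conv_fc(x))
        return x.flatten(1)                    # back to (N, 512)

    def fuse_model(self):
        # Conv2d + BatchNorm2d is a supported fusion pattern, unlike Linear + BatchNorm1d
        fuse_modules(self, [['conv_fc', 'bn']], inplace=True)

head = FcAsConv().eval()                       # conv+bn fusion requires eval mode
head.fuse_model()                              # bn folds into the 1x1 conv
print(head)                                    # bn shows up as Identity after fusion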
st184414 | Thanks, but the problem is that I have other instances of BN that are used alone! One instance is in IRBlock, where the first layer is a bn.
So I need to fix this properly. |
st184415 | Shisho_Sama:
(conv1): Conv2d(
3, 64, kernel_size=(3, 3), stride=(1, 1)
(activation_post_process): HistogramObserver()
)
Looking at the first conv of your model after convert, it doesn’t seem like it is actually quantized (it should be QuantizedConv), same for the subsequent modules. One thing to debug would be why the modules are not getting replaced.
Are you calling model.eval() before running convert? |
st184416 | Hi, thanks, but no, that model is not yet converted; what is printed up there is just the output after running fuse_model() and then torch.quantization.prepare. If I comment out the single bns (and also replace the PReLUs, to avoid the current issues), the final model does get quantized: its size goes from 88 MB down to 22 MB and you see QuantizedConv2d, etc. as well. |
st184417 | the error message looks like you are trying to pass a quantized input to BN, but BN is not quantized. So, you’d need to either fuse it to the preceding module, quantize it, or make sure the input is converted to floating point. Here is a toy example of expected behavior:
import torch
import torch.nn as nn
class M(nn.Module):
def __init__(self):
super(M, self).__init__()
self.quant = torch.quantization.QuantStub()
self.conv1 = nn.Conv2d(1, 1, 1)
self.bn1 = nn.BatchNorm2d(1)
self.conv2 = nn.Conv2d(1, 1, 1)
self.bn2 = nn.BatchNorm2d(1)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.conv2(x)
x = self.bn2(x)
return x
m = M()
m.qconfig = torch.quantization.default_qconfig
m.eval()
torch.quantization.fuse_modules(
m,
[
['conv1', 'bn1'], # fuse bn1 into conv1
# for example's sake, don't fuse conv2 and bn2
],
inplace=True)
torch.quantization.prepare(m, inplace=True)
# toy calibration
data = torch.randn(32, 1, 16, 16)
m(data)
torch.quantization.convert(m, inplace=True)
# self.bn1 was fused with conv1 earlier
# self.bn2 will be QuantizedBatchNorm2d
print(m) |
st184418 | Thanks a lot, good point. On a normal model this looks alright, but in the self-contained example I made, this doesn't apply.
Here, have a look:
Here is a self-contained example with ResNet18 and SimpleNetwork (a simple 2-layer CNN) using fake data to demonstrate the problem. You can change use_relu and disable_single_bns to see different results:
import os
from os.path import abspath, dirname, join
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
import torchvision.transforms as transforms
from torch.quantization import fuse_modules
use_relu = False
disable_single_bns = False
class PReLU_Quantized(nn.Module):
def __init__(self, prelu_object):
super().__init__()
self.prelu_weight = prelu_object.weight
self.weight = self.prelu_weight
self.quantized_op = nn.quantized.FloatFunctional()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, inputs):
# inputs = max(0, inputs) + alpha * min(0, inputs)
# this is how we do it
# pos = torch.relu(inputs)
# neg = -alpha * torch.relu(-inputs)
# res3 = pos + neg
self.weight = self.quant(self.weight)
weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
inputs = self.dequant(inputs)
self.weight = self.dequant(self.weight)
return inputs
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super().__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.add_relu = torch.nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.add_relu.add_relu(out, residual)
return out
def fuse_model(self):
torch.quantization.fuse_modules(self, [['conv1', 'bn1', 'relu'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * 4)
self.relu1 = nn.ReLU(inplace=False)
self.relu2 = nn.ReLU(inplace=False)
self.downsample = downsample
self.stride = stride
self.skip_add_relu = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.relu(out)
out = self.skip_add_relu.add_relu(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [['conv1', 'bn1', 'relu1'],
['conv2', 'bn2', 'relu2'],
['conv3', 'bn3']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class SEBlock(nn.Module):
def __init__(self, channel, reduction=16):
super().__init__()
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.mult_xy = nn.quantized.FloatFunctional()
self.fc = nn.Sequential(nn.Linear(channel, channel // reduction),
nn.PReLU(),
nn.Linear(channel // reduction, channel),
nn.Sigmoid())
self.fc1 = self.fc[0]
self.prelu = self.fc[1]
self.fc2 = self.fc[2]
self.sigmoid = self.fc[3]
self.prelu_q = PReLU_Quantized(self.prelu)
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
def forward(self, x):
# print(f'<inside se forward:>')
b, c, _, _ = x.size()
y = self.avg_pool(x).view(b, c)
# y = self.fc(y).view(b, c, 1, 1)
y = self.fc1(y)
y = self.prelu_q_or_relu(y)
y = self.fc2(y)
y = self.sigmoid(y).view(b, c, 1, 1)
# print('--------------------------')
# out = x*y
out = self.mult_xy.mul(x, y)
return out
class IRBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):
super().__init__()
self.bn0 = nn.BatchNorm2d(inplanes)
if disable_single_bns:
self.bn0_or_identity = torch.nn.Identity()
else:
self.bn0_or_identity = self.bn0
self.conv1 = conv3x3(inplanes, inplanes)
self.bn1 = nn.BatchNorm2d(inplanes)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
self.conv2 = conv3x3(inplanes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
self.use_se = use_se
# if self.use_se:
self.se = SEBlock(planes)
self.add_residual = nn.quantized.FloatFunctional()
def forward(self, x):
residual = x
# TODO:
# this needs to be quantized as well!
out = self.bn0_or_identity(x)
out = self.conv1(out)
out = self.bn1(out)
# out = self.prelu(out)
out = self.prelu_q_or_relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.use_se:
out = self.se(out)
if self.downsample is not None:
residual = self.downsample(x)
# out += residual
# out = self.prelu(out)
out = self.prelu_q_or_relu(out)
# we may need to change prelu into relu and instead of add, use add_relu here
out = self.add_residual.add(out, residual)
return out
def fuse_model(self):
fuse_modules(self, [# ['bn0'],
['conv1', 'bn1'],
['conv2', 'bn2']], inplace=True)
if self.downsample:
torch.quantization.fuse_modules(self.downsample, ['0', '1'], inplace=True)
class ResNet(nn.Module):
def __init__(self, block, layers, use_se=True):
self.inplanes = 64
self.use_se = use_se
super().__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.prelu = nn.PReLU()
self.prelu_q = PReLU_Quantized(self.prelu)
# This is to only get rid of the unimplemented CPUQuantization type error
# when we use PReLU_Quantized during test time
if use_relu:
self.prelu_q_or_relu = torch.relu
else:
self.prelu_q_or_relu = self.prelu_q
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.bn2 = nn.BatchNorm2d(512)
# This is to get around the single BatchNorms not getting fused and thus causing
# a RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPU' backend.
# 'aten::native_batch_norm' is only available for these backends: [CPU, MkldnnCPU, BackendSelect, Named, Autograd, Profiler, Tracer, Autocast, Batched].
# during test time
if disable_single_bns:
self.bn2_or_identity = torch.nn.Identity()
else:
self.bn2_or_identity = self.bn2
self.dropout = nn.Dropout()
self.fc = nn.Linear(512 * 7 * 7, 512)
self.bn3 = nn.BatchNorm1d(512)
if disable_single_bns:
self.bn3_or_identity = torch.nn.Identity()
else:
self.bn3_or_identity = self.bn3
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.xavier_normal_(m.weight)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, use_se=self.use_se))
self.inplanes = planes
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, use_se=self.use_se))
return nn.Sequential(*layers)
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
# TODO: single bn needs to be fused
x = self.bn1(x)
# x = self.prelu(x)
x = self.prelu_q_or_relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2_or_identity(x)
x = self.dropout(x)
# x = x.view(x.size(0), -1)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
# TODO: single bn needs to be fused
x = self.bn3_or_identity(x)
x = self.dequant(x)
return x
def fuse_model(self):
r"""Fuse conv/bn/relu modules in resnet models
Fuse conv+bn+relu/ Conv+relu/conv+Bn modules to prepare for quantization.
Model is modified in place. Note that this operation does not change numerics
and the model after modification is in floating point
"""
fuse_modules(self, ['conv1', 'bn1'], inplace=True)
for m in self.modules():
if type(m) == Bottleneck or type(m) == BasicBlock or type(m) == IRBlock:
m.fuse_model()
class SimpleNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(10)
self.relu1 = nn.ReLU()
self.prelu_q = PReLU_Quantized(nn.PReLU())
self.bn = nn.BatchNorm2d(10)
self.prelu_q_or_relu = torch.relu if use_relu else self.prelu_q
self.bn_or_identity = nn.Identity() if disable_single_bns else self.bn
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.prelu_q_or_relu(x)
x = self.bn_or_identity(x)
x = self.dequant(x)
return x
def resnet18(use_se=True, **kwargs):
return ResNet(IRBlock, [2, 2, 2, 2], use_se=use_se, **kwargs)
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p")/1e6)
os.remove('temp.p')
def evaluate(model, data_loader, eval_batches):
model.eval()
with torch.no_grad():
for i, (image, target) in enumerate(data_loader):
features = model(image)
print(f'{i})feature dims: {features.shape}')
if i >= eval_batches:
return
def load_quantized(model, quantized_checkpoint_file_path):
model.eval()
if type(model) == ResNet:
model.fuse_model()
# Specify quantization configuration
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
checkpoint = torch.load(quantized_checkpoint_file_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint)
print_size_of_model(model)
return model
def test_the_model(model, dtloader):
current_dir = abspath(dirname(__file__))
model = load_quantized(model, join(current_dir, 'data', 'model_quantized_jit.pth'))
model.eval()
img, _ = next(iter(dtloader))
embd1 = model(img)
def quantize_model(model, dtloader):
calibration_batches = 10
saved_model_dir = 'data'
scripted_quantized_model_file = 'model_quantized_jit.pth'
# model = resnet18()
model.eval()
if type(model) == ResNet:
model.fuse_model()
print_size_of_model(model)
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
print(f'Model after fusion(prepared): {model}')
# Calibrate first
print('Post Training Quantization Prepare: Inserting Observers')
print('\n Inverted Residual Block:After observer insertion \n\n', model.conv1)
# Calibrate with the training set
evaluate(model, dtloader, eval_batches=calibration_batches)
print('Post Training Quantization: Calibration done')
# Convert to quantized model
torch.quantization.convert(model, inplace=True)
print('Post Training Quantization: Convert done')
print('\n Inverted Residual Block: After fusion and quantization, note fused modules: \n\n', model.conv1)
print("Size of model after quantization")
print_size_of_model(model)
script = torch.jit.script(model)
path_tosave = join(dirname(abspath(__file__)), saved_model_dir, scripted_quantized_model_file)
print(f'path to save: {path_tosave}')
with open(path_tosave, 'wb') as f:
torch.save(model.state_dict(), f)
print(f'model after quantization (prepared and converted:) {model}')
# torch.jit.save(script, path_tosave)
dataset = FakeData(1000, image_size=(3, 112, 112), num_classes=5, transform=transforms.ToTensor())
data_loader = DataLoader(dataset, batch_size=1)
# quantize the model
model = resnet18()
# model = SimpleNetwork()
quantize_model(model, data_loader)
# and load and test the quantized model
model = resnet18()
# model = SimpleNetwork()
test_the_model(model, data_loader)
I changed the SimpleNetwork based on what you suggested and it doesn't fail anymore, but this is not the case with ResNet18.
I'll try to dig a bit more and see what I find.
Thanks a lot for your time, really appreciate it |
st184419 | I noticed two things so far:
PyTorch has issues with branches in the model for some reason. That is, let's consider SimpleNetwork here.
class SimpleNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(10)
self.relu1 = nn.ReLU()
self.prelu_q = PReLU_Quantized(nn.PReLU())
self.bn = nn.BatchNorm2d(10)
self.prelu_q_or_relu = torch.relu if use_relu else self.prelu_q
self.bn_or_identity = nn.Identity() if disable_single_bns else self.bn
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.prelu_q_or_relu(x)
x = self.bn_or_identity(x)
x = self.dequant(x)
return x
This by default results in the infamous error stated in the OP. However, if I simply remove:
self.bn_or_identity = nn.Identity() if disable_single_bns else self.bn
and simply use the
self.bn = nn.BatchNorm2d(10)
in the forward pass, I no longer see that error!
I tried to do the same thing with ResNet18, and it seems all the previous BatchNorms are fine except bn3 (the penultimate layer), which, regardless of what I change, still gives that error! |
st184420 | what is the error when you use
self.bn_or_identity = nn.Identity() if disable_single_bns else self.bn ? I think this is what you need to do |
st184421 | Hi! I'm trying to replace my PReLUs with quantized ones and run the example with SimpleNetwork, and I faced a problem like this:
weight_min_res = self.quantized_op.mul(-self.weight, torch.relu(-inputs))
~~~~~~~~~~~~ <--- HERE
inputs = self.quantized_op.add(torch.relu(inputs), weight_min_res)
inputs = self.dequant(inputs)
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer].
Have you faced this problem before and do you know by any chance how to overcome it? |
st184422 | If you want to go down the rabbit hole and implement PReLU for quantization yourself, this is the best you can get.
But if you want my suggestion, I'd say simply go for graph quantization instead of the manual approach (QAT, for example): just jit-trace your model first and then graph-quantize it. That's the simplest/easiest method that I know of so far. |
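For what it's worth, a minimal sketch of that flow (this assumes the TorchScript graph-mode quantization API that ships around PyTorch 1.6, i.e. torch.quantization.quantize_jit; model_fp32, example_input and calibration_loader are placeholders):
import torch
from torch.quantization import get_default_qconfig, quantize_jit

model_fp32.eval()
ts_model = torch.jit.trace(model_fp32, example_input)   # trace (or script) the float model first

def calibrate(model, data_loader):
    # run a handful of batches so the observers can record activation ranges
    with torch.no_grad():
        for image, _ in data_loader:
            model(image)

quantized_ts_model = quantize_jit(
    ts_model,                              # TorchScript model
    {'': get_default_qconfig('fbgemm')},   # qconfig applied to the whole model
    calibrate,                             # calibration function
    [calibration_loader])                  # arguments passed to the calibration function
The resulting quantized_ts_model is already a TorchScript module, so it can be saved with torch.jit.save directly.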
st184423 | Hi, all
I finally succeeded in converting the fp32 model to an int8 model, thanks to the PyTorch forum community.
In order to make sure that the model is quantized, I checked that the size of my quantized model is smaller than the fp32 model (500MB->130MB).
However, running my quantized model is much slower than running the fp32 model (700 ms -> 2.4 s).
I converted pre-trained VGG16 model in torchvision.models.
I am working on Nvidia JetsonTx2, and I checked that quantized mobilenet in torchvision.models.quantization.mobilenet is much faster than fp32 mobilenet model.
So I think that my conversion work might be wrong.
If you guys need more information, please let me know.
This is an output of “print(quantized_model)”.
RecursiveScriptModule(
original_name=VGG
(features): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Conv2d)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Conv2d)
(3): RecursiveScriptModule(original_name=ReLU)
(4): RecursiveScriptModule(original_name=MaxPool2d)
(5): RecursiveScriptModule(original_name=Conv2d)
(6): RecursiveScriptModule(original_name=ReLU)
(7): RecursiveScriptModule(original_name=Conv2d)
(8): RecursiveScriptModule(original_name=ReLU)
(9): RecursiveScriptModule(original_name=MaxPool2d)
(10): RecursiveScriptModule(original_name=Conv2d)
(11): RecursiveScriptModule(original_name=ReLU)
(12): RecursiveScriptModule(original_name=Conv2d)
(13): RecursiveScriptModule(original_name=ReLU)
(14): RecursiveScriptModule(original_name=Conv2d)
(15): RecursiveScriptModule(original_name=ReLU)
(16): RecursiveScriptModule(original_name=MaxPool2d)
(17): RecursiveScriptModule(original_name=Conv2d)
(18): RecursiveScriptModule(original_name=ReLU)
(19): RecursiveScriptModule(original_name=Conv2d)
(20): RecursiveScriptModule(original_name=ReLU)
(21): RecursiveScriptModule(original_name=Conv2d)
(22): RecursiveScriptModule(original_name=ReLU)
(23): RecursiveScriptModule(original_name=MaxPool2d)
(24): RecursiveScriptModule(original_name=Conv2d)
(25): RecursiveScriptModule(original_name=ReLU)
(26): RecursiveScriptModule(original_name=Conv2d)
(27): RecursiveScriptModule(original_name=ReLU)
(28): RecursiveScriptModule(original_name=Conv2d)
(29): RecursiveScriptModule(original_name=ReLU)
(30): RecursiveScriptModule(original_name=MaxPool2d)
)
(avgpool): RecursiveScriptModule(original_name=AdaptiveAvgPool2d)
(classifier): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(
original_name=Linear
(_packed_params): RecursiveScriptModule(original_name=LinearPackedParams)
)
(1): RecursiveScriptModule(original_name=ReLU)
(2): RecursiveScriptModule(original_name=Dropout)
(3): RecursiveScriptModule(
original_name=Linear
(_packed_params): RecursiveScriptModule(original_name=LinearPackedParams)
)
(4): RecursiveScriptModule(original_name=ReLU)
(5): RecursiveScriptModule(original_name=Dropout)
(6): RecursiveScriptModule(
original_name=Linear
(_packed_params): RecursiveScriptModule(original_name=LinearPackedParams)
)
)
(quant): RecursiveScriptModule(original_name=Quantize)
(dequant): RecursiveScriptModule(original_name=DeQuantize)
)
The following code is the conversion code that I wrote.
from torch.quantization import QuantStub, DeQuantStub
import numpy as np
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader
from torchvision import datasets
import torchvision.transforms as transforms
import os
import time
import sys
import torch.quantization
# # Setup warnings
import warnings
warnings.filterwarnings(
action='ignore',
category=DeprecationWarning,
module=r'.*'
)
warnings.filterwarnings(
action='default',
module=r'torch.quantization'
)
# Specify random seed for repeatable results
torch.manual_seed(191009)
from torch.hub import load_state_dict_from_url
__all__ = [
'VGG', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn',
'vgg19_bn', 'vgg19',
]
model_urls = {
'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth',
'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth',
'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth',
'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth',
'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth',
}
class VGG(nn.Module):
def __init__(self, features, num_classes=1000, init_weights=True):
super(VGG, self).__init__()
self.features = features
self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
self.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes),
)
if init_weights:
self._initialize_weights()
self.quant = QuantStub()
self.dequant = DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
x = self.dequant(x)
return x
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
def make_layers(cfg, batch_norm=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
cfgs = {
'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
if pretrained:
kwargs['init_weights'] = False
model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)
if pretrained:
state_dict = load_state_dict_from_url(model_urls[arch],
progress=progress)
model.load_state_dict(state_dict)
return model
def vgg11(pretrained=False, progress=True, **kwargs):
r"""VGG 11-layer model (configuration "A") from
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg11', 'A', False, pretrained, progress, **kwargs)
def vgg11_bn(pretrained=False, progress=True, **kwargs):
r"""VGG 11-layer model (configuration "A") with batch normalization
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg11_bn', 'A', True, pretrained, progress, **kwargs)
def vgg13(pretrained=False, progress=True, **kwargs):
r"""VGG 13-layer model (configuration "B")
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg13', 'B', False, pretrained, progress, **kwargs)
def vgg13_bn(pretrained=False, progress=True, **kwargs):
r"""VGG 13-layer model (configuration "B") with batch normalization
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg13_bn', 'B', True, pretrained, progress, **kwargs)
def vgg16(pretrained=False, progress=True, **kwargs):
r"""VGG 16-layer model (configuration "D")
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs)
def vgg16_bn(pretrained=False, progress=True, **kwargs):
r"""VGG 16-layer model (configuration "D") with batch normalization
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg16_bn', 'D', True, pretrained, progress, **kwargs)
def vgg19(pretrained=False, progress=True, **kwargs):
r"""VGG 19-layer model (configuration "E")
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg19', 'E', False, pretrained, progress, **kwargs)
def vgg19_bn(pretrained=False, progress=True, **kwargs):
r"""VGG 19-layer model (configuration 'E') with batch normalization
`"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _vgg('vgg19_bn', 'E', True, pretrained, progress, **kwargs)
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, name, fmt=':f'):
self.name = name
self.fmt = fmt
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
def accuracy(output, target, topk=(1,)):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
def evaluate(model, criterion, data_loader, neval_batches):
model.eval()
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
cnt = 0
with torch.no_grad():
for image, target in data_loader:
output = model(image)
loss = criterion(output, target)
cnt += 1
acc1, acc5 = accuracy(output, target, topk=(1, 5))
print('.', end = '')
top1.update(acc1[0], image.size(0))
top5.update(acc5[0], image.size(0))
if cnt >= neval_batches:
return top1, top5
return top1, top5
def load_model(model_file):
if model_file is None:
model = vgg16(pretrained=True)
if not model_file is None:
model = vgg16()
state_dict = torch.load(model_file)
model.load_state_dict(state_dict)
model.to('cpu')
return model
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p")/1e6)
os.remove('temp.p')
'''
import requests
url = 'https://s3.amazonaws.com/pytorch-tutorial-assets/imagenet_1k.zip'
filename = '~/Downloads/imagenet_1k_data.zip'
r = requests.get(url)
with open(filename, 'wb') as f:
f.write(r.content)
'''
import torchvision
import torchvision.transforms as transforms
'''
imagenet_dataset = torchvision.datasets.ImageNet(
'data/imagenet_1k',
split='train',
download=True,
transform=transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
]))
'''
def prepare_data_loaders(data_path):
traindir = os.path.join(data_path, 'train')
valdir = os.path.join(data_path, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
dataset = torchvision.datasets.ImageFolder(
traindir,
transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
]))
dataset_test = torchvision.datasets.ImageFolder(
valdir,
transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
]))
train_sampler = torch.utils.data.RandomSampler(dataset)
test_sampler = torch.utils.data.SequentialSampler(dataset_test)
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=train_batch_size,
sampler=train_sampler)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=eval_batch_size,
sampler=test_sampler)
return data_loader, data_loader_test
data_path = 'data/imagenet_1k'
saved_model_dir = 'data/'
scripted_float_model_file = 'vgg16_quantization_scripted.pth'
scripted_quantized_model_file = 'vgg16_quantization_scripted_quantized.pth'
train_batch_size = 30
eval_batch_size = 30
data_loader, data_loader_test = prepare_data_loaders(data_path)
criterion = nn.CrossEntropyLoss()
float_model = load_model(None).to('cpu')
float_model.eval()
num_eval_batches = 10
print("Size of baseline model")
print_size_of_model(float_model)
top1, top5 = evaluate(float_model, criterion, data_loader_test, neval_batches=num_eval_batches)
print('Evaluation accuracy on %d images, %2.2f'%(num_eval_batches * eval_batch_size, top1.avg))
torch.jit.save(torch.jit.script(float_model), saved_model_dir + scripted_float_model_file)
num_calibration_batches = 10
per_channel_quantized_model = load_model(None).to('cpu')
per_channel_quantized_model.eval()
torch.backends.quantized.engine = 'qnnpack'
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
print(per_channel_quantized_model.qconfig)
torch.quantization.prepare(per_channel_quantized_model, inplace=True)
evaluate(per_channel_quantized_model,criterion, data_loader, num_calibration_batches)
torch.quantization.convert(per_channel_quantized_model, inplace=True)
top1, top5 = evaluate(per_channel_quantized_model, criterion, data_loader_test, neval_batches=num_eval_batches)
print('Evaluation accuracy on %d images, %2.2f'%(num_eval_batches * eval_batch_size, top1.avg))
torch.jit.save(torch.jit.script(per_channel_quantized_model), saved_model_dir + scripted_quantized_model_file) |
st184424 | I printed your quantized model def before scripting: https://gist.github.com/vkuzo/edb2121a757d5789977935ad56820a24 6
One improvement would be to fuse subsequent Conv-ReLU modules together, so they can use the faster fused quantized kernel:
import torch
import torch.nn as nn
model = nn.Sequential(
nn.Conv2d(4, 4, 1),
nn.ReLU(),
)
# Fuse each Conv and ReLU (implement this for your model)
torch.quantization.fuse_modules(model, [['0', '1']], inplace=True)
print(model)
# prepare
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(model, inplace=True)
# calibrate (toy example)
input_data = torch.randn(4, 4, 4, 4)
model(input_data)
# convert
torch.quantization.convert(model, inplace=True)
# should see QuantizedConvReLU2d module
print(model)
If you still see a performance gap after this, might be good to check if QNNPACK is enabled on your target device. |
st184425 | Thank you for your reply.
Even though I used fuse for conv+relu and linear+relu, there is no speed improvement.
QNNPACK is enabled correctly, because I checked that the quantized mobilenet in torchvision.models.quantization.mobilenet, which uses the qnnpack backend, is faster than the fp32 model.
Could you suggest another feasible solution? |
st184426 | it could also be related to op support in QNNPACK. PyTorch has a fork of QNNPACK which lives here (https://github.com/pytorch/pytorch/tree/172f31171a3395cc299044e06a9665fec676ddd6/aten/src/ATen/native/quantized/cpu/qnnpack 8), and the readme contains the supported ops.
Your model has a few modules which are not supported, which means they would still run but there aren’t fast ARM kernels: AdaptiveAvgPool2d, and Dropout. Just for debugging’s sake, you could check if removing these modules or replacing them with alternatives which are optimized for ARM fixes the speed issue |
st184427 | can you print your model right before scripting it and verify you get this (https://gist.github.com/vkuzo/edb2121a757d5789977935ad56820a24 8) ? |
st184428 | This is the output of the model before scripted.
VGG(
(conv1): QuantizedConvReLU2d(3, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.015086950734257698, zero_point=2, padding=(1, 1))
(relu1): Identity()
(conv2): QuantizedConvReLU2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.005462500732392073, zero_point=0, padding=(1, 1))
(relu2): Identity()
(conv3): QuantizedConvReLU2d(64, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.002446091501042247, zero_point=0, padding=(1, 1))
(relu3): Identity()
(conv4): QuantizedConvReLU2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.0008910637116059661, zero_point=1, padding=(1, 1))
(relu4): Identity()
(conv5): QuantizedConvReLU2d(128, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.0006946324720047414, zero_point=1, padding=(1, 1))
(relu5): Identity()
(conv6): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.0002671453694347292, zero_point=1, padding=(1, 1))
(relu6): Identity()
(conv7): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=0.00013638826203532517, zero_point=3, padding=(1, 1))
(relu7): Identity()
(conv8): QuantizedConvReLU2d(256, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.00012979305756743997, zero_point=0, padding=(1, 1))
(relu8): Identity()
(conv9): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=0.00012682013039011508, zero_point=1, padding=(1, 1))
(relu9): Identity()
(conv10): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=8.234349661506712e-05, zero_point=1, padding=(1, 1))
(relu10): Identity()
(conv11): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=9.820431296247989e-05, zero_point=0, padding=(1, 1))
(relu11): Identity()
(conv12): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=8.165000326698646e-05, zero_point=0, padding=(1, 1))
(relu12): Identity()
(conv13): QuantizedConvReLU2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=8.769309351919219e-05, zero_point=0, padding=(1, 1))
(relu13): Identity()
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(dropout): Dropout(p=0.5, inplace=False)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(fc1): QuantizedLinearReLU(
in_features=25088, out_features=4096, scale=6.691644375678152e-05, zero_point=0
(_packed_params): LinearPackedParams()
)
(relu14): Identity()
(fc2): QuantizedLinearReLU(
in_features=4096, out_features=4096, scale=8.03592411102727e-05, zero_point=0
(_packed_params): LinearPackedParams()
)
(relu15): Identity()
(fc3): QuantizedLinear(
in_features=4096, out_features=1000, scale=0.0001865544618340209, zero_point=131
(_packed_params): LinearPackedParams()
)
(softmax): Softmax(dim=1)
(quant): Quantize(scale=tensor([0.0186]), zero_point=tensor([114]), dtype=torch.quint8)
(dequant): DeQuantize()
)
In my model there are a lot of Identity() layers. I think these layers are generated by the fuse() function. I don't think they affect performance. Do they affect the latency of execution? |
st184430 | FYI, in order to investigate the bottleneck of model execution, I profiled my quantized model using torch.autograd.profiler.profile().
I thought there is some problem about quantized::conv2d which QNNPACK supports.
I will share the further progress, thank you so much @Vasiliy_Kuznetsov !
--------------------------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls
--------------------------- --------------- --------------- --------------- --------------- --------------- ---------------
quantized::conv2d 92.64% 2.221s 92.67% 2.222s 170.929ms 13
quantized::linear 5.69% 136.383ms 5.69% 136.452ms 45.484ms 3
_adaptive_avg_pool2d 0.64% 15.370ms 0.64% 15.370ms 15.370ms 1
relu_ 0.49% 11.650ms 0.49% 11.650ms 776.697us 15
quantized_max_pool2d 0.40% 9.491ms 0.40% 9.491ms 1.898ms 5
quantize_per_tensor 0.09% 2.261ms 0.09% 2.261ms 2.261ms 1
contiguous 0.02% 410.239us 0.02% 450.143us 28.134us 16
_empty_affine_quantized 0.01% 304.640us 0.01% 304.640us 17.920us 17
max 0.01% 160.672us 0.01% 160.672us 160.672us 1
q_scale 0.00% 113.983us 0.00% 113.983us 2.478us 46
clone 0.00% 102.719us 0.00% 102.719us 102.719us 1
dequantize 0.00% 50.016us 0.00% 50.016us 50.016us 1
q_zero_point 0.00% 45.728us 0.00% 45.728us 1.524us 30
view 0.00% 44.704us 0.00% 44.704us 44.704us 1
max_pool2d 0.00% 36.128us 0.40% 9.527ms 1.905ms 5
select 0.00% 31.680us 0.00% 31.680us 31.680us 1
reshape 0.00% 30.208us 0.01% 195.071us 97.535us 2
_unsafe_view 0.00% 17.440us 0.00% 17.440us 17.440us 1
empty_like 0.00% 13.888us 0.00% 39.904us 39.904us 1
_local_scalar_dense 0.00% 13.504us 0.00% 13.504us 4.501us 3
is_floating_point 0.00% 13.440us 0.00% 13.440us 13.440us 1
item 0.00% 12.448us 0.00% 25.952us 8.651us 3
flatten 0.00% 7.456us 0.01% 138.463us 138.463us 1
adaptive_avg_pool2d 0.00% 5.312us 0.64% 15.376ms 15.376ms 1
dropout 0.00% 5.216us 0.00% 5.216us 2.608us 2
qscheme 0.00% 4.736us 0.00% 4.736us 4.736us 1
is_complex 0.00% 3.360us 0.00% 3.360us 3.360us 1
sizes 0.00% 2.656us 0.00% 2.656us 2.656us 1
size 0.00% 2.240us 0.00% 2.240us 2.240us 1
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- |
st184431 | I am still trying to solve it. If there is a meaningful result, I will share it here. |
st184432 | If you use an x86 machine (e.g. Intel), I suggest you use "fbgemm"; if you use ARM, that is where qnnpack helps. |
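For example (a minimal sketch; pick the engine matching your target hardware, and model is a placeholder):
import torch

# x86 (e.g. Intel) machines
torch.backends.quantized.engine = 'fbgemm'
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# ARM machines (e.g. mobile, Jetson) would use 'qnnpack' instead:
# torch.backends.quantized.engine = 'qnnpack'
# model.qconfig = torch.quantization.get_default_qconfig('qnnpack')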
st184433 | When I do static quantization on BERT with code like this:
quantized_model = model_class.from_pretrained(args.model_name_or_path, from_tf=bool(".ckpt" in args.model_name_or_path),
config=config, cache_dir=args.cache_dir if args.cache_dir else None, )
quantized_model.eval()
quantized_model.qconfig = torch.quantization.default_qconfig
print('quantized_model.qconfig', quantized_model.qconfig)
# #for X86 architectures Quantizes weights on a per-channel basis
# quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# print('quantized_model.qconfig', quantized_model.qconfig)
torch.quantization.prepare(quantized_model,inplace=True)
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, data_type='train')
args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
# train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
train_sampler = SequentialSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size,
collate_fn=collate_fn)
with torch.no_grad():
for step, batch in enumerate(tqdm(train_dataloader,desc='post-training static quantization')):
inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
outputs = quantized_model(**inputs)
loss = outputs[0] # model outputs are always tuple in pytorch-transformers (see doc)
break
print('convert quantized model:')
# torch.quantization.convert(quantized_model, mapping={nn.Linear: nnq.Linear}, inplace=True)
torch.quantization.convert(quantized_model, inplace=True)
print('static quantized model')
print(quantized_model)
print_model_size(quantized_model)
t1 = time.time()
result = evaluate(args, quantized_model, tokenizer, prefix=prefix)
t2 = time.time()
an error occurs:
Could not run ‘quantized::layer_norm’ with arguments from the ‘CPU’ backend. ‘quantized::layer_norm’ is only available for these backends: [QuantizedCPU]
How can I solve this problem? |
st184434 | Hello everyone. I recently used dynamic quantization to quantize my model. When I use torch.quantization.quantize_dynamic(model, dtype=torch.qint8), the model shrinks from 39MB to 30MB, while with torch.quantization.quantize_dynamic(model, dtype=torch.float16) the model size does not change. Does anybody know why? Or am I quantizing the model to float16 the wrong way?
I’d appreciate if anybody can help me! Thanks in advance! |
st184435 | Solved by supriyar in post #2
Hi @huoge - for int8 dynamic quantization we quantize the weights to 8-bits so you see the expected size reduction.
For fp16 quantization, the weight values are cast to fp16 (taking saturation into account), but the dtype is still set to float32. This has to do with the type expected by the FBGEMM … |
st184436 | Hi @huoge - for int8 dynamic quantization we quantize the weights to 8-bits so you see the expected size reduction.
For fp16 quantization, the weight values are cast to fp16 (taking saturation into account), but the dtype is still set to float32. This has to do with the type expected by the FBGEMM backend when performing the gemm operation. |
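A quick way to see the difference on disk (a minimal sketch with a toy model; the printed sizes are illustrative only):
import os
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
model_int8 = torch.quantization.quantize_dynamic(model_fp32, {nn.Linear}, dtype=torch.qint8)
model_fp16 = torch.quantization.quantize_dynamic(model_fp32, {nn.Linear}, dtype=torch.float16)

def size_mb(m, path="temp.p"):
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

# int8 stores the packed 8-bit weights; per the explanation above, the fp16-quantized
# weights are still held in float32 containers, so the saved size barely changes
print(size_mb(model_fp32), size_mb(model_int8), size_mb(model_fp16))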
st184437 | I am trying to quantize an image-network model, and I faced an error.
For the call to the function below,
dummy_input = torch.randn(1, 3, 320, 320).cpu()
script_model = torch.jit.trace(net, dummy_input)
the tensor sum operation raises this exception:
norm = x.sum(dim=1, keepdim=True).sqrt()
RuntimeError: Could not run ‘aten::empty.memory_format’ with arguments from the ‘QuantizedCPU’ backend. ‘aten::empty.memory_format’ is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer].
Is there any way to bypass this error? |
st184438 | Solved by jerryzh168 in post #2
it’s because quantized::sum is not supported, can you put dequant/quant around the sum op? |
st184439 | it’s because quantized::sum is not supported, can you put dequant/quant around the sum op? |
st184440 | jerryzh168:
put dequant/quant around the sum op
Hi, how do I put dequant/quant around the sum op? Sorry, I am new. |
st184441 | There is an example here: https://pytorch.org/docs/stable/quantization.html - if you search that page for torch.quantization.QuantStub() and torch.quantization.DeQuantStub(), that should help. |
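To make that concrete, a minimal sketch of the pattern (names are placeholders; only the unsupported sum/sqrt is pushed back to float):
import torch
import torch.nn as nn

class NormalizeBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()  # leave the quantized domain
        self.quant = torch.quantization.QuantStub()      # re-enter it afterwards

    def forward(self, x):
        x = self.dequant(x)                       # x is a regular float tensor from here on
        norm = x.sum(dim=1, keepdim=True).sqrt()  # the unsupported ops now run on the float CPU backend
        x = x / (norm + 1e-6)
        x = self.quant(x)                         # back to a quantized tensor for the later layers
        return x
After torch.quantization.prepare/convert, the QuantStub picks up its own scale/zero_point from calibration, so the ops between the stubs simply run in float.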
st184442 | I have noticed that once Quantization-Aware Training starts memory consumption increases significantly. E.g. for a network that consumes 2GB during training, I see memory consumption jumping to 4.5GB once QAT starts.
While in part, this can be explained due to the various statistics that need to be stored, the difference in memory consumption is much larger than what these stats occupy. I am guessing that this might be due to additional intermediate activation maps that need to be kept around, but I was hoping that someone more informed could drop in and clarify if this is the reason. |
st184443 | Hi @Georgios_Georgiadis, one known problem is that fake_quantize modules are currently implemented as additional nodes in the computation graph, so their additional outputs (the fake_quantized versions weights and activations) contribute to the memory overhead during training. We have plans to improve the memory overhead in the future by adding fused fake_quant kernels for common layers such as conv and linear. |
st184444 | Can I perform inference on the GPU with a quantized model? Articles on the PyTorch website mention that 'PyTorch 1.3 doesn’t provide quantized operator implementations on CUDA yet ’ (https://pytorch.org/docs/stable/quantization.html 1) but there is no clarification if any additional GPU support has been added to PyTorch 1.6, or whether it is planned in any future releases. |
st184445 | Solved by supriyar in post #2
We don’t support quantized model inference on GPU currently. The docs have been updated to reflect that. |
st184446 | We don’t support quantized model inference on GPU currently. The docs have been updated to reflect that. |
st184447 | Hi, please enlighten me. How can I write a custom function to overcome this problem?
Is it possible to replace padding in the pre-trained model with some Quantization supported operator? |
st184448 | Solved by supriyar in post #2
Quantization support for reflection_pad1d was added in https://github.com/pytorch/pytorch/pull/37452. cc @Zafar
you can follow that PR to add support for additional reflection_pad operators if they aren’t supported. |
st184449 | Quantization support for reflection_pad1d was added in https://github.com/pytorch/pytorch/pull/37452. cc @Zafar
you can follow that PR to add support for additional reflection_pad operators if they aren’t supported. |
st184450 | Hi, I quantized my model and I can save the model successfully, but when I evaluate the model with torch.jit there is an error:
feature2 = self.L2normalize(feature2)
File "/media/zst/8ec88aab-d885-4801-98ab-e3181c65261b/A/pro/re.py", line 20, in L2normalize
def L2normalize(self, x):
eps = 1e-6
norm = x ** 2
~~~~~~ <--- HERE
norm = norm.sum(dim=1, keepdim=True) + eps
#norm = torch.sum(norm,1) + eps
RuntimeError: Could not run 'aten::pow.Tensor_Scalar' with arguments from the 'QuantizedCPU' backend. 'aten::pow.Tensor_Scalar' is only available for these backends: [CPU, CUDA, SparseCPU, SparseCUDA, Named, Autograd, Profiler, Tracer, Autocast].
but when I test model with torch.load:
Unexpected key(s) in state_dict: "conv1a.0.scale", "conv1a.0.zero_point", "conv1aa.0.scale", "conv1aa.0.zero_point", "conv1b.0.scale", "conv1b.0.zero_point", "conv2a.0.scale", "conv2a.0.zero_point", "conv2aa.0.scale", "conv2aa.0.zero_point", "conv2b.0.scale", "conv2b.0.zero_point", "conv3a.0.scale", "conv3a...
While copying the parameter named "conv1a.0.weight", whose dimensions in the model are torch.Size([16, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 3, 3, 3]), an exception occured : ('Copying from quantized Tensor to non-quantized Tensor is not allowed, please use dequantize to get a float Tensor from a quantized Tensor',)... |
st184451 | Solved by supriyar in post #2
We don;t support quantized op implementation of aten::pow currently. To get around this you can add a quant/dequant stub around your operator.
Please refer to the tutorial for how to do this - https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html |
st184452 | We don't support a quantized op implementation of aten::pow currently. To get around this you can add a quant/dequant stub around your operator.
Please refer to the tutorial for how to do this - https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 65 |
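Applied to the L2normalize above, that would look roughly like this (a sketch only: it assumes you add a dedicated DeQuantStub/QuantStub pair, here called dequant_norm/quant_norm, to the module in its constructor, and the final division is a guess at what the rest of the method does):
def L2normalize(self, x):
    eps = 1e-6
    x = self.dequant_norm(x)                    # drop to float before the unsupported aten::pow
    norm = x ** 2
    norm = norm.sum(dim=1, keepdim=True) + eps
    x = x / norm.sqrt()
    x = self.quant_norm(x)                      # re-quantize before the next quantized layer
    return x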
st184453 | Using pytorch 1.6.
I’m trying to implement qat on a lstm based model I have.
class Net(torch.nn.Module):
def __init__(self, seq_length):
super(Net, self).__init__()
self.hidden_size = 16
self.input_size = 18
self.seq_length = seq_length
self.relu1 = torch.nn.ReLU()
# Need to specify input sizes up front
# batch_first specifies an input shape of (nBatches, nSeq, nFeatures),
# otherwise this is (nSeq, nBatch, nFeatures)
self.lstm = torch.nn.LSTM(input_size = self.input_size, hidden_size = self.hidden_size, batch_first = True)
self.linear1 = torch.nn.Linear(self.hidden_size, self.hidden_size)
self.dropout = torch.nn.Dropout(0.5) #self.squeeze = torch.squeeze
self.linearOut = torch.nn.Linear(self.hidden_size, 1)
self.sigmoidOut = torch.nn.Sigmoid()
self.sqeeze1 = torch.Tensor.squeeze
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
# Can pass the initial hidden state, but not necessary here
#x, h = self.gru1(x)#, self.h0)
#x, h = self.gru(x)#, self.h0)
x, (h,c) = self.lstm(x)#, self.h0)
# Get last output, x[:,l - 1,:], equivalent to (last) hidden state
# Squeeze to remove length 1 dim
x = self.sqeeze1(h)
x = self.dropout(x)
x = self.linear1(x)
x = self.relu1(x)
x = self.linearOut(x)
# Apply sigmoid either in the loss function, or in eval(...)
return x
def evaluate(self,x):
return self.sigmoidOut(self.forward(x))
Standard training works fine, and after preparing the model for qat with,
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
print(qat_model.qconfig)
qat_model = torch.quantization.prepare_qat(qat_model)
print(qat_model)
I’m running the same training loop, just with different learning rates.
for epoch in tqdm(range(qat_epochs)):
#model.train()
for batch in range(qat_nBatches):
start_time = time.time()
batch_data = data[batch * batch_size : (batch + 1) * batch_size]
batch_seq_lens = seq_lens[batch * batch_size : (batch + 1) * batch_size]
batch_labels = labels[batch * batch_size : (batch + 1) * batch_size]
packedData = pack_padded_sequence(batch_data,
batch_seq_lens,
batch_first = True,
enforce_sorted = False)
output = qat_model(packedData)
loss = lossF(output, batch_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
pred = qat_model.evaluate(packedData).detach().cpu().numpy().flatten()
predClasses = np.zeros(pred.shape)
predClasses[pred > 0.5] = 1
losses.append(loss.detach().cpu().numpy())
accuracy.append(accuracy_score(batch_labels.detach().cpu().numpy().flatten(), predClasses))
packedDataTest = pack_padded_sequence(data[data.shape[0] // 2:],
seq_lens[data.shape[0] // 2:],
batch_first = True,
enforce_sorted = False)
labelsTest = labels[data.shape[0] // 2:]
quantised_model = torch.quantization.convert(qat_model.eval(), inplace = False)
predTestT = qat_model.evaluate(packedDataTest)
predTest = predTestT.detach().cpu().numpy().flatten()
predClassesTest = np.zeros(predTest.shape)
predClassesTest[predTest > 0.5] = 1
lossesTestQAT.append(lossF(predTestT, labelsTest).detach().cpu().numpy().flatten())
accuracyTestQAT.append(accuracy_score(labelsTest.detach().cpu().numpy().flatten(), predClassesTest))
However, I get this error
Traceback (most recent call last):
File "rnn_qat.py", line 307, in <module>
output = qat_model(packedData)
File "/mnt/storage/home/rs17751/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "rnn_qat.py", line 59, in forward
x, (h,c) = self.lstm(x)#, self.h0)
File "/mnt/storage/home/rs17751/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
hook_result = hook(self, input, result)
File "/mnt/storage/home/rs17751/.local/lib/python3.7/site-packages/torch/quantization/quantize.py", line 74, in _observer_forward_hook
return self.activation_post_process(output)
File "/mnt/storage/home/rs17751/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/storage/home/rs17751/.local/lib/python3.7/site-packages/torch/quantization/fake_quantize.py", line 91, in forward
self.activation_post_process(X.detach())
AttributeError: 'tuple' object has no attribute 'detach'
At first I thought this was to do with the LSTM outputting tuples, as I had to change from GRU to LSTM for quantisation. But, if that was the problem, surely the normal training loop would fail in the same way. Any help is appreciated |
st184454 | Solved by supriyar in post #3
That’s right, we currently do not support QAT for nn.LSTM. We support dynamic quantization of LSTM modules currently |
st184455 | I don’t think qat is supported for LSTM. cc @raghuramank100 @supriyar to confirm |
st184456 | That’s right, we currently do not support QAT for nn.LSTM. We support dynamic quantization of LSTM modules currently |
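For reference, dynamically quantizing the LSTM/Linear layers of a model like the Net above is nearly a one-liner (a minimal sketch; seq_length=100 is a placeholder):
import torch
import torch.nn as nn

model = Net(seq_length=100).eval()
# weights of LSTM/Linear are quantized to int8; activations stay in float at runtime
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8)
print(quantized_model)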
st184457 | @asorin, not at the moment. Please submit a feature request if you have a strong use-case for it and we can take a look. |
st184458 | Hi all,
I am trying the resnet50 model quantization with PyTorch and I tried these 3 lines of code :
the import, model = qn.resnet50(pretrained=True), and model.state_dict() - and why are the coefficients being shown all float values if this is the quantized version of the model?
Noticed this while trying to figure out how to save/load the coefficients for a quantized model, and is there anything special you need to do convert model coefficients between float and int8.
Please let me know. Appreciate any help/suggestions , Thanks! |
st184459 | Solved by Vasiliy_Kuznetsov in post #3
check out https://pytorch.org/docs/stable/quantization.html#quantized-torch-tensor-operations, in partucular the “int_repr” function. By default, if you print out a quantized tensor you will see the dequantized values that the tensor represents. To see the raw int8 values, you can use x.int_repr().… |
st184460 | check out https://pytorch.org/docs/stable/quantization.html#quantized-torch-tensor-operations, in particular the "int_repr" function. By default, if you print out a quantized tensor you will see the dequantized values that the tensor represents. To see the raw int8 values, you can use x.int_repr(). |
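(A small example of what that looks like; note that int_repr() only works on a tensor that is actually quantized:)
import torch

x = torch.randn(4)
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(xq)             # shows the dequantized (float-looking) values the tensor represents
print(xq.int_repr())  # shows the underlying int8 values
# calling int_repr() on a plain float tensor (e.g. weights of a non-quantized model) raises an error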
st184461 | Hi, I tried this but it gives me this error:
RuntimeError: Could not run ‘aten::int_repr’ with arguments from the ‘CPUTensorId’ backend. ‘aten::int_repr’ is only available for these backends: [QuantizedCPUTensorId, VariableTensorId]. |
st184462 | I have a resnet18 model which I quantized using graph quantization; each forward pass on my system takes about 100 ms (on CPU), and its size shrank from 85 to 45 MB.
I then went on and pruned this model to 14.5M parameters from the initial 25M, and its size shrank from 85 to 58 MB. I then quantized the resulting model hoping for improvements.
But in fact I'm seeing diminishing results. That is, I don't see what I expected in inference speed; instead of being better, the newer model is simply worse than the initial model (quantized from the initial model).
Is this expected behavior?
Here are the two models for comparison:
gofile.io
Gofile 4
and this is their runtime benchmark results :
pruned-quantized model :
1>[ RUN ] EmbedderModelForwardFixture.ModelEmbedderBench (10 runs, 10 iterations per run)
1>[ DONE ] EmbedderModelForwardFixture.ModelEmbedderBench (5778.116020 ms)
1>[ RUNS ] Average time: 577811.602 us (~26296.168 us)
1> Fastest time: 537246.562 us (-40565.040 us / -7.020 %)
1> Slowest time: 617859.662 us (+40048.060 us / +6.931 %)
1> Median time: 585275.362 us (1st quartile: 554311.262 us | 3rd quartile: 594753.362 us)
1>
1> Average performance: 1.73067 runs/s
1> Best performance: 1.86134 runs/s (+0.13067 runs/s / +7.55054 %)
1> Worst performance: 1.61849 runs/s (-0.11218 runs/s / -6.48174 %)
1> Median performance: 1.70860 runs/s (1st quartile: 1.80404 | 3rd quartile: 1.68137)
1>
1>[ITERATIONS] Average time: 57781.160 us (~2629.617 us)
1> Fastest time: 53724.656 us (-4056.504 us / -7.020 %)
1> Slowest time: 61785.966 us (+4004.806 us / +6.931 %)
1> Median time: 58527.536 us (1st quartile: 55431.126 us | 3rd quartile: 59475.336 us)
1>
1> Average performance: 17.30668 iterations/s
1> Best performance: 18.61343 iterations/s (+1.30675 iterations/s / +7.55054 %)
1> Worst performance: 16.18491 iterations/s (-1.12177 iterations/s / -6.48174 %)
1> Median performance: 17.08597 iterations/s (1st quartile: 18.04041 | 3rd quartile: 16.81369)
quantized from normal model (no pruning done beforehand) :
1>[ RUN ] EmbedderModelForwardFixture.ModelEmbedderBench (10 runs, 10 iterations per run)
1>[ DONE ] EmbedderModelForwardFixture.ModelEmbedderBench (5672.357520 ms)
1>[ RUNS ] Average time: 567235.752 us (~31674.053 us)
1> Fastest time: 530900.462 us (-36335.290 us / -6.406 %)
1> Slowest time: 640024.562 us (+72788.810 us / +12.832 %)
1> Median time: 561095.762 us (1st quartile: 548392.562 us | 3rd quartile: 577176.062 us)
1>
1> Average performance: 1.76294 runs/s
1> Best performance: 1.88359 runs/s (+0.12066 runs/s / +6.84409 %)
1> Worst performance: 1.56244 runs/s (-0.20050 runs/s / -11.37282 %)
1> Median performance: 1.78223 runs/s (1st quartile: 1.82351 | 3rd quartile: 1.73257)
1>
1>[ITERATIONS] Average time: 56723.575 us (~3167.405 us)
1> Fastest time: 53090.046 us (-3633.529 us / -6.406 %)
1> Slowest time: 64002.456 us (+7278.881 us / +12.832 %)
1> Median time: 56109.576 us (1st quartile: 54839.256 us | 3rd quartile: 57717.606 us)
1>
1> Average performance: 17.62935 iterations/s
1> Best performance: 18.83592 iterations/s (+1.20657 iterations/s / +6.84409 %)
1> Worst performance: 15.62440 iterations/s (-2.00495 iterations/s / -11.37282 %)
1> Median performance: 17.82227 iterations/s (1st quartile: 18.23511 | 3rd quartile: 17.32574)
Or to put it simply after 10 iterations:
r18_default : 805.72 ms (mean)
quantized_model : 560 ms (mean)
r18_pruned : 7,466.78 ms
pruned_then_quantized: 578 ms (mean)
Not only is the second model not faster, it's worse: it has become slower!
You can also see that the pruned model is extremely slow! 10x slower than the default model!
Note:
In case it matters, training (pruning and finetuning the model) was done using PyTorch 1.5.1, and the final graph quantization was done on Windows using PyTorch 1.6.
Note2:
This is being tested and evaluated using libtorch (1.6) on a Windows 10 machine.
I’d greatly appreciate any kind of feedback on this.
Thank you all in advance |
st184463 | Pruning is a good way to reduce model size, but it won’t automatically give you performance improvements.
The reason is that when pruning a model you’re introducing zeros to Tensors (i.e. they become sparse). These tensors can be compressed (with some extra overhead) without much problem. Hence the reduced model size.
However, the underlying kernels (i.e. code that operates on the Tensors holding the values, like a matrix multiply) are by default written assuming dense Tensors, because hardware provides instructions to make those dense operations fast (e.g via vectorized on adjacent values in memory).
Unfortunately there’s not a lot of hardware, including CPUs, that can benefit directly from sparse Tensors.
There are some workarounds to still speed up sparse operations in some cases. On CPU, for example, doing structured pruning (i.e. removing blocks of values rather than one by one) in a way that matches the size of the registers used in those vectorized operations lets you skip entire blocks of contiguous zeros. For example, pruning blocks of 1x16 or 4x4 and quantizing afterwards matches the register sizes of vectorized instructions in CPUs (128-bit).
However, even if you use structured pruning, you will still depend on having underlying kernels for the operations used in your model being implemented to take advantage of the structured nature of your tensors (i.e. to know that they can skip entire sequences of 16 values when doing things like a matrix multiply).
Why is it slower?
In theory a pruned model shouldn’t be slower since the dense operations could ignore tensors coming in that happen to have a bunch of zeros. However, if a special representation is used to represent the sparse Tensor (i.e. to avoid having to represent the zeros), as it’s likely the case here, then that will cause inefficiencies when operating with those special sparse Tensors (e.g. some copying of values may be needed to restore the original structure of the Tensor to go through the dense kernels).
Hope this helps. |
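(If you want to experiment with structured pruning in plain PyTorch, separate from whatever pruning library was used here, torch.nn.utils.prune has a built-in utility; a minimal sketch:)
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, kernel_size=3)
# zero out 50% of the output channels (dim=0), selected by L2 norm, as whole blocks
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
prune.remove(conv, "weight")  # fold the pruning mask into the weight tensor permanently
print(float((conv.weight == 0).sum()) / conv.weight.numel())  # fraction of zeroed weights
Note that this only zeroes values; as explained above, it does not by itself make the kernels any faster.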
st184464 | Thanks a lot really appreciate it.
But when the whole channel is removed, we are basically left with a dense model at the end, so we still need to be able to benefit from the dense operation efficiency! In my case I’m using Torch-Pruning 5 which removes entire channels and not setting neurons to zeros only as far as I understand. So to me it should be able to run as fast at the very least.
Also note that the final pruned model is then retrained therefore even if we have sparse neurons (set to zero) at first, we should be dealing with a dense model at the end that shouldn’t be performing like this.
Am I missing something here? |
st184465 | Could you print out and check the dimensions of the weight for the pruned quantized model during inference? If they are the same as before pruning you should see a similar performance. |
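(A quick way to do that check; a minimal sketch where quantized_model is a placeholder:)
import torch
import torch.nn.quantized as nnq

for name, module in quantized_model.named_modules():
    if isinstance(module, (nnq.Conv2d, nnq.Linear)):
        print(name, tuple(module.weight().shape))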
st184466 | Here is the live run of the process, showing the pruned model is indeed faster, but the quantized model for some reason is extremely slow :
[screenshot: pytorch_prune_neomL_pruned_quantzed, 1596×828]
I also tested this with another model, and the outcome is the same: the quantized model takes much longer to finish:
[screenshot: pytorch_prune_neomL_pruned_quantzed_simpnet, 1596×828]
Note :
As you can see, we do not finetune the pruned model here; we are just testing whether pruning by itself results in faster inference in our case or not, and as the results show, it indeed does.
Finetuning an already pruned model, for some reason, results in a severe slowdown at inference, as you can see. |
st184467 | Error at
/usr/local/lib/python3.6/dist-packages/torch/nn/quantized/modules/functional_modules.py in add(self, x, y)
43 def add(self, x, y):
44 # type: (Tensor, Tensor) -> Tensor
---> 45 r = torch.add(x, y)
46 r = self.activation_post_process(r)
47 return r
Full error message
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, MkldnnCPU, SparseCPU, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
CPU: registered at /pytorch/build/aten/src/ATen/CPUType.cpp:2136 [kernel]
MkldnnCPU: registered at /pytorch/build/aten/src/ATen/MkldnnCPUType.cpp:144 [kernel]
SparseCPU: registered at /pytorch/build/aten/src/ATen/SparseCPUType.cpp:239 [kernel]
Meta: registered at /pytorch/aten/src/ATen/native/BinaryOps.cpp:1049 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: fallthrough registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
AutogradOther: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradCPU: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradCUDA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradXLA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradPrivateUse1: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradPrivateUse2: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
AutogradPrivateUse3: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8041 [autograd kernel]
Tracer: registered at /pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:9726 [kernel]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:531 [kernel]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Code snippet
def forward(self, x, x2=None, x3=None):
x_size = x.size()
resl = x
for i in range(len(self.pools_sizes)):
y = self.convs[i](self.pools[i](x))
q_add0 = FloatFunctional()
#error is because of this line below
resl = q_add0.add(resl, nn.functional.interpolate(y, x_size[2:], mode='bilinear', align_corners=True)) #error is because of this line
resl = self.relu(resl)
if self.need_x2:
resl = nn.functional.interpolate(resl, x2.size()[2:], mode='bilinear', align_corners=True)
resl = self.conv_sum(resl)
if self.need_fuse:
q_add1 = FloatFunctional()
q_add2 = FloatFunctional()
resl = self.conv_sum_c(q_add1.add(q_add2.add(resl, x2), x3))
return resl
I tried to do as mentioned in post 3.
If I eval the quantized model, the add operation looks like
(conv_sum): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=2.668934655503108e-07, zero_point=66, padding=(1, 1), bias=False)
(conv_sum_c): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=1.018745024339296e-06, zero_point=56, padding=(1, 1), bias=False)
In case I use torch.nn.quantized.QFunctional, the model will not be quantized. The error says something like: going from the CPU backend to the QuantizedCPU backend is not possible.
Any idea why?
Inference code
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
valdir = '/content/test'
dataset_test = torchvision.datasets.ImageFolder(
valdir,
transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
]))
test_sampler = torch.utils.data.SequentialSampler(dataset_test)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1,
sampler=test_sampler)
model.load_state_dict(torch.load('/content/model.pth'))
model.eval()
with torch.no_grad():
for image, target in data_loader_test:
print(image.size()) #torch.Size([1, 3, 224, 224])
output = model(image)
print(output)
I think the inference code is fine; the problem is in the add operation. Please give me some ideas on how to solve this. |
st184468 | Solved by Zafar in post #4
Actually, I see what your problem is. You are building the model incorrectly: the FloatFunctional is a “stateful” layer that needs to be initialized in the model constructor. Otherwise, it will not be visible to the convert script. Here is how you can rewrite the model (just an example):
def __init… |
st184469 | How are you quantizing? Did you setup qconfigs for FloatFunctionals (q_add0, q_add1 and q_add2) as well? If you set up qconfigs for these, they will get converted to quantized::add and this op will work on quantized tensor. |
st184470 | Yea, it looks like the model was not converted to quantized version correctly. As @dskhudia mentioned, make sure you have the qconfigs in all the layers that need to be quantized |
st184471 | Actually, I see what your problem is. You are building the model incorrectly: the FloatFunctional is a “stateful” layer that needs to be initialized in the model constructor. Otherwise, it will not be visible to the convert script. Here is how you can rewrite the model (just an example):
def __init__(self):
super().__init__()
# ... Any other definitions
self.q_add0 = FloatFunctional()
self.q_add1 = FloatFunctional()
self.q_add2 = FloatFunctional()
# ... Any other definitions
def forward(self, x, x2=None, x3=None):
x_size = x.size()
resl = x
for i in range(len(self.pools_sizes)):
y = self.convs[i](self.pools[i](x))
# q_add0 = FloatFunctional()
#error is because of this line below
resl = self.q_add0.add(resl, nn.functional.interpolate(y, x_size[2:], mode='bilinear', align_corners=True)) #error is because of this line
resl = self.relu(resl)
if self.need_x2:
resl = nn.functional.interpolate(resl, x2.size()[2:], mode='bilinear', align_corners=True)
resl = self.conv_sum(resl)
if self.need_fuse:
# q_add1 = FloatFunctional()
# q_add2 = FloatFunctional()
resl = self.conv_sum_c(self.q_add1.add(self.q_add2.add(resl, x2), x3))
return resl |
st184472 | ok. Got it. Thank you for replying. It has solved my problem.
One last thing: do I have to create a separate, unique FloatFunctional for every summation in the for loop?
For example,
resl = self.q_add00.add(resl, z0)
resl = self.q_add01.add(resl, z1)
resl = self.q_add02.add(resl, z2)
If I do it as mentioned above, that part of the model will look something like:
(q_add00): QFunctional(
scale=1.027651309967041, zero_point=67
(activation_post_process): Identity()
)
(q_add01): QFunctional(
scale=1.0117942094802856, zero_point=68
(activation_post_process): Identity()
)
(q_add02): QFunctional(
scale=0.9806106686592102, zero_point=74
(activation_post_process): Identity()
)
I do not know what is wrong but the quantized model has 0 % accuracy. That is why I am trying different approaches.
Quantization-aware training could be one solution. But what other parameters or procedures should I check or apply during post-training static quantization to get better accuracy for ResNet-50-based models?
Thank you in advance. |
st184473 | Thank you for replying.
I am currently simply using quantize_model(model, 'fbgemm'). I have also tried taking quantize_model apart and executing it line by line. In both cases, QFunctional never appeared in the model. I did not realize it was supposed to be a layer. |
st184474 | I am sorry but what did you mean by,
dskhudia:
you set up qconfigs for these |
st184475 | parth15041995:
quantize_model
I am not sure what this script is. Is it from the tutorials? Either way, if you follow the steps shown in the static quantization tutorial, it should convert your model to a quantized version. As for the accuracy, it could be that low if you don't calibrate your model before quantizing it: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 17 |
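To make the calibration point concrete, here is a minimal self-contained sketch of that eager-mode flow. TinyNet is just a toy stand-in for the poster's model, not anything from this thread:
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # stand-in for the real float model
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()
    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.fuse_modules(model, [["conv", "relu"]], inplace=True)  # optional fusion
torch.quantization.prepare(model, inplace=True)  # attach observers to record activation ranges

with torch.no_grad():  # calibration: run representative data through the prepared model
    for _ in range(8):
        model(torch.randn(1, 3, 32, 32))

torch.quantization.convert(model, inplace=True)  # swap in quantized modules
print(model)
Without the calibration loop the observers never see realistic activation ranges, which is one common reason for near-zero accuracy after convert.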
st184476 | Thank you for your reply. It has mostly solved my problem.
I want to quantize the available salient object detection models.
Zafar:
this script
is one of them. |
st184477 | It seems that PyTorch QAT doesn't simulate bias quantization error during training. I also found that qat.Conv2d only fake-quantizes the weight and activation. So PyTorch's quantization strategy does not quantize the bias, right? |
st184478 | Solved by jerryzh168 in post #7
We find modeling bias in qat is not very important since it doesn’t affect accuracy too much. one workaround you can do is to remove bias from Conv and add the bias explicitly outside of conv, so that adding bias can be modeled with add. |
st184479 | Yes, we do not quantize the bias. There have been some internal discussions on this before. The problem with quantizing the bias is that it needs to be quantized with the quantization parameters of the input and weight, but the input can come from dynamic paths, e.g.:
if x > 0:
y = myConv1(x)
else:
y = myConv2(x)
z = myConv3(y)
and we have no way of getting this information in eager mode. Currently we pass in the bias in fp32 and it is quantized inside the quantized ops like quantized::conv2d with the quantization parameters of the input and weight: y = conv(x_q, w_q) + bias/(w_scale*x_scale).
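(To make that formula concrete, here is a rough numerical sketch of the bias handling; the scales are made-up example values, not anything produced by PyTorch itself.)
import torch

x_scale, w_scale = 0.05, 0.002           # example input/weight scales
bias_fp32 = torch.tensor([0.25, -0.10])  # fp32 bias as stored in the module
# inside the quantized conv, the bias is rescaled into the int32 accumulator domain
bias_int32 = torch.round(bias_fp32 / (x_scale * w_scale)).to(torch.int32)
print(bias_int32)  # tensor([ 2500, -1000], dtype=torch.int32)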
However, for QAT I think we currently do not simulate this behavior. I'm not sure how much impact this has, though; we'll discuss it. Thanks for the question. |
st184480 | jerryzh168:
y = conv(x_q,w_q) + bias/(w_scale*x_scale)
So, if I want to transfer the quantization-aware-trained network to my hardware, how exactly should I implement the bias part?
Should I use the above formula to quantize it? |
st184481 | Right now the quantization of the bias is not modeled in quantization-aware training, so there might be a little bit of discrepancy between the QAT model and the model after convert, but I think it won't matter too much. |
st184482 | Thank you for the response, Jerry.
So, what should I do with the bias parameter of the batch-norm module when I want to implement my quantized model on hardware? The final converted (quantized) model still has this parameter (in FP) in the quantized version of ConvBnReLU2d.
Is the bias totally ignored when we call the quantized model on some input X (model.eval())?
Or are the intermediate feature values temporarily converted to FP so the bias can be applied, and then converted back to INT8/INT32?
Or is the bias also converted to INT8 with a simple choice of scale or zero-point, without the influence of the QAT part? |
st184483 | Bias is an input to the quantized::conv2d op; it is applied inside quantized::conv2d itself, with this formula:
jerryzh168:
y = conv(x_q,w_q) + bias/(w_scale*x_scale)
This is in int32; then we requantize y with output_scale and output_zero_point.
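(Roughly, that requantization step looks like the following sketch; the scales, zero point, and accumulator values are made-up examples, not the actual fbgemm kernel.)
import torch

x_scale, w_scale = 0.05, 0.002
out_scale, out_zero_point = 0.1, 64  # example output quantization params
y_acc = torch.tensor([2600, -900])   # int32 accumulator: conv(x_q, w_q) + bias_int32
requant_scale = (x_scale * w_scale) / out_scale
y_q = torch.clamp(torch.round(y_acc * requant_scale) + out_zero_point, 0, 255).to(torch.uint8)
print(y_q)  # tensor([67, 63], dtype=torch.uint8)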
cc @dskhudia could you link the fbgemm implementation for conv? |
st184484 | We find that modeling bias in QAT is not very important since it doesn't affect accuracy too much. One workaround you can do is to remove the bias from the Conv and add it explicitly outside of the conv, so that adding the bias can be modeled with add. |
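A rough sketch of that workaround, with made-up module and parameter names. Note this only models the bias add during QAT; in the final converted model you would still need to handle the bias tensor yourself, since QFunctional.add expects quantized inputs:
import torch
import torch.nn as nn
from torch.nn.quantized import FloatFunctional

class ConvWithExplicitBias(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, bias=False)  # bias removed from conv
        self.bias = nn.Parameter(torch.zeros(1, out_ch, 1, 1))         # kept as a separate parameter
        self.add_bias = FloatFunctional()                              # so the add is fake-quantized in QAT

    def forward(self, x):
        return self.add_bias.add(self.conv(x), self.bias)

m = ConvWithExplicitBias(3, 8, 3)
out = m(torch.randn(1, 3, 16, 16))  # works in float/QAT mode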
st184485 | I am trying to quantize a salient object detection model.
Originally, my ResNet class would look like:
import torch.nn as nn
import torch.nn.quantized as nnq
import torch.nn.functional as F
import torch.nn.quantized.functional as qF
class ResNet(nn.Module):
def __init__(self, block, layers):
self.inplanes = 64
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nn.BatchNorm2d(64,affine = affine_par)
for i in self.bn1.parameters():
i.requires_grad = False
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True) # changed to Quanti
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation__ = 2)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1,dilation__ = 1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion or dilation__ == 2 or dilation__ == 4:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion,affine = affine_par),
)
for i in downsample._modules['1'].parameters():
i.requires_grad = False
layers = []
layers.append(block(self.inplanes, planes, stride,dilation_=dilation__, downsample = downsample ))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes,dilation_=dilation__))
return nn.Sequential(*layers)
def forward(self, x):
tmp_x = []
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
tmp_x.append(x)
x = self.maxpool(x)
x = self.layer1(x)
tmp_x.append(x)
x = self.layer2(x)
tmp_x.append(x)
x = self.layer3(x)
tmp_x.append(x)
x = self.layer4(x)
tmp_x.append(x)
return tmp_x
And it works just fine. But if I replace everything with quantized functions like:
class ResNet(nn.Module):
def __init__(self, block, layers):
self.inplanes = 64
super(ResNet, self).__init__()
self.conv1 = nnq.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nnq.BatchNorm2d(64) #,affine = affine_par
for i in self.bn1.parameters():
i.requires_grad = False
self.relu = nnq.ReLU(inplace=False)
#self.maxpool = F.nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True) # change
self.maxpool = qF.max_pool2d(x = ??? ,kernel_size=3, stride=2, padding=1, ceil_mode=True)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation__ = 2)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, 0.01)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1,dilation__ = 1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion or dilation__ == 2 or dilation__ == 4:
downsample = nn.Sequential(
nnq.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nnq.BatchNorm2d(planes * block.expansion), #,affine = affine_par
)
for i in downsample._modules['1'].parameters():
i.requires_grad = False
layers = []
layers.append(block(self.inplanes, planes, stride,dilation_=dilation__, downsample = downsample ))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes,dilation_=dilation__))
return nn.Sequential(*layers)
def forward(self, x):
tmp_x = []
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
tmp_x.append(x)
x = qF.max_pool2d(x,kernel_size=3, stride=2, padding=1, ceil_mode=True) #this certainly will not create a MaxPool layer.
#x = self.maxpool(x)
x = self.layer1(x)
tmp_x.append(x)
x = self.layer2(x)
tmp_x.append(x)
x = self.layer3(x)
tmp_x.append(x)
x = self.layer4(x)
tmp_x.append(x)
return tmp_x
Error:
TypeError: max_pool2d() missing 1 required positional argument: 'input'
So, I think the problem is that torch.nn.MaxPool2d does not need an input argument when it is constructed, but torch.nn.quantized.functional.max_pool2d does. Does anyone know a way around this? How can I successfully quantize other custom classes like this one? |
st184486 | Solved by parth15041995 in post #2
quantized maxpool2d and adaptiveavgpool2d should not be defined in the quantizable version. |
st184487 | quantized maxpool2d and adaptiveavgpool2d 3 should not be defined in the quantizable version. |
st184488 | I don’t think there is such thing as F.nn.MaxPool2d – F, which is an alias to functional in your case does not have stateful layers. However, in your case you are treating it as if it did. In both models you need to replace the max pooling definition to nn.MaxPool2d.
Also, in the second case, you cannot call qF.max_pool2d in the constructor. As mentioned before, the F and qF namespaces are stateless, meaning they operate directly on the input, and don’t require instance construction. The way you use it is:
Remove self.max_pool = F.max_pool2d(...) and self.max_pool = qF.max_pool2d(...) from the constructors.
Add them in the forward method, but without the self.... = part, like this: max_pool_output = qF.max_pool2d(some_input, *some_arguments) (see the sketch below). |
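To spell that out, here is a minimal sketch of the corrected pattern (the layer sizes are made up and only the pooling part matters):
import torch.nn as nn
import torch.nn.functional as F

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # stateful module version: works on float tensors and, after convert, on quantized tensors
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=True)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool(x)
        # equivalent stateless call, done in forward, never in __init__:
        # x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        return x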
st184489 | Hi,
I’m fairly new to PyTorch and I’d like to understand how to import a quantized TFLite model into PyTorch so I can work on it in PyTorch.
I already have a PyTorch model definition which matches the model used to create the .tflite file, except that this tflite file has been quantized, presumably automatically at export time.
There are two aspects of this I want to understand better.
First, the conv2d kernels and biases in the TFLite file are float16. Of course, I load these tensors’ buffers into float16 numpy arrays when I am reading from the tflite file. But is it enough to use these float16 numpy arrays as the values when I am populating the state_dict for my PyTorch model? Or do I need to define the torch model differently, for instance when I initialize the nn.Conv2d modules?
Second, I notice that this TFLite model has many (about 75) automatically generated “dequantize” layers after the normal-seeming part of the model. Do I need to manually add layers to my PyTorch model to match all these TFLite dequantization layers?
I’d appreciate any advice, especially pointers to any examples of how to do this. |
st184490 | You can get the TF model weights and load them into the non-quantized PyTorch model. After that you can just quantize the model using the PTQ flow described here:
pytorch.org
(beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials... 63 |
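For the state_dict part of the question, a rough sketch of what that transfer can look like. The tensor names, shapes, and the transpose convention below are assumptions for illustration; check them against your actual .tflite file:
import numpy as np
import torch

# float16 numpy arrays read out of the .tflite buffers (layout assumed to be H, W, in, out here;
# adjust the permute below to whatever layout your file actually uses)
tflite_kernel = np.zeros((7, 7, 3, 64), dtype=np.float16)
tflite_bias = np.zeros((64,), dtype=np.float16)

state_dict = {}
# PyTorch nn.Conv2d expects float32 weights with layout (out, in, H, W)
state_dict["conv1.weight"] = torch.from_numpy(
    tflite_kernel.astype(np.float32)).permute(3, 2, 0, 1).contiguous()
state_dict["conv1.bias"] = torch.from_numpy(tflite_bias.astype(np.float32))

# model.load_state_dict(state_dict, strict=False)
# then run the eager-mode PTQ flow (qconfig -> prepare -> calibrate -> convert) on the float model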
st184491 | With the same layer, the same initialization method, and the same seed, I run two pieces of code independently but get different weights, which is really confusing.
import random
import os
import numpy as np
import torch
import torch.nn as nn
seed = 2020
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
def get_encoder():
input_dim = 109
encoder = nn.Sequential()
input_dims = [input_dim] + [
int(i) for i in np.exp(np.log(input_dim) * np.arange(num_layers - 1, 0, -1) / num_layers)
]
for layer_i, (input_dim, output_dim) in enumerate(zip(input_dims[:-1], input_dims[1:])):
encoder.add_module("fc_" + str(layer_i), nn.Linear(input_dim, output_dim))
encoder.add_module("fc_" + str(layer_i) + "_act", nn.Softsign())
model.add_module("output_layer", nn.Linear(n_hiddens, 1))
model.add_module("output_layer", nn.Linear(1, 1))
return encoder
model = get_encoder()
nn.init.kaiming_normal_(model.fc_0.weight)
for p in model.parameters():
print(p.sum())
break
# output tensor(-12.7479, grad_fn=<SumBackward0>)
import random
import os
import numpy as np
import torch
import torch.nn as nn
seed = 2020
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
model = nn.Linear(109, 33)
nn.init.kaiming_normal_(model.weight)
for n, p in model.named_parameters():
print(p.sum())
break
# output tensor(-5.6983, grad_fn=<SumBackward0>) |
st184492 | Solved by Zafar in post #2
The determinism also depends on how many times did you call the random number generator. In you examples, there are two models, which are functionally the same. However, their initialization is different, because the number of random values generated after the seeding is different.
Here is what you… |
st184493 | The determinism also depends on how many times you call the random number generator. In your examples, there are two models which are functionally the same. However, their initialization is different, because the number of random values generated after the seeding is different.
Here is what you are doing for the first model:
Set the seed
Create linear layers with random weights 4 times (I am assuming num_layers = 4)
Initialize the weights of the first layer
The number of times the random number generator is called before the kaiming_normal_ corresponds to the number of elements in the 4 linear layers of the encoder.
Now here is what you do in the second model:
Set the seed
Create a single linear layer with random weights
Initialize the weights of that layer
The number of times the random number generator is called before the kaiming_normal_ corresponds to the number of elements in that single linear layer. That produces different random numbers. If you want your initialization to be completely identical, you will have to either 1) call the linear constructor in the second case the same number of times as in the first case (including the arguments), or 2) set the seed right before the kaiming_normal_, as in the sketch below.
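For example, a minimal sketch of option 2, re-seeding just before the initializer call (the layer sizes are taken from the question, everything else is illustrative):
import torch
import torch.nn as nn

torch.manual_seed(2020)
model_a = nn.Linear(109, 33)  # consumes some random numbers
extra = nn.Linear(109, 33)    # consumes even more, shifting the RNG state

torch.manual_seed(2020)       # reset the generator right before the init call
nn.init.kaiming_normal_(model_a.weight)

torch.manual_seed(2020)
model_b = nn.Linear(109, 33)
torch.manual_seed(2020)
nn.init.kaiming_normal_(model_b.weight)

print(torch.equal(model_a.weight, model_b.weight))  # True: both inits start from the same RNG state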
P.S. If you want the question to get answered faster, you might need to change the tag. This question has nothing to do with quantization, so its visibility is limited to the wrong group. |
st184494 | Hi,
The graph mode post-training quantization in PyTorch is awesome because it doesn't require the model to be defined in a particular way using only Modules. (https://pytorch.org/tutorials/prototype/graph_mode_static_quantization_tutorial.html 12)
I am wondering whether there is a plan for graph mode quantization-aware training in PyTorch as well?
Thanks,
Manu |
st184495 | Solved by dskhudia in post #2
Hi Manu,
Yes. It’s very much work in progress and the code is at https://github.com/pytorch/pytorch/tree/master/torch/quantization/fx Tutorials and docs will be released once it’s ready. |
st184496 | Hi Manu,
Yes. It’s very much work in progress and the code is at https://github.com/pytorch/pytorch/tree/master/torch/quantization/fx 13 Tutorials and docs will be released once it’s ready. |
st184497 | You can monitor the progress here: https://github.com/pytorch/pytorch/issues/45635 11 |
st184498 | Thanks Daya,
What would be the rough timeline for graph mode QAT to become available? Will it be included in the PyTorch 1.7 release?
Also, I am wondering what the difference between quantize_jit and quantize_fx would be. |
st184499 | It’s not a part of 1.7. quantize_jit is the current api that you used for graph-mode post-training quantization and it will be deprecated once quantize_fx is available (i.e., new graph mode). |