st183400 | if you want to quantize multiplication, you'll need to rewrite * to use functional modules: pytorch/functional_modules.py at master · pytorch/pytorch · GitHub; an example can be found here: pytorch/common_quantization.py at e61fc1c03b64e61ca4f5bbe278db7ee2cf35e8ff · pytorch/pytorch · GitHub
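For reference, a minimal sketch of what rewriting * into a functional module looks like (the wrapper class and names here are illustrative, not taken from the linked files):
import torch.nn as nn
import torch.nn.quantized as nnq

class MulBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # FloatFunctional is swapped for its quantized counterpart during convert,
        # so the multiplication gets its own observer and quantization parameters.
        self.mul = nnq.FloatFunctional()

    def forward(self, a, b):
        return self.mul.mul(a, b)  # instead of a * b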
If you want to dequantize then quantize, it’s better to use different instances of quantize module. |
st183401 | Hello,
I am trying to apply different kinds of quantization (static, dynamic and quantization-aware training) to a BERT model taken from the transformers library. My model works as expected with no quantization or with dynamic quantization, but both static quantization and quantization-aware training return a strange error which makes it seem as though QuantStub and DeQuantStub are changing tensor shapes.
Here is a simplified version; the comments indicate where the errors are and the reasoning for adding the various tweaks. I included the two error stacks described in the comments (errors A and B) at the end of this post.
def __init__(self, ...):
...
self.model = AutoModelForTokenClassification.from_pretrained('bert-base-multilingual-cased', config=...)
...
def train(self, trainset, devset, num_epochs, **kwargs):
self.trainer = Trainer(model=self.model, args=..., train_dataset=trainset, eval_dataset=devset)
# Without any kind of quantization, the model works as expected
# Case 1: dynamic quantization for Linear layers, works as expected
if self.quantization == "dynamic":
self.trainer.train()
self.model = quant.quantize_dynamic(
self.model, {nn.Linear}, dtype=getattr(torch, self.quant_type)
)
self.trainer.model = self.model
# Case 2: quantization-aware training. The errors are exactly the same as for static quantization.
elif self.quantization == "qat":
self.model.quant = quant.QuantStub()
self.model.dequant = quant.DeQuantStub()
# This snippet is necessary in the first place because of https://discuss.pytorch.org/t/89154, otherwise I get an "AssertionError: The only supported dtype for nnq.Embedding is torch.quint8"
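# Note: this isinstance check runs inside a loop over the model's submodules; the iteration itself is omitted in this simplified version.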
if isinstance(module, nn.Embedding):
# Both alternatives here cause different errors
module.qconfig = None # Causes Error A (presumably because of operating on both quantized and non-quantized tensors)
module.qconfig = quant.float_qparams_dynamic_qconfig # Causes Error B (why?)
self.model.qconfig = quant.get_default_qat_qconfig('fbgemm')
quant.prepare_qat(self.model, inplace=True)
Trainer.compute_loss = quant_compute_loss
Trainer.prediction_step = quant_prediction_step
self.model.train()
self.trainer.train()
self.model.eval()
self.model = quant.convert(self.model, inplace=True)
self.trainer.model = self.model
# Case 3: static quantization, not included here. The code is very similar and I get the exact same errors in the same place.
elif self.quantization == "static":
pass
self.trainer.save_model('model')
Error A (quantization disabled for Embedding layers):
Traceback (most recent calls WITHOUT Sacred internals):
File "/home/username/project/filename.py", line 402, in predict
batch_pred_ids, label_ids, _ = self.trainer.predict(dataset)
File "/home/username/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1355, in predict
return self.prediction_loop(test_dataloader, description="Prediction")
File "/home/username/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1417, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
File "/home/username/project/filename.py", line 79, in quant_prediction_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 1539, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 838, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 202, in forward
embeddings = self.LayerNorm(embeddings)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/quantized/modules/normalization.py", line 25, in forward
eps=self.eps, output_scale=self.scale, output_zero_point=self.zero_point)
RuntimeError: Could not run 'quantized::layer_norm' with arguments from the 'CPU' backend. 'quantized::layer_norm' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched,
VmapMode].
Error B (modified qconfig for embedding layers):
Traceback (most recent calls WITHOUT Sacred internals):
File "/home/username/project/filename.py", line 401, in predict
batch_pred_ids, label_ids, _ = self.trainer.predict(dataset)
File "/home/username/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1355, in predict
return self.prediction_loop(test_dataloader, description="Prediction")
File "/home/username/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1417, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
File "/home/username/project/filename.py", line 79, in quant_prediction_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 1539, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 838, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/username/.local/lib/python3.6/site-packages/transformers/modeling_bert.py", line 201, in forward
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
RuntimeError: The size of tensor a (1024) must match the size of tensor b (128) at non-singleton dimension 0
Error A seems understandable to me, but I'm not sure what is going on with the second one. If I check manually, without quantization, inputs_embeds, position_embeddings and token_type_embeddings have shape (8, 128, 768), or sometimes (1, 128, 768), at the point where the error occurs. But with QAT (or static quantization), they have shape (1024, 768) or (128, 768) instead, as if the first two dimensions had been concatenated. Is there a way to change them back to their correct shape, maybe by changing the qconfig? I'd rather not edit the library, especially considering that my model type is determined at runtime. |
st183402 | pie3636:
transformers/modeling_bert.py
This is surprising. Can you provide a smaller repro so that we can investigate this further, something like this snippet alone:
if inputs_embeds is None:
inputs_embeds = self.word_embeddings(input_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = inputs_embeds + token_type_embeddings |
st183403 | @pie3636 did you resolve this issue? Which iterator are you using to iterate over modules when reassigning the config? |
st183404 | Hi, I have trained a YOLOv5 QAT model and exported it to ONNX. I noticed in ONNX that the biases are all set to 0. Is this supposed to be the case for QAT modules? Is it fuse_modules that causes the biases to be 0 in ONNX? |
st183405 | Hi @MrOCW, afaik there is no supported way to train models using pytorch quantization and then export the quantized models to ONNX. We currently do not plan to add support for ONNX export of pytorch quantized models. If this is important for your use case, feel free to submit a feature request or PR and our team can take a look. |
st183406 | Hi @supriyar, I am exporting the pre-quantized model with the fake quants and their q params, not the quantized model. |
st183407 | I see, if this happens for conv layers that have fusion with batch_norm then the possible reason is here pytorch/conv_fused.py at master · pytorch/pytorch · GitHub
where we set the biases to zero and add the original bias as part of the batch_norm operator. |
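For intuition, here is a sketch of the standard conv+bn folding arithmetic behind that behavior (not the exact conv_fused.py code): since y = gamma * (conv(x, W, b) - mu) / sqrt(var + eps) + beta, the conv's own bias can be held at zero while the effective bias is carried by the batch-norm terms.
import torch

def fold_conv_bn(W, b, gamma, beta, mu, var, eps=1e-5):
    # Effective weight/bias of a conv followed by batch norm, folded into one conv.
    std = torch.sqrt(var + eps)
    W_fused = W * (gamma / std).reshape(-1, 1, 1, 1)
    b_fused = beta + (b - mu) * gamma / std
    return W_fused, b_fused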
st183408 | Hi,
I'm trying to quantize the torchcrepe module (GitHub - maxrmorrison/torchcrepe: Pytorch implementation of the CREPE pitch tracker).
I used PyTorch's static quantization.
For all settings, the module size was reduced to about a quarter of the original size. The execution time, however, did not always decrease.
With 8 threads,
the execution time actually increased (23.349s → 26.471s)
With 1 thread,
the execution time did decrease (88.641s → 34.877s)
To investigate the root cause in the multi-CPU setting, I located the bottleneck, which was the second conv layer, and quantized that part alone in a standalone module.
The results were:
the execution time was reduced by about 8x in both the multi- and single-CPU settings.
It seems the non-conv parts are not helping with the speed-up in the multi-CPU setting. Does anyone know about this phenomenon? Could you suggest any directions?
Below are the output and the code. Thanks in advance.
Output - Multi-CPU
CUDA_VISIBLE_DEVICES="" python test.py
========================================================================
=== Before Quantization=================================================
Crepe(
(conv1): Conv2d(1, 1024, kernel_size=(512, 1), stride=(4, 1))
(conv1_BN): BatchNorm2d(1024, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv2): Conv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1))
(conv2_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv3): Conv2d(128, 128, kernel_size=(64, 1), stride=(1, 1))
(conv3_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv4): Conv2d(128, 128, kernel_size=(64, 1), stride=(1, 1))
(conv4_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv5): Conv2d(128, 256, kernel_size=(64, 1), stride=(1, 1))
(conv5_BN): BatchNorm2d(256, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv6): Conv2d(256, 512, kernel_size=(64, 1), stride=(1, 1))
(conv6_BN): BatchNorm2d(512, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(classifier): Linear(in_features=2048, out_features=360, bias=True)
(quant): QuantStub()
(dequant): DeQuantStub()
)
Size (MB): 88.990
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ../c10/core/TensorImpl.h:1153.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Execution Time (s): 23.349
/opt/conda/lib/python3.8/site-packages/torch/quantization/observer.py:122: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
warnings.warn(
/opt/conda/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ../aten/src/ATen/native/BinaryOps.cpp:448.)
return torch.floor_divide(self, other)
=== After Quantization==================================================
Crepe(
(conv1): QuantizedConv2d(1, 1024, kernel_size=(512, 1), stride=(4, 1), scale=0.027386803179979324, zero_point=65)
(conv1_BN): QuantizedBatchNorm2d(1024, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.009443704970180988, zero_point=66)
(conv2_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv3): QuantizedConv2d(128, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.003322516568005085, zero_point=64)
(conv3_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv4): QuantizedConv2d(128, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.001186757697723806, zero_point=66)
(conv4_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv5): QuantizedConv2d(128, 256, kernel_size=(64, 1), stride=(1, 1), scale=0.0003566498344298452, zero_point=66)
(conv5_BN): QuantizedBatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv6): QuantizedConv2d(256, 512, kernel_size=(64, 1), stride=(1, 1), scale=0.0001743721222737804, zero_point=64)
(conv6_BN): QuantizedBatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(classifier): QuantizedLinear(in_features=2048, out_features=360, scale=0.00038746290374547243, zero_point=62, qscheme=torch.per_channel_affine)
(quant): Quantize(scale=tensor([0.0079]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant): DeQuantize()
)
Size (MB): 22.340
Execution time (s): 26.471
==================================================
========================================================================
=== Before Quantization=================================================
ConvModel(
(conv): Conv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1))
(quant): QuantStub()
(dequant): DeQuantStub()
)
Size (MB): 33.556
Execution Time (s): 7.850
=== After Quantization==================================================
ConvModel(
(conv): QuantizedConv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.022107142955064774, zero_point=56)
(quant): Quantize(scale=tensor([0.0079]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant): DeQuantize()
)
Size (MB): 8.394
Execution time (s): 0.983
==================================================
Output - Single-CPU
➜ CUDA_VISIBLE_DEVICES="" python test.py
========================================================================
=== Before Quantization=================================================
Crepe(
(conv1): Conv2d(1, 1024, kernel_size=(512, 1), stride=(4, 1))
(conv1_BN): BatchNorm2d(1024, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv2): Conv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1))
(conv2_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv3): Conv2d(128, 128, kernel_size=(64, 1), stride=(1, 1))
(conv3_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv4): Conv2d(128, 128, kernel_size=(64, 1), stride=(1, 1))
(conv4_BN): BatchNorm2d(128, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv5): Conv2d(128, 256, kernel_size=(64, 1), stride=(1, 1))
(conv5_BN): BatchNorm2d(256, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(conv6): Conv2d(256, 512, kernel_size=(64, 1), stride=(1, 1))
(conv6_BN): BatchNorm2d(512, eps=0.001, momentum=0.0, affine=True, track_running_stats=False)
(classifier): Linear(in_features=2048, out_features=360, bias=True)
(quant): QuantStub()
(dequant): DeQuantStub()
)
Size (MB): 88.990
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ../c10/core/TensorImpl.h:1153.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Execution Time (s): 88.641
/opt/conda/lib/python3.8/site-packages/torch/quantization/observer.py:122: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
warnings.warn(
/opt/conda/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at ../aten/src/ATen/native/BinaryOps.cpp:448.)
return torch.floor_divide(self, other)
=== After Quantization==================================================
Crepe(
(conv1): QuantizedConv2d(1, 1024, kernel_size=(512, 1), stride=(4, 1), scale=0.025484636425971985, zero_point=66)
(conv1_BN): QuantizedBatchNorm2d(1024, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.009398533962666988, zero_point=71)
(conv2_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv3): QuantizedConv2d(128, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.003245722968131304, zero_point=58)
(conv3_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv4): QuantizedConv2d(128, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.001026029814966023, zero_point=57)
(conv4_BN): QuantizedBatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv5): QuantizedConv2d(128, 256, kernel_size=(64, 1), stride=(1, 1), scale=0.0004001582565251738, zero_point=72)
(conv5_BN): QuantizedBatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(conv6): QuantizedConv2d(256, 512, kernel_size=(64, 1), stride=(1, 1), scale=0.00016336666885763407, zero_point=62)
(conv6_BN): QuantizedBatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(classifier): QuantizedLinear(in_features=2048, out_features=360, scale=0.00038912895251996815, zero_point=61, qscheme=torch.per_channel_affine)
(quant): Quantize(scale=tensor([0.0079]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant): DeQuantize()
)
Size (MB): 22.340
Execution time (s): 34.877
==================================================
========================================================================
=== Before Quantization=================================================
ConvModel(
(conv): Conv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1))
(quant): QuantStub()
(dequant): DeQuantStub()
)
Size (MB): 33.556
Execution Time (s): 31.405
=== After Quantization==================================================
ConvModel(
(conv): QuantizedConv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1), scale=0.021491192281246185, zero_point=65)
(quant): Quantize(scale=tensor([0.0079]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant): DeQuantize()
)
Size (MB): 8.394
Execution time (s): 3.334
==================================================
Code (borrowed from the crepe repo)
import torch
import torchcrepe
import functools
import torch.nn.functional as F
import numpy as np
import os
from torch.quantization import QuantStub, DeQuantStub
import time
class Crepe(torch.nn.Module):
"""Crepe model definition"""
def __init__(self, model='full'):
super().__init__()
# Model-specific layer parameters
if model == 'full':
in_channels = [1, 1024, 128, 128, 128, 256]
out_channels = [1024, 128, 128, 128, 256, 512]
self.in_features = 2048
elif model == 'tiny':
in_channels = [1, 128, 16, 16, 16, 32]
out_channels = [128, 16, 16, 16, 32, 64]
self.in_features = 256
else:
raise ValueError(f'Model {model} is not supported')
# Shared layer parameters
kernel_sizes = [(512, 1)] + 5 * [(64, 1)]
strides = [(4, 1)] + 5 * [(1, 1)]
# Overload with eps and momentum conversion given by MMdnn
batch_norm_fn = functools.partial(torch.nn.BatchNorm2d,
eps=0.001,
momentum=0.0,)
# Layer definitions
self.conv1 = torch.nn.Conv2d(
in_channels=in_channels[0],
out_channels=out_channels[0],
kernel_size=kernel_sizes[0],
stride=strides[0])
self.conv1_BN = batch_norm_fn(
num_features=out_channels[0])
self.conv2 = torch.nn.Conv2d(
in_channels=in_channels[1],
out_channels=out_channels[1],
kernel_size=kernel_sizes[1],
stride=strides[1])
self.conv2_BN = batch_norm_fn(
num_features=out_channels[1])
self.conv3 = torch.nn.Conv2d(
in_channels=in_channels[2],
out_channels=out_channels[2],
kernel_size=kernel_sizes[2],
stride=strides[2])
self.conv3_BN = batch_norm_fn(
num_features=out_channels[2])
self.conv4 = torch.nn.Conv2d(
in_channels=in_channels[3],
out_channels=out_channels[3],
kernel_size=kernel_sizes[3],
stride=strides[3])
self.conv4_BN = batch_norm_fn(
num_features=out_channels[3])
self.conv5 = torch.nn.Conv2d(
in_channels=in_channels[4],
out_channels=out_channels[4],
kernel_size=kernel_sizes[4],
stride=strides[4])
self.conv5_BN = batch_norm_fn(
num_features=out_channels[4])
self.conv6 = torch.nn.Conv2d(
in_channels=in_channels[5],
out_channels=out_channels[5],
kernel_size=kernel_sizes[5],
stride=strides[5])
self.conv6_BN = batch_norm_fn(
num_features=out_channels[5])
self.classifier = torch.nn.Linear(
in_features=self.in_features,
out_features=torchcrepe.PITCH_BINS)
self.quant = QuantStub()
self.dequant = DeQuantStub()
def forward(self, x, embed=False):
x = self.quant(x)
# Forward pass through first five layers
x = self.embed(x)
if embed:
return x
# Forward pass through layer six
x = self.layer(x, self.conv6, self.conv6_BN)
# shape=(batch, self.in_features)
x = x.permute(0, 2, 1, 3).reshape(-1, self.in_features)
# Compute logits
x = torch.sigmoid(self.classifier(x))
x = self.dequant(x)
return x
###########################################################################
# Forward pass utilities
###########################################################################
def embed(self, x):
"""Map input audio to pitch embedding"""
# shape=(batch, 1, 1024, 1)
x = x[:, None, :, None]
# Forward pass through first five layers
x = self.layer(x, self.conv1, self.conv1_BN, (0, 0, 254, 254))
x = self.layer(x, self.conv2, self.conv2_BN)
x = self.layer(x, self.conv3, self.conv3_BN)
x = self.layer(x, self.conv4, self.conv4_BN)
x = self.layer(x, self.conv5, self.conv5_BN)
return x
def layer(self, x, conv, batch_norm, padding=(0, 0, 31, 32)):
"""Forward pass through one layer"""
x = F.pad(x, padding)
x = conv(x)
x = F.relu(x)
x = batch_norm(x)
return F.max_pool2d(x, (2, 1), (2, 1)).contiguous()
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print(f'Size (MB): {os.path.getsize("temp.p")/1e6 : .3f}')
os.remove('temp.p')
class ConvModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv2d(1024, 128, kernel_size=(64, 1), stride=(1, 1))
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
x = self.dequant(x)
return x
torch.set_num_threads(1)
def run(model, input):
print("=" * (23 + 49))
print("=== Before Quantization" + "=" * 49)
# Original
print(model)
print_size_of_model(model)
start = time.time()
model(input)
print(f"Execution Time (s): {time.time() - start :.3f}")
# Quantized
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# Calibration
model(input)
torch.quantization.convert(model, inplace=True)
print("=== After Quantization" + "=" * 50)
print(model)
print_size_of_model(model)
start = time.time()
model(input)
print(f"Execution time (s): {time.time() - start :.3f}")
print("=" * (23 + 49))
crepe_input = torch.rand(1299, 1024).contiguous()
crepe = Crepe()
conv_model_input = torch.rand([1299, 1024, 128, 1])
conv_model = ConvModel()
# Set to eval()
crepe.eval()
conv_model.eval()
# Disable running stats for measurement
crepe.conv1_BN.track_running_stats = False
crepe.conv2_BN.track_running_stats = False
crepe.conv3_BN.track_running_stats = False
crepe.conv4_BN.track_running_stats = False
crepe.conv5_BN.track_running_stats = False
crepe.conv6_BN.track_running_stats = False
run(crepe, crepe_input)
run(conv_model, conv_model_input) |
st183409 | Hi @jasonhuh,
This may have something to do with the shape of the input used in the Conv layers. My guess is that if the shape is not big enough then the work per thread is not significant enough for us to see the wins.
@dskhudia do you have any additional insights here about how FBGEMM may be behaving differently for single vs multi-thread here? |
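One way to dig into this is to time the same module under different thread counts with torch.utils.benchmark; a sketch below, where the Conv2d stand-in and shapes are placeholders for the real quantized model and input:
import torch
import torch.utils.benchmark as benchmark

model = torch.nn.Conv2d(1024, 128, kernel_size=(64, 1))  # stand-in workload
x = torch.rand(4, 1024, 128, 1)

for num_threads in (1, 8):
    timer = benchmark.Timer(
        stmt="model(x)",
        globals={"model": model, "x": x},
        num_threads=num_threads,
    )
    print(f"threads={num_threads}: {timer.timeit(10)}")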
st183410 | Hi, I use the _fake_quantize_learnable_per_tensor_affine API as a component of my quantization layer. I found that in some cases this API makes the gradients of scale, zero_point and the input wrong: the gradients generated by backward() differ from the ones calculated by hand, and sometimes they even produce NaNs. After I replaced _fake_quantize_learnable_per_tensor_affine with my own code, the NaNs disappeared. My PyTorch version is 1.8.2. |
st183411 | Hi @sherylwang , do you have an example that could reproduce the behavior? Typically a small representative model would help us understand and debug the problem better. |
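For comparison, a hand-written learnable per-tensor fake-quantize using a straight-through estimator, the kind of replacement the post describes (the function name and gradient formulation are mine, not PyTorch's internal kernel):
import torch

def fake_quantize_learnable(x, scale, zero_point, quant_min=0, quant_max=255):
    # Forward uses rounded/clamped values; gradients flow to x, scale and
    # zero_point through the non-rounded expressions (straight-through estimator).
    zp = (zero_point.round() - zero_point).detach() + zero_point
    q = x / scale + zp
    q = torch.clamp(q, quant_min, quant_max)
    q = (q.round() - q).detach() + q
    return (q - zp) * scale

x = torch.randn(8, requires_grad=True)
scale = torch.tensor(0.1, requires_grad=True)
zero_point = torch.tensor(0.0, requires_grad=True)
fake_quantize_learnable(x, scale, zero_point).sum().backward()
print(x.grad, scale.grad, zero_point.grad)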
st183412 | Hello,
I am trying to deploy my CNN inpainting model to an Android mobile app, so I am following this tutorial:
https://pytorch.org/tutorials/recipes/quantization.html
model = PartialConvUNet()
backend = 'qnnpack'
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_size_of_model(model_static_quantized)
And when I run this, I get this error:
Traceback (most recent call last):
File "C:\PycharmProjects\MyCNN\mymodel.py", line 219, in <module>
torch.backends.quantized.engine = 'qnnpack'
File "C:\Anaconda3\envs\deeplearning\lib\site-packages\torch\backends\quantized\__init__.py", line 29, in __set__
torch._C._set_qengine(_get_qengine_id(val))
RuntimeError: quantized engine QNNPACK is not supported
If I set
backend='fbgemm'
It works, but this is the backend for x86 servers, which will not be compatible with the Android environment, correct? |
st183414 | This probably means that the machine you are doing quantization on does not support QNNPACK. Could you share what machine and environment you are using, and what PyTorch version? |
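A quick way to check what a given build supports (on builds without QNNPACK compiled in, setting the engine raises the error above):
import torch

print(torch.backends.quantized.supported_engines)  # available engines in this build
print(torch.backends.quantized.engine)             # currently selected engine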
st183415 | Hello Vasiliy,
OS: Windows 10 x64
CPU: Intel I9-10885H 2.40GHz
GPU: NVIDIA GeForce 1650Ti
PyTorch Version: 1.10.0.dev20210629 |
st183416 | dalseeroh:
inpainting model to Android mobile app
Are you building PyTorch for Windows on your machine, or are you cross-compiling for Android? I see that you want to deploy on Android, and there QNNPACK is definitely supported. But on Windows for OSS I am not sure. I will check and get back to you. |
st183417 | Yes. I wrote the model in PyTorch on a Windows machine. I am following this deployment workflow (from pytorch.org):
Write a model → Quantize → Script/trace → Optimize → Maven(?) |
st183418 | According to this article: A developer-friendly guide to model quantization with PyTorch
I need an ARM CPU…
“Since these libraries are architecture-dependent, static quantization must be performed on a machine with the same architecture as your deployment target. If you are using FBGEMM, you must perform the calibration pass on an x86 CPU (usually not a problem); if you are using QNNPACK, calibration needs to happen on an ARM CPU (this is quite a bit harder).” |
st183419 | hi @dalseeroh ,
“Since these libraries are architecture-dependent, static quantization must be performed on a machine with the same architecture as your deployment target. If you are using FBGEMM, you must perform the calibration pass on an x86 CPU (usually not a problem); if you are using QNNPACK, calibration needs to happen on an ARM CPU (this is quite a bit harder).”
That quote does not seem to be accurate. In general, you can use QNNPACK on x86, this is a widely used functionality on Meta where models are calibrated on linux machines with QNNPACK for inference on arm. I think the issues you are hitting on your machine might be specific to your environment. |
st183420 | Hello,
I recently wrote a custom image inpainting model, an altered form of UNet. Now I want to demo this model in an Android environment, which will be a Samsung Galaxy S10. After a few days of research, I learned that I need to quantize the model for speedy mobile performance. I chose to follow 'Post training static quantization' in this link: Quantization Recipe — PyTorch Tutorials 1.10.0+cu102 documentation. But I just hit a roadblock with a question: on which machine should I run the code below? I wrote this model on x64 Windows 10. Should I run this in the PyCharm IDE, save the model, and then import the model in Android Studio? Can someone give me a little guidance?
backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
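# (for static quantization, a calibration pass over representative inputs would normally run here, before convert)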
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
Development Environment:
OS: Windows 10 x64
CPU: Intel I9-10885H 2.40GHz
GPU: NVIDIA GeForce 1650Ti
PyTorch Version: 1.10.0.dev20210629
Target Environment:
Android OS: 4.4+
Device: Samsung Galaxy S10 |
st183421 | A lot of deployment is also done through TorchScript; you can get some more info here: TorchScript for Deployment — PyTorch Tutorials 1.10.0+cu102 documentation |
st183422 | I am trying to do quantization-aware training before TorchScripting to reduce the model size and keep the accuracy. My question is: qnnpack only runs on ARM architectures. Where should I run this if I am using a Windows machine with an Intel CPU? |
st183423 | Of course, yes. I am still looking for an answer to: where should I run the 'qnnpack' backend if I plan to do quantization-aware training? Jetson Nano? Coral Dev Board? Raspberry Pi? Cross compile? |
st183424 | dalseeroh:
“My question is:” qnnpack only runs on arm architecture. Where should I run this if I am using Windows machine with Intel CPU?
QNNPACK is most performant on arm, but it does run on x86 with the same numerics. You can use any machine which is compiled with qnnpack to calibrate your model. |
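To make that concrete, here is a sketch of calibrating with the qnnpack engine on a desktop machine whose PyTorch build includes QNNPACK, then packaging for mobile (model and calibration_batches are placeholders, and the lite-interpreter save call may differ by version):
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model.eval()
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")
torch.backends.quantized.engine = "qnnpack"

prepared = torch.quantization.prepare(model, inplace=False)
for batch in calibration_batches:  # representative inputs
    prepared(batch)
quantized = torch.quantization.convert(prepared, inplace=False)

scripted = torch.jit.script(quantized)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")  # or optimized.save("model.pt")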
st183425 | Hello there. I am new to the quantization topic and have been trying to explore it. I have a question about training a pre-trained, quantized ResNet-18 model. Is this possible? If yes, is there a sample of how to do this? Thanks in advance. |
st183427 | To train this model, you would likely need to convert it back to floating point and then train (because the quantized operators typically do not have autograd support).
I am unaware of an automated way to do this.
Best regards
Thomas |
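A common workaround, sketched here with torchvision's quantizable ResNet (the fine-tuning loop is omitted): start from the float pretrained weights and run QAT, rather than trying to convert an already-quantized model back.
import torch
from torchvision.models.quantization import resnet18

model = resnet18(pretrained=True, quantize=False)  # float weights, quantization-ready graph
model.train()
model.fuse_model()  # fuse conv+bn(+relu) before preparing for QAT
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)

# ... fine-tune for a few epochs with the usual training loop ...

model.eval()
quantized = torch.quantization.convert(model)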
st183428 | Hello,
I recently wrote a UNet-like model which receives 2 inputs (input, mask) of the same shape and returns 1 output. I am trying to fuse convolution/batchnorm/ReLU like below, but it returns an error. Is there any way to fuse the input/mask conv layers at the same time?
for m in model.modules():
if type(m) == PartialConvLayer:
torch.quantization.fuse_modules(m, ["input_conv", "mask_conv", "activation"], inplace=True)
print(type(m))
AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv2d'>, <class 'torch.nn.modules.conv.Conv2d'>, <class 'torch.nn.modules.activation.ReLU'>)
The architecture is as below:
PartialConvUNet(
(encoder_1): PartialConvLayer(
(input_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(mask_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(activation): ReLU()
)
(encoder_2): PartialConvLayer(
(input_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(mask_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(batch_normalization): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_3): PartialConvLayer(
(input_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_4): PartialConvLayer(
(input_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_5): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_6): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_7): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(decoder_5): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_6): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_7): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_4): PartialConvLayer(
(input_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_3): PartialConvLayer(
(input_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_2): PartialConvLayer(
(input_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_1): PartialConvLayer(
(input_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
) |
st183430 | I think fusing conv, conv, relu isn’t supported yet. The only layers you can currently fuse are:
conv, bn
conv, bn, relu
conv, relu
linear, relu
bn, relu
From: Fuse Modules Recipe — PyTorch Tutorials 1.10.0+cu102 documentation
Maybe try fusing just the last conv and relu? |
st183431 | Thanks Karthik,
One more question,
If I would like to fuse input_conv + batch_norm + activation in the encoder layers, and mask_conv + batch_norm in the decoder layers, how can I distinguish those layers (encoder_1, encoder_2, …, decoder_1, decoder_2, …) when iterating in the loop?
For example,
for m in model.modules():
if m.methods_for_layers_name == "encoder_1":
torch.quantization.fuse_modules(m, ["input_conv", "batch_normalization", "activation"], inplace=True)
elif m.methods_for_layers_name == "decoder_1":
torch.quantization.fuse_modules(m, ["mask_conv", "batch_normalization"], inplace=True)
Any idea? |
st183432 | The fusing is based on the layer names. So in your case that would be 'encoder_1', 'encoder_2', etc. However, I don't think PyTorch supports fusing those layers. And I'm not sure you can even fuse the conv2d and bn layers, since they are actually part of PartialConvLayer.
So if you really wanted to fuse then you may have to write that code yourself. |
st183433 | Hello!
I think I figured it out. Looks like I correctly fused some operations?
for name, module in model.named_modules():
if type(module) == PartialConvLayer:
# encoder_1 fusion
if "encoder_1" in name:
torch.quantization.fuse_modules(module, [['input_conv', 'activation']], inplace=True)
# encoder_2 ~ encoder_7 fusion
elif "enc" in name:
torch.quantization.fuse_modules(module, [['input_conv', 'batch_normalization', 'activation']], inplace=True)
# decoder_2 ~ decoder_7 fusion
elif "decoder_1" not in name:
torch.quantization.fuse_modules(module, [['input_conv', 'batch_normalization']], inplace=True)
print("Fusion completed")
print(model)
This results in,
Fusion completed
PartialConvUNet(
(encoder_1): PartialConvLayer(
(input_conv): ConvReLU2d(
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(1): ReLU()
)
(mask_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(activation): Identity()
)
(encoder_2): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_3): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_4): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_5): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_6): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_7): PartialConvLayer(
(input_conv): ConvBnReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): Identity()
)
(decoder_7): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_6): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_5): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_4): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_3): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_2): PartialConvLayer(
(input_conv): ConvBn2d(
(0): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(mask_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_1): PartialConvLayer(
(input_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
) |
st183434 | Hi, yes looks like it did fuse at least some of the layers. Hope that works for you! |
st183435 | Hello,
I recently wrote a UNet-like model that receives 2 inputs (input image, mask) and returns 1 output. Now I am trying to fuse the modules before quantization. The model architecture looks like below.
PartialConvUNet(
(encoder_1): PartialConvLayer(
(input_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(mask_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(activation): ReLU()
)
(encoder_2): PartialConvLayer(
(input_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(mask_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(batch_normalization): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_3): PartialConvLayer(
(input_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_4): PartialConvLayer(
(input_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_5): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_6): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(encoder_7): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): ReLU()
)
(decoder_5): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_6): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_7): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_4): PartialConvLayer(
(input_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_3): PartialConvLayer(
(input_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_2): PartialConvLayer(
(input_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(batch_normalization): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_1): PartialConvLayer(
(input_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
)
Problem:
After fusing the mask_conv-based operations (mask_conv + activation, mask_conv + batch_normalization + activation, mask_conv + batch_normalization), the model returns a different (wrong) output compared to the one before fusing. What would be the reason?
for name, module in model.named_modules():
if type(module) == PartialConvLayer:
# Fusion in encoder_1 layer
if "encoder_1" in name:
torch.quantization.fuse_modules(module, [['mask_conv', 'activation']], inplace=True)
# Fusion in encoder_2 ~ encoder_7 layers
elif "enc" in name:
torch.quantization.fuse_modules(module, [['mask_conv', 'batch_normalization', 'activation']], inplace=True)
# Fusion in decoder_2 ~ decoder_7 layers
elif "decoder_1" not in name:
torch.quantization.fuse_modules(module, [['mask_conv', 'batch_normalization']], inplace=True)
PartialConvUNet(
(encoder_1): PartialConvLayer(
(input_conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(1): ReLU()
)
(activation): Identity()
)
(encoder_2): PartialConvLayer(
(input_conv): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_3): PartialConvLayer(
(input_conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_4): PartialConvLayer(
(input_conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_5): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_6): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(encoder_7): PartialConvLayer(
(input_conv): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(mask_conv): ConvReLU2d(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(batch_normalization): Identity()
(activation): Identity()
)
(decoder_7): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_6): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_5): PartialConvLayer(
(input_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_4): PartialConvLayer(
(input_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(768, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_3): PartialConvLayer(
(input_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(384, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_2): PartialConvLayer(
(input_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(mask_conv): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(batch_normalization): Identity()
(activation): LeakyReLU(negative_slope=0.2)
)
(decoder_1): PartialConvLayer(
(input_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(mask_conv): Conv2d(67, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
) |
st183436 | Hi, I quantized YOLOv5 by adding QuantStub and DeQuantStubs to all the ConvBnReLU2d in the network, exported to ONNX and folded the QDQ nodes into QLinearConv.
https://drive.google.com/file/d/11-7Mnss4g6_RJ2CizITIWFo9bGPDNkO_/view?usp=sharing
However, during inference, this is the result for the whole image: |
st183437 | Hi, after doing some searching and readings, I notice that NVIDIA’s QAT process is different from PyTorch’s.
NVIDIA seems to first calibrate the model offline, then train (QAT) the calibrated model.
Whereas in PyTorch, we fuse, prepare for QAT, enable observers and fake quant, train (QAT), then disable observers and freeze batch-norm stats after a few epochs.
Is the enabling and disabling of the observers equivalent to NVIDIA’s offline calibration? Just that in PyTorch it is done during training? |
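For reference, a sketch of the PyTorch-side schedule described above (prepared_model, train_one_epoch, data_loader and the epoch thresholds are placeholders):
import torch

num_epochs = 8
for epoch in range(num_epochs):
    train_one_epoch(prepared_model, data_loader)
    if epoch == 2:
        # freeze batch-norm statistics for the remaining epochs
        prepared_model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
    if epoch == 3:
        # stop updating observer ranges; fake-quant keeps using the frozen qparams,
        # which plays a role similar to a fixed offline calibration
        prepared_model.apply(torch.quantization.disable_observer)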
st183438 | After the QAT is done, I saved the model with fake_quant (before torch.quantization.convert(model_fp32_prepared)) and the quantized model (after torch.quantization.convert(model_fp32_prepared)).
When doing inference, I notice a big difference between these two models (see the pic below; left: model with fake_quant, right: quantized model).
Is there a way to reduce the diff? |
st183439 | Can you give a repro? The issue could be any number of things; without more context it's impossible to tell. |
st183440 | Sorry. For some reason, I can’t share the repo.
One thing is suspected is I replaced all the activation functions with RELU.
Could that be an issue? |
st183441 | It certainly sounds like it; without a way to reproduce your result, though, it's difficult to say much more. |
st183442 | Hi, I'm trying to quantize only the activations of a model, not its weights. I followed the quantization documentation for QAT provided by PyTorch. It is difficult to find what I want in the documentation.
Thanks. |
st183443 | Solved by chatterboy in post #2
PyTorch is supporting it as default_activation_only_qconfig in qconfig.py. I’m using PyTorch 1.6.0. |
st183444 | PyTorch supports it as default_activation_only_qconfig in qconfig.py. I'm using PyTorch 1.6.0.
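To illustrate, a minimal sketch of applying an activation-only qconfig during QAT preparation; the toy model is hypothetical and the exact import path/availability of this qconfig may differ between versions:
import torch
import torch.nn as nn
# toy float model; any nn.Module with supported layers works here
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
# fake-quantize activations only; weights are left untouched
model.qconfig = torch.quantization.default_activation_only_qconfig
prepared = torch.quantization.prepare_qat(model.train())
# ... run the usual QAT training loop on `prepared` ...
This is only a sketch of the idea, not a drop-in recipe. |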
st183445 | To add to that,
I'm unsure about the goal here: pretty much the only reason to do QAT is to learn the best weights when weight quantization is applied. QAT's central thesis is about how you can propagate gradients across a discontinuous function that quantizes the weights, so doing QAT without weight quantization is a bit of a mismatch.
If you only want to quantize the activations, you may want to consider using something more suited to it. |
st183446 | For QAT, which kind of activation function is suggested to use? (RElu, Relu6, Hardthanh) |
st183447 | If you are going to use QAT and need an activation, it's probably a good idea to fuse your op if possible. A list of fusions can be found here: pytorch/fusion_patterns.py at 10411e356160e1d0f406a9cee435e51f95fa90fa · pytorch/pytorch · GitHub 1
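For example, a minimal sketch of fusing conv + bn + relu before preparing for QAT (module names are made up; depending on your version you may need fuse_modules_qat for a model in train() mode):
import torch
import torch.nn as nn
class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))
m = Block().train()
m.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
m = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]])
m = torch.quantization.prepare_qat(m)
After fusion the three modules are trained and quantized as a single ConvBnReLU op. |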
st183448 | I'm applying quantization to my fp32 model, which contains convolution layers. Once quantization is done, I have int8 weights. When using those int8 weights, the dequantization process requires subtracting the zero_point from every weight and multiplying the output by the scale.
Since the zero_point subtraction is inside the loop (it has to be done for every weight), we are seeing high CPU usage. We now want to avoid that subtraction in the loop and plan to store weight - zero_point as the weights, so that only the scale multiplication remains in the dequantization step.
But when I tried this, I got output mismatches. Is there any way to avoid this zero_point subtraction at run time? If not, what is the reason? Please help.
Reference link we are following: Lei Mao's Log Book – Quantization for Neural Networks
Thanks in advance! |
st183449 | Can you give a repro of the issue? I’m not sure exactly what you are trying to do, the quantization APIs should be handling all of that for you. |
st183450 | We are not using the quantization APIs; I'm doing the quantization in C code.
The piece of code I'm using for quantization is this:
struct QuantizationParams {
float scale;
unsigned char zero_point;
};
void Quantize(const QuantizationParams& qparams, float* src, unsigned char* dst, int size) {
for (std::size_t i = 0; i < size; i++) {
const float real_val = src[i];
const float transformed_val = qparams.zero_point + real_val / qparams.scale;
const float clamped_val = std::max(0.f, std::min(255.f, transformed_val));
dst[i] = static_cast<std::uint8_t>(std::round(clamped_val));
}
}
Let's say I have a function with an input and a weight, and I'm using the quantized weight for the computation. The weight is unsigned char, the zero point is unsigned char, and the scale is float.
My function implementation is something like the code below, and it works absolutely fine:
for (i_l = 0; i_l <= size1; i_l += size2)
{
A_lp = &QA_lp[i_l];
for (j_l = 0; j_l < size5; j_l++)
{
B_lp = (unsigned char*)&B[j_l * size3];
sum_l = 0;
for (k_l = 0; k_l < size4; k_l ++)
{
sum_l += (A_lp[k_l] ) * (B_lp[k_l] - ZERO_POINT);
}
*C++ = (float) sum_l * SCALE;
}
The key line is sum_l += (A_lp[k_l]) * (B_lp[k_l] - ZERO_POINT);
Now what I'm trying to do is perform the ZERO_POINT subtraction ahead of time and save the result as my weights, so each stored weight becomes weight - ZERO_POINT.
My inner loop then becomes:
{
sum_l += (A_lp[k_l] ) * (B_lp[k_l] );
}
*C++ = (float) sum_l * SCALE;
}
I'm trying to avoid that subtraction in the loop, and here the data type of the stored weights (weight - ZERO_POINT) is char.
When I do this, the output no longer matches the original one. What could be the reason?
Please help me with this. |
st183451 | Hi, I've set qconfig = None for the Detect layer of YOLOv5:
[screenshot]
However, when exported to ONNX, the weights still appear to be quantized:
[screenshot]
How do I keep the Detect layer fully non-quantized?
I am doing this because the post-processed QAT model does not have weights attached to the Conv2ds of the Detect layer, which I am guessing is causing further exporting issues? The '+' sign is missing from the weights:
[screenshot]
Thanks! |
st183452 | can you provide more context? how are you quantizing the model exactly, its hard to answer about what’s going wrong without understanding what you are specifically doing. |
st183453 | [screenshot]
I added QDQs in the Conv class
and fused the model [conv, bn, relu]. |
st183454 | You haven’t quantized the model in what you’ve walked through so far. Is there a convert step somewhere that you haven’t listed?
Can you provide a repro that demonstrates the error? |
st183455 | Yup! the actual quantization happens in ONNX. I’m referring to this GitHub - neuralmagic/sparseml: Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models repo for the ONNX quantization process. But that isnt the main issue here |
st183456 | Would you have a small reproducible example to demonstrate the issue? It would be hard to debug / provide some recommendation without more context. |
st183457 | Hi,
I'm new to this topic. Please provide some clear insight on the following questions:
What is the difference between symmetric and asymmetric quantization?
How do we choose the suitable scheme for our model? Does that depend on the weights or on the quantization dtype?
Thanks |
st183458 | Tensorflow has a good section on this: Spesifikasi kuantisasi TensorFlow Lite 8-bit 5 |
st183459 | This whitepaper was made by one of the pytorch quantization team members and informs a lot of the implementation.
arxiv.org: 1806.08342.pdf
it shows how symmetric quantization is essentially just when the zero point is set to 0. Note the signed vs unsigned implementation depends on the dtype i.e. quint8 vs qint8.
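To make that concrete, a rough sketch of how the qparams come out differently under the two schemes (simplified, ignoring the eps clamping; numbers are illustrative):
def affine_qparams(min_val, max_val, quant_min=0, quant_max=255):
    # affine: scale covers the full observed range, zero_point shifts it
    min_val, max_val = min(min_val, 0.0), max(max_val, 0.0)
    scale = (max_val - min_val) / (quant_max - quant_min)
    zero_point = int(round(quant_min - min_val / scale))
    return scale, zero_point
def symmetric_qparams(min_val, max_val, quant_min=0, quant_max=255):
    # symmetric: scale covers max(|min|, |max|); zero_point is pinned
    # (0 for signed qint8, the midpoint 128 for unsigned quint8)
    amax = max(abs(min_val), abs(max_val))
    return amax / ((quant_max - quant_min) / 2), 128
print(affine_qparams(-1.0, 3.0))     # approx (0.0157, 64)
print(symmetric_qparams(-1.0, 3.0))  # approx (0.0235, 128)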
The best information we have in the documentation is:
https://pytorch.org/docs/stable/torch.quantization.html#torch-quantization 1
which is not great, I created an issue for this here:
github.com/pytorch/pytorch: Improve documentation about meaning of different qschemes (issue, opened Nov 17, 2021):
What exactly is symmetric and affine quantization?
The best doucmentation is …kind of a side point here: https://pytorch.org/docs/stable/torch.quantization.html?highlight=torch%20minmaxobserver#torch.quantization.MinMaxObserver
there should be a clearer explanation for exactly what symmetric and affine quantization means rather than having to go backwards from the calculation of the qparams
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo
Also if you want a code definition, note that symmetric is generally handled as a special case of affine quantization and all that happens is the way qparams are calculated are different. Here is where that happens:
github.com
pytorch/pytorch/blob/b0bdf588ea575928a94264c30999385d5ff2bc32/torch/ao/quantization/observer.py#L283
return torch.tensor([1.0], device=min_val.device.type), torch.tensor([0], device=min_val.device.type)
quant_min, quant_max = self.quant_min, self.quant_max
min_val_neg = torch.min(min_val, torch.zeros_like(min_val))
max_val_pos = torch.max(max_val, torch.zeros_like(max_val))
device = min_val_neg.device
scale = torch.ones(min_val_neg.size(), dtype=torch.float32, device=device)
zero_point = torch.zeros(min_val_neg.size(), dtype=torch.int64, device=device)
if (
self.qscheme == torch.per_tensor_symmetric
or self.qscheme == torch.per_channel_symmetric
):
max_val_pos = torch.max(-min_val_neg, max_val_pos)
scale = max_val_pos / (float(quant_max - quant_min) / 2)
scale = torch.max(scale, self.eps)
if self.dtype == torch.quint8:
if self.has_customized_qrange:
# When customized quantization range is used, down-rounded midpoint of the range is chosen.
zero_point = zero_point.new_full( |
st183460 | Hi,
I’m trying to analyze the reliability of a quantized model
But I have a question:
How can I change an individual value inside the quantized model?
When I assign the value directly, the way I would in a normal (float) model, the change does not take effect,
like:
number.int_repr()[0]=number.int_repr()[0]*2
number.int_repr()[0]
# no change
number.dequantize()=number.dequantize()*2
number.dequantize()
# no change either
Thank you |
st183461 | Solved by jerryzh168 in post #5
I see, thanks for the clarification, I think what you need is pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub and pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub which re-assembles quantized Tensor from int_repr, please let me know if it works, thanks |
st183462 | what kind of changes you want? if you want multiplications by constant then you can just to quantized_tensor = quantized_tensor * 2 I think. we also have a list of tensor methods defined here: Quantization API Reference — PyTorch master documentation 1 |
st183463 | My fault,
Thank you for your patience
I have read this API reference, but it doesn't cover my case.
I want to change a number at a specific position in the matrix:
first I get a value from the original matrix,
then I change this value
and try to assign it back to the corresponding element.
But, it doesn’t affect this matrix at all. |
st183464 | I see, thanks for the clarification. I think what you need is pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub 2 and pytorch/native_functions.yaml at master · pytorch/pytorch · GitHub 2, which re-assemble a quantized Tensor from its int_repr. Please let me know if it works, thanks.
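A minimal sketch of that round-trip for the per-tensor case (values are illustrative):
import torch
x = torch.quantize_per_tensor(torch.randn(2, 3), scale=0.1, zero_point=64, dtype=torch.quint8)
x_int = x.int_repr().clone()          # raw uint8 values
x_int[0, 0] = 42                      # edit whatever entries you need
x_new = torch._make_per_tensor_quantized_tensor(x_int, x.q_scale(), x.q_zero_point())
print(x_new.int_repr()[0, 0])         # tensor(42, dtype=torch.uint8)
The per-channel variant works the same way with _make_per_channel_quantized_tensor plus the per-channel scales, zero_points and axis. |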
st183465 | Thank you,
I successfully changed weights in quantized models.
When I use _make_per_tensor_quantized_tensor, the result is modifiable.
However, when I use _make_per_channel_quantized_tensor, it’s not changeable |
st183466 | ah, maybe it’s a bug, would you like to file an issue and attach a small repro for it? |
st183467 | Sure, I’m glad to do that.
Here’s issue Quantization: torch._make_per_channel_quantized_tensor doesn’t work well · Issue #68322 · pytorch/pytorch (github.com) 3
And a few days ago you gave me a prototype of FX Graph Mode: pytorch/quantized_resnet_test.py at master · pytorch/pytorch (github.com) 1
But I still don't know how to run inference with CUDA.
Does this implementation have some examples |
st183468 | You pretty much can't do quantized inference with CUDA; there are no native quantized CUDA kernels at the moment. Our team is working on supporting lowering to custom backends using FX to TensorRT, but it's not complete yet.
also maybe @jerryzh168 can confirm, but I believe the intended solution was to do:
goal: set int_repr of a quantized tensor x to 3.
# x is per channel quantized tensor
x_int = x.int_repr()
x_int[0][0][0][0] = 3
x_new = torch._make_per_channel_quantized_tensor(x_int, x.q_per_channel_scales(), ... )
not
x.int_repr()=3
which i’m fairly sure is not intended to work |
st183469 | here is the example that you can run int8 model in TensorRT: pytorch/quantized_resnet_test.py at master · pytorch/pytorch · GitHub 2 |
st183470 | You are right,
I should change its int_repr() tensor
before _make_per_tensor_quantized_tensor
Thank you |
st183471 | I’m writing my own custom implementation of quantized layers but I don’t seem to be getting the same answer as the PyTorch quantizedCPU backend.
PyTorch output: tensor([[ 0.1730, 0.7621, -0.3675, -0.2648, -0.4000]])
Manual output: tensor([[ 0.2096, 0.7595, -0.3691, -0.2660, -0.4016]])
Of course, the values aren’t that far off but they still seem different enough that I’m wondering if there’s more to it than just rounding error? Here’s how I am calculating my outputs. Does PyTorch do something very different, which would account for this difference?
import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.quant = torch.quantization.QuantStub()
self.fc_1 = nn.Linear(in_features=10, out_features=5)
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.fc_1(x)
x = self.dequant(x)
return x
torch.manual_seed(0)
testInput = torch.rand(5, 10)
model = Net() # Model will be initialized to random values.
quant_net = Net().cpu()
quant_net.load_state_dict(model.state_dict())
quant_net.eval()
quant_net.qconfig = torch.quantization.get_default_qconfig('qnnpack')
quant_net_prepared = torch.quantization.prepare(quant_net)
for i in range(5):
singleInput = torch.unsqueeze(testInput[i], 0)
quant_net_prepared(singleInput)
quant_model = torch.quantization.convert(quant_net_prepared)
singleInput = torch.unsqueeze(testInput[0], 0)
print(f"PyTorch output: {quant_model(singleInput)}")
# Unpack the values in the quantized model's state dict to run the layer manually.
quant_scale, quant_zero_point = quant_model.state_dict()['quant.scale'].item(), quant_model.state_dict()['quant.zero_point'].item()
fc_out_scale, fc_out_zero_point = quant_model.state_dict()['fc_1.scale'].item(), quant_model.state_dict()['fc_1.zero_point'].item()
weight_tensor, bias = quant_model.state_dict()['fc_1._packed_params._packed_params']
fc_scale, fc_zero_point = weight_tensor.q_scale(), weight_tensor.q_zero_point()
weight = weight_tensor.dequantize()
# Quantize the input and convert them to INT8. Store them in floats to avoid overflows.
singleInputInt8 = torch.quantize_per_tensor(singleInput, quant_scale, quant_zero_point, dtype=torch.quint8)
singleInputInt8 = torch.int_repr(singleInputInt8).float()
weightInt8 = torch.quantize_per_tensor(weight, fc_scale, fc_zero_point, dtype=torch.qint8)
weightInt8 = torch.int_repr(weightInt8).float()
weightInt8 = weightInt8.transpose(0, 1)
m, n, k = singleInputInt8.size(0), singleInputInt8.size(1), weightInt8.size(1)
out = torch.zeros(m, k)
for i in range(m):
for j in range(k):
sum = 0
for h in range(n):
A = singleInputInt8[i, h].item() - quant_zero_point
B = weightInt8[h, j].item() - fc_zero_point
sum += A*B
out[i, j] = (sum*quant_scale*fc_scale) + bias[j]
print(f"Manual output: {out}") |
st183472 | You may want to check out the unit tests to see how the reference implementation is defined there, and try to pass those unit tests instead of making your own and being unsure about what's different.
github.com
pytorch/pytorch/blob/7ee84ad321207e31a29a93ed4ea1e5890125ecec/test/quantization/core/test_quantized_op.py#L3161
q_mod = torch.nn.quantized.ConvTranspose3d
dq_op = torch.ops.quantized.conv_transpose3d_dynamic
dim = 5
dtype = torch.quint8
if qengine_is_qnnpack():
return # TODO: fix MakeDeConvOutputShape overflowing for convT3d with qnnpack
self._test_qconv_op_impl(q_mod, dq_op, dim, dtype)
class TestQuantizedLinear(TestCase):
"""Tests the correctness of the quantized linear and linear_relu op."""
@given(batch_size=st.integers(1, 4),
input_channels=st.integers(16, 32),
output_channels=st.integers(4, 8),
use_bias=st.booleans(),
use_relu=st.booleans(),
use_multi_dim_input=st.booleans(),
use_channelwise=st.booleans())
@override_qengines
def test_qlinear(self, batch_size, input_channels, output_channels, use_bias,
and
github.com
pytorch/pytorch/blob/7ee84ad321207e31a29a93ed4ea1e5890125ecec/test/test_nnapi.py#L644
qpt([[1.0, 2.0]], 0.25, 128),
qpt([[3.0, 4.0]], 0.25, 128),
],
convert_args=[
qpt(torch.zeros((1, 2)), 0.25, 128),
qpt(torch.zeros((1, 2)), 0.25, 128),
]
)
# NOTE: NNAPI qadd supports broadcast, but PT does not.
def test_qlinear(self):
torch.manual_seed(29)
weight = qpt(torch.randn(16, 32), 0.125, 0, torch.qint8)
bias = torch.randn(16)
mod = torch.nn.quantized.Linear(32, 16)
mod.set_weight_bias(weight, bias)
inp = qpt(torch.randn(2, 32), 0.05, 130, torch.quint8)
self.check(mod, inp)
def test_seblock_mul(self):
class MulModel(torch.nn.Module):
although I'd rather not dig through the actual linear implementation's C++ code, I believe it's something along the lines of Lei Mao's Log Book – Quantization for Neural Networks |
st183473 | Thank you! I did come across that link you posted and based my implementation on that. But I didn’t think of using the unit tests, so I’ll be sure to try that and reply back to close the issue. |
st183474 | Hello everyone,
I am trying to quantize a MobileNetV3 model with the fx graph quantization. https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html 2
The quantization itself worked, since when I print the “quantized model” it prints out:
GraphModule(
(features): Module(
(0): Module(
(0): QuantizedConv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), scale=0.09166989475488663, zero_point=64, padding=(1, 1))
(2): QuantizedHardswish()
)
(1): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(16, 16, kernel_size=(3, 3), stride=(1, 1), scale=0.05154227465391159, zero_point=0, padding=(1, 1), groups=16)
)
(1): Module(
(0): QuantizedConv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.09911715239286423, zero_point=58)
(2): Identity()
)
)
)
(2): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(16, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.0551120825111866, zero_point=0)
)
(1): Module(
(0): QuantizedConvReLU2d(64, 64, kernel_size=(3, 3), stride=(2, 2), scale=0.055621854960918427, zero_point=0, padding=(1, 1), groups=64)
)
(2): Module(
(0): QuantizedConv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.09501516073942184, zero_point=66)
(2): Identity()
)
)
)
(3): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(24, 72, kernel_size=(1, 1), stride=(1, 1), scale=0.05194235220551491, zero_point=0)
)
(1): Module(
(0): QuantizedConvReLU2d(72, 72, kernel_size=(3, 3), stride=(1, 1), scale=0.05939812585711479, zero_point=0, padding=(1, 1), groups=72)
)
(2): Module(
(0): QuantizedConv2d(72, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.09852656722068787, zero_point=66)
(2): Identity()
)
)
)
(4): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(24, 72, kernel_size=(1, 1), stride=(1, 1), scale=0.052594587206840515, zero_point=0)
)
(1): Module(
(0): QuantizedConvReLU2d(72, 72, kernel_size=(5, 5), stride=(2, 2), scale=0.047535136342048645, zero_point=0, padding=(2, 2), groups=72)
)
(2): Module(
(fc1): QuantizedConvReLU2d(72, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.030134592205286026, zero_point=0)
(fc2): QuantizedConv2d(24, 72, kernel_size=(1, 1), stride=(1, 1), scale=0.038337405771017075, zero_point=74)
)
(3): Module(
(0): QuantizedConv2d(72, 40, kernel_size=(1, 1), stride=(1, 1), scale=0.09368766099214554, zero_point=68)
(2): Identity()
)
)
)
(5): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(40, 120, kernel_size=(1, 1), stride=(1, 1), scale=0.04279119148850441, zero_point=0)
)
(1): Module(
(0): QuantizedConvReLU2d(120, 120, kernel_size=(5, 5), stride=(1, 1), scale=0.043220121413469315, zero_point=0, padding=(2, 2), groups=120)
)
(2): Module(
(fc1): QuantizedConvReLU2d(120, 32, kernel_size=(1, 1), stride=(1, 1), scale=0.03446542099118233, zero_point=0)
(fc2): QuantizedConv2d(32, 120, kernel_size=(1, 1), stride=(1, 1), scale=0.046296607702970505, zero_point=61)
)
(3): Module(
(0): QuantizedConv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), scale=0.0773073136806488, zero_point=66)
(2): Identity()
)
)
)
(6): Module(
(block): Module(
(0): Module(
(0): QuantizedConvReLU2d(40, 120, kernel_size=(1, 1), stride=(1, 1), scale=0.0431692935526371, zero_point=0)
)
(1): Module(
(0): QuantizedConvReLU2d(120, 120, kernel_size=(5, 5), stride=(1, 1), scale=0.046419981867074966, zero_point=0, padding=(2, 2), groups=120)
)
(2): Module(
(fc1): QuantizedConvReLU2d(120, 32, kernel_size=(1, 1), stride=(1, 1), scale=0.02330135554075241, zero_point=0)
(fc2): QuantizedConv2d(32, 120, kernel_size=(1, 1), stride=(1, 1), scale=0.03669281676411629, zero_point=54)
)
(3): Module(
(0): QuantizedConv2d(120, 40, kernel_size=(1, 1), stride=(1, 1), scale=0.07971568405628204, zero_point=61)
(2): Identity()
)
)
)
(7): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), scale=0.08144847303628922, zero_point=66)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), scale=0.09795959293842316, zero_point=61, padding=(1, 1), groups=240)
(2): QuantizedHardswish()
)
(2): Module(
(0): QuantizedConv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.09617093950510025, zero_point=64)
(2): Identity()
)
)
)
(8): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(80, 200, kernel_size=(1, 1), stride=(1, 1), scale=0.09652244299650192, zero_point=62)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(200, 200, kernel_size=(3, 3), stride=(1, 1), scale=0.10839337855577469, zero_point=60, padding=(1, 1), groups=200)
(2): QuantizedHardswish()
)
(2): Module(
(0): QuantizedConv2d(200, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.09403073787689209, zero_point=62)
(2): Identity()
)
)
)
(9): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), scale=0.08897814154624939, zero_point=62)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), scale=0.10986955463886261, zero_point=64, padding=(1, 1), groups=184)
(2): QuantizedHardswish()
)
(2): Module(
(0): QuantizedConv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.09475481510162354, zero_point=67)
(2): Identity()
)
)
)
(10): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(80, 184, kernel_size=(1, 1), stride=(1, 1), scale=0.09242968261241913, zero_point=66)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), scale=0.10907693207263947, zero_point=59, padding=(1, 1), groups=184)
(2): QuantizedHardswish()
)
(2): Module(
(0): QuantizedConv2d(184, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.10109627991914749, zero_point=65)
(2): Identity()
)
)
)
(11): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), scale=0.09138453751802444, zero_point=62)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), scale=0.10354351252317429, zero_point=61, padding=(1, 1), groups=480)
(2): QuantizedHardswish()
)
(2): Module(
(fc1): QuantizedConvReLU2d(480, 120, kernel_size=(1, 1), stride=(1, 1), scale=0.0344068706035614, zero_point=0)
(fc2): QuantizedConv2d(120, 480, kernel_size=(1, 1), stride=(1, 1), scale=0.037343572825193405, zero_point=62)
)
(3): Module(
(0): QuantizedConv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), scale=0.08420202136039734, zero_point=62)
(2): Identity()
)
)
)
(12): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.08766956627368927, zero_point=67)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(672, 672, kernel_size=(3, 3), stride=(1, 1), scale=0.09654679149389267, zero_point=61, padding=(1, 1), groups=672)
(2): QuantizedHardswish()
)
(2): Module(
(fc1): QuantizedConvReLU2d(672, 168, kernel_size=(1, 1), stride=(1, 1), scale=0.03521481901407242, zero_point=0)
(fc2): QuantizedConv2d(168, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.04141794145107269, zero_point=64)
)
(3): Module(
(0): QuantizedConv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), scale=0.08542641252279282, zero_point=66)
(2): Identity()
)
)
)
(13): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.08445318043231964, zero_point=64)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), scale=0.0685504674911499, zero_point=67, padding=(2, 2), groups=672)
(2): QuantizedHardswish()
)
(2): Module(
(fc1): QuantizedConvReLU2d(672, 168, kernel_size=(1, 1), stride=(1, 1), scale=0.04330335929989815, zero_point=0)
(fc2): QuantizedConv2d(168, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.05540220066905022, zero_point=68)
)
(3): Module(
(0): QuantizedConv2d(672, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.05899979919195175, zero_point=65)
(2): Identity()
)
)
)
(14): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.06171604245901108, zero_point=65)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), scale=0.05438883602619171, zero_point=59, padding=(2, 2), groups=960)
(2): QuantizedHardswish()
)
(2): Module(
(fc1): QuantizedConvReLU2d(960, 240, kernel_size=(1, 1), stride=(1, 1), scale=0.0247460026293993, zero_point=0)
(fc2): QuantizedConv2d(240, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.027777383103966713, zero_point=69)
)
(3): Module(
(0): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.06441330909729004, zero_point=60)
(2): Identity()
)
)
)
(15): Module(
(block): Module(
(0): Module(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.05954812839627266, zero_point=60)
(2): QuantizedHardswish()
)
(1): Module(
(0): QuantizedConv2d(960, 960, kernel_size=(5, 5), stride=(1, 1), scale=0.05335007980465889, zero_point=68, padding=(2, 2), groups=960)
(2): QuantizedHardswish()
)
(2): Module(
(fc1): QuantizedConvReLU2d(960, 240, kernel_size=(1, 1), stride=(1, 1), scale=0.02065999060869217, zero_point=0)
(fc2): QuantizedConv2d(240, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.024499310180544853, zero_point=64)
)
(3): Module(
(0): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.05633355677127838, zero_point=66)
(2): Identity()
)
)
)
(16): Module(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.05573510006070137, zero_point=59)
(2): QuantizedHardswish()
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(classifier): Module(
(0): QuantizedLinear(in_features=960, out_features=1280, scale=0.07583604007959366, zero_point=53, qscheme=torch.per_channel_affine)
(1): QuantizedHardswish()
(2): Dropout(p=0.2, inplace=True)
(3): QuantizedLinear(in_features=1280, out_features=1000, scale=0.3153918385505676, zero_point=18, qscheme=torch.per_channel_affine)
)
)
Now I want to look at the top1 and top5 accuracy.
top1, top5 = evaluate(quantized_model, criterion, data_loader_test)
Then I get the following Error:
---------------------------------------------------------------------------
NotImplementedError Could not run ‘aten::hardsigmoid.out’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit xxx for possible resolutions. ‘aten::hardsigmoid.out’ is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. |
st183475 | Thanks for reporting! This is a bug, I filed FX graph mode quantization broken for torchvision MobileNetV3 · Issue #68250 · pytorch/pytorch · GitHub 15 for this and someone on our team will take a look ASAP. |
st183476 | Hi, I am trying QAT for YOLOv5 and this is the 1st portion of the model
[screenshot]
Using:
torch.quantization.fuse_modules(module, [["conv","bn","relu"]],inplace=True)
the model does not seem fused:
[screenshot]
Whereas if I use:
torch.quantization.fuse_modules(module, [["conv","bn"]],inplace=True)
the model is fused
[screenshot]
Is this a bug? |
st183477 | Solved by Vasiliy_Kuznetsov in post #2
Looking at the screenshot, is the relu stored under “act”? Could you try
torch.quantization.fuse_modules(module, [["conv","bn","act"]],inplace=True) |
st183478 | Looking at the screenshot, is the relu stored under “act”? Could you try
torch.quantization.fuse_modules(module, [["conv","bn","act"]],inplace=True) |
st183479 | Hi, I am trying to write a custom layer for a quantized int8 model. I did static eager mode quantization of a model and I am able to run it with a custom model, using layers from torch.nn.intrinsic.quantized. But when I try to use my own layer, I get an error saying:
NotImplementedError: Could not run ‘aten::empty.memory_format’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend,
This happens when I try to initalize the weights in my layer. How should I initialize my weights to work with the ‘QuantizedCPU’ backend? Full code below:
import torch
import torch.quantization
import torch.nn as nn
import torch.nn.quantized as nnq
import torch.nn.intrinsic.quantized as nniq
from collections import OrderedDict
class custom_linear(torch.autograd.Function):
@staticmethod
def forward(ctx, X, weight, bias):
ctx.save_for_backward(X, weight, bias)
return torch.addmm(bias, X, weight.transpose(0, 1))
@staticmethod
def backward(ctx, grad_output):
X, weight, bias = ctx.saved_tensors
grad_input = grad_weight = grad_bias = None
if ctx.needs_input_grad[0]:
grad_input = grad_output.mm(weight)
if ctx.needs_input_grad[1]:
grad_weight = grad_output.t().mm(X)
if ctx.needs_input_grad[2]:
grad_bias = grad_output.sum(0)
return grad_input, grad_weight, grad_bias
class QuantLinearReLU(nn.Module):
def __init__(self, in_features, out_features):
super(QuantLinearReLU, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features, dtype=torch.qint8))
self.bias = torch.nn.Parameter(torch.randn(out_features))
self.scale = torch.nn.Parameter(torch.randn(1))
self.zero_point = torch.nn.Parameter(torch.randn(1))
def forward(self, x):
x = custom_linear.apply(x, self.weight, self.bias)
return x
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.quant = torch.quantization.QuantStub()
self.fc = nn.Linear(in_features=10, out_features=2)
self.relu = nn.ReLU()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.fc(x)
x = self.relu(x)
x = self.dequant(x)
return x
class custom_quant_model(nn.Module):
def __init__(self):
super(custom_quant_model, self).__init__()
self.quant = nnq.Quantize(scale=1.0, zero_point=0, dtype=torch.quint8)
# Model works fine with the nniq layer but not the custom one.
# self.fc = nniq.LinearReLU(in_features=10, out_features=2)
self.fc = QuantLinearReLU(in_features=10, out_features=2)
self.dequant = nnq.DeQuantize()
def forward(self, x):
x = self.quant(x)
x = self.fc(x)
x = self.dequant(x)
return x
device = torch.device('cpu')
model = Net()
testInput = torch.rand(10, 10).cpu()
quant_net = Net()
quant_net.eval()
quant_net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
quant_net_fused = torch.quantization.fuse_modules(quant_net, [['fc', 'relu']])
quant_net_prepared = torch.quantization.prepare(quant_net)
quant_net_prepared(testInput)
quant_int8 = torch.quantization.convert(quant_net_prepared)
quant_custom = custom_quant_model()
#Have to work with new ordered dicts to avoid access issues.
state_dict_copy = quant_int8.state_dict()
new_state_dict = OrderedDict()
new_state_dict['quant.scale'] = state_dict_copy['quant.scale']
new_state_dict['quant.zero_point'] = state_dict_copy['quant.zero_point']
new_state_dict['fc.scale'] = state_dict_copy['fc.scale']
new_state_dict['fc.zero_point'] = state_dict_copy['fc.zero_point']
new_state_dict['fc.weight'], new_state_dict['fc.bias'] = state_dict_copy['fc._packed_params._packed_params']
print("\nNew state dict:\n")
print(new_state_dict)
quant_custom.load_state_dict(new_state_dict)
Thank you! |
st183480 | Update: I fixed this issue by changing
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features, dtype=torch.qint8))
to
self.weight = torch.nn.Parameter(torch._empty_affine_quantized([out_features, in_features], scale=1.0, zero_point=0, dtype=torch.qint8))
but now I am getting a new error that says:
RuntimeError: Only Tensors of floating point and complex dtype can require gradients |
st183481 | Currently quantized tensors are only supported during inference; there is no support for autograd. If you are interested in simulating quantization numerics during training, you could fake quantize your tensors using the torch.quantization.FakeQuantize module or the torch.quantize_per_tensor function. Would that help?
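For example, a minimal sketch of simulating int8 activation numerics on a float tensor (the scale/zero_point values are made up):
import torch
x = torch.randn(4, 4)
scale, zero_point = 0.05, 128
# quantize-dequantize: same dtype as x, but snapped to the int8 grid
x_fq = torch.quantize_per_tensor(x, scale, zero_point, torch.quint8).dequantize()
# inside a training graph, the fake-quant op does the same and supports autograd
x_fq2 = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, 0, 255)
Either form keeps everything in float while mimicking quantized precision. |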
st183482 | Hi @Vasiliy_Kuznetsov , thank you! So I ended up doing something similar, which is basically casting the quantized tensors to regular float32 tensors. I just need to pass them to my own cuda backend, so just before I do, I use quantize() and int_repr() to convert them to char for the GPU to process. But storing them as float32 tensors allows me to use the nn.functional functions on them, which is a big plus for me.
Also, I’m not sure if I will need the backward pass for what I am doing but if I do, would the regular backward pass (e.g., shown here) work for that? Since all my tensors are float32 anyway and I quantize them (using the scale and zero_point which I have saved as parameters), it should be able to do a backward pass the same as a normal nn.functional.linear function, am I right? |
st183483 | When I use quantization-aware training, the weight tensor scaling factor is a standard floating-point number.
I want to deploy my model as 8-bit on an FPGA, so the weight tensor scaling factor must be a power of two (an integer exponent). Is there such an option? What should I do? |
st183484 | It seems that the quantization scheme is a little bit different. You can see from this https://github.com/pytorch/pytorch/wiki/torch_quantization_design_proposal 11 |
st183485 | Depending on the fixed-point arithmetic you use, you can convert the float multiplier into a quantized_multiplier (integer) and a right shift (integer). Please check out https://github.com/pytorch/FBGEMM/blob/master/src/QuantUtils.cc#L107-L157 15
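A rough sketch of the idea (not the exact FBGEMM code):
import math
def quantize_multiplier(real_multiplier):
    # express real_multiplier as quantized_multiplier * 2**(-31 - right_shift)
    assert 0.0 < real_multiplier < 1.0
    mantissa, exponent = math.frexp(real_multiplier)  # mantissa in [0.5, 1), real = mantissa * 2**exponent
    quantized_multiplier = int(round(mantissa * (1 << 31)))
    right_shift = -exponent
    if quantized_multiplier == (1 << 31):  # rounding pushed the mantissa up to 1.0
        quantized_multiplier //= 2
        right_shift -= 1
    return quantized_multiplier, right_shift
# at run time: result = rounding_shift(acc * quantized_multiplier, 31 + right_shift) in 64-bit integer math
print(quantize_multiplier(0.0072474))
That way the per-layer scale is applied with an integer multiply and shift only. |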
st183486 | Hi,
I'm trying to create a quantized version of the deeplab_v3_resnet50 model. I made the following changes in the ResNet and DeepLab model class definitions:
replaced the addition with nn.quantized.FloatFunctional().add() in the Bottleneck forward function inside resnet.py
added quant() and dequant() operations at the beginning and end of the forward function in the deeplab_v3 model class.
But when creating an object of the above class, I get the following error: AttributeError: 'Graph' object has no attribute '_tracer_cls'.
The traceback is as follows:
~/Downloads/thinkAutonomous/modelOptimization/dl-optimization-repos/dl_scripts/dl-optimization/quantizedModel/deeplabv3Model.py in _deeplabv3_resnet(backbone, num_classes, aux)
118 if aux:
119 return_layers["layer3"] = "aux"
--> 120 backbone = create_feature_extractor(backbone, return_layers)
121
122 aux_classifier = FCNHead(1024, num_classes) if aux else None
~/Downloads/thinkAutonomous/modelOptimization/dl-optimization-repos/dl_scripts/dl-optimization/quantizedModel/feature_extraction.py in create_feature_extractor(model, return_nodes, train_return_nodes, eval_return_nodes, tracer_kwargs, suppress_diff_warning)
490
491 # Build the final graph module
--> 492 graph_module = DualGraphModule(model, graphs["train"], graphs["eval"], class_name=name)
493
494 # Restore original training mode
~/Downloads/thinkAutonomous/modelOptimization/dl-optimization-repos/dl_scripts/dl-optimization/quantizedModel/feature_extraction.py in __init__(self, root, train_graph, eval_graph, class_name)
261 # to re-create the Graph during deserialization.
262 assert (
--> 263 self.eval_graph._tracer_cls == self.train_graph._tracer_cls
264 ), "Train mode and eval mode should use the same tracer class"
265 self._tracer_cls = None
AttributeError: 'Graph' object has no attribute '_tracer_cls'
From my understanding, the graph = tracer.trace(model) call in feature_extraction.py returns an object of type torch.fx.Graph, which doesn't have a _tracer_cls attribute. I don't understand how this error is related to the changes I made. By default, the deeplab_v3_resnet50 model works fine, but when I make these changes this error pops up.
It’d be helpful if someone could help me solve the issue. |
st183487 | Update
I had a typo in my code which was causing the error. I debugged the code from the start and was able to quantize the deeplabv3_resnet50() model. The peculiar thing is that, at best, the quantized model has the same inference speed as the float model. |
st183488 | If your model is symbolically traceable we would recommend using (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.10.0+cu102 documentation 2
Performance might be related to the hardware you are running on as well; we may be able to help if you can provide more details on where you run the model: is it on a server or on mobile, and what OS/CPU architecture does it use?
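For reference, a minimal sketch of the FX post-training flow (API as of roughly PyTorch 1.9/1.10; float_model and calibration_loader are placeholders):
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx
float_model.eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}
prepared = prepare_fx(float_model, qconfig_dict)
with torch.no_grad():
    for data, _ in calibration_loader:   # calibrate observers
        prepared(data)
quantized = convert_fx(prepared)
Fusion is handled automatically as part of prepare_fx for traceable models. |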
st183489 | Hi @jerryzh168,
Thanks for helping out.
I tried running my model on my local system (AMD64 processor, Ubuntu 20.04, conda environment with PyTorch 1.9.1) as well as on a Google Colab instance. The results are similar in both cases.
I am using the fbgemm qconfig when quantizing the model. The quantized model is ~4x smaller and achieves the same accuracy on the test set as the float model, but there is no inference speed improvement.
This is the output when I print the quantized model.
deeplabv3_cityScapes(
(backbone): DeepLabV3(
(backbone): IntermediateLayerGetter(
(conv1): QuantizedConvReLU2d(3, 64, kernel_size=(7, 7), stride=(2, 2), scale=0.09285419434309006, zero_point=0, padding=(3, 3))
(bn1): Identity()
(relu): Identity()
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.14764662086963654, zero_point=77)
(bn1): Identity()
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.32371971011161804, zero_point=56, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.24840933084487915, zero_point=0)
(bn3): Identity()
(relu): Identity()
(downsample): Sequential(
(0): QuantizedConv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.13252848386764526, zero_point=69)
(1): Identity()
)
(skip_add): QFunctional(
scale=0.3724604547023773, zero_point=27
(activation_post_process): Identity()
)
)
(1): Bottleneck(
(conv1): QuantizedConv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.4247148036956787, zero_point=55)
(bn1): Identity()
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.4553159773349762, zero_point=67, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.25678393244743347, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=0.5840356945991516, zero_point=23
(activation_post_process): Identity()
)
)
(2): Bottleneck(
(conv1): QuantizedConv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.42139047384262085, zero_point=71)
(bn1): Identity()
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=0.4264393150806427, zero_point=54, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(64, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.19536954164505005, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=0.509103000164032, zero_point=19
(activation_post_process): Identity()
)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.45401355624198914, zero_point=60)
(bn1): Identity()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), scale=0.5511052012443542, zero_point=51, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.44677823781967163, zero_point=0)
(bn3): Identity()
(relu): Identity()
(downsample): Sequential(
(0): QuantizedConv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), scale=0.5842137336730957, zero_point=58)
(1): Identity()
)
(skip_add): QFunctional(
scale=0.9698345065116882, zero_point=37
(activation_post_process): Identity()
)
)
(1): Bottleneck(
(conv1): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.6305021643638611, zero_point=57)
(bn1): Identity()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.5440534949302673, zero_point=63, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.2438904494047165, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=1.0002213716506958, zero_point=35
(activation_post_process): Identity()
)
)
(2): Bottleneck(
(conv1): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.5003912448883057, zero_point=53)
(bn1): Identity()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.5733075141906738, zero_point=56, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.2752479612827301, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=0.9422421455383301, zero_point=35
(activation_post_process): Identity()
)
)
(3): Bottleneck(
(conv1): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=0.5504205822944641, zero_point=58)
(bn1): Identity()
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=0.721775472164154, zero_point=60, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(128, 512, kernel_size=(1, 1), stride=(1, 1), scale=0.35953018069267273, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=1.1053467988967896, zero_point=29
(activation_post_process): Identity()
)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), scale=0.8592106699943542, zero_point=64)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=1.3425265550613403, zero_point=59, padding=(1, 1))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=1.0867135524749756, zero_point=0)
(bn3): Identity()
(relu): Identity()
(downsample): Sequential(
(0): QuantizedConv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), scale=1.2045390605926514, zero_point=55)
(1): Identity()
)
(skip_add): QFunctional(
scale=1.6519306898117065, zero_point=40
(activation_post_process): Identity()
)
)
(1): Bottleneck(
(conv1): QuantizedConv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=1.3347007036209106, zero_point=58)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=1.9396781921386719, zero_point=60, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=1.0596840381622314, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=1.878090500831604, zero_point=35
(activation_post_process): Identity()
)
)
(2): Bottleneck(
(conv1): QuantizedConv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=1.13986337184906, zero_point=58)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=2.240360736846924, zero_point=48, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=0.8806281685829163, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=2.5012898445129395, zero_point=25
(activation_post_process): Identity()
)
)
(3): Bottleneck(
(conv1): QuantizedConv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=1.59221613407135, zero_point=57)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=3.3066182136535645, zero_point=51, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=1.3484315872192383, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=2.731464385986328, zero_point=21
(activation_post_process): Identity()
)
)
(4): Bottleneck(
(conv1): QuantizedConv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=2.1792967319488525, zero_point=58)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=3.85483717918396, zero_point=57, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=3.908252716064453, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=5.5938615798950195, zero_point=15
(activation_post_process): Identity()
)
)
(5): Bottleneck(
(conv1): QuantizedConv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), scale=3.8774356842041016, zero_point=63)
(bn1): Identity()
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=7.216492176055908, zero_point=53, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), scale=3.1228652000427246, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=5.101546287536621, zero_point=14
(activation_post_process): Identity()
)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), scale=5.755306720733643, zero_point=88)
(bn1): Identity()
(conv2): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=9.84350299835205, zero_point=50, padding=(2, 2), dilation=(2, 2))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=10.301526069641113, zero_point=0)
(bn3): Identity()
(relu): Identity()
(downsample): Sequential(
(0): QuantizedConv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), scale=6.859118461608887, zero_point=70)
(1): Identity()
)
(skip_add): QFunctional(
scale=16.29696273803711, zero_point=32
(activation_post_process): Identity()
)
)
(1): Bottleneck(
(conv1): QuantizedConv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), scale=12.506922721862793, zero_point=62)
(bn1): Identity()
(conv2): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=13.2088041305542, zero_point=64, padding=(4, 4), dilation=(4, 4))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=9.620034217834473, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=23.53208351135254, zero_point=31
(activation_post_process): Identity()
)
)
(2): Bottleneck(
(conv1): QuantizedConv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), scale=12.823467254638672, zero_point=68)
(bn1): Identity()
(conv2): QuantizedConv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), scale=24.23908233642578, zero_point=63, padding=(4, 4), dilation=(4, 4))
(bn2): Identity()
(conv3): QuantizedConvReLU2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), scale=17.22470474243164, zero_point=0)
(bn3): Identity()
(relu): Identity()
(skip_add): QFunctional(
scale=34.330257415771484, zero_point=21
(activation_post_process): Identity()
)
)
)
)
(classifier): DeepLabHead(
(0): ASPP(
(convs): ModuleList(
(0): Sequential(
(0): QuantizedConvReLU2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), scale=8.72632122039795, zero_point=0)
(1): Identity()
(2): Identity()
)
(1): ASPPConv(
(0): QuantizedConvReLU2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), scale=13.005331993103027, zero_point=0, padding=(12, 12), dilation=(12, 12))
(1): Identity()
(2): Identity()
)
(2): ASPPConv(
(0): QuantizedConvReLU2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), scale=7.3559088706970215, zero_point=0, padding=(24, 24), dilation=(24, 24))
(1): Identity()
(2): Identity()
)
(3): ASPPConv(
(0): QuantizedConvReLU2d(2048, 256, kernel_size=(3, 3), stride=(1, 1), scale=9.811503410339355, zero_point=0, padding=(36, 36), dilation=(36, 36))
(1): Identity()
(2): Identity()
)
(4): ASPPPooling(
(0): AdaptiveAvgPool2d(output_size=1)
(1): QuantizedConvReLU2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), scale=15.513148307800293, zero_point=0)
(2): Identity()
(3): Identity()
)
)
(project): Sequential(
(0): QuantizedConvReLU2d(1280, 256, kernel_size=(1, 1), stride=(1, 1), scale=10.449341773986816, zero_point=0)
(1): Identity()
(2): Identity()
(3): Dropout(p=0.5, inplace=False)
)
)
(1): QuantizedConvReLU2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=8.116198539733887, zero_point=0, padding=(1, 1))
(2): Identity()
(3): Identity()
(4): QuantizedConv2d(256, 10, kernel_size=(1, 1), stride=(1, 1), scale=32.9682502746582, zero_point=78)
)
)
(quant): Quantize(scale=tensor([0.0402]), zero_point=tensor([62]), dtype=torch.quint8)
(dequant): DeQuantize()
)
Strangely, when I fuse the ReLU layers along with the Conv2d and BatchNorm2d layers, the output is full of zeros. When I fuse the Conv2d and BatchNorm2d layers alone, the output is similar to the float model.
What do you mean by a symbolically traceable model? What are the requirements for a model to be symbolically traceable?
I saw the tutorial earlier on graph mode static quantization but had some doubts. It seems the approach is similar to eager mode quantization but with the helper functions prepare_fx and convert_fx. Do we need to specify the modules to be fused inside qconfig_dict, or does the function automatically detect the fusable modules? |
st183490 | Hi,
I am trying to quantize a UNet model using builtin static quantization functions.
Pytorch CPU version 1.9.1
Ubuntu 20.04 LTS (conda env)
The model itself is referenced from here 1. I modified the model as follows (showing the quantization parts alone) :
class UNet(nn.Module):
def __init__(self, num_classes, quantize=False):
super(UNet, self).__init__()
self.num_classes = num_classes
""" QUANTIZED VERSION ADDITIONS """
self.quantize = quantize
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, X):
# Outputs are dequantized
if self.quantize == True:
output_out = self.dequant(output_out)
# pass through other layers
# Outputs are dequantized
if self.quantize == True:
output_out = self.dequant(output_out)
return output_out
The quantization function is as follows:
def quantizeUNet(model, device, dataLoader, use_fbgemm=False):
model.to(device)
model.eval()
modules_to_fuse = [['contracting_11.0', 'contracting_11.2'],
['contracting_11.3', 'contracting_11.5'],
['contracting_21.0', 'contracting_21.2'],
['contracting_21.3', 'contracting_21.5'],
['contracting_31.0', 'contracting_31.2'],
['contracting_31.3', 'contracting_31.5'],
['contracting_41.0', 'contracting_41.2'],
['contracting_41.3', 'contracting_41.5'],
['middle.0', 'middle.2'],
['middle.3', 'middle.5'],
['expansive_12.0', 'expansive_12.2'],
['expansive_12.3', 'expansive_12.5'],
['expansive_22.0', 'expansive_22.2'],
['expansive_22.3', 'expansive_22.5'],
['expansive_32.0', 'expansive_32.2'],
['expansive_32.3', 'expansive_32.5'],
['expansive_42.0', 'expansive_42.2'],
['expansive_42.3', 'expansive_42.5']]
#print(modules_to_fuse)
model = torch.quantization.fuse_modules(model, modules_to_fuse)
if use_fbgemm == True:
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
else:
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
## Calibrate Quantization parameters on input dataset
print('Calibrating Quantization parameters on input dataset ...')
model.eval()
with torch.no_grad():
for data, target in dataLoader:
model(data)
torch.quantization.convert(model, inplace=True)
print('### Static Quantization complete ###')
return model
During inference, the output tensor (shape [1, 10, 256, 256]) contains 0s only.
[screenshot]
I expected the output to have probabilities for each class (10 classes in total), but it's essentially a zero matrix. Is there something I'm missing? How do I do static quantization of the model correctly? |
st183491 | Surya_J:
def forward(self, X):
# Outputs are dequantized
if self.quantize == True:
output_out = self.dequant(output_out)
# pass through other layers
# Outputs are dequantized
if self.quantize == True:
output_out = self.dequant(output_out)
return output_out
It might be a typo, but it should be something like
# quantize inputs (currently your code doesn't have this in what you pasted)
# run quantized model
# dequantize outputs
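i.e. roughly (module names are placeholders):
def forward(self, X):
    if self.quantize:
        X = self.quant(X)        # quantize inputs once at the top
    out = self.layers(X)         # run the layers to be quantized
    if self.quantize:
        out = self.dequant(out)  # dequantize outputs once at the end
    return out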
Other than that, your code looks right, as long as your calibration dataset is representative of your inference dataset. Are you sure the expected output from your input is not close to a matrix of zeros? Do you get the same thing for other outputs?
You could also try using PyTorch Numeric Suite Tutorial — PyTorch Tutorials 1.9.1+cu102 documentation 3 to see if you can bisect the difference to a specific layer. |
st183492 | Hi @Vasiliy_Kuznetsov,
Thanks for the reply. Yes, I pasted the wrong code here. This is my actual code:
def forward(self, X):
# Input are quantized
if self.quantize == True:
X = self.quant(X)
The output is zero for the entire test set (I’m using a subset of the CityScapes dataset). The un-quantized model gives floating point output and the predictions are good. So, I assume there’s something missing when I quantize the model. I’ll try debugging using the link you shared. Thanks for the help. |
st183493 | Update
It seems that ConvTranspose2d is not yet supported for quantization. Hence, you have to dequantize the output before passing through each of the unsupported layers, which is slower than the original float model in my case. Related Forum post 6
I guess it's better to look for models which contain only supported layers in the case of static quantization.
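For reference, the workaround looks roughly like this (a sketch; module names are hypothetical):
def forward(self, x):
    x = self.quant(x)
    x = self.down_block(x)    # supported layers run quantized
    x = self.dequant(x)       # back to float for the unsupported op
    x = self.up_conv(x)       # e.g. nn.ConvTranspose2d in fp32
    x = self.quant2(x)        # separate QuantStub instance to re-quantize
    x = self.out_block(x)
    return self.dequant2(x)
Each extra quant/dequant hop adds overhead, which is why this ended up slower than the float model for me. |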
st183494 | I think convtranspose is supported: pytorch/quantization_mappings.py at master · pytorch/pytorch · GitHub 3 |
st183495 | Hi @jerryzh168,
Thanks for reaching out.
Based on the forum post, I jumped to that conclusion that ConvTranspose2d is not supported yet. Will look into the model and check for the problem. |
st183496 | The QAT workflow right now is: use the same precision in each fake_quant for EVERY LAYER,
i.e. fp32 → fake_quant → fp32.
Problems I am running into:
1st
Input data is 8-bit in most common cases.
When doing QAT on an int4 model, the first layer's fake_quant squeezes the 8-bit data into 4 bits (i.e. it cuts the data range).
We lose too much in this step; the precision drop already happens on the input data.
If we could treat the first layer with an 8-bit qconfig and the other layers with a 4-bit qconfig,
we could keep more of the necessary input information.
2nd
Is there any documentation on using two or more qconfigs in the same QAT process?
3rd
ive notice Add MKLDNN quantization backend by Xia-Weiwen · Pull Request #67177 · pytorch/pytorch · GitHub mkldnn
but it's a bit different: mkldnn only changes the internal compute logic,
whereas I want to add a new backend.
i find some reference here: Extending PyTorch Quantization to Custom Backends · pytorch/pytorch Wiki · GitHub 2
Is there any suggestion on developing a new backend, especially for QAT?
My purpose:
I'm working on a new int4 QAT qconfig (or, say, a new int4 backend) for a specific DLA.
In my opinion, using 4 bits in all layers may cause a precision drop, especially in the first layer.
I'm trying to deal with the problem in the first layer so as to keep as much information as possible and prevent the precision drop.
I'm also trying to find a use case/demo of a hybrid quantization scheme, for example using an 8-bit qconfig and an fp16 qconfig in the same QAT process; is there any user interface for that?
I'm searching for a hybrid-quant QAT demo, do you have one?
Any suggestion on developing a new int4 QAT qconfig (or, say, a new int4 backend)? |
st183497 | Yeah, I would recommend using FX Graph Mode Quantization for this. We have post training quantization tutorial here: (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.10.0+cu102 documentation 2 (we might add a QAT tutorial later). You can use prepare_qat_fx and use the qconfig_dict api to do this.
We do have a quint4x2 dtype currently: pytorch/test_quantize_fx.py at master · pytorch/pytorch · GitHub 1, although I think this is mostly for weight. To support this with activation, I think you need:
Add support for quint4x2 in quantize_per_tensor pytorch/QTensor.cpp at master · pytorch/pytorch · GitHub
Use the is_reference option during convert_fx, which will produce a model with (dequant - float_op - quant) patterns representing the model (you can take a look at Extending PyTorch Quantization to Custom Backends · pytorch/pytorch Wiki · GitHub for reasons)
lower the model to the DLA you are building; this can be through FX/TorchScript or any way you prefer
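A rough sketch of mixing qconfigs with the qconfig_dict API (model, first_conv, classifier and the 4-bit qconfig are placeholders you would define for your backend):
import torch
from torch.quantization import get_default_qat_qconfig
from torch.quantization.quantize_fx import prepare_qat_fx, convert_fx
qconfig_8bit = get_default_qat_qconfig("fbgemm")
qconfig_dict = {
    "": my_int4_qat_qconfig,             # default, e.g. FakeQuantize with quant_min=0, quant_max=15
    "module_name": [
        ("first_conv", qconfig_8bit),    # keep the first layer at 8 bit
        ("classifier", qconfig_8bit),    # and the last layer too
    ],
}
prepared = prepare_qat_fx(model.train(), qconfig_dict)
# ... QAT training loop ...
quantized = convert_fx(prepared.eval(), is_reference=True)
The is_reference model can then be lowered to your DLA with whatever toolchain you use. |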
st183498 | Is there any guide for hybrid quantization in QAT, i.e. mixing int8 and int4 during training?
It would be better to use int8 in the first and last layers and int4 in the inner layers:
int8 in the first layer helps prevent losing source data,
and int8 in the last layer helps other processing after inference (like video output or another accelerator). |
st183499 | I am using a Kaggle GPU to train resnet18. After training the model (imported via from torchvision.models.quantization import resnet18), I perform quantization on it as shown below.
model = Q_resnet18()
model.load_state_dict(torch.load('./my_model2.pth'))
print_model_size(model)
backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_model_size(model_static_quantized)
And now I want to evaluate my quantized model against other models, for which I use the following functions.
def get_lr(optimizer):
for param_group in optimizer.param_groups:
return param_group['lr']
def fit_one_cycle(epochs, max_lr, model, train_loader, val_loader, weight_decay=0, grad_clip=None, opt_func = torch.optim.Adam):
torch.cuda.empty_cache()
history = []
optimizer = opt_func(model.parameters(), max_lr, weight_decay=weight_decay)
# set up one cycle lr scheduler
sched = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, epochs=epochs, steps_per_epoch=len(train_loader))
for epoch in range(epochs):
# Training phase
model.train()
train_losses = []
lrs = []
for batch in tqdm(train_loader):
loss = model.training_step(batch)
train_losses.append(loss)
# calculates gradients
loss.backward()
# check gradient clipping
if grad_clip:
nn.utils.clip_grad_value_(model.parameters(), grad_clip)
# perform gradient descent and modifies the weights
optimizer.step()
# reset the gradients
optimizer.zero_grad()
# record and update lr
lrs.append(get_lr(optimizer))
# modifies the lr value
sched.step()
# Validation phase
result = evaluate(model, val_loader)
result['train_loss'] = torch.stack(train_losses).mean().item()
result['lrs'] = lrs
model.epoch_end(epoch, result)
history.append(result)
return history
@torch.no_grad()
def evaluate(model, val_loader):
model.eval()
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
When I use evaluate(model_name, val_dl) I encounter an error saying
/opt/conda/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py in forward(self, X)
47 def forward(self, X):
48 return torch.quantize_per_tensor(X, float(self.scale),
---> 49 int(self.zero_point), self.dtype)
50
51 @staticmethod
RuntimeError: quantize_tensor_per_tensor_affine expects a quantized and float tensors to be on the same device.
I tried evaluate(model_name.to(device), val_dl) but it didn’t work. (I checked and the device is ‘cuda’)
What do I have to do to solve this error?? |