id | text |
---|---|
st184100 | @ruka – For the SNR metric, the higher the better; I usually go with the rule of thumb that 15-20 dB is good for quantization. However, this number might not hold for all models. I’ll take a look at the model + data you sent |
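For reference, a minimal sketch of how such an SNR/SQNR figure can be computed, assuming per-tensor symmetric int8 quantization of an arbitrary tensor:
import torch
x = torch.randn(64, 128)                     # stand-in for a weight or activation tensor
scale = x.abs().max().item() / 127           # simple symmetric per-tensor scale for int8
qx = torch.quantize_per_tensor(x, scale, 0, torch.qint8)
x_dq = qx.dequantize()                       # back to fp32, now carrying the quantization error
sqnr = 20 * torch.log10(torch.norm(x) / torch.norm(x - x_dq))
print(f'SQNR: {sqnr:.2f} dB')                # compare against the 15-20 dB rule of thumb above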
st184101 | @Zafar, I am facing the same issue. It seems to be a Keypoints issue. I trained the same model on Keypoints data and classification data (just the last layer changed). I got the following results on quantization.
Trained for Keypoints:
Weights SNR: 40 - 47 (almost perfect)
model outputs stats SNR: 23 (first layer) - 0.01 (end layers). The value kept reducing with every layer. Why is there no correlation between the weights and the outputs?
Quantized model accuracy: 0.01
Trained for Classification:
Weights SNR: 43-48
model outputs stats SNR: 10-22 (random)
Float model accuracy: 92%
quantized model accuracy: 91.6%
I will be waiting for your analysis. |
st184102 | @Mamta, @ruka Thank you for reporting this – I will take a look at these models. I currently have a similar issue with the generator models – not sure if it is related, but I will be looking into this issue more closely |
st184103 | Zafar:
Currently, ConvTranspose is only supported using QNNPACK. The FBGEMM version is planned, but there is no specific date for it. Meanwhile, you have two options for eager mode: 1) replace the instances of ConvTranspose with a dequant->ConvTranspose->quant construct, or 2) set torch.backends.quantized.engine = 'qnnpack' before running your model. You might also need to set qconfig = torch.quantization.get_default_qconfig('qnnpack') or qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
hi @Zafar: I am running into this error when trying to use a ConvTranspose2d with a FBGEMM backend. I am trying to implement workaround (1) that you suggested, i.e. to wrap the ConvTranspose2d with dequant and quant steps, but am struggling to get it right. That is, even if I run those steps, the error (‘FBGEMM doesn’t support transpose packing yet!’) appears. Do you know of an example of how to implement such wrapping that would allow quantization to go through? Thank you!
What I did was to create QuantStub and DeQuantStub instances during initialization, and then during the forward() did
x = self.dequant(x)
x = self.transpose_conv2d(x)
x = self.quant(x)
… but this still gives the same error. |
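One possible reading of workaround (1), as a minimal eager-mode sketch: in addition to the dequant/quant stubs, the ConvTranspose submodule itself likely needs to be excluded from quantization by clearing its qconfig, otherwise convert() still tries to swap it for a packed quantized module and hits the same FBGEMM error. Module names below are hypothetical:
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()        # fp32 -> int8 at the input
        self.conv = nn.Conv2d(16, 16, 3, padding=1)        # gets quantized as usual
        self.dequant_mid = torch.quantization.DeQuantStub()
        self.transpose_conv2d = nn.ConvTranspose2d(16, 16, 2, stride=2)  # kept in fp32
        self.quant_mid = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()    # int8 -> fp32 at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.dequant_mid(x)        # leave the int8 domain before the ConvTranspose
        x = self.transpose_conv2d(x)   # runs as a regular float op
        x = self.quant_mid(x)          # re-enter the int8 domain afterwards
        return self.dequant(x)

m = UpBlock().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
m.transpose_conv2d.qconfig = None      # assumption: this keeps convert() from swapping it
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 16, 8, 8))            # calibration
torch.quantization.convert(m, inplace=True)
m(torch.randn(1, 16, 8, 8))            # quantized inference; the ConvTranspose stays fp32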
st184104 | Hi @rfejgin, may I know which version of PyTorch you are using? Did they announce that ConvTranspose2d is supported on the FBGEMM backend in their latest release? As far as I know, as of about a month ago, ConvTranspose2d was still not supported on the FBGEMM backend. |
st184105 | I’m on 1.7.0. Yes, I’m aware that ConvTranspose2d is not supported on the FBGEMM backend, so I was trying the workaround suggested by @Zafar, which is to wrap the calls to the module with dequant->conv2d_transpose->quant, but couldn’t get that to work. |
st184106 | I’ve made it available here: https://wintics-opensource.s3.eu-west-3.amazonaws.com/torch-1.3.0a0%2Bdeadc27-cp37-cp37m-linux_armv7l.whl.
Have fun ! |
st184107 | I run pip3 install *.whl of your wheels ,but when I import torch in my code,it says ImportError: No module named torch
How Can I solve this problem? |
st184108 | Hi, do you have wheels for Python 3.8, or could you explain to me how you compiled it?
I’m using a Rockchip processor with Manjaro. PyTorch doesn’t compile for me, and it is really a pain to downgrade to Python 3.7. |
st184109 | For the moment, I don’t have access to the RPI where I’ve made the source code modifications (I’m under lockdown and far from my office). But as far as I can remember, I just changed the commit of third_party/protobuf to something more recent. I did this because I noticed an error coming from atomic types, and I saw an issue in the protobuf repo (I can’t remember which one) related to the same kind of problem. So naturally I hooked PyTorch source code to a commit fixing the issue.
Just to make things clear, I don’t suffer from any kind of memory loss disease (at least I can’t remember being diagnosed with it). If I don’t remember everything, it is because I actually did it in September 2019 (for compiling torch 1.3), and I just kind of did the same steps some weeks ago without thinking too much, and it just worked.
BTW, the binaries generated by my compilation aren’t perfect, for instance, quantization isn’t working. But the main torch functionalities are working just fine, so I think it is useful making the binaries available for everyone to use. |
st184110 | Hi @Sundar_Krishna, nope I didn’t compile it. I guess it should be easy to compile it from source, just cloning the repo and running python3 setup.py install.
If it doesn’t work, I’d suggest posting an issue at the repo, or contacting @fmassa |
st184111 | Actually I went to the piwheels website to download torchvision, pip installed it and it was successful! Anyway, thank you so much sir! @LeviViana |
st184112 | Hello,
I tried to perform the installation on an RPi 4 with the armv7l wheel, but it doesn’t work. On which system did you perform it?
Sincerely |
st184113 | For Raspberry Pi OS 64bit 23, I put up wheels of PyTorch 1.6 126.
Best regards
Thomas |
st184114 | Hi, here’s my wheel for PyTorch 1.6.0 built on a Raspberry Pi 4 (should work for the Pi 3 too I assume, have not verified).
Link to Download |
st184115 | I am having trouble with one of the packages, “pytorch_lightning”, after I installed PyTorch on the Raspberry Pi. I was trying to execute a Python program that contains the import “from pytorch_lightning.metrics.functional import accuracy”. I can’t find this package to install, and without it I am not able to execute the file either. Need a little help! |
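In case it helps: that import comes from the pytorch-lightning package on PyPI, so, assuming a compatible wheel exists for the Pi’s Python version and architecture, installing it would look like:
pip3 install pytorch-lightning
# Note: newer releases moved the metrics out to the separate torchmetrics package,
# so the import may need to become:
#   from torchmetrics.functional import accuracy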
st184116 | I want to use “prepare_jit” and “convert_jit” to quantize ResNet18, but I can’t assign a different qconfig to ‘layer1.0.conv1’.
my code:
model = models.__dict__['resnet18']()
model = torch.jit.script(model.eval())
qconfig1 = torch.quantization.QConfig(
activation=torch.quantization.HistogramObserver.with_args(
reduce_range=False),
weight=torch.quantization.default_per_channel_weight_observer)
torch.quantization.prepare_jit(model, {'layer1.0.conv1': qconfig1}, True)
model(torch.randn(1, 3, 224, 224))
torch.quantization.convert_jit(model, True, False)
But it will fail as below message:
File “/home/xxx/python3.7/site-packages/torch/quantization/quantize_jit.py”, line 58, in _prepare_jit
quant_type)
RuntimeError: torch.torch.nn.modules.conv.___torch_mangle_67.Conv2d (of Python compilation unit at: 0x56088f811c00) is not compatible with the type torch.torch.nn.modules.conv.___torch_mangle_66.Conv2d (of Python compilation unit at: 0x56088f811c00) for the field ‘conv1’
It seems the key ‘layer1.0.conv1’ is not correct.
What should I do? |
st184117 | Is the goal here to only quantize one layer in the entire model with qconfig1? Can you try without specifying the inplace option for prepare_jit and convert_jit?
i.e. torch.quantization.prepare_jit(model, {'layer1.0.conv1': qconfig1})
cc @jerryzh168 for additional insight. |
st184118 | penghuic:
File “/home/xxx/python3.7/site-packages/torch/quantization/quantize_jit.py”, line 58, in _prepare_jit
quant_type)
RuntimeError: torch.torch.nn.modules.conv.___torch_mangle_67.Conv2d (of Python compilation unit at: 0x56088f811c00) is not compatible with the type torch.torch.nn.modules.conv.___torch_mangle_66.Conv2d (of Python compilation unit at: 0x56088f811c00) for the field ‘conv1’
prepare_jit/convert_jit is no longer being maintained; for automatic quantization, please try FX graph mode quantization: (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.8.0 documentation
if you encounter problems with symbolic tracing, you can take a look at: (prototype) FX Graph Mode Quantization User Guide — PyTorch Tutorials 1.8.0 documentation |
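For reference, a minimal sketch of the FX graph mode post-training static quantization flow those tutorials walk through, using the 1.8-era torch.quantization.quantize_fx API (later releases moved this under torch.ao.quantization and added a required example_inputs argument):
import torch
import torchvision
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = torchvision.models.resnet18(pretrained=True).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}   # one qconfig for the whole model
# per-module overrides were also supported, e.g.
#   qconfig_dict = {"": get_default_qconfig("fbgemm"), "module_name": [("layer1.0.conv1", qconfig1)]}
prepared = prepare_fx(model, qconfig_dict)           # symbolically traces the model and inserts observers
prepared(torch.randn(1, 3, 224, 224))                # calibrate with representative data
quantized = convert_fx(prepared)                     # produces the int8 GraphModule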
st184119 | Hello,
I am trying to post-training quantize an augmented ResNet model that uses a tanh activation on the extracted features. I know that model fusion currently supports conv+bn+relu combinations. But is there a way to fuse a tanh activation with its previous layer?
Here is my model setup for the last layers of my augmented resnet:
class Resnet(nn.Module):
def __init__(self):
[previous layers]
self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
dilate=replace_stride_with_dilation[2])
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
# adding conv layer
self.features_conv = nn.Sequential(
nn.Conv2d(self.last_channel, hidden_feats, kernel_size=1),
norm_layer(hidden_feats),
nn.ReLU(inplace=True)
)
self.attention = nn.Sequential(
nn.Conv2d(hidden_feats, hidden_feats, kernel_size=1),
nn.Tanh()
)
def forward(self, x):
[previous layers]
x = self.layer4(x)
out = self.features_conv(x)
x_att = self.attention(out)
out = out * x_att
out = self.avgpool(out)
out = torch.flatten(out, 1)
return out |
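A minimal sketch of what can be fused in that snippet today, assuming norm_layer is BatchNorm2d and using the indices of the Sequentials above: fuse only the conv+bn+relu triple in features_conv and leave the Tanh in attention unfused:
model.eval()
# features_conv = Sequential(Conv2d, BatchNorm2d, ReLU) -> fusable
torch.quantization.fuse_modules(
    model,
    [['features_conv.0', 'features_conv.1', 'features_conv.2']],
    inplace=True,
)
# attention = Sequential(Conv2d, Tanh) is left untouched here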
st184120 | Solved by supriyar in post #3
We currently don’t have plans to support this. Please file a feature request on github or feel free to submit a PR to implement this and we can review it.
Thanks! |
st184121 | At the moment the tanh fusion is not supported. @raghuramank100 Any plans for this? |
st184122 | We currently don’t have plans to support this. Please file a feature request on github or feel free to submit a PR to implement this and we can review it.
Thanks! |
st184123 | @ptrblck
How do I quantize my model to FP16 after training it normally in PyTorch? |
st184124 | It depends on what kind of quantization you are talking about. The answer will be different if you are using PTQ or QAT.
PTQ
If you are just asking about post-training quantization (PTQ), you can simply cast the data types to torch.float16, like so:
import copy
import torch
from torch import nn
# This is for reference
model = nn.Linear(3, 3)
x = torch.randn(128, 3)
y = model(x)
# This is the quantized model
model_16 = copy.deepcopy(model)
model_16.weight = nn.Parameter(model_16.weight.half())
model_16.bias = nn.Parameter(model_16.bias.half())
# This is the quantized computation
x_16 = x.half()
y_16 = model_16(x_16)
# Check the quantization error:
x_norm = torch.norm(x - x_16)
y_norm = torch.norm(y - y_16)
print(f'x error norm: {x_norm:.2f}, x abs max: {x.abs().max():.2f}')
print(f'y error norm: {y_norm:.2f}, y abs max: {y.abs().max():.2f}')
x_sqnr = 20 * torch.log10(torch.norm(x) / (torch.norm(x - x_16)))
y_sqnr = 20 * torch.log10(torch.norm(y) / (torch.norm(y - y_16)))
print(f'x sqnr: {x_sqnr:.2f} dB')
print(f'y sqnr: {y_sqnr:.2f} dB')
## Results:
# x error norm: 0.00, x abs max: 3.13
# y error norm: 0.01, y abs max: 2.30
# x sqnr: 73.35 dB
# y sqnr: 67.21 dB
QAT
For QAT, please refer to the Quantization — PyTorch 1.8.0 documentation (search for “fakequantize” and “fake quantization”). |
st184125 | We are providing a way to do fp16 static quantization in FX graph mode quantization as well; it is ready in master. You can find an example in the related tests:
github.com
pytorch/pytorch/blob/master/test/quantization/test_quantize_fx.py#L2145
# weight
ns.call_method("to"): 1 if is_reference else 0
}
self.checkGraphModeFxOp(
model, data, QuantType.DYNAMIC, qlinear_fun,
is_reference=is_reference,
custom_qconfig_dict={"": float16_dynamic_qconfig},
prepare_expected_node_occurrence=prepare_node_occurrence,
expected_node_occurrence=convert_node_occurrence)
def test_linear_static_fp16(self):
class FuncLinear(torch.nn.Module):
def __init__(self, use_bias, has_relu, f_relu):
super(FuncLinear, self).__init__()
self.w = torch.randn(4, 30)
self.b = torch.randn(4)
self.use_bias = use_bias
if has_relu:
if f_relu:
self.relu = F.relu
else: |
st184126 | Hi, I have recently looked at the tutorial for post-training static quantization, but it is geared towards classifiers. Is there a tutorial/capability to quantize an entire object detection model? If not, what would be the difference if I have a fully trained model and want to quantize only the backbone? Thanks |
st184127 | We don’t have a tutorial to quantize a detection model, we can consider it for the future. You can quantize the backbone only as follows, in pseudocode:
# original model (pseudocode)
class M(torch.nn.Module):
def __init__(self, ...):
...
self.backbone = ...
self.rpn = ...
self.head = ...
def forward(self, x):
features = self.backbone(x)
proposals = self.rpn(features)
head_results = self.head(features, proposals)
return head_results
# modify M, place quants/dequants to prepare backbone for quantization
class MQuantizeable(torch.nn.Module):
def __init__(self, ...):
...
self.backbone = ...
self.rpn = ...
self.head = ...
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
# wrap backbone in quant/dequant
x_quant = self.quant(x)
features_quant = self.backbone(x_quant)
features = self.dequant(features_quant)
proposals = self.rpn(features)
head_results = self.head(features, proposals)
return head_results
# quantization call
m = MQuantizeable(...)
# set qconfig on the backbone only
m.backbone.qconfig = ...
m = torch.quantization.prepare(m, ...)
# run calibration data through m, then
m = torch.quantization.convert(m) |
st184128 | When using the above example and converting my model, I get this error:
Traceback (most recent call last):
File “detection/main_mp.py”, line 734, in
main()
File “detection/main_mp.py”, line 592, in main
p = torch.quantization.convert(myModel)
File “/home/megan/.local/lib/python2.7/site-packages/torch/quantization/quantize.py”, line 293, in convert
convert(mod, mapping, inplace=True)
File “/home/megan/.local/lib/python2.7/site-packages/torch/quantization/quantize.py”, line 294, in convert
reassign[name] = swap_module(mod, mapping)
File “/home/megan/.local/lib/python2.7/site-packages/torch/quantization/quantize.py”, line 316, in swap_module
new_mod = mapping[type(mod)].from_float(mod)
File “/home/megan/.local/lib/python2.7/site-packages/torch/nn/quantized/modules/conv.py”, line 243, in from_float
qweight = _quantize_weight(mod.weight.float(), weight_observer)
File “/home/megan/.local/lib/python2.7/site-packages/torch/nn/quantized/modules/utils.py”, line 12, in _quantize_weight
wt_scale.to(torch.double), wt_zp.to(torch.int64), 0, torch.qint8)
RuntimeError: No function is registered for schema aten::quantize_per_channel(Tensor self, Tensor scales, Tensor zero_points, int axis, ScalarType dtype) -> Tensor on tensor type CUDATensorId; available functions are CPUTensorId, VariableTensorId
I have moved my model to cpu and confirmed it is not running on CUDA. |
st184129 | meganlrowe:
RuntimeError: No function is registered for schema aten::quantize_per_channel(Tensor self, Tensor scales, Tensor zero_points, int axis, ScalarType dtype) -> Tensor on tensor type CUDATensorId; available functions are CPUTensorId, VariableTensorId
The error message is in fact saying that something is on CUDA. Does something like this pass for your model?
def assert_and_get_unique_device(module: torch.nn.Module) -> Any:
"""
Returns the unique device for a module, or None if no device is found.
Throws an error if multiple devices are detected.
"""
devices = {p.device for p in module.parameters()} | \
{p.device for p in module.buffers()}
assert len(devices) <= 1, (
"prepare only works with cpu or single-device CUDA modules, "
"but got devices {}".format(devices)
)
device = next(iter(devices)) if len(devices) > 0 else None
return device |
st184130 | Hi, is it possible to use this quantized implementation of ResNet (torchvision quantized resnet) as a backbone?
I tried to use it by slightly changing torchvision’s detection backbone_utils (from torchvision.models.quantization import resnet, and adding a quantize=True parameter for the backbone).
When I try to use this with a pretrained maskrcnn_resnet50 I get an error:
KeyError: 'backbone.body.conv1.bias'
Any idea what I missed/didn’t understand ?
Thanks |
st184131 | I have a PyTorch quantised model (model.pt); I need help loading this model.pt for object detection |
st184132 | Looks like it’s not loaded correctly. Are you sure the pretrained weights match the model you are loading them into? |
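One way to see exactly which keys disagree (sketch only; the variable and checkpoint names below are hypothetical) is to load the state dict non-strictly and print the mismatches:
state_dict = torch.load('maskrcnn_resnet50_checkpoint.pth', map_location='cpu')  # hypothetical file
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)          # e.g. backbone.body.conv1.bias from the error above
print('unexpected keys:', unexpected)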
st184133 | Hello
I’ve been having an issue with torch.quantization.convert after performing QAT -
I modified the model (face detector) to do QAT by adding the lines
net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(net, inplace=True)
in train.py and the QuantStub/DeQuantStub in the forward() of Mb_Tiny_RFB() (vision/nn/mb_tiny_rfb.py) following this tutorial, and then saved the model via torch.save(self.state_dict(), path)
train.py
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() and args.use_cuda else "cpu")
if __name__ == '__main__':
timer = Timer()
create_net = create_Mb_Tiny_RFB_fd
train_transform = TrainAugmentation(config.image_size, config.image_mean, config.image_std)
target_transform = MatchPrior(config.priors, config.center_variance,
config.size_variance, args.overlap_threshold)
test_transform = TestTransform(config.image_size, config.image_mean_test, config.image_std)
datasets = []
for dataset_path in args.datasets:
if args.dataset_type == 'voc':
dataset = VOCDataset(dataset_path, transform=train_transform,
target_transform=target_transform, img_size = config.image_size)
label_file = os.path.join(args.checkpoint_folder, "voc-model-labels.txt")
store_labels(label_file, dataset.class_names)
num_classes = len(dataset.class_names)
else:
raise ValueError(f"Dataset type {args.dataset_type} is not supported.")
datasets.append(dataset)
train_dataset = ConcatDataset(datasets)
train_loader = DataLoader(train_dataset, args.batch_size,
num_workers=args.num_workers,
shuffle=True, pin_memory=True)
val_dataset = VOCDataset(args.validation_dataset, transform=test_transform,
target_transform=target_transform, is_test=True)
val_loader = DataLoader(val_dataset, args.batch_size,
num_workers=args.num_workers,
shuffle=False)
net = create_net(num_classes)
min_loss = -10000.0
last_epoch = -1
base_net_lr = args.base_net_lr if args.base_net_lr is not None else args.lr
extra_layers_lr = args.extra_layers_lr if args.extra_layers_lr is not None else args.lr
params = [
{'params': net.base_net.parameters(), 'lr': base_net_lr},
{'params': itertools.chain(
net.source_layer_add_ons.parameters(),
net.extras.parameters()
), 'lr': extra_layers_lr},
{'params': itertools.chain(
net.regression_headers.parameters(),
net.classification_headers.parameters()
)}
]
if args.resume:
logging.info(f"Resume from the model {args.resume}")
net.load(args.resume)
criterion = MultiboxLoss(config.priors, neg_pos_ratio=3,
center_variance=0.1, size_variance=0.2, device=DEVICE)
optimizer = torch.optim.SGD(params, lr=args.lr, momentum=args.momentum,
weight_decay=args.weight_decay)
...
net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(net, inplace=True)
net.to(DEVICE)
for epoch in range(last_epoch + 1, args.num_epochs):
train(train_loader, net, criterion, optimizer,
device=DEVICE, debug_steps=args.debug_steps, epoch=epoch)
if epoch > 3:
# Freeze quantizer parameters
net.apply(torch.quantization.disable_observer)
if epoch > 2:
# Freeze batch norm mean and variance estimates towards the end of training to better match inference numerics.
net.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
if epoch % args.validation_epochs == 0 or epoch == args.num_epochs - 1:
logging.info("lr rate :{}".format(optimizer.param_groups[0]['lr']))
val_loss, val_regression_loss, val_classification_loss = test(val_loader, net, criterion, DEVICE)
net.eval()
quant_model = torch.quantization.convert(net.cpu(), inplace=False) # <-- error happens here
model_path = os.path.join(args.checkpoint_folder, f"{args.net}-Epoch-{epoch}-Loss-{val_loss}.pth")
net.save(model_path)
When I tried to call quantization.convert() before saving, I got the error:
Traceback (most recent call last):
File "train.py", line 432, in <module>
quant_model = torch.quantization.convert(net.module.eval().cpu(), inplace=False)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/site-packages/torch/quantization/quantize.py", line 299, in convert
module = copy.deepcopy(module)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/user/anaconda3/envs/FaceDetector/lib/python3.8/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle 'module' object
So instead I tried to load the QAT’d parameters into a less confusing form of the model and then tried converting again, but got the same error:
import torchvision
from torch import nn
from vision.utils import box_utils
from vision.ssd.config.fd_config import define_img_size
define_img_size(640)
from vision.ssd.mb_tiny_RFB_fd import create_Mb_Tiny_RFB_fd
import torch.nn.functional as F
import cv2
import numpy as np
class_names = ['background', 'face']
net_1 = create_Mb_Tiny_RFB_fd(len(class_names), is_test=True, device='cpu')
net_1.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(net_1, inplace=True)
# load definition: self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage))
net_1.load(model_path) # load the previously QAT'd model (without quantisation conversion)
class SimpleNet(nn.Module):
def __init__(self, base_net, regression_headers, classification_headers, extras, priors, config):
super(SimpleNet, self).__init__()
self.backbone0 = base_net[:8]
self.backbone1 = base_net[8:11]
self.backbone2 = base_net[11:13]
self.last_chunk = base_net[13:]
self.regression_headers0 = regression_headers[0]
self.regression_headers1 = regression_headers[1]
self.regression_headers2 = regression_headers[2]
self.regression_headers3 = regression_headers[3]
self.classification_headers0 = classification_headers[0]
self.classification_headers1 = classification_headers[1]
self.classification_headers2 = classification_headers[2]
self.classification_headers3 = classification_headers[3]
self.extras = extras
self.num_classes = 2
self.priors = priors
self.config = config
self.last_op = nn.Softmax(dim=-1)
def forward(self, x):
confidences = []
locations = []
x = self.backbone0(x)
confidence = self.classification_headers0(x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.regression_headers0(x)
location = location.permute(0, 2, 3, 1).contiguous()
location = location.view(location.size(0), -1, 4)
confidences.append(confidence)
locations.append(location)
x = self.backbone1(x)
confidence = self.classification_headers1(x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.regression_headers1(x)
confidences.append(confidence)
locations.append(location)
x = self.backbone2(x)
confidence = self.classification_headers2(x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.regression_headers2(x)
confidences.append(confidence)
locations.append(location)
x = self.last_chunk.forward(x)
x = self.extras(x)
confidence = self.classification_headers3(x)
confidence = confidence.permute(0, 2, 3, 1).contiguous()
confidence = confidence.view(confidence.size(0), -1, self.num_classes)
location = self.regression_headers3(x)
confidences.append(confidence)
locations.append(location)
confidences = torch.cat(confidences, 1)
confidences = self.last_op(confidences)
locations = torch.cat(locations, 1)
boxes = box_utils.convert_locations_to_boxes(
locations, self.priors, torch.tensor([0.1]), torch.tensor([0.2]) #self.config.center_variance, self.config.size_variance
)
boxes = box_utils.center_form_to_corner_form(boxes)
return confidences, boxes
model = SimpleNet(
net_1.base_net,
net_1.regression_headers,
net_1.classification_headers,
net_1.extras[0],
net_1.priors,
net_1.config)
model.eval()
model = torch.quantization.convert(model, inplace=False) # error here
Here’s the `print(model)` output:
SimpleNet(
(backbone0): Sequential(
(0): Sequential(
(0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(1): Sequential(
(0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(2): Sequential(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(3): Sequential(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(4): Sequential(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(5): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(6): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(7): BasicRFB(
(branch0): Sequential(
(0): BasicConv(
(conv): Conv2d(64, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(8, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
(1): BasicConv(
(conv): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(activation_fn): ReLU()
)
(2): BasicConv(
(conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
)
(branch1): Sequential(
(0): BasicConv(
(conv): Conv2d(64, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(8, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
(1): BasicConv(
(conv): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(activation_fn): ReLU()
)
(2): BasicConv(
(conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
)
(branch2): Sequential(
(0): BasicConv(
(conv): Conv2d(64, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(8, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
(1): BasicConv(
(conv): Conv2d(8, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(12, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(activation_fn): ReLU()
)
(2): BasicConv(
(conv): Conv2d(12, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(activation_fn): ReLU()
)
(3): BasicConv(
(conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(5, 5), dilation=(5, 5), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
)
(ConvLinear): BasicConv(
(conv): Conv2d(48, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
(shortcut): BasicConv(
(conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
)
(activation_fn): ReLU()
)
)
(backbone1): Sequential(
(8): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(9): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(10): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
)
(backbone2): Sequential(
(11): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(12): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
)
(last_chunk): Sequential()
(regression_headers0): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
(1): ReLU()
(2): Conv2d(64, 12, kernel_size=(1, 1), stride=(1, 1))
)
(regression_headers1): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
(1): ReLU()
(2): Conv2d(128, 8, kernel_size=(1, 1), stride=(1, 1))
)
(regression_headers2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): ReLU()
(2): Conv2d(256, 8, kernel_size=(1, 1), stride=(1, 1))
)
(regression_headers3): Conv2d(256, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(classification_headers0): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
(1): ReLU()
(2): Conv2d(64, 6, kernel_size=(1, 1), stride=(1, 1))
)
(classification_headers1): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
(1): ReLU()
(2): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
)
(classification_headers2): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): ReLU()
(2): Conv2d(256, 4, kernel_size=(1, 1), stride=(1, 1))
)
(classification_headers3): Conv2d(256, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(extras): Sequential(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
(1): ReLU()
(2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64)
(1): ReLU()
(2): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
)
(3): ReLU()
)
(last_op): Softmax(dim=-1)
)
It appeared that something could not be pickled in the model - so I tried using dill:
dill.detect.trace(True)
dill.detect.errors(model)
and the output is
Output
T2: <class '__main__.SimpleNet'>
F2: <function _create_type at 0x7f574cb6a670>
# F2
T1: <class 'type'>
F2: <function _load_type at 0x7f574cb6a5e0>
# F2
# T1
T4: <class 'torch.nn.modules.module.Module'>
# T4
D2: <dict object at 0x7f574c9f4b40>
F1: <function SimpleNet.__init__ at 0x7f574cb86f70>
F2: <function _create_function at 0x7f574cb6a700>
# F2
Co: <code object __init__ at 0x7f57af0eb240, file "simple_ul.py", line 38>
F2: <function _create_code at 0x7f574cb6a790>
# F2
# Co
D1: <dict object at 0x7f57af1d7f00>
# D1
Ce: <cell at 0x7f574cb5c790: type object at 0x5637763f3840>
F2: <function _create_cell at 0x7f574cb6ab80>
# F2
T5: <class '__main__.SimpleNet'>
# T5
# Ce
D2: <dict object at 0x7f574c9f48c0>
# D2
# F1
F1: <function SimpleNet.forward at 0x7f574cb86ee0>
Co: <code object forward at 0x7f57af0f9450, file "simple_ul.py", line 60>
# Co
D1: <dict object at 0x7f57af1d7f00>
# D1
D2: <dict object at 0x7f574c9f4a00>
# D2
# F1
# D2
# T2
D2: <dict object at 0x7f574c9fa740>
T4: <class 'collections.OrderedDict'>
# T4
T4: <class 'torch.nn.modules.container.Sequential'>
# T4
D2: <dict object at 0x7f574c9fa4c0>
D2: <dict object at 0x7f574d8c7580>
T4: <class 'torch.nn.qat.modules.conv.Conv2d'>
# T4
D2: <dict object at 0x7f574ca8db40>
F2: <function _rebuild_parameter at 0x7f57ad6594c0>
# F2
F2: <function _rebuild_tensor_v2 at 0x7f57ad659280>
# F2
/home/user/anaconda3/envs/FaceDetector/lib/python3.8/site-packages/torch/storage.py:34: FutureWarning: pickle support for Storage will be removed in 1.5. Use `torch.save` instead
warnings.warn("pickle support for Storage will be removed in 1.5. Use `torch.save` instead", FutureWarning)
F2: <function _load_from_bytes at 0x7f5759b7b310>
# F2
T4: <class 'torch.quantization.fake_quantize.FakeQuantize'>
# T4
D2: <dict object at 0x7f5759343500>
T4: <class 'torch.quantization.observer.MovingAverageMinMaxObserver'>
# T4
D2: <dict object at 0x7f574d8b7ac0>
# D2
# D2
D2: <dict object at 0x7f574ca8de00>
T4: <class 'torch.quantization.observer.MovingAveragePerChannelMinMaxObserver'>
# T4
D2: <dict object at 0x7f574ca8de40>
# D2
# D2
T6: <class 'torch.quantization.qconfig.QConfig'>
F2: <function _create_namedtuple at 0x7f574cb6f0d0>
# F2
# T6
T4: <class 'torch.quantization.observer._with_args.<locals>._PartialWrapper'>
but I don't really understand the output and how to solve this problem... Am I doing QAT -> quantisation correctly? Is it correct to again set the qconfig and to prepare_qat before loading a QAT'd model? Any help would be greatly appreciated. |
st184134 | Solved by kekpirat in post #3
Hello, yea I realised that deepcopy did not work on my original model either, and found the issue - I had some unpicklable objects saved in the init of my model
Thank you! |
st184135 | kekpirat:
cannot pickle 'module' object
looks like it’s failing to copy.deepcopy(module). Just to confirm, does copy.deepcopy work on your model instance before you do QAT? |
st184136 | Hello, yea I realised that deepcopy did not work on my original model either, and found the issue - I had some unpicklable objects saved in the init of my model
Thank you! |
st184137 | could you share what’s in your original model causing deepcopy to fail? I am having the same problem and need some clues to fix it. Thanks for your help. |
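In case it helps narrow things down, a small diagnostic sketch (plain Python; the model variable name is whatever yours happens to be) that walks every submodule’s attributes and reports which ones cannot be pickled; whatever shows up here is what breaks copy.deepcopy and therefore convert():
import pickle

def find_unpicklable_attrs(model):
    # Try to pickle each attribute stored directly on each submodule;
    # skip _modules so children are reported on their own rather than through their parents.
    for mod_name, mod in model.named_modules():
        for attr_name, value in vars(mod).items():
            if attr_name == '_modules':
                continue
            try:
                pickle.dumps(value)
            except Exception as e:
                print(f'{mod_name}.{attr_name} ({type(value).__name__}): {e}')

find_unpicklable_attrs(net)   # `net` here being the QAT-prepared (or original) model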
st184138 | Hi everyone,
I’m trying to quantize a GAN model using static quantization. The model is quantized successfully, but when I use the quantized model for inference, it throws a RuntimeError.
Here is the stack trace
(screenshot of the stack trace, 838×564)
And here is my module that throws the error:
class AdaIN(nn.Module):
def __init__(self):
super().__init__()
self.quant_1 = torch.quantization.QuantStub()
self.quant_2 = torch.quantization.QuantStub()
self.quant_3 = torch.quantization.QuantStub()
self.add_gamma_functional = nn.quantized.FloatFunctional()
self.mul_gamma_with_norm = nn.quantized.FloatFunctional()
self.add_with_beta = nn.quantized.FloatFunctional()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, gamma, x_normalized, beta):
print('')
# print(type(gamma), type(beta))
gamma = self.quant_1(gamma)
x_normalized = self.quant_2(x_normalized)
beta = self.quant_3(beta)
print(gamma.shape, beta.shape, x_normalized.shape)
# return (1 + gamma) * self.norm(x) + beta
# return (1 + gamma) * x_normalized + beta
# Change to below to make module quantizable
result1 = self.add_gamma_functional.add_scalar(gamma, 1)
result2 = self.mul_gamma_with_norm.mul(result1, x_normalized)
result3 = self.add_with_beta.add(result2, beta)
result3 = self.dequant(result3)
return result3
Is it a bug, or do I have to manually change the code somehow so that the broadcast operator is used when running inference with the quantized model?
Here is the link to a Colab that reproduces the issue.
P.S.: The shapes of gamma, x_normalized and beta are:
(1, 512, 1, 1)
(1, 512, 16, 16)
(1, 512, 1, 1)
The module is doing only a simple expression:
result = (gamma + 1) * x_normalized + beta |
st184139 | Solved by Zafar in post #2
I believe broadcasting for addition is not supported. But there is a simple hack of repeating along some axis by using y.repeat(1, x.shape[1], x.shape[2])
from time import time
x = torch.ones(512, 16, 16)
y = torch.ones(512, 1, 1)
z = x + y
repeat_total_time = 0.0
add_total_time = 0.0
for _ in r… |
st184140 | I believe broadcasting for addition is not supported. But there is a simple hack of repeating along some axis by using y.repeat(1, x.shape[1], x.shape[2])
from time import time
x = torch.ones(512, 16, 16)
y = torch.ones(512, 1, 1)
z = x + y
repeat_total_time = 0.0
add_total_time = 0.0
for _ in range(1000):
qx = torch.quantize_per_tensor(x, 1.0, 0, torch.qint8)
qy = torch.quantize_per_tensor(y, 1.0, 0, torch.qint8)
start_time = time()
qy = qy.repeat(1, qx.shape[1], qx.shape[2]).contiguous()
repeat_total_time += time() - start_time
start_time = time()
qz = torch.ops.quantized.add(qx, qy, 1.0, 0)
add_total_time += time() - start_time
print(f'Time to repeat: {repeat_total_time * 1000:.2f} ms')
print(f'Time to add: {add_total_time * 1000:.2f} ms')
print(f'Overhead: {(repeat_total_time + add_total_time) / add_total_time:.2f}x')
# Time to repeat: 27.84 ms
# Time to add: 73.29 ms
# Overhead: 1.38x
In your code, you would need to make this change:
def forward(self, gamma, x_normalized, beta):
# ...
result1 = self.add_gamma_functional.add_scalar(gamma, 1)
result2 = self.mul_gamma_with_norm.mul(result1, x_normalized)
if result2.shape != beta.shape:
result2 = result2.repeat(1, beta.shape[1], beta.shape[2]).contiguous()
result3 = self.add_with_beta.add(result2, beta)
result3 = self.dequant(result3)
return result3
P.S. You might skip the contiguous – I think repeat result is contiguous, but I am not 100% sure |
st184141 | Hello! I am a beginner in quantizing PyTorch models, so please forgive me, as this is a noob question. I am trying to apply this static quantization example. I was able to run the example’s code successfully, but I am experiencing problems applying it to my model.
This is the model (CNN + LSTM interleaved) that I have declared:
class CRNN_Net(nn.Module) :
def __init__(self, input_dim, conv_out, kernel, stride, out_dim, hidden_dim, num_hidden):
super(CRNN_Net, self).__init__()
self.conv1d_1 = nn.Conv1d(input_dim, conv_out, kernel, stride, padding=int(np.floor(kernel/2)))
self.bn1d_1 = nn.BatchNorm1d(conv_out)
self.relu_1 = nn.ReLU()
self.conv1d_2 = nn.Conv1d(hidden_dim, conv_out, kernel, stride, padding=int(np.floor(kernel/2)))
self.bn1d_2 = nn.BatchNorm1d(conv_out)
self.relu_2 = nn.ReLU()
self.lstm_1 = nn.LSTM(conv_out, hidden_dim, num_hidden, batch_first=True)
self.lstm_2 = nn.LSTM(conv_out, hidden_dim, num_hidden, batch_first=True)
self.linear = nn.Linear(hidden_dim, out_dim)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = x.permute(0, 2, 1)
first = self.conv1d_1(x)
first = self.bn1d_1(first)
first = self.relu_1(first)
first_out, _ = self.lstm_1(first.permute(0,2,1))
second = self.conv1d_2(first_out.permute(0,2,1))
second = self.bn1d_2(second)
second = self.relu_2(second)
BN_feat, _ = self.lstm_2(second.permute(0,2,1))
output = self.linear(BN_feat)
output = self.dequant(output)
return output
And it works fine when I forward this data in regular fashion:
model_fp32 = CRNN_Net(input_dim=80, conv_out=512, kernel=5, stride=1, out_dim=5768, hidden_dim=256, num_hidden=1)
model_fp32.eval()
model_fp32(torch.rand([1, 300, 80]))
Now I am trying to quantize this model. Here is my code attempting to quantize it:
model_fp32 = CRNN_Net(input_dim=80, conv_out=512, kernel=5, stride=1, out_dim=5768, hidden_dim=256, num_hidden=1)
model_fp32.eval()
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# I'm unsure if this layer fusion is what I'm supposed to do.
model_fp32_fused = torch.quantization.fuse_modules(model_fp32, [['conv1d_1', 'bn1d_1', 'relu_1'],
['conv1d_2', 'bn1d_2', 'relu_2']])
model_fp32_prepared = torch.quantization.prepare(model_fp32_fused)
input_fp32 = torch.rand([1, 300, 80]) # An error is experienced here.
model_fp32_prepared(input_fp32)
model_int8 = torch.quantization.convert(model_fp32_prepared)
res = model_int8(input_fp32)
The error I experience is AttributeError: 'tuple' object has no attribute 'numel', which occurs in the forward function of my model when it’s “prepared” (i.e. model_fp32_prepared). This seems to occur when feeding into the LSTM. Why is this occurring?
Additional questions: Does my current setup for static quantization in my model’s forward function look correct? I’m unsure if I’m supposed to do an overall QuantStub() and DeQuantStub() or if I should do this for each layer. Is how I’m doing the layer fusion ideal?
Any insights would be a great help to me. Thank you for your time! |
st184142 | LSTM has multiple outputs, which prevents the correct observation. You might want to try the “Quantizable LSTM”. Take a look at this test for an example: pytorch/test_quantized_op.py at bb21aea37add0400eaa4ea8317656b7469b38a94 · pytorch/pytorch · GitHub Let me know if it’s confusing – I’ll make a simple writeup |
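If the quantizable LSTM route is too involved, an alternative sketch (not the approach linked above, and using the attribute names from the CRNN_Net in the question) is to keep the LSTMs in fp32 and only quantize the convolutional parts, crossing the int8/fp32 boundary explicitly:
# Exclude the LSTMs (and anything else you want left in fp32) before prepare():
model_fp32.lstm_1.qconfig = None
model_fp32.lstm_2.qconfig = None

# Then, in forward(), dequantize before each excluded module and re-quantize after it,
# using extra QuantStub/DeQuantStub instances created in __init__, e.g.:
#   first = self.dequant_1(first)                        # int8 -> fp32
#   first_out, _ = self.lstm_1(first.permute(0, 2, 1))   # runs as a normal float LSTM
#   second = self.quant_2(first_out.permute(0, 2, 1))    # fp32 -> int8 for the next conv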
st184143 | Hi,
I followed tutorials/quantization and tried to PTQ MobileNetV2 from torchvision.
I quantized it with the following script.
However, the accuracy is very poor.
What can I do to improve the accuracy?
original model
Top1: 64.1% Top5:85.8%
quantized model
Top1: 22.0% Top5:41.9%
tested by ImageNet-1K
model = torchvision.models.quantization.mobilenet_v2(pretrained=True)
def calibrate_model(model, loader, device=torch.device("cpu:0")):
model.to(device)
model.eval()
if len(loader) == 2:
print("data size is ", len(loader[0]))
inputs = loader[0].to(device)
labels = loader[1].to(device)
else:
for inputs, labels in tqdm(loader):
inputs = inputs.to(device)
labels = labels.to(device)
_ = model(inputs)
model.eval()
backend = "fbgemm"
torch.backends.quantized.engine = backend
model.fuse_model()
model.qconfig = torch.quantization.QConfig(
activation=torch.quantization.MinMaxObserver.with_args(reduce_range=True), #default_observer
weight=torch.quantization.PerChannelMinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric) #default_per_channel_weight_observer
)
torch.quantization.prepare(model, inplace=True)
calibrate_model(model=model, loader=valid_queue)
torch.quantization.convert(model, inplace=True)
Thank you. |
st184144 | You can try QAT or dynamic quantization. I think QAT might get your accuracy slightly higher. In addition to that, if you leave the qconfig as default, it might give you slightly better numerics. |
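Concretely, leaving the qconfig at the fbgemm default (which uses a histogram-based activation observer rather than plain min/max) would look like this in the script above:
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")   # HistogramObserver activations, per-channel weights
torch.quantization.prepare(model, inplace=True)
calibrate_model(model=model, loader=valid_queue)
torch.quantization.convert(model, inplace=True)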
st184145 | Thank you. After I merged conv and batchnorm, I solved this problem, but I have encountered this problem during quantization-aware training. Do you have any suggestions?
File “/home/g/anaconda3/lib/python3.7/site-packages/torch/quantization/observer.py”, line 165, in _calculate_qparams
zero_point = qmin - round(min_val / scale)
ValueError: cannot convert float NaN to integer
Is there a problem with my data?
Do I need to add exception handling?
I later found that NaN values existed in conv2d.weight.
And that happened during the training process. |
st184146 | Solved by ptrblck in post #2
If your weights got a NaN value, this might be due to a NaN input or a faulty weight update caused by e.g. a high learning rate.
Did you observe the loss during training?
If some weights are exploding, you would usually see a NaN loss. |
st184147 | If your weights got a NaN value, this might be due to a NaN input or a faulty weight update caused by e.g. a high learning rate.
Did you observe the loss during training?
If some weights are exploding, you would usually see a NaN loss. |
st184148 | I have a similar problem when the model uses quantization-aware training and the training loss is not NaN. How can I solve it?
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 540, in call
result = self.forward(*input, **kwargs)
File “model-code/resnet34/train_age_quan.py”, line 1130, in forward
x = self.ConvBNReLU1(x)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 540, in call
result = self.forward(*input, **kwargs)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/container.py”, line 100, in forward
input = module(input)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 540, in call
result = self.forward(*input, **kwargs)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/intrinsic/qat/modules/conv_fused.py”, line 243, in forward
return self.activation_post_process(F.relu(ConvBn2d._forward(self, input)))
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/intrinsic/qat/modules/conv_fused.py”, line 95, in _forward
conv = self._conv_forward(input, self.weight_fake_quant(scaled_weight))
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 540, in call
result = self.forward(*input, **kwargs)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/quantization/fake_quantize.py”, line 81, in forward
self.scale, self.zero_point = self.calculate_qparams()
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/quantization/fake_quantize.py”, line 76, in calculate_qparams
return self.activation_post_process.calculate_qparams()
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/quantization/observer.py”, line 481, in calculate_qparams
return self._calculate_per_channel_qparams(self.min_vals, self.max_vals)
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/quantization/observer.py”, line 150, in _calculate_per_channel_qparams
), “min {} should be less than max {}”.format(min_vals[i], max_vals[i])
AssertionError: min nan should be less than max nan |
st184149 | chihyu:
File “/mnt/storage1/doris/miniconda3/envs/torch-nightly-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 540, in call
can you check if the values in weights/activations contain NaN? |
st184150 | jiacheng1gujiaxin:
ValueError: cannot convert float NaN to integer
You can avoid this with a mask method. Note first that in Python NaN is defined as the number which is not equal to itself:
>>> float('nan') == float('nan')
False
It might be worth avoiding the use of np.NaN altogether. NaN literally means “not a number”, and it cannot be converted to an integer. In general, Python prefers raising an exception to returning NaN, so things like sqrt(-1) and log(0.0) will generally raise instead of returning NaN. However, you may get this value back from some other library. From pandas v0.24 onwards, you actually can keep NaNs alongside integers: pandas introduces nullable integer data types, which allow integers to coexist with NaNs. Also, even in the latest versions of pandas, if the column is of object type you would have to convert it into float first, something like:
df['column_name'].astype(np.float).astype("Int32")
NB: You have to go through numpy float first and then to nullable Int32, for some reason. |
st184151 | Hi!
I followed tutorials/quantization and tried to PTQ MobileNetV2 from torchvision.
However, when I tried to predict with the quantized model, I got the following error and could not run it. How can I solve this problem?
By the way, do I need to insert QuantStub() and DeQuantStub() in the forward when I do PTQ?
I’m confused because there are so many ways to do this.
What is the correct way to do PTQ in PyTorch 1.7.1?
Quantization — PyTorch 1.7.1 documentation
torch.quantization — PyTorch 1.7.1 documentation
Quantization Recipe — PyTorch Tutorials 1.7.1 documentation
Error
Traceback (most recent call last):
File "ptq_imagenet_pth.py", line 137, in <module>
res = model_static_quantized(x.clone().detach().to(device, dtype=torch.float))
...
...
RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CUDA' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
PTQ script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()
backend = "fbgemm"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
model_static_quantized = model_static_quantized.to(device)
#x is input tensor whose shape is (100, 3, 224, 224)
res = model_static_quantized(x.clone().detach().to(device, dtype=torch.float))
Environment
Ubuntu: 18.0
CUDA: 11.0
Python: 3.6.10
PyTorch: 1.7.1
torchvision: 0.8.2
Thank you! |
st184152 | Solved by crook52 in post #10
It was solved in this topic.
Thanks all!!! |
st184153 | Quantized inference is not supported on CUDA at the moment. You can move the model to CPU and it should work. |
st184154 | Thank you for your response!
When I used the CPU, I got the error below, which looks like the CUDA one.
RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode]. |
st184155 | Same type of error when trying this on a yolov3 implementation. What’s the main cause? |
st184156 | The example listed in Quantization — PyTorch 1.7.1 documentation shows how to use quant/dequant stubs to statically quantize the model using eager mode.
The error you are seeing is probably because you are missing a quant stub operator before the conv operator. |
st184157 | Thanks for your reply!
supriyar:
The error you are seeing is probably because you are missing a quant stub operator before the conv operator.
I understood that I have to insert quant/dequant stubs.
But I don’t know how to insert these into a model whose architecture is unknown, as with a pretrained model.
Do I have to get the model’s architecture and write a model class with quant/dequant stubs added?
If I use torch.quantization.QuantWrapper(module), will this problem be solved?
Thank you! |
st184158 | Do I have to get mdoel’s architecture and I write model class with quant/dequan stub added??
If you wish to skip quantizing certain layers then yes, this is the recommended way with eager model quantization
If I use torch.quantization.QuantWrapper(module ) , will be this problem solved?
It will add a quant/dequant around the entire model and not for individual modules. |
st184159 | Thank you very much for your support, @supriyar .
I want to quantize entire model, so I used QuantWrapper().
However, I couldn’t quantize it and got the error below.
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
Then I tried to insert quant/dequant stubs into the model myself.
class QuantizedMobileNetV2(nn.Module):
def __init__(self, model_fp32):
super(QuantizedMobileNetV2, self).__init__()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
self.model_fp32 = model_fp32
def forward(self, x):
x = self.quant(x)
x = self.model_fp32(x)
x = self.dequant(x)
return x
But I got the same error as when I used QuantWrapper(). |
st184160 | I couldn’t solve this problem, so I moved it here. Sorry about that.
I followed tutorials/quantization and tried to PTQ MobileNetV2 from torchvision.
However, when I tried to predict with the quantized model, I got the following error and could not run it.
How can I solve this problem?
Error
Traceback (most recent call last):
File "ptq_imagenet_pth.py", line 184, in <module>
res = model_static_quantized(x.clone().detach().to(device, dtype=torch.float))
....
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
CPU: registered at /pytorch/build/aten/src/ATen/CPUType.cpp:2127 [kernel]
CUDA: registered at /pytorch/build/aten/src/ATen/CUDAType.cpp:2983 [kernel]
MkldnnCPU: registered at /pytorch/build/aten/src/ATen/MkldnnCPUType.cpp:144 [kernel]
SparseCPU: registered at /pytorch/build/aten/src/ATen/SparseCPUType.cpp:239 [kernel]
SparseCUDA: registered at /pytorch/build/aten/src/ATen/SparseCUDAType.cpp:320 [kernel]
Meta: registered at /pytorch/aten/src/ATen/native/BinaryOps.cpp:1049 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: fallthrough registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
AutogradOther: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradCPU: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradCUDA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradXLA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse1: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse2: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse3: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
Tracer: registered at /pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:9654 [kernel]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:515 [kernel]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Script
class QuantizedMobileNetV2(nn.Module):
def __init__(self, model_fp32):
super(QuantizedMobileNetV2, self).__init__()
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()
self.model_fp32 = model_fp32
def forward(self, x):
x = self.quant(x)
x = self.model_fp32(x)
x = self.dequant(x)
return x
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()
model = QuantizedMobileNetV2(model_fp32=model)
backend = "fbgemm"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
model_static_quantized = model_static_quantized.to(device)
#x is input tensor whose shape is (100, 3, 224, 224)
res = model_static_quantized(x.clone().detach().to(device, dtype=torch.float))
I got the same error when I used QuantWrapper().
Environment
Ubuntu: 18.0
CUDA: 11.0
Python: 3.6.10
PyTorch: 1.7.1
torchvision: 0.8.2 |
st184161 | Solved by Vasiliy_Kuznetsov in post #2
Hi @crook52 , with Eager mode quantization the user needs to place quants and dequants at every place in the model where tensors need to convert from fp32 to int8, and vice versa. If it’s not working for you if you wrap the model with quant/dequant, that likely means that there are places inside yo… |
st184162 | Hi @crook52, with Eager mode quantization the user needs to place quants and dequants at every place in the model where tensors need to convert from fp32 to int8, and vice versa. If it's not working when you wrap the model with quant/dequant, that likely means that there are places inside your model where the same thing needs to be done. This is often a pretty tedious process.
For MobileNetV2, we have a quantizable model with all the quants/dequants/fusions done here (vision/mobilenetv2.py at master · pytorch/vision · GitHub 20), so you are welcome to use that if that works for your use case. |
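If the ready-made model fits your use case, loading it is short; a rough sketch, assuming a torchvision build that ships the pre-quantized MobileNetV2 weights:

import torch
import torchvision

# quant/dequant stubs, FloatFunctional adds, and fusions are already baked in
qmodel = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)
qmodel.eval()

with torch.no_grad():
    out = qmodel(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])

Depending on which backend the pretrained weights were quantized for, the corresponding quantized engine (e.g. qnnpack) has to be available in your build.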
st184163 | Hi @Vasiliy_Kuznetsov, sorry for my late reply and thank you for your answer!
I understood why I couldn't convert MobileNetV2.
I was able to convert the MobileNetV2 you referred me to.
Also, based on the implementation of the quantizable model, I modified the original MobileNetV2 as follows and was able to convert it!!!
def forward(self, x):
if self.use_res_connect:
# return x + self.conv(x)
return self.skip_add.add(x, self.conv(x))
else:
return self.conv(x)
Unfortunately, the accuracy of both methods is very bad.
However, thanks to you, I was able to quantize it.
Thank you very much!!! |
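For readers landing here later, a self-contained sketch of the same skip_add pattern. The block below is a made-up minimal example, not the actual MobileNetV2 code; only the FloatFunctional idea is taken from the post above:

import torch
import torch.nn as nn

class ResidualAdd(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # stateful add that gets its own output scale/zero_point during calibration
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        # replaces the unquantizable `return x + self.conv(x)`
        return self.skip_add.add(x, self.conv(x))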
st184164 | The official PyTorch documentation mentions that "the weights are quantized ahead of time but the activations are dynamically quantized during inference". It also offers code for the simplest implementation.
However, when I tried to convert a floating-point model to an int8 model, the weights were quantized ahead of time, while the activations of the intermediate layers (e.g. the Linear layers) seem to stay in floating point.
Is there a discrepancy, or am I just misunderstanding the documentation?
For example, the quantized fc layer seems to do this at inference time:
input_fp32 × FC_params_int8 → output_fp32. I can't see where it quantizes the activations dynamically. |
st184165 | Yes, activations are quantized dynamically: i.e. for every batch, the activations are quantized prior to the linear operation. This is done by calculating the dynamic range of the activations (min and max) and then quantizing the activations to 8 bits. This happens in C++ as part of the operator implementation itself; you can see the details at:
github.com
pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/qlinear_dynamic.cpp#L64 2
    /*m=*/input_ptr, /*min=*/&x_min, /*max=*/&x_max, /*len=*/input.numel());
// Input tensor is quantized as 8-bit unsigned values
static constexpr int precision = 8;
static constexpr bool is_signed = false;
// Calculate scale and zero point for quantization of input tensor
auto q_params = quant_utils::ChooseQuantizationParams(
    /*min=*/x_min,
    /*max=*/x_max,
    /*qmin=*/is_signed ? -(1 << (precision - 1)) : 0,
    /*qmax=*/is_signed ? ((1 << (precision - 1)) - 1) : (1 << precision) - 1,
    /*preserve_sparsity=*/false,
    /*force_scale_power_of_two=*/false,
    /*reduce_range=*/reduce_range);
q_params.precision = precision; |
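From the Python side the same behaviour can be observed with a toy model (the model below is just a placeholder): the converted linears hold int8 weights, take fp32 inputs, quantize the activations internally on every call, and hand back fp32 outputs.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(4, 128)     # fp32 activations go in
out = qmodel(x)             # per-batch activation quantization happens inside the quantized op
print(out.dtype)            # torch.float32 -- the output comes back as fp32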
st184166 | Hi,
I am trying to do QAT for SRCNN following this tutorial 1
My code does not throw any error but when I train, the loss is always constant and there are lots of 0s in the output. Here is the code of my model:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.name = "QSRCNN"
self.quant = torch.quantization.QuantStub()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=9, padding=9 // 2)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=32, kernel_size=5, padding=5 // 2)
self.conv3 = nn.Conv2d(in_channels=32, out_channels=3, kernel_size=5, padding=5 // 2)
self.relu1 = nn.ReLU(inplace=True)
self.relu2 = nn.ReLU(inplace=True)
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
# Convert to NHWC format from NCHW
x = x.contiguous(memory_format=torch.channels_last)
x = self.quant(x)
x = self.conv1(x)
x = self.relu1(x)
x = self.conv2(x)
x = self.relu2(x)
x = self.conv3(x)
x = self.dequant(x)
# Convert back to NCHW format
x = x.contiguous(memory_format=torch.contiguous_format)
return x
def q_config(self, model):
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
model = torch.quantization.fuse_modules(model, [['conv1', 'relu1'], ["conv2", "relu2"]])
model = torch.quantization.prepare_qat(model, inplace=False)
return model
And the training loop:
loss_fn = nn.L1Loss()
# Optimizer
optimizer = optim.Adam(model.parameters(), lr=10e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10, verbose=True)
# Detect the device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Init the model and do the quantization
model.to(device)
model = model.q_config(model)
# Training loop
for epoch in range(MAX_EPOCH):
epoch_start_time = time.time()
running_loss = 0.0
for i, sample in enumerate(dataloader, 0):
lr, hr = sample["LR"], sample["HR"]
lr = lr.to(device)
hr = hr.to(device)
# Forward Pass
sr = model(lr)
# Loss Calculation
optimizer.zero_grad()
loss = loss_fn(sr, hr).to(device)
running_loss += sr.shape[0] * loss.item()
# Backward pass
loss.backward()
optimizer.step()
# Epoch loss
epoch_loss = running_loss / len(dataset)
scheduler.step(epoch_loss)
epoch_end_time = time.time()
print("Epoch [{} / {}] || Loss:{:.4f} || Epoch Time:{:.4f}".format(
epoch + 1, MAX_EPOCH, epoch_loss, epoch_end_time - epoch_start_time))
When I comment out the line model = model.q_config(model) everything works fine and the model is trained properly. Any idea about what is wrong ?
Thanks |
st184167 | Solved by ekremcet in post #2
Okay found the mistake, I defined the optimizer before the quantized model. Moving the optimizer line after the model declaration fixed the issue |
st184168 | Okay found the mistake, I defined the optimizer before the quantized model. Moving the optimizer line after the model declaration fixed the issue |
st184169 | Yes — since we change the model, it is important to create the optimizer after creating the quantized model |
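To make the ordering explicit, a condensed sketch of the fix, reusing the names from the post above (Net, optim, and the fuse list are assumed to come from that code):

model = Net()
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
model = torch.quantization.fuse_modules(model, [['conv1', 'relu1'], ['conv2', 'relu2']])
model = torch.quantization.prepare_qat(model)

# construct the optimizer only after prepare_qat, so it tracks the prepared model's parameters
optimizer = optim.Adam(model.parameters(), lr=10e-4)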
st184170 | Hi!
Let me have the following module:
m = nn.quantized.dynamic.Linear(20, 30, dtype=torch.qint8)
m = torch.jit.script(m)
How can I get the weights of the m module back? |
st184171 | Solved by raghuramank100 in post #2
Use the _weight_bias() method to get the weights back:
(wt,bias) = m._packed_params._weight_bias() |
st184172 | marka17:
m = nn.quantized.dynamic.Linear(20, 30, dtype=torch.qint8)
m = torch.jit.script(m)
Use the _weight_bias() method to get the weights back:
(wt,bias) = m._packed_params._weight_bias()
github.com
pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L32 6
def set_weight_bias(self, weight: torch.Tensor, bias: Optional[torch.Tensor]) -> None:
    if self.dtype == torch.qint8:
        self._packed_params = torch.ops.quantized.linear_prepack(weight, bias)
    elif self.dtype == torch.float16:
        self._packed_params = torch.ops.quantized.linear_prepack_fp16(weight, bias)
    else:
        raise RuntimeError('Unsupported dtype on dynamic quantized linear!')

@torch.jit.export
def _weight_bias(self):
    if self.dtype == torch.qint8:
        return torch.ops.quantized.linear_unpack(self._packed_params)
    elif self.dtype == torch.float16:
        return torch.ops.quantized.linear_unpack_fp16(self._packed_params)
    else:
        raise RuntimeError('Unsupported dtype on dynamic quantized linear!')

def forward(self, x):
    return x |
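Putting the answer together, a short sketch. Note that _packed_params and _weight_bias are internal/private APIs and may change between releases:

import torch
import torch.nn as nn

m = nn.quantized.dynamic.Linear(20, 30, dtype=torch.qint8)
m = torch.jit.script(m)

wt, bias = m._packed_params._weight_bias()
print(wt.shape, wt.dtype)        # torch.Size([30, 20]) torch.qint8
print(wt.dequantize().dtype)     # torch.float32 copy of the weights
print(bias.shape)                # torch.Size([30])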
st184173 | How were the weights of these networks created? By post-training static quantization of the weights of the corresponding networks in torchvision.models? By quantization-aware training of the weights of the corresponding networks in torchvision.models? Thanks in advance! |
st184174 | I've been trying to statically quantize the MobileNetV2 model written by the PyTorch team.
Unfortunately, the model outputs all zeros and I’m not sure I understand where the problem is coming from …
Any help would be appreciated.
Code
The slightly modified mobilenetV2 code from PyTorch. Essentially, what I’ve changed is the forward method of the InvertedResidual block, and have used FloatFunctionals for the addition.
from torch import nn
from torch import Tensor
from typing import Callable, Any, Optional, List
__all__ = ['MobileNetV2', 'mobilenet_v2']
model_urls = {
'mobilenet_v2': 'https://download.pytorch.org/models/mobilenet_v2-b0353104.pth',
}
def _make_divisible(v: float, divisor: int, min_value: Optional[int] = None) -> int:
"""
This function is taken from the original tf repo.
It ensures that all layers have a channel number that is divisible by 8
It can be seen here:
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
"""
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v
class ConvBNActivation(nn.Sequential):
def __init__(
self,
in_planes: int,
out_planes: int,
kernel_size: int = 3,
stride: int = 1,
groups: int = 1,
norm_layer: Optional[Callable[..., nn.Module]] = None,
activation_layer: Optional[Callable[..., nn.Module]] = None,
dilation: int = 1,
) -> None:
padding = (kernel_size - 1) // 2 * dilation
if norm_layer is None:
norm_layer = nn.BatchNorm2d
if activation_layer is None:
activation_layer = nn.ReLU6
super(ConvBNReLU, self).__init__(
nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, dilation=dilation, groups=groups,
bias=False),
norm_layer(out_planes),
activation_layer(inplace=True)
)
self.out_channels = out_planes
# necessary for backwards compatibility
ConvBNReLU = ConvBNActivation
class InvertedResidual(nn.Module):
def __init__(
self,
inp: int,
oup: int,
stride: int,
expand_ratio: int,
norm_layer: Optional[Callable[..., nn.Module]] = None
) -> None:
super(InvertedResidual, self).__init__()
self.stride = stride
assert stride in [1, 2]
if norm_layer is None:
norm_layer = nn.BatchNorm2d
hidden_dim = int(round(inp * expand_ratio))
self.use_res_connect = self.stride == 1 and inp == oup
layers: List[nn.Module] = []
if expand_ratio != 1:
# pw
layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer))
layers.extend([
# dw
ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim, norm_layer=norm_layer),
# pw-linear
nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
norm_layer(oup),
])
self.conv = nn.Sequential(*layers)
self.out_channels = oup
self._is_cn = stride > 1
self.floatFunctional = nn.quantized.FloatFunctional()
def forward(self, x: Tensor) -> Tensor:
if self.use_res_connect:
return self.floatFunctional.add(x, self.conv(x))
else:
return self.conv(x)
class MobileNetV2(nn.Module):
def __init__(
self,
num_classes: int = 1000,
width_mult: float = 1.0,
inverted_residual_setting: Optional[List[List[int]]] = None,
round_nearest: int = 8,
block: Optional[Callable[..., nn.Module]] = None,
norm_layer: Optional[Callable[..., nn.Module]] = None
) -> None:
"""
MobileNet V2 main class
Args:
num_classes (int): Number of classes
width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount
inverted_residual_setting: Network structure
round_nearest (int): Round the number of channels in each layer to be a multiple of this number
Set to 1 to turn off rounding
block: Module specifying inverted residual building block for mobilenet
norm_layer: Module specifying the normalization layer to use
"""
super(MobileNetV2, self).__init__()
if block is None:
block = InvertedResidual
if norm_layer is None:
norm_layer = nn.BatchNorm2d
input_channel = 32
last_channel = 1280
if inverted_residual_setting is None:
inverted_residual_setting = [
# t, c, n, s
[1, 16, 1, 1],
[6, 24, 2, 2],
[6, 32, 3, 2],
[6, 64, 4, 2],
[6, 96, 3, 1],
[6, 160, 3, 2],
[6, 320, 1, 1],
]
# only check the first element, assuming user knows t,c,n,s are required
if len(inverted_residual_setting) == 0 or len(inverted_residual_setting[0]) != 4:
raise ValueError("inverted_residual_setting should be non-empty "
"or a 4-element list, got {}".format(inverted_residual_setting))
# building first layer
input_channel = _make_divisible(input_channel * width_mult, round_nearest)
self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
features: List[nn.Module] = [ConvBNReLU(3, input_channel, stride=2, norm_layer=norm_layer)]
# building inverted residual blocks
for t, c, n, s in inverted_residual_setting:
output_channel = _make_divisible(c * width_mult, round_nearest)
for i in range(n):
stride = s if i == 0 else 1
features.append(block(input_channel, output_channel, stride, expand_ratio=t, norm_layer=norm_layer))
input_channel = output_channel
# building last several layers
features.append(ConvBNReLU(input_channel, self.last_channel, kernel_size=1, norm_layer=norm_layer))
# make it nn.Sequential
self.features = nn.Sequential(*features)
# building classifier
self.classifier = nn.Sequential(
nn.Dropout(0.2),
nn.Linear(self.last_channel, num_classes),
)
# weight initialization
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out')
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.zeros_(m.bias)
def _forward_impl(self, x: Tensor) -> Tensor:
# This exists since TorchScript doesn't support inheritance, so the superclass method
# (this one) needs to have a name other than `forward` that can be accessed in a subclass
x = self.features(x)
# Cannot use "squeeze" as batch-size can be 1 => must use reshape with x.shape[0]
x = nn.functional.adaptive_avg_pool2d(x, (1, 1)).reshape(x.shape[0], -1)
x = self.classifier(x)
return x
def forward(self, x: Tensor) -> Tensor:
return self._forward_impl(x)
def mobilenet_v2() -> MobileNetV2:
model = MobileNetV2()
return model
The inputs that I give to the model are OK and they are in their correct format as I have used the original code from the PyTorch guide. The input is shaped like: [numberOfImages, 3, 224, 224].
The quantization and inference code is:
import mobilenet
import torch
class NewModel(torch.nn.Module):
def __init__(self, model):
super(NewModel, self).__init__()
self.quant = torch.quantization.QuantStub()
self.model = model
self.dequant = torch.quantization.DeQuantStub()
self.softmax = torch.nn.Softmax()
def forward(self, x):
# return self.softmax(self.dequant(self.model(self.quant(x))))
return self.model(self.quant(x))
mobileNetModel = mobilenet.mobilenet_v2()
# mobileNetModel.eval()
# mobileNetModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
newModel = NewModel(mobileNetModel)
newModel.eval()
newModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# mQuan = torch.quantization.fuse_modules(model, [])
mQuan = torch.quantization.prepare(newModel)
mQuan(batchedInputs[1:,...])
mQuan = torch.quantization.convert(mQuan)
res = mQuan(batchedInputs[0:1, ...])
print(res)
for logit in res[0]:
if logit!=0:
print("yey")
As is evident from the above block, I don't fuse any parts since there are none to fuse (or rather, I don't know how to fuse the given blocks…).
The output that I get is:
/usr/local/lib/python3.7/dist-packages/torch/quantization/observer.py:121: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
reduce_range will be deprecated in a future release of PyTorch."
/usr/local/lib/python3.7/dist-packages/torch/quantization/observer.py:990: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point
Returning default scale and zero point "
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
size=(1, 1000), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=1.1920928955078125e-07,
zero_point=0)
What have I done wrong?
I don’t think it’s important, but nonetheless, I’ve run the code both on google colab cpu and on my own system. My system has the following related specs:
python 3.8.6
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
thop 0.0.31.post2005241907
torch 1.7.0+cpu
torchaudio 0.7.0
torchvision 0.8.1+cpu |
st184175 | Solved by taha_entesari in post #7
I just wanted to let everybody know what the problem was. Hoping that nobody makes this foolish mistake
The issue was that I didn’t train my network. I mean, now that I load a pre-trained model and then quantize, the model output isn’t all zeros (I haven’t checked accuracy difference).
I hadn’t i… |
st184176 | /usr/local/lib/python3.7/dist-packages/torch/quantization/observer.py:990: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point
Returning default scale and zero point "
This error message means that there are observers in the network which have not been calibrated. If observers are not calibrated, then they will use scale=1.0 and zero_point=0, which is probably not going to be useful. Could you verify that your calibration is working correctly, and all observers have scale and zero_point collected based on your calibration data? |
st184177 | taha_entesari:
mQuan(batchedInputs[1:,...])
it looks like you calibrate here. This may be an area to debug: you could try calibrating with more data, and then looking at the observers and verifying that all of them have collected statistics. If you print out the model, the observer statistics will be included. |
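A typical calibration pass looks roughly like the sketch below; the function arguments stand in for the prepared (observer-instrumented) model and a representative dataset:

import torch

def calibrate(prepared_model, data_loader, num_batches=100):
    # run representative inputs through the prepared model so the observers record ranges
    prepared_model.eval()
    with torch.no_grad():
        for i, images in enumerate(data_loader):
            prepared_model(images)
            if i + 1 >= num_batches:
                break
    # sanity-check that every observer actually collected statistics before convert()
    for name, module in prepared_model.named_modules():
        if hasattr(module, 'activation_post_process'):
            print(name, module.activation_post_process)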
st184178 | I actually first thought that this might be the problem, but when I first searched for this specific error, I found this issue. With respect to the mentioned issue, the warning is probably just due to the fact that the InvertedResidual block follows one of two computation paths.
I mean, when I print the model, it seems that the layers are properly quantized (not sure about this though since this is essentially my first quantized model and I don’t know what to expect). The print output is:
NewModel(
(quant): Quantize(scale=tensor([0.0374]), zero_point=tensor([57]), dtype=torch.quint8)
(model): MobileNetV2(
(features): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=0.03452397137880325, zero_point=58, padding=(1, 1), bias=False)
(1): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.008563793264329433, zero_point=52, padding=(1, 1), groups=32, bias=False)
(1): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.008242965675890446, zero_point=79, bias=False)
(2): QuantizedBatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.004633823875337839, zero_point=58, bias=False)
(1): QuantizedBatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), scale=0.0007947214762680233, zero_point=75, padding=(1, 1), groups=96, bias=False)
(1): QuantizedBatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.0007379651651717722, zero_point=64, bias=False)
(3): QuantizedBatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(3): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=0.0004368616209831089, zero_point=63, bias=False)
(1): QuantizedBatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), scale=6.097953882999718e-05, zero_point=67, padding=(1, 1), groups=144, bias=False)
(1): QuantizedBatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), scale=8.161034202203155e-05, zero_point=62, bias=False)
(3): QuantizedBatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=0.0007560973172076046, zero_point=64
(activation_post_process): Identity()
)
)
(4): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=0.00046657773782499135, zero_point=65, bias=False)
(1): QuantizedBatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), scale=6.993811985012144e-05, zero_point=67, padding=(1, 1), groups=144, bias=False)
(1): QuantizedBatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), scale=6.939686136320233e-05, zero_point=69, bias=False)
(3): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(5): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=5.0384467613184825e-05, zero_point=58, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=4.988682576367864e-06, zero_point=65, padding=(1, 1), groups=192, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=4.922353127767565e-06, zero_point=70, bias=False)
(3): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=7.048285624478012e-05, zero_point=68
(activation_post_process): Identity()
)
)
(6): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=3.9113976527005434e-05, zero_point=64, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=4.21840013586916e-06, zero_point=59, padding=(1, 1), groups=192, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=6.124475930846529e-06, zero_point=69, bias=False)
(3): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=6.926279456820339e-05, zero_point=68
(activation_post_process): Identity()
)
)
(7): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=4.882919165538624e-05, zero_point=58, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), scale=6.18789727013791e-06, zero_point=65, padding=(1, 1), groups=192, bias=False)
(1): QuantizedBatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), scale=4.26427686761599e-06, zero_point=69, bias=False)
(3): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(8): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=2.777898998829187e-06, zero_point=66, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=2.495579565220396e-07, zero_point=74, padding=(1, 1), groups=384, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=2.758661423740705e-07, zero_point=70, bias=False)
(3): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=4.288005584385246e-06, zero_point=69
(activation_post_process): Identity()
)
)
(9): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=3.1199779186863452e-06, zero_point=65, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=2.582654303751042e-07, zero_point=80, padding=(1, 1), groups=384, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=2.4579065893703955e-07, zero_point=68, bias=False)
(3): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=4.267273652658332e-06, zero_point=69
(activation_post_process): Identity()
)
)
(10): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=2.7751552806876134e-06, zero_point=67, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=2.0391455279877846e-07, zero_point=71, padding=(1, 1), groups=384, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=2.1148737516796245e-07, zero_point=59, bias=False)
(3): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=4.199701834295411e-06, zero_point=70
(activation_post_process): Identity()
)
)
(11): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=2.6447246455063578e-06, zero_point=64, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=2.6378887696409947e-07, zero_point=79, padding=(1, 1), groups=384, bias=False)
(1): QuantizedBatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), scale=2.007121366887077e-07, zero_point=70, bias=False)
(3): QuantizedBatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(12): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=57, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=5, padding=(1, 1), groups=576, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=6, bias=False)
(3): QuantizedBatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.9502708425989113e-07, zero_point=72
(activation_post_process): Identity()
)
)
(13): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.2644522939808667e-07, zero_point=64, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=6, padding=(1, 1), groups=576, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=6, bias=False)
(3): QuantizedBatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.9090400371624128e-07, zero_point=72
(activation_post_process): Identity()
)
)
(14): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.250595147439526e-07, zero_point=60, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), scale=1.1920928955078125e-07, zero_point=6, padding=(1, 1), groups=576, bias=False)
(1): QuantizedBatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=5, bias=False)
(3): QuantizedBatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(15): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=3, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, padding=(1, 1), groups=960, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, bias=False)
(3): QuantizedBatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.1920928955078125e-07, zero_point=5
(activation_post_process): Identity()
)
)
(16): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=3, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, padding=(1, 1), groups=960, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, bias=False)
(3): QuantizedBatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.1920928955078125e-07, zero_point=5
(activation_post_process): Identity()
)
)
(17): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=3, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, padding=(1, 1), groups=960, bias=False)
(1): QuantizedBatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(2): QuantizedConv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, bias=False)
(3): QuantizedBatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(floatFunctional): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(18): ConvBNActivation(
(0): QuantizedConv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), scale=1.1920928955078125e-07, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=False)
(1): QuantizedLinear(in_features=1280, out_features=1000, scale=1.1920928955078125e-07, zero_point=0, qscheme=torch.per_channel_affine)
)
)
(dequant): DeQuantize()
(softmax): Softmax(dim=None)
) |
st184179 | It could be that your model is sensitive to quantization. There is a prototype tool to help narrow this down to a particular layer: PyTorch Numeric Suite Tutorial — PyTorch Tutorials 1.7.1 documentation 3 . One thing to try could be to run an example input through this tool and see if there is a particular problematic layer where things diverge. |
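For reference, a rough sketch of the weight-comparison part of that tool. The SNR helper below is hand-rolled rather than a library function, the module path is the prototype (underscore-prefixed) API and may change, and float_model / quantized_model are placeholders:

import torch
import torch.quantization._numeric_suite as ns

def snr_db(x, y):
    # signal-to-quantization-noise ratio in dB between the float and dequantized tensors
    noise = (x - y).pow(2).mean()
    return 10 * torch.log10(x.pow(2).mean() / noise)

wt_compare = ns.compare_weights(float_model.state_dict(), quantized_model.state_dict())
for key in wt_compare:
    f = wt_compare[key]['float']
    q = wt_compare[key]['quantized'].dequantize()
    print(key, snr_db(f, q).item())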
st184180 | Thanks
I tried the package and it did help a bit. I did have trouble getting the same kinds of outputs as in the guide: the "Compare the weights of float and quantized models" part worked as expected, but the other two parts of the tutorial didn't, and the output dictionaries didn't contain the quantized activations.
The problem that I found is that the output layer of the original float model outputs values in the range 10^(-9) but the scale of the output tensor from the quantized model is 10^(-7) and thus if I understand correctly, 10^(-9) would be considered zero in this case.
How can I change this? I checked all the inputs that I fed to the model (for the observer) and all their outputs maxed out around 10^(-9) and it seems odd that the quantized model has chosen 10^(-7) as the scale! |
st184181 | I just wanted to let everybody know what the problem was. Hoping that nobody makes this foolish mistake
The issue was that I didn't train my network. I mean, now that I load a pre-trained model and then quantize, the model output isn't all zeros (I haven't checked the accuracy difference yet).
I hadn't initially used a pre-trained model because, since I had to change the underlying network a bit, I didn't bother to modify the loading of the state_dict and was just initializing the model randomly. I originally thought that this wouldn't cause a problem because I thought PyTorch could quantize the activations well enough. But it seems that since the output is so random and unstructured, the observer for the activations doesn't work that well (the weights were correctly quantized, as I had checked that the SNR of the layers was good).
st184182 | RuntimeError: Could not run 'aten::mm' with arguments from the 'QuantizedCPU' backend. 'aten::mm' is only available for these backends: [CPU, CUDA, SparseCPU, SparseCUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
h0 = torch.matmul(input, self.W[0])
I was trying to apply quantization to a graph convolutional neural network, and during evaluation of the network this operation was not permitted. What options do I have to run torch.matmul as a quantizable operation? |
st184183 | hi @naveen_raj , we currently do not have a quantized kernel for aten::mm. There are a couple of options here:
you could leave this op in fp32 (this is going to be the fastest in terms of dev time)
if someone is up for helping write the quantized kernel for aten::mm, we would accept a PR |
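A sketch of the first option — keeping the matmul in fp32 by surrounding it with stubs. The layer below is hypothetical and only illustrates the pattern:

import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Parameter(torch.randn(in_features, out_features))
        self.dequant = torch.quantization.DeQuantStub()   # drop back to fp32 before the matmul
        self.quant = torch.quantization.QuantStub()       # re-quantize afterwards

    def forward(self, x):
        x = self.dequant(x)
        h = torch.matmul(x, self.W)   # runs in fp32, so the missing quantized aten::mm is never hit
        return self.quant(h)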
st184184 | Is there a list of currently supported operations for quantized tensors?
I run into issues quantizing a network requiring tensor additions:
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::add.Tensor' is only available for these backends: [SparseCPUTensorId, CPUTensorId, VariableTensorId, MkldnnCPUTensorId].
Others have reported running into issues with div and cat operations - I presume these are also not supported atm. |
st184185 | Solved by Zafar in post #3
Add is supported, but not as a at::add.Tensor. The reason is that addition (or any arithmetic) requires output scale/zero_point. Also, often times quantized ops need to be stateful. Hence, there are stateful FloatFunctional and QFunctional |
st184186 | In case this is helpful to anyone, there are:
https://pytorch.org/docs/stable/quantization.html#torch.nn.quantized.FloatFunctional 214
and
https://pytorch.org/docs/stable/quantization.html#torch.nn.quantized.QFunctional 162
that support add and other operations. |
st184187 | Add is supported, but not as at::add.Tensor. The reason is that addition (or any arithmetic) requires an output scale/zero_point. Also, oftentimes quantized ops need to be stateful. Hence, there are the stateful FloatFunctional and QFunctional |
st184188 | Hey there
I don't see how I can apply this solution.
Do I have to replace my out += residual operation in my model with QFunctional?
Regards LMW |
st184189 | No, you need to follow the static quantization flow and replace all occurrences of the addition with a FloatFunctional layer. For example, if you have a model that looks like this:
class Foo(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
y = x + x
z = y + 2 * x
return z
it is not “quantizable”. The reason is that the additions and multiplication are in the forward, and don’t store the state of the scale and zero_point needed for the quantization.
Your quantizable model will look something like this:
class Foo(nn.Module):
def __init__(self):
super().__init__()
self.first_functional = nn.quantized.FloatFunctional()
self.second_functional = nn.quantized.FloatFunctional()
self.third_functional = nn.quantized.FloatFunctional()
def forward(self, x):
y = self.first_functional.add(x, x)
x2 = self.second_functional.mul_scalar(x, 2)
z = self.third_functional.add(y, x2)
return z
Notice that we don't reuse the float functional modules, and use a different one for every arithmetic operator. This is done so that the quantization parameters for each operation are computed independently.
Once you rewrite your models, you can run the static quantization (prepare, calibrate, convert steps): Quantization — PyTorch 1.7.1 documentation 15 |
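The remaining steps on the rewritten module would look roughly like this (the calibration data is random here, purely for illustration):

import torch

model = Foo()          # the rewritten version with the FloatFunctional modules
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

prepared = torch.quantization.prepare(model)
prepared(torch.randn(8, 16))                       # calibration: observers record output ranges
quantized = torch.quantization.convert(prepared)   # FloatFunctional -> QFunctional

# without a QuantStub in the model, the input itself must already be a quantized tensor
x = torch.quantize_per_tensor(torch.randn(8, 16), scale=0.05, zero_point=64, dtype=torch.quint8)
out = quantized(x)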
st184190 | Hello everyone,
I am quantizing RetinaNet using the standard PyTorch methods, namely PTQ and QAT, and got great results. The model size has been reduced from 139MB to 39MB, and inference time on CPU from 90min to 20min on a large validation dataset, with an accuracy loss smaller than 1%. Although the results are great, I wanted to check the weights of the quantized network and noticed the following: if I use
print(model.head.cls_subnet[0].conv.weight().int_repr())
I get a really quantized integer tensor like
[[-33, 6, -56],
[-36, 47, 24],
[ 12, 1, 25]],
[[-22, 18, 22],
[-45, 43, -55],
[ 4, 1, -58]],
[[ 19, 27, 10],
[-73, 9, -53],
[ 2, -38, -24]]]], dtype=torch.int8)
But if I access a weight without int_repr()
print(model.head.cls_subnet[0].conv.weight())
I get a tensor like
[[-0.0096, 0.0017, -0.0163],
[-0.0105, 0.0137, 0.0070],
[ 0.0035, 0.0003, 0.0073]],
[[-0.0064, 0.0052, 0.0064],
[-0.0131, 0.0125, -0.0160],
[ 0.0012, 0.0003, -0.0169]],
[[ 0.0055, 0.0079, 0.0029],
[-0.0212, 0.0026, -0.0154],
[ 0.0006, -0.0111, -0.0070]]]], size=(256, 256, 3, 3),
dtype=torch.qint8, quantization_scheme=torch.per_channel_affine,
scale=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003,
So the question is: was the quantization done correctly, or am I still using the full-precision weights? Why does the output look like this? Is int_repr() the internal representation of the weights?
Thank you in advance.
Best regards,
yayapa |
st184191 | Solved by Vasiliy_Kuznetsov in post #2
If you print a quantized tensor, it's expected to see the floating point values, a scale and a zero point. The internal representation is stored in integers, and you can see that with int_repr(). You can use the equation fp = (q - zp) * scale to convert from int +… |
st184192 | yayapa:
So the question is: was the quantization done correctly, or am I still using the full-precision weights? Why does the output look like this? Is int_repr() the internal representation of the weights?
If you print a quantized tensor, it's expected to see the floating point values, a scale and a zero point. The internal representation is stored in integers, and you can see that with int_repr(). You can use the equation fp = (q - zp) * scale to convert from int + scale + zp back to float (and q = clamp(round(fp / scale) + zp, qmin, qmax) for the reverse direction). |
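A tiny round-trip that shows the relationship (scale and zero point chosen arbitrarily):

import torch

x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=10, dtype=torch.qint8)

print(qx.int_repr())                           # the stored int8 values
print(qx.dequantize())                         # (q - zero_point) * scale, which is what print(qx) shows
print((qx.int_repr().float() - 10) * 0.05)     # same numbers, reconstructed by hand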
st184193 | Hello everyone, hope you are all having a great day.
I have been building and running my quantized models just fine until now, when I tried to build and execute my tests in a VM.
Whenever I try to load a quantized model I face errors such as the following:
An exception has occured in 'class MTCNNImpl * __ptr64' : Unknown qengine
Exception raised from operator () at C:\Users\User\pytorch-1.7.0\aten\src\ATen\native\quantized\cpu\fbgemm_utils.cpp:308 (most recent call first):
00007FFD83E3F86800007FFD83E3F800 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]
00007FFD4DCE40B400007FFD4DCE2D30 torch_cpu.dll!at::QTensorImpl::operator= [<unknown file> @ <unknown line number>]
00007FFD4DCE442200007FFD4DCE2D30 torch_cpu.dll!at::QTensorImpl::operator= [<unknown file> @ <unknown line number>]
00007FFD4DCE6C0500007FFD4DCE2D30 torch_cpu.dll!at::QTensorImpl::operator= [<unknown file> @ <unknown line number>]
00007FFD4DCE6EBF00007FFD4DCE2D30 torch_cpu.dll!at::QTensorImpl::operator= [<unknown file> @ <unknown line number>]
00007FFD4FC69D3B00007FFD4FC5E5D0 torch_cpu.dll!torch::jit::registerOperator [<unknown file> @ <unknown line number>]
00007FFD4FC6AB6200007FFD4FC5E5D0 torch_cpu.dll!torch::jit::registerOperator [<unknown file> @ <unknown line number>]
00007FFD4F25A8AC00007FFD4F256C30 torch_cpu.dll!torch::jit::TypeNameUniquer::getUniqueName [<unknown file> @ <unknown line number>]
00007FFD4F25DE3800007FFD4F25CCE0 torch_cpu.dll!torch::jit::Unpickler::readInstruction [<unknown file> @ <unknown line number>]
00007FFD4F260FA000007FFD4F260ED0 torch_cpu.dll!torch::jit::Unpickler::run [<unknown file> @ <unknown line number>]
00007FFD4F25B71200007FFD4F25B6E0 torch_cpu.dll!torch::jit::Unpickler::parse_ivalue [<unknown file> @ <unknown line number>]
00007FFD4FC6DD4300007FFD4FC6D950 torch_cpu.dll!torch::jit::readArchiveAndTensors [<unknown file> @ <unknown line number>]
00007FFD4FC6D91F00007FFD4FC6C050 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFD4FC6B49200007FFD4FC5E5D0 torch_cpu.dll!torch::jit::registerOperator [<unknown file> @ <unknown line number>]
00007FFD4FC6C20300007FFD4FC6C050 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFD4FC6C03B00007FFD4FC6BFC0 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFD83BBAC1B00007FFD83BBA830 Detector_MTCNN.dll!MTCNNImpl::MTCNNImpl [D:\cpp_port\Detector_MTCNN\MTCNN.cpp @ 481]
00007FFD83BC815000007FFD83BC8010 Detector_MTCNN.dll!MTCNN::MTCNN [D:\cpp_port\Detector_MTCNN\MTCNN.cpp @ 1645]
00007FF6E13D46A000007FF6E13D44D0 Detector_MTCNN_Test.exe!test19 [D:\cpp_port\Detector_MTCNN_Test\Detector_MTCNN_Test.cpp @ 744]
00007FF6E13D522000007FF6E13D4E90 Detector_MTCNN_Test.exe!main [D:\cpp_port\Detector_MTCNN_Test\Detector_MTCNN_Test.cpp @ 941]
00007FF6E13F01F000007FF6E13F00E4 Detector_MTCNN_Test.exe!__scrt_common_main_seh [d:\agent\_work\63\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 288]
00007FFDAEAD7C2400007FFDAEAD7C10 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]
00007FFDB00AD4D100007FFDB00AD4B0 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>]
What am I missing here? I mean, what prerequisite (especially hardware-related, such as instruction sets, etc.) is needed that's missing here and is thus causing this exception? (My VM runs on an old Intel® Xeon® CPU E5-2630 v2 @ 2.60GHz, by the way.)
Thanks a lot in advance |
st184194 | Solved by Shisho_Sama in post #2
OK, I guess I found the culprit.
I built FBGEMM on my local system, which has AVX2 support, but my VM's CPU only supports the AVX instruction set and that's why all hell breaks loose!
FBGEMM requires gcc 5+ and a CPU with support for the AVX2 instruction set or higher
Thanks everyone! |
st184195 | OK, I guess I found the culprit.
I built FBGEMM on my local system, which has AVX2 support, but my VM's CPU only supports the AVX instruction set and that's why all hell breaks loose!
FBGEMM requires gcc 5+ and a CPU with support for the AVX2 instruction set or higher
Thanks everyone! |
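From Python, a quick way to check which quantized engines the current build and machine support before picking one (a small sketch; the engine list you get depends on your platform):

import torch

print(torch.backends.quantized.supported_engines)   # e.g. ['none', 'fbgemm'] on an AVX2-capable x86 build

# select an engine that is actually available on the deployment machine
if 'fbgemm' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'fbgemm'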
st184196 | Hi, I have a specific case and need some help/pointer.
I have designed a specialized normalization layer (with learnable parameters) derived from nn.Module and would like to apply QAT to it. But I couldn't find documentation on how to make a corresponding module for QAT, e.g. how to attach the weight_fake_quant and activation_post_process steps.
Any starter on this? |
st184197 | Solved by thyeros in post #4
I figured it out. For anyone with the same interest, hope this template helpful. Jerry, please comment if anything missing here.
class qatCustom (nn.Module):
def __init__(..., qconfig=None):
super().__init__(....)
self.qconifg = qconfig
#skip this if there is no learnabl… |
st184198 | we have examples for qat.Conv2d here: pytorch/conv.py at master · pytorch/pytorch · GitHub
then you can add the mapping in:
github.com
pytorch/pytorch/blob/master/torch/quantization/quantization_mappings.py#L68
    nniqat.ConvBnReLU1d: nniq.ConvReLU1d,
    nniqat.ConvBnReLU2d: nniq.ConvReLU2d,
    nniqat.ConvReLU2d: nniq.ConvReLU2d,
    nniqat.LinearReLU: nniq.LinearReLU,
    # QAT modules:
    nnqat.Linear: nnq.Linear,
    nnqat.Conv2d: nnq.Conv2d,
}

# Default map for swapping float module to qat modules
DEFAULT_QAT_MODULE_MAPPINGS : Dict[Callable, Any] = {
    nn.Conv2d: nnqat.Conv2d,
    nn.Linear: nnqat.Linear,
    nn.modules.linear._LinearWithBias: nnqat.Linear,
    # Intrinsic modules:
    nni.ConvBn1d: nniqat.ConvBn1d,
    nni.ConvBn2d: nniqat.ConvBn2d,
    nni.ConvBnReLU1d: nniqat.ConvBnReLU1d,
    nni.ConvBnReLU2d: nniqat.ConvBnReLU2d,
    nni.ConvReLU2d: nniqat.ConvReLU2d,
    nni.LinearReLU: nniqat.LinearReLU
or pass in a mapping that includes the new qat module in pytorch/quantize.py at master · pytorch/pytorch · GitHub 2 |
st184199 | Hi, Jerry, thanks for sharing that. Yes, I also saw the mapping table, but (I should have been clearer on this): for a custom layer, I need to make a corresponding QAT version of that custom layer. Is there any particular requirement for such QAT versions (in a doc or example), such as a particular set of functions or attributes? One of them looks like 'from_float'. |
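To round this thread off, a hedged skeleton of what such a QAT module can look like. This is an illustrative sketch, not the exact template from the accepted answer; the custom norm itself is made up:

import torch
import torch.nn as nn

class MyNorm(nn.Module):
    """Hypothetical float custom layer with a learnable per-channel scale."""
    def __init__(self, num_features):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))

    def forward(self, x):
        return x * self.weight.reshape(1, -1, 1, 1)

class QATMyNorm(nn.Module):
    _FLOAT_MODULE = MyNorm  # convention used by the built-in QAT modules

    def __init__(self, num_features, qconfig=None):
        super().__init__()
        assert qconfig is not None, 'qconfig must be provided for a QAT module'
        self.qconfig = qconfig
        self.weight = nn.Parameter(torch.ones(num_features))
        # fake-quantize the learnable parameter during training;
        # the activation fake-quant/observer is attached separately by prepare_qat via qconfig.activation()
        self.weight_fake_quant = qconfig.weight()

    def forward(self, x):
        return x * self.weight_fake_quant(self.weight).reshape(1, -1, 1, 1)

    @classmethod
    def from_float(cls, mod):
        # called during prepare_qat to swap the float module for its QAT counterpart
        qat_mod = cls(mod.weight.numel(), qconfig=mod.qconfig)
        qat_mod.weight = mod.weight
        return qat_mod

The new class then has to be included in the mapping passed to prepare_qat (on top of the default QAT mappings), as Jerry suggests above.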