st183700
Hi, I’ve read your answer, but I am confused. You first need an ONNX model, which you later convert to Caffe2. But if I get an error when exporting to ONNX, how can I get to the second step?
st183701
I installed the nightly version of PyTorch.

torch.quantization.convert(model, inplace=True)
torch.onnx.export(model, img, "8INTmodel.onnx", verbose=True)

Traceback (most recent call last):
  File "check_conv_op.py", line 92, in <module>
    quantize(img)
  File "check_conv_op.py", line 59, in quantize
    torch.onnx.export(model, img, "8INTmodel.onnx", verbose=True)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/__init__.py", line 168, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 69, in export
    use_external_data_format=use_external_data_format)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 485, in _export
    fixed_batch_size=fixed_batch_size)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 334, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 282, in _trace_and_get_graph_from_model
    orig_state_dict_keys = _unique_state_dict(model).keys()
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 302, in _unique_state_dict
    filtered_dict[k] = v.detach()
AttributeError: 'torch.dtype' object has no attribute 'detach'
st183702
dassima: filtered_dict[k] = v.detach()

It looks like it’s calling detach() on a dtype object. Could you paste check_conv_op.py?
st183703
Hi @dassima and @jerryzh168 - did you manage to get to the bottom of this? I’m seeing exactly the same error. A simple model exports fine without quantization. Setting a break on the point of failure, I’m seeing that the object to be detached is torch.qint8. Dumping the state_dict for both the non-quantized and quantized versions, the quantized version has this as an entry - (‘fc1._packed_params.dtype’, torch.qint8) - while the non-quantized version has only tensors. Any thoughts as to what’s going on greatly appreciated! Thanks.
st183704
It’s probably because of this: https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L60. What version of PyTorch are you using? If you update to the nightly, the problem should be gone, since we changed the serialization format for linear: https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L220
st183705
Many thanks for getting back. I was on 1.5.1 but just pulled 1.7.0.dev20200705+cpu but alas, still no joy. Anything I can do to help debug this?
st183706
@jerryzh168, any ideas on next steps? Not sure if it’s something I’m doing incorrectly or a general problem with exporting. Many thanks.
st183707
Hi @jerryzh168, yes. I updated initially to 1.7.0.dev20200705+cpu and just tried torch-1.7.0.dev20200724+cpu. No luck with either. As I hijacked an old thread, I thought it best to raise a separate issue with a simple example (a single fully connected layer) to reproduce it: Simple quantized model doesn't export to ONNX. I’ve had one reply explaining that export of quantized models is not yet supported, with a link to another thread. Sounds like it’s WIP. Would be good to get your take on the example in the other thread. Many thanks again.
st183708
G4V: Hello, I’m having problems exporting a very simple quantized model to ONNX. The error message I’m seeing is - AttributeError: ‘torch.dtype’ object has no attribute ‘detach’. The cause of this is that (‘fc1._packed_params.dtype’, torch.qint8) ends up in the state_dict. I asked on a previous (and old) thread if there was a solution and the answer was that this could be solved in the latest version of PyTorch. So I installed 1.7.0.dev20200705+cpu, but no joy. I’ve pasted the example below. A…

cc @supriyar, is quantized Linear supported in ONNX?
st183709
What is the error message? I think linear is supported, according to https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_caffe2.py
st183710
How are you exporting the quantized model to ONNX? As previously mentioned, we currently only support a custom conversion flow through ONNX to Caffe2 for quantized models. The models aren’t represented in native ONNX format, but in a format specific to Caffe2. If you wish to export the model to Caffe2, you can follow the steps here to do so (the model needs to be traced first, and you need to set operator_export_type to ONNX_ATEN_FALLBACK): github.com pytorch/pytorch/blob/master/test/onnx/test_pytorch_onnx_caffe2_quantized.py#L17-L35

torch.backends.quantized.engine = "qnnpack"
pt_inputs = tuple(torch.from_numpy(x) for x in sample_inputs)
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
q_model = torch.quantization.prepare(model, inplace=False)
q_model = torch.quantization.convert(q_model, inplace=False)

traced_model = torch.jit.trace(q_model, pt_inputs)
buf = io.BytesIO()
torch.jit.save(traced_model, buf)
buf.seek(0)
q_model = torch.jit.load(buf)

q_model.eval()
output = q_model(*pt_inputs)

f = io.BytesIO()
torch.onnx.export(q_model, pt_inputs, f, input_names=input_names, example_outputs=output,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
f.seek(0)
st183711
Export of PyTorch QAT models to the ONNX standard is supported now. You should be able to export the model without operator_export_type = ONNX_ATEN_FALLBACK.
st183712
@neginraoof Can you post a minimal example of PyTorch QAT → ONNX and clarify which PyTorch version this became supported in? I’m on 1.8.1 and still seeing similar errors to @G4V. In all the discussion I can find (forum + GitHub issues), @supriyar says the only supported path still involves Caffe2.
st183713
Hello. I am trying to quantize my custom model and apply NNAPI. I followed the example provided in the NNAPI prototype tutorial “(Prototype) Convert MobileNetV2 to NNAPI — PyTorch Tutorials 1.7.1 documentation”. First, as with MobileNetV2, I created the quant/dequant pair, inherited the model as quantizable_my_model, loaded the pretrained weights, and followed the example. But the following error occurred in

nnapi_model = torch.backends._nnapi.prepare.convert_model_to_nnapi(MY_MODEL, MY_TENSOR)

RuntimeError: Module 'NnapiModule' has no attribute 'weights' (This attribute exists on the Python module, but we failed to convert Python type: 'list' to a TorchScript type.):
  File "/home/ubuntu/anaconda3/envs/nnapi_2/lib/python3.6/site-packages/torch/backends/_nnapi/prepare.py", line 35
    def init(self):
        assert self.comp is None
        self.weights = [w.contiguous() for w in self.weights]
                        ~~~~~~~~~~~~ <--- HERE
        comp = torch.classes._nnapi.Compilation()
        comp.init(self.ser_model, self.weights)

Please help me figure out what to do.
st183714
I followed the tutorial tutorials/static_quantization_tutorial.rst at master · pytorch/tutorials · GitHub. The MobileNet-V2 baseline and PTQ work as expected, but QAT top-1 is only 67.88 after 8 epochs. Log here: Unknown server log [#TRHsVfl] - mclo.gs. Thanks.
st183715
Solved by raghuramank100 in post #2.
st183716
The tutorial shows an easy-to-run example, where you are using only a few batches of data for training (num_train_batches = 20). Also, the result of 71.5% is after training over 30 epochs on the full training dataset. If you are training on the full ImageNet dataset, your results look OK. A full script to reproduce the training numbers is available at: vision/train_quantization.py at master · pytorch/vision · GitHub
st183717
@raghuramank100 Thanks for your kind reply. Your answer is right. I am reproducing the accuracy as you suggested and the result looks good. Bug 30125 and its comments provide some useful info. Believe it or not, the QAT part of static_quantization_tutorial.rst is NOT a good guide, because it can’t reproduce 71.5%. I found some differences between static_quantization_tutorial.rst and train_quantization.py which seem to cause the different accuracy. Actually, hyperparameters and training tricks matter, right? BTW, when and why to insert the hyperparameter/freeze/disable operations during training is not easy to decide. Are there any QAT tricks or routines for other networks besides MobileNet-v2, such as other classification, detection, or segmentation models? That would be very helpful. Thanks.
st183718
Hi, is there a way to compute a histogram for each row of a matrix of size (n, m) using torch.histc? So far, histc computes the histogram of the entire tensor. Thanks
st183719
Not conveniently, you’d have to do something like

x = torch.randn(3, 3, 3, 3) * 10
a = torch.tensor([torch.histc(x[i], 100, 0, 20).tolist() for i in range(3)])
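The same thing can be written a bit more directly by stacking the per-row histograms instead of going through Python lists (a sketch, not from the original reply):

import torch

x = torch.randn(4, 10) * 10   # an (n, m) matrix
# one histogram per row: 100 bins over the range [0, 20]
row_hists = torch.stack([torch.histc(row, bins=100, min=0, max=20) for row in x])
print(row_hists.shape)        # torch.Size([4, 100])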
st183720
I am trying to convert a quantized model trained in PyTorch to ONNX, and then got

  File "test_QATmodel.py", line 276, in test
    torch.onnx.export(model_new, sample, 'quantized.onnx')  #, opset_version=11, operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
  File "/monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/__init__.py", line 280, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/utils.py", line 94, in export
    use_external_data_format=use_external_data_format)
  File "monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/utils.py", line 695, in _export
    dynamic_axes=dynamic_axes)
  File "monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/utils.py", line 459, in _model_to_graph
    _retain_param_name)
  File "monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/utils.py", line 422, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/onnx/utils.py", line 370, in _trace_and_get_graph_from_model
    orig_state_dict_keys = _unique_state_dict(model).keys()
  File "monly/workspaces/bigtree/miniconda3/envs/color_detcet/lib/python3.7/site-packages/torch/jit/_trace.py", line 71, in _unique_state_dict
    filtered_dict[k] = v.detach()
AttributeError: 'NoneType' object has no attribute 'detach'

Is the quantized model -> ONNX export supported today?
st183721
Solved by HDCharles in post #2.
st183722
I believe this thread goes through the existing support for ONNX: ONNX export of quantized model - #23 by ZyrianovS
st183723
Hello, I’m having problems exporting a very simple quantized model to ONNX. The error message I’m seeing is -

AttributeError: 'torch.dtype' object has no attribute 'detach'

The cause of this is that (‘fc1._packed_params.dtype’, torch.qint8) ends up in the state_dict. I asked on a previous (and old) thread if there was a solution and the answer was that this could be solved in the latest version of PyTorch. So I installed 1.7.0.dev20200705+cpu, but no joy. I’ve pasted the example below. Any thoughts on whether this is a fault on my part, a bug, or not supported, greatly appreciated.

# Import libraries
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Needed for quantization
from torch.quantization import QuantStub, DeQuantStub
import torch.quantization

class Net(nn.Module):
    def __init__(self):
        # create instance of base class
        super().__init__()
        self.fc1 = nn.Linear(28*28, 10)  # Inputs, outputs

        # Optimizer parameters
        self.learning_rate = 0.01
        self.epochs = 10
        self.log_interval = 10
        self.batch_size = 200

        # Needed for quantization, per pytorch examples
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

        # Training related functions
        self.optimizer = optim.SGD(self.parameters(), lr=self.learning_rate, momentum=0.9)
        self.criterion = nn.NLLLoss()

    def forward(self, x, save_intermediate=False, count=0):
        x1 = self.quant(x)
        x2 = self.fc1(x1)
        x3 = self.dequant(x2)
        return x3

net = Net()
net.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(net, inplace=True)
torch.quantization.convert(net, inplace=True)

torch.onnx.export(net, torch.zeros([1, 784]), 'simple.onnx', opset_version=11, verbose=True)
st183724
Solved by supriyar in post #10.
st183725
General export of quantized models to ONNX isn’t currently supported. We currently only support conversion to ONNX for the Caffe2 backend. This thread has additional context on what we currently support - ONNX export of quantized model
st183726
As mentioned in the other thread by @supriyar, can you try

torch.onnx.export(q_model, pt_inputs, f, input_names=input_names, example_outputs=output,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
st183727
@supriyar and @jerryzh168, many thanks again. Following the example, I’ve managed to get the model to convert. Is it the ATen fallback that forces the exporter to export specifically to Caffe2? Is Caffe2 effectively a subset? My goal is to get this model through the Glow compiler, which supports importing both ONNX and Caffe2, but I’m still seeing issues with an unknown element kind. A long shot as this is not the correct forum, but any experience on whether this is doable?
st183728
I’m not familiar with ONNX export; here is the doc for that: https://github.com/pytorch/pytorch/blob/master/docs/source/onnx.rst#onnx-aten-fallback. I think we are integrating with Glow now; I’ll ask someone from the Glow team to answer the question.
st183729
still seeing issues with unknown element kind

Is this error during importing to Glow or with ONNX export? Glow’s ONNX importer doesn’t match up with the latest ONNX very closely, so you could run into some issues. Not sure what model you’re working with, but you could also try using to_glow to lower from PyTorch to Glow more directly, though this path is fairly new and has been tested mostly on ResNet-like models (like this).
st183730
We currently don’t support exporting pytorch quantized models to ONNX. We welcome suggestions and contributions for this!
st183731
Hello, this is my first post. I wanted to know: if I perform quantization-aware training (QAT) and build the model with the torch.nn.intrinsic modules, will the performance of the model drop even if I don’t convert it to a quantized model? Basically, I want to build a model that can achieve its best possible performance on a GPU and can be converted into a quantized model whenever needed. Thanks in advance!
st183732
Solved by supriyar in post #2.
st183733
Hi @pritom-kun Can you clarify what you mean by performance of the model here? Are you referring to model numerics or the training time due to enabling QAT on the model? Regarding the numerics, the QAT step does alter the model weights assuming the model will be quantized at a later stage, so there may be differences if we compare the result with an FP32 trained model.
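For reference, a minimal eager-mode QAT flow (a sketch with an illustrative toy model; quant/dequant stub placement is omitted for brevity): the fake-quant observers inserted by prepare_qat are what perturb the weights during training, and convert() is only applied when an int8 model is actually needed.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())   # toy float model
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)      # inserts fake-quant modules

# ... fine-tune as usual (GPU is fine here); weights are trained quantization-aware ...

model.eval()
int8_model = torch.quantization.convert(model.cpu())     # only when an int8 model is needed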
st183734
That’s what I wanted to know, by the performance I was referring to the prediction accuracy. Thanks for your answer!
st183735
I read the quantization docs on the PyTorch website and learned about post-training dynamic quantization and static quantization. Dynamic quantization is good for LSTM and Linear, and static quantization is good for CNNs. I wanted to ask: when I use a CRNN model, which looks like CNN + LSTM + Linear, what is the best way to quantize my model, or are there tricks to mix the two quantization methods? I’d appreciate it if anybody can help me! Thanks in advance!
st183736
I think it’s possible. You may apply static quantization to the CNN part of the model and dynamic quantization to the LSTM + Linear part of the model; since both of them will have float data in the input and output, the combined model should work.
st183737
1. Fix the RNN and linear layers, quantize the CNN layers (post-training static quantization).
2. Fix the RNN and linear layers, quantize the CNN layers (quantization-aware training; this step is optional).
3. Fix the quantized CNN layers, quantize the RNN and linear layers (post-training dynamic quantization).
st183738
Quantization is controlled by the qconfig, so when you quantize the CNN layers you can remove the qconfig of the RNN layer; this way the RNN layer will not be quantized.
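A minimal sketch of this idea, assuming a hypothetical CRNN model with .cnn, .rnn (LSTM) and .fc (Linear) submodules and with QuantStub/DeQuantStub already placed around the CNN part in forward(); the names here are illustrative, not from the original posts:

import torch
import torch.nn as nn

crnn = CRNN()          # hypothetical model: CNN -> LSTM -> Linear
crnn.eval()

# Static quantization for the CNN part only: set a global qconfig,
# then clear it on the parts that should stay float for now.
crnn.qconfig = torch.quantization.get_default_qconfig('fbgemm')
crnn.rnn.qconfig = None
crnn.fc.qconfig = None

torch.quantization.prepare(crnn, inplace=True)
# ... feed a few calibration batches through crnn here ...
torch.quantization.convert(crnn, inplace=True)

# Dynamic quantization for the LSTM + Linear layers that were left in fp32.
crnn = torch.quantization.quantize_dynamic(crnn, {nn.LSTM, nn.Linear}, dtype=torch.qint8)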
st183739
You can control which layers to quantize by placing quant/dequant stubs around the layers. For more details you can refer to the tutorial (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.9.0+cu102 documentation
st183740
I have the following model, which I want to run on Android.

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class depthwise_separable_conv(nn.Module):
    def __init__(self, nin, nout, kernel_size, kernels_per_layer=1):
        super(depthwise_separable_conv, self).__init__()
        self.depthwise = nn.Conv2d(nin, nin * kernels_per_layer, kernel_size=kernel_size, padding=1, groups=nin)
        self.pointwise = nn.Conv2d(nin * kernels_per_layer, nout, kernel_size=1)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        out = self.depthwise(x)
        out = self.pointwise(out)
        out = self.relu(out)
        return out

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = depthwise_separable_conv(1, 6, 5)
        self.conv2 = depthwise_separable_conv(6, 16, 5)
        self.conv3 = depthwise_separable_conv(16, 32, 5)
        self.pool = nn.AvgPool2d(2, 2)
        self.lrn = nn.LocalResponseNorm(2)
        self.fc1 = nn.Linear(32 * 6 * 13, 250)
        self.relu1 = nn.ReLU(inplace=False)
        self.fc2 = nn.Linear(250, 84)
        self.relu2 = nn.ReLU(inplace=False)
        self.fc3 = nn.Linear(84, 2)
        self.soft = nn.Softmax(dim=1)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.pool(self.conv1(x))
        x = self.pool(self.conv2(x))
        x = self.pool(self.conv3(x))
        x = self.dequant(x)
        x = self.lrn(x)
        x = self.quant(x)
        x = x.reshape(-1, 32 * 6 * 13)
        x = self.relu1(self.fc1(x))
        x = self.relu2(self.fc2(x))
        x = self.fc3(x)
        x = self.dequant(x)
        x = self.soft(x)
        return x

I created two versions, with and without quantization (the non-quantized model doesn’t have the quant() and dequant() parts). I performed quantization using the following code:

from torch.quantization.quantize_fx import prepare_fx, convert_fx

backend = "qnnpack"
qconfig = torch.quantization.get_default_qconfig(backend)
net.qconfig = qconfig
torch.backends.quantized.engine = backend
qconfig_dict = {"": qconfig}
quant_net = net
quant_net = prepare_fx(quant_net, qconfig_dict)
quant_net(torch.Tensor(batch))  # calibrate
quant_net = convert_fx(quant_net)

and scripted both models using

from torch.utils.mobile_optimizer import optimize_for_mobile

traced_script_module = torch.jit.script(quant_net)
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
traced_script_module_optimized._save_for_lite_interpreter(MODEL_DIR + "stQuant_lite.ptl")

In Python, I’m able to see an inference time reduction from quantization, but the reverse happens on Android. Moreover, on Android the RAM usage is also higher for the quantized model. Also, the weirdest thing happens on Android: if I rename the scripted model "stQuant_lite.ptl" to "stQuant_lite_11.ptl", keeping everything else the same (I literally just use refactor -> rename), I get the following error:

Could not run ‘quantized::conv2d.new’ with arguments from the ‘CPU’ backend.

I’m using torch 1.9.0 in Python on a Linux OS and pytorch_lite:1.9.0 on Android. My app is almost the HelloWorldApp, except that I feed an empty FloatBuffer to the model instead of an Image. Any help is greatly appreciated.
st183741
Solved by Lakshya1 in post #5.
st183742
Lakshya1: In Python, I’m able to see an inference time reduction from quantization, but the reverse happens on Android. Moreover, on Android the RAM usage is also higher for the quantized model.

Both questions sound like they’re related more to PyTorch Mobile, actually; could you add a PyTorch Mobile tag?
st183743
Sorry, I’m new and I can’t see any edit topic options to change the category to mobile.
st183744
Could you make a post in PyTorch Mobile: Mobile - PyTorch Forums? We can close this one.
st183745
Created another topic : https://discuss.pytorch.org/t/quantization-causing-reduced-performance-on-pytorch-android/129434 5
st183746
Hello, my friends. I have a question about int16 quantization in PyTorch. Does PyTorch plan to support int16 quantization in the future? I’m looking forward to your reply. Thanks.
st183747
Maybe; it depends on the trend in the hardware we want to support. We do not have concrete plans to do that currently, though. Please let us know if you have a prevailing use case for it.
st183748
I have tried post-training dynamic quantization with the YOLOv5 model. The model file is available here (https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt). When I tried to quantize, the model size roughly doubles and the inference time is the same as the FP32 model. Based on the PyTorch tutorial (Dynamic Quantization — PyTorch Tutorials 1.9.0+cu102 documentation):

model = attempt_load(weights, map_location=device)
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Conv2d, torch.nn.Linear}, dtype=torch.qint8)  # int8
# model = torch.quantization.quantize_dynamic(model, dtype=torch.qint8)  # int8

I am not sure whether {torch.nn.Conv2d, torch.nn.Linear} (the set of module types in the model to apply quantization to) is correct when applying it to the YOLO model. I even checked the dtypes using

for param in quantized_model.parameters():
    print(param.dtype)

and they are still fp32. Thanks in advance.
st183749
Abdulaziz_Gaybulayev: When I tried to quantize, the model size roughly doubles and the inference time is the same as the FP32 model.

Can you print the model after quantize_dynamic? Also, I think we do not support dynamic quantization for torch.nn.Conv2d right now.
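A quick way to check what actually got swapped (a sketch, assuming model is the loaded YOLOv5 float model from the post above): print the model or walk its modules and look for the dynamically quantized replacement type. Note that dynamically quantized Linear layers store their weight as a packed parameter rather than an nn.Parameter, so it does not show up in model.parameters(), which is why the dtype loop in the previous post still only reports fp32 tensors.

import torch
import torch.nn as nn

quantized_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized_model)   # swapped layers show up as DynamicQuantizedLinear
for name, module in quantized_model.named_modules():
    if isinstance(module, torch.nn.quantized.dynamic.Linear):
        print("dynamically quantized:", name)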
st183750
Hi! I have quantized object detection model using static quantization strategy. The model got quantized and works fine. The problem I am facing is that, the very first two forward passes take way longer time than the following passes after that. By “longer” I mean a couple of seconds, where as after the first two passes the model execution takes about ~5ms on average. I have followed the official documentations in implementing the training script but it seems to me there is a bug somewhere that causes the problem. Could you please help me out with this? Below I have attached the model and the training script. Thanks a lot! class BasicConv2d(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(BasicConv2d, self).__init__() self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs) self.bn = nn.BatchNorm2d(out_channels, eps=1e-5) self.relu = nn.ReLU() def forward(self, x): x = self.conv(x) x = self.bn(x) x = self.relu(x) return x # return nn.functional.relu(x, inplace=True) # study the model in more depth class FaceBoxesMobNet(nn.Module): def __init__(self, phase, num_classes=1, quantization=False): super(FaceBoxesMobNet, self).__init__() self.phase = phase self.num_classes = num_classes self.backbone = MobileNetV2() # self.backbone = MobileNetV3() # why don't we just use feature maps from the backbone self.avg = nn.Conv2d(1280, 128, kernel_size=1, bias=False, groups=128) # self.avg = nn.Conv2d(96, 128, kernel_size=1, bias=False) self.conv3_1 = BasicConv2d(128, 128, kernel_size=1, stride=1, padding=0) self.conv3_2 = BasicConv2d(128, 256, kernel_size=3, stride=2, padding=1) self.conv4_1 = BasicConv2d(256, 128, kernel_size=1, stride=1, padding=0) self.conv4_2 = BasicConv2d(128, 256, kernel_size=3, stride=2, padding=1) self.get_multibox(self.num_classes) self.quantization = quantization self.quant = QuantStub() self.dequant = DeQuantStub() # if self.phase == 'test': self.prob = nn.Sigmoid() if self.phase == 'train': for m in self.modules(): if isinstance(m, nn.Conv2d): if m.bias is not None: nn.init.xavier_normal_(m.weight.data) m.bias.data.fill_(0.02) else: m.weight.data.normal_(0, 0.01) elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() def get_multibox(self, num_classes): self.loc0 = nn.Conv2d(128, 3 * 4, kernel_size=3, padding=1) self.conf0 = nn.Conv2d(128, 3 * num_classes, kernel_size=3, padding=1) self.loc1 = nn.Conv2d(256, 3 * 4, kernel_size=3, padding=1) self.conf1 = nn.Conv2d(256, 3 * num_classes, kernel_size=3, padding=1) self.loc2 = nn.Conv2d(256, 3 * 4, kernel_size=3, padding=1) self.conf2 = nn.Conv2d(256, 3 * num_classes, kernel_size=3, padding=1) # self.loc0 = nn.Conv2d(128, 2 * 4, kernel_size=3, padding=1) # self.conf0 = nn.Conv2d(128, 2 * num_classes, kernel_size=3, padding=1) # self.loc1 = nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1) # self.conf1 = nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1) # self.loc2 = nn.Conv2d(256, 1 * 4, kernel_size=3, padding=1) # self.conf2 = nn.Conv2d(256, 1 * num_classes, kernel_size=3, padding=1) def fuse_model(self): for m in self.modules(): if type(m) == ConvBNReLU: torch.quantization.fuse_modules(m, ['0', '1', '2'], inplace=True) if type(m) == BasicConv2d: torch.quantization.fuse_modules(m, ['conv', 'bn', 'relu'], inplace=True) if type(m) == InvertedResidual: for idx in range(len(m.conv)): if type(m.conv[idx]) == nn.Conv2d: torch.quantization.fuse_modules(m.conv, [str(idx), str(idx + 1)], inplace=True) def forward(self, x): # here we are doing static quantazation x = 
self.quant(x) # <-- where tensors are quatized x = self.backbone(x) # print('output shape of backbne: {}'.format(x.shape)) x = self.avg(x) loc0 = self.loc0(x).permute(0, 2, 3, 1).contiguous() loc0 = loc0.view(loc0.size(0), -1) conf0 = self.conf0(x).permute(0, 2, 3, 1).contiguous() conf0 = conf0.view(conf0.size(0), -1) x = self.conv3_1(x) x = self.conv3_2(x) loc1 = self.loc1(x).permute(0, 2, 3, 1).contiguous() loc1 = loc1.view(loc1.size(0), -1) conf1 = self.conf1(x).permute(0, 2, 3, 1).contiguous() conf1 = conf1.view(conf1.size(0), -1) x = self.conv4_1(x) x = self.conv4_2(x) loc2 = self.loc2(x).permute(0, 2, 3, 1).contiguous() loc2 = loc2.view(loc1.size(0), -1) conf2 = self.conf2(x).permute(0, 2, 3, 1).contiguous() conf2 = conf2.view(conf2.size(0), -1) loc0 = self.dequant(loc0) conf0 = self.dequant(conf0) loc1 = self.dequant(loc1) conf1 = self.dequant(conf1) loc2 = self.dequant(loc2) conf2 = self.dequant(conf2) # <-- here the tensors are converted back to floating point percisionj loc = torch.cat([loc0, loc1, loc2], dim=1) conf = torch.cat([conf0, conf1, conf2], dim=1) if self.phase == "test": output = (loc.view(loc.size(0), -1, 4), self.prob(conf.view(-1, self.num_classes)), None) else: output = (loc.view(loc.size(0), -1, 4), conf.view(conf.size(0), -1, self.num_classes), None) return output def run_static_quantization(mode='per_tensor'): dataset_path = 'path/to/data/' device = 'cpu' model = FaceBoxesMobNet('test') model = model.to(device) model.eval() # model.load_state_dict(remove_prefix(torch.load(os.path.join(args.save_folder, # f'{experiment_name()}/{args.quantized_model}'), # map_location=device), 'module.')) model.load_state_dict(torch.load(os.path.join(args.save_folder, f'{experiment_name()}/{args.quantized_model}'), map_location=device)) print("Loaded the model successfully. Start measuring the speed") s = run_speed_test(net=model, device=device) print(f"Speed of model before quantization: {s} ms/frame") print("Size of model before quantization") print_size_of_model(model) # Fuse Conv, bn and relu model.fuse_model() # Specify quantization configuration # Start with simple min/max range estimation and per-tensor quantization of weights model.qconfig = torch.quantization.default_qconfig if mode == 'per_channel' \ else torch.quantization.get_default_qconfig('fbgemm') print(model.qconfig) torch.quantization.prepare(model, inplace=True) # Calibrate with the val set run_evaluation(net=model, device=device, dataset_path=dataset_path) print(f'Post Training {mode} Quantization: Calibration done') # Convert to quantized model torch.quantization.convert(model, inplace=True) print('Post Training Quantization: Convert done') # print('\n Inverted Residual Block: After fusion and quantization, note fused modules: \n\n', # model.features[1].conv) print(f"Size of model after {mode} quantization") print_size_of_model(model) print("Performance of model after quantization") run_evaluation(net=model, device='cpu', dataset_path=dataset_path) s = run_speed_test(net=model, device=device) print(f"Speed of model after {mode} quantization: {s} ms/frame") torch.jit.save(torch.jit.script(model), os.path.join(args.save_folder, f'{experiment_name()}/static_final{mode}.pth'))
st183751
def run_speed_test(weights=None, net=None, device='cpu', iters=1000):
    image = torch.rand(480, 640).numpy()
    predictor = Predictor(weights=weights, net=net, device=device)
    t0 = time()
    for i in range(iters):
        x = predictor(image)
    delta_t = time() - t0
    return delta_t / float(iters) * 1000

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p")/1e6)
    os.remove('temp.p')

Here the Predictor is a wrapper class that does all the preprocessing and post-processing. The slowness of the first two passes is not caused by the wrapper class; I tested the model separately and the problem is still present.
st183752
I see. Does this happen for other models as well, or just your model? Is it possible to narrow down the part of the model that has this issue (by commenting out parts of the model)?
st183753
Yes, I did try other models. The same behavior is present in all of them. Both the models I tried and the quantization scheme were from the official torchvision model zoo. As far as I can tell, this is a feature of torch 1.8 and above. I am wondering what the benefit of these slow initial passes is, and whether there is a way to switch that behavior off somehow?
st183754
We do not know why this happens, actually. Can you use the PyTorch Profiler (PyTorch Profiler — PyTorch Tutorials 1.9.0+cu102 documentation) to see the breakdown and where the slowness comes from?
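A minimal profiling sketch (assuming a quantized model q_model and an example input x; the names are illustrative) that labels the first few forward passes separately, so the slow warm-up calls can be compared against the later ones in the table:

import torch
from torch.profiler import profile, record_function, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for i in range(5):
        with record_function(f"forward_{i}"):  # label each pass individually
            q_model(x)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))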
st183755
Hello, I’ve tried doing dynamic quantization on the XLNet model during inference, and I got this error message:

RuntimeError: Could not run 'quantized::linear_dynamic' with arguments from the 'CUDA' backend. 'quantized::linear_dynamic' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].

This leads me to believe dynamic quantization doesn’t support CUDA, and if so, do you plan to add CUDA support for quantization for both training and inference? I couldn’t find any issues relating to this on GitHub. Thanks!
st183756
Solved by jerryzh168 in post #2.
st183757
MattHatter: RuntimeError: Could not run 'quantized::linear_dynamic' with arguments from the 'CUDA' backend. 'quantized::linear_dynamic' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].

Yeah, it is not supported on CUDA; quantized::linear_dynamic is only supported on CPU. We do not have immediate plans to support CUDA, but we plan to publish a doc for custom backends which will make the extension easier.
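As a workaround sketch (not from the original reply; model and input_ids stand in for the XLNet model and its tokenized input), the dynamically quantized model and its inputs can simply be kept on CPU:

import torch
import torch.nn as nn

model = model.cpu().eval()   # dynamic quantization currently only has CPU kernels
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

input_ids = input_ids.cpu()  # inputs must also live on CPU
with torch.no_grad():
    outputs = quantized(input_ids)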
st183758
I will think about posting one in OSS; please keep an eye out for that on the GitHub issues page. We are currently working on enabling a CUDA path through TensorRT as well; there's a prototype here: [not4land] Test PT Quant + TRT path by jerryzh168 · Pull Request #60589 · pytorch/pytorch · GitHub. I can share the doc early with you if you message me your email, but we may make some modifications before publishing it in OSS.
st183759
Hi, I save the quantized model with

torch.save(model.state_dict(), "quanted_model.pkl")

and load it with

model.fuse_model()
model.encoder.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model.encoder, inplace=True)
torch.quantization.convert(model.encoder, inplace=True)
model.load_state_dict(torch.load(modelfile, map_location='cpu'))

The quantization steps are the same. When I skip the calibration step, I get the same output from the loaded model; however, after I do the calibration, the output differs between the reloaded model and the quantized model. I compared the state_dict() and the metadata, but they are the same. My model is a transformer.
st183760
Solved by crane in post #2: OK, this is because the new parameters of QuantizedLayerNorm didn’t get saved.
st183761
Hello, crane. How do you quantize the layer_norm module? I tried to use this code to convert an encoder-decoder transformer from fp32 to int8, but it seems the layer_norm is still fp32.

from torch.quantization import QuantStub, DeQuantStub, float_qparams_weight_only_qconfig, default_qconfig

# eager mode
backend = "fbgemm"
# model.qconfig = torch.quantization.get_default_qconfig(backend)
model.qconfig = default_qconfig
model.encoder.embeddings.qconfig = float_qparams_weight_only_qconfig
model.decoder.embeddings.qconfig = float_qparams_weight_only_qconfig
model_fp32_prepared = torch.quantization.prepare(model)
model_int8 = torch.quantization.convert(model_fp32_prepared)

And when I try to run inference with the model, it raises an error: Could not run 'quantized::layer_norm' with arguments from the 'CPU' backend.
st183762
when I do static quantization in BERT with pytorch 1.6,an error occurs: Could not run ‘quantized::layer_norm’ with arguments from the ‘CPU’ backend. 'quantized::layer_norm’ is only available for these backends: [QuantizedCPU] My model codes are as bellow: class BertSoftmaxForNerQuantized(BertPreTrainedModel): def __init__(self, config): super(BertSoftmaxForNerQuantized, self).__init__(config) self.quant = torch.quantization.QuantStub() self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.loss_type = config.loss_type self.dequant = torch.quantization.DeQuantStub() def forward(self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, labels=None): # outputs = self.bert(input_ids = input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids) input_ids = self.quant(input_ids) attention_mask = self.quant(attention_mask) labels = self.quant(labels) outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask) sequence_output = outputs[0] # sequence_output = self.quant(sequence_output) sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) logits = self.dequant(logits) outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here if labels is not None: assert self.loss_type in ['lsr', 'focal', 'ce'] if self.loss_type == 'lsr': loss_fct = LabelSmoothingCrossEntropy(ignore_index=0) elif self.loss_type == 'focal': loss_fct = FocalLoss(ignore_index=0) else: loss_fct = CrossEntropyLoss(ignore_index=0) # Only keep active parts of the loss if attention_mask is not None: active_loss = attention_mask.contiguous().view(-1) == 1 active_logits = logits.contiguous().view(-1, self.num_labels)[active_loss] active_labels = labels.contiguous().view(-1)[active_loss] loss = loss_fct(active_logits, active_labels) else: loss = loss_fct(logits.contiguous().view(-1, self.num_labels), labels.view(-1)) outputs = (loss,) + outputs return outputs # (loss), scores, (hidden_states), (attentions) the evluate codes are as bellow: quantized_model.eval() quantized_model.qconfig = torch.quantization.default_qconfig print('quantized_model.qconfig',quantized_model.qconfig) torch.quantization.prepare(quantized_model,inplace=True) train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, data_type='train') args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu) train_sampler = SequentialSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset) train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size, collate_fn=collate_fn) with torch.no_grad(): for step, batch in enumerate(tqdm(train_dataloader,desc='post-training static quantization')): inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]} outputs = quantized_model(**inputs) loss = outputs[0] torch.quantization.convert(quantized_model, inplace=True) print_model_size(quantized_model) eval_output_dir = args.output_dir if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]: os.makedirs(eval_output_dir) eval_dataset = load_and_cache_examples(args, args.task_name,tokenizer, data_type='dev') args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu) eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset) eval_dataloader = 
DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size, collate_fn=collate_fn) for step, batch in enumerate(eval_dataloader): model.eval() with torch.no_grad(): inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]} if args.model_type != "distilbert": outputs = quantized_model(**inputs) is something wrong in my codes? or It is just the pytorch vision is too latest?
st183763
Solved by Vasiliy_Kuznetsov in post #2.
st183764
HUSTHY: Could not run ‘quantized::layer_norm’ with arguments from the ‘CPU’ backend. ‘quantized::layer_norm’ is only available for these backends: [QuantizedCPU]

Hi @HUSTHY, this error message means that a fp32 tensor is being passed into the quantized layernorm kernel. There are at least two options to address this:

1. Add a torch.quantization.QuantStub() to convert the tensor to int8 before it enters the quantized layernorm layer. For example,

# init
...
self.quant = torch.quantization.QuantStub()
self.layer_norm = torch.nn.LayerNorm(...)
...

# forward
...
x = self.quant(x)  # convert a fp32 tensor to int8
x = self.layer_norm(x)
...

2. Leave the layernorm in fp32. This can be done by setting qconfig = None to prevent a layer from getting quantized.
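A minimal sketch of option 2 (assuming an eager-mode model with a self.layer_norm submodule; the attribute name is illustrative): clearing the qconfig on that submodule before prepare() keeps it in fp32, so convert() skips it and the regular CPU layer_norm kernel is used.

import torch

model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model.layer_norm.qconfig = None        # this layer stays fp32 and is skipped by convert()
torch.quantization.prepare(model, inplace=True)
# ... run calibration data ...
torch.quantization.convert(model, inplace=True)
# Note: if the preceding layers produce quantized tensors, a DeQuantStub is still
# needed before the fp32 layer_norm so it receives a fp32 input.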
st183765
Hi @HUSTHY, did you solve this problem? I’m facing almost the same issue with layer_norm.
st183766
The PyTorch Quantization doc suggests that for efficient optimization, we must use a CPU that has AVX2 support or higher. If we consider transformer-class models trained/quantized and served on x86 architectures using FBGEMM as the quantization engine:

Does INT8 quantization using native PyTorch APIs take advantage of the AVX512 instruction set, and if so, in which version of PyTorch was this support introduced?
If AVX512 is supported, what should we expect in terms of performance impact on latency and throughput when moving from AVX2 to AVX512 using native PyTorch quantization support? Might there be any benchmarks which already show the quantified improvement?

Notes:
By native PyTorch quantization, we mean what is natively supported by PyTorch without export to ONNX Runtime.
I see a discussion here on Vec512 and AVX512, and couldn’t quite follow A) whether this support has already been introduced, and B) what the actual performance difference is.

Would be great to get some insight here.
st183767
Solved by Vasiliy_Kuznetsov in post #2.
st183768
Hi @saykarthik, please feel free to take a look at Add AVX512 support in ATen & remove AVX support by imaginary-person · Pull Request #56992 · pytorch/pytorch · GitHub, which is in progress and aims to add this support. There are performance numbers planned to be added to the PR before it’s ready to land (they might not be there yet).
st183769
Hello, I am trying to statically quantize the YOLOv5 model. A link to the repo is: GitHub - ultralytics/yolov5: YOLOv5 in PyTorch > ONNX > CoreML > TFLite. I am loading the model into an nn.Module container class in order to apply the quantization and dequantization stubs. The code looks like this:

class QuantizationModule(nn.Module):
    def __init__(self, model):
        super(QuantizationModule, self).__init__()
        self.model = model
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.model(x)
        x = self.dequant(x)
        return x

model = QuantizationModule(model)  # model here is loaded by the code in the repo's export.py
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = "qnnpack"
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)

img = torch.zeros(1, 3, 640, 640).to(device)
ts = torch.jit.trace(model_static_quantized, img)

However, there seems to be an issue:

TorchScript export failure: Could not run ‘aten::mul.Tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. ‘aten::mul.Tensor’ is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
CPU: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:5925 [kernel] CUDA: registered at /pytorch/build/aten/src/ATen/RegisterCUDA.cpp:7100 [kernel] MkldnnCPU: registered at /pytorch/build/aten/src/ATen/RegisterMkldnnCPU.cpp:284 [kernel] SparseCPU: registered at /pytorch/build/aten/src/ATen/RegisterSparseCPU.cpp:557 [kernel] SparseCUDA: registered at /pytorch/build/aten/src/ATen/RegisterSparseCUDA.cpp:655 [kernel] BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] Named: fallthrough registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel] AutogradOther: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradCPU: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradCUDA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradXLA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradNestedTensor: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] UNKNOWN_TENSOR_TYPE_ID: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradPrivateUse1: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradPrivateUse2: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] AutogradPrivateUse3: registered at /pytorch/torch/csrc/autograd/generated/VariableType_4.cpp:8707 [autograd kernel] Tracer: registered at /pytorch/torch/csrc/autograd/generated/TraceType_4.cpp:10612 [kernel] Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:250 [backend fallback] Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:1020 [kernel] VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] Do you have any idea why this might be? Is it an unsupported operation in the backend or is there something wrong with my code? I have to mention that forwarding the image through the quantized module does give me the right results. Any help is much appreciated, thanks!
st183770
I have to mention that forwarding the image through the quantized module does give me the right results. to clarify, does this mean that you are able to run forward on your quantized model, you just cannot trace it? Knowing the answer to this will help us debug further.
st183771
Hi @vladsb94, I checked out yolov5 and applied the code you provided (gist:d863f53c8809198b3e0a4fd2af1563a7 · GitHub). I’m not sure how you got things to work without additional changes. This model uses the SiLU activation, which does not have an int8 kernel. To make this model quantizable to int8, there are a couple of options:

1. add an int8 kernel for SiLU (we would happily accept a PR)
2. add a custom quantizeable module for SiLU (it can just do dequant → SiLU → quant; see the sketch after this list)
3. make yolov5 symbolically traceable and use FX graph mode quantization
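A minimal sketch of option 2 (an illustrative wrapper, not an official PyTorch module): run SiLU in fp32 by dequantizing before the activation and re-quantizing after, then swap it in for every nn.SiLU before prepare().

import torch
import torch.nn as nn

class QuantizableSiLU(nn.Module):
    """Runs SiLU in fp32 inside a quantized model: dequant -> SiLU -> quant."""
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.silu = nn.SiLU()
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        return self.quant(self.silu(self.dequant(x)))

def swap_silu(module):
    # Recursively replace nn.SiLU with the quantizable wrapper.
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, QuantizableSiLU())
        else:
            swap_silu(child)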
st183772
Hi @Vasiliy_Kuznetsov, thanks for taking the time to answer and debug the model. I have tried method 2; however, there are other operations within the model that require dequant and quant and I end up modifying too much of the code. Moreover, I tried method 3, but I still have some issues. Here is a snippet of the code used for export using FX graph mode quantization; I put it in export.py right after line 61:

model_to_quantize = copy.deepcopy(model)
model_to_quantize.eval()
qconfig = torch.quantization.get_default_qconfig('qnnpack')
qconfig_dict = {"": qconfig}
from torch.quantization.quantize_fx import prepare_fx, convert_fx
prepared_model = prepare_fx(model_to_quantize, qconfig_dict)
quantized_model = convert_fx(prepared_model)
yy = quantized_model(img)

but then, when it’s being traced with

ts = torch.jit.trace(quantized_model, img)

I get a very strange output which I find difficult to interpret. Also, the predictions performed with the quantized model are not correct; I get lots of small bounding boxes. It looks something like this (I won’t copy all of it here because it’s very large):

<eval_with_key_5>:7: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quantize_per_tensor_1 = torch.quantize_per_tensor(getitem, model_0_input_scale_0, model_0_input_zero_point_0, model_0_input_dtype_0); getitem = model_0_input_scale_0 = model_0_input_zero_point_0 = model_0_input_dtype_0 = None
<eval_with_key_5>:12: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quantize_per_tensor_2 = torch.quantize_per_tensor(getitem_1, model_0_input_scale_1, model_0_input_zero_point_1, model_0_input_dtype_1); getitem_1 = model_0_input_scale_1 = model_0_input_zero_point_1 = model_0_input_dtype_1 = None
...

Maybe you can help me with the source of these warnings or at least point me in the right direction. Thanks for your time!
st183773
Hi @Vasiliy_Kuznetsov , hope you are fine. I am facing the same issue, and there are some layers that are not being quantized, like SiLU,Batchnorm1d etc. Here you can see: QuantizationModule( (model): Sequential( (0): Sequential( (0): QuantizedConv2d(3, 40, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0) (1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (2): SiLU(inplace=True) (3): Sequential( (0): Sequential( (0): DepthwiseSeparableConv( (conv_dw): QuantizedConv2d(40, 40, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=40) (bn1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(40, 10, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(10, 40, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pw): QuantizedConv2d(40, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): Identity() ) (1): DepthwiseSeparableConv( (conv_dw): QuantizedConv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=24) (bn1): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(24, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(6, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pw): QuantizedConv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): Identity() ) ) (1): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): Conv2dSame(144, 144, kernel_size=(3, 3), stride=(2, 2), groups=144, bias=False) (bn2): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(144, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(6, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192) (bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), 
stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (2): InvertedResidual( (conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192) (bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): Conv2dSame(192, 192, kernel_size=(5, 5), stride=(2, 2), groups=192, bias=False) (bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288) (bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (2): InvertedResidual( (conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288) (bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( 
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) (3): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): Conv2dSame(288, 288, kernel_size=(3, 3), stride=(2, 2), groups=288, bias=False) (bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(288, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576) (bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (2): InvertedResidual( (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576) (bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (3): InvertedResidual( (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, 
zero_point=0, padding=(1, 1), groups=576) (bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (4): InvertedResidual( (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576) (bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) (4): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=576) (bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(576, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816) (bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (2): InvertedResidual( (conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, 
zero_point=0) (bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816) (bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (3): InvertedResidual( (conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816) (bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (4): InvertedResidual( (conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816) (bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): Conv2dSame(816, 816, kernel_size=(5, 5), stride=(2, 2), groups=816, bias=False) (bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(816, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): 
QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (2): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (3): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (4): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, 
zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (5): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) (6): Sequential( (0): InvertedResidual( (conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=1392) (bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(1392, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (1): InvertedResidual( (conv_pw): QuantizedConv2d(384, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn1): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act1): SiLU(inplace=True) (conv_dw): QuantizedConv2d(2304, 2304, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=2304) (bn2): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (act2): SiLU(inplace=True) (se): SqueezeExcite( (conv_reduce): QuantizedConv2d(2304, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (act1): SiLU(inplace=True) (conv_expand): QuantizedConv2d(96, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) ) (conv_pwl): QuantizedConv2d(2304, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (4): QuantizedConv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0) (5): QuantizedBatchNorm2d(1536, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (6): SiLU(inplace=True) ) (1): Sequential( (0): AdaptiveConcatPool2d( (ap): AdaptiveAvgPool2d(output_size=1) (mp): AdaptiveMaxPool2d(output_size=1) ) (1): 
Flatten(full=False) (2): BatchNorm1d(3072, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): Dropout(p=0.25, inplace=False) (4): QuantizedLinear(in_features=3072, out_features=512, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine) (5): ReLU(inplace=True) (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (7): Dropout(p=0.5, inplace=False) (8): QuantizedLinear(in_features=512, out_features=73, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine) ) ) (quant): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8) (dequant): DeQuantize() )

Now I can filter out the SiLU layers with the following:

for name, layer in model_static_quantized2.named_modules():
    if isinstance(layer, nn.SiLU):
        print(name, layer)

But how can I specifically quantize and dequantize the layers in the model? Thanks for any help.
st183774
Hi @Muhammad_Ali, hope you are doing well. Did you find a solution for this? Thanks in advance.
st183775
I tried to run the tutorial https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html and faced this problem. I ran it on a supercomputer with CUDA 10.1, PyTorch 1.4, and torchvision 0.5. The full output is:

Traceback (most recent call last):
  File "q.py", line 327, in
    torch.quantization.convert(myModel, inplace=True)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/quantization/quantize.py", line 316, in convert
    convert(mod, mapping, inplace=True)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/quantization/quantize.py", line 316, in convert
    convert(mod, mapping, inplace=True)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/quantization/quantize.py", line 317, in convert
    reassign[name] = swap_module(mod, mapping)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/quantization/quantize.py", line 339, in swap_module
    new_mod = mapping[type(mod)].from_float(mod)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 49, in from_float
    return super(ConvReLU2d, cls).from_float(mod)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 257, in from_float
    mod.bias is not None, mod.padding_mode)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 29, in __init__
    padding_mode=padding_mode)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 187, in __init__
    groups, bias, padding_mode)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 48, in __init__
    self.set_weight_bias(qweight, bias_float)
  File "/SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 195, in set_weight_bias
    w, b, self.stride, self.padding, self.dilation, self.groups)
RuntimeError: Didn't find engine for operation quantized::conv2d_prepack NoQEngine (operator() at /pytorch/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp:63)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f414c66d193 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x1e5c2ba (0x7f414e91c2ba in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #2: + 0x1e5d083 (0x7f414e91d083 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #3: + 0x33eb009 (0x7f414feab009 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #4: + 0x40e8547 (0x7f4150ba8547 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #5: + 0x6de077 (0x7f4197a4e077 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #6: + 0x6a95c4 (0x7f4197a195c4 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #7: + 0x2961c4 (0x7f41976061c4 in /SISDC_GPFS/Home_SE/songxy-jnu/hejy-jnu/.conda/envs/mjx/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #61: __libc_start_main + 0xf5 (0x7f41abb49445 in /lib64/libc.so.6)
st183776
This means you didn’t compile fbgemm, can you print torch.backends.quantized.supported_engines?
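For reference, checking that looks like this (a minimal sketch; what gets printed depends entirely on how your PyTorch build was compiled):

import torch

# on a build without FBGEMM this list will not contain 'fbgemm'
print(torch.backends.quantized.supported_engines)

# if a real engine is listed, select it before running convert(), e.g.
if 'fbgemm' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'fbgemm'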
st183777
Hello Jerry, I have a similar problem, and this is my error:

UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point
  Returning default scale and zero point "

Traceback (most recent call last):
  File "mobile_test.py", line 122, in
    data_loader, bn_list[0], net_config, device='cuda:0'))
  File "mobile_test.py", line 55, in prepare_subnet
    torch.quantization.convert(subnet, inplace=True)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 473, in convert
    convert_custom_config_dict=convert_custom_config_dict)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 508, in _convert
    custom_module_class_mapping)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 508, in _convert
    custom_module_class_mapping)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 508, in _convert
    custom_module_class_mapping)
  [Previous line repeated 3 more times]
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 509, in _convert
    reassign[name] = swap_module(mod, mapping, custom_module_class_mapping)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 534, in swap_module
    new_mod = mapping[type(mod)].from_float(mod)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 97, in from_float
    return super(ConvReLU2d, cls).from_float(mod)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 418, in from_float
    return _ConvNd.from_float(cls, mod)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 220, in from_float
    return cls.get_qconv(mod, activation_post_process, weight_post_process)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 191, in get_qconv
    mod.bias is not None, mod.padding_mode)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 74, in __init__
    padding_mode=padding_mode)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 376, in __init__
    False, _pair(0), groups, bias, padding_mode)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 74, in _init
    self.set_weight_bias(qweight, bias_float)
  File "/home/yuantian/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 384, in set_weight_bias
    w, b, self.stride, self.padding, self.dilation, self.groups)
RuntimeError: Didn't find engine for operation quantized::conv2d_prepack NoQEngine

I tried to print torch.backends.quantized.supported_engines, but the result is [None]. My system is Ubuntu.
st183778
And here is my code:

# fuse model
fuse_model(subnet)

# quantize model
subnet.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(subnet, inplace=True)
# Calibrate
print(torch.backends.quantized.supported_engines)
torch.quantization.convert(subnet, inplace=True)

# optimize
script_subnet = torch.jit.script(subnet)
script_subnet_optimized = optimize_for_mobile(script_subnet)
st183779
Well, qnnpack is disabled by default on x86 systems, per the info here: RuntimeError: Didn't find engine for operation quantized::conv_prepack NoQEngine · Issue #29327 · pytorch/pytorch · GitHub. That link also includes a workaround; did you try that? I can see that you posted there as well. More info here too: The Feature Request of Loading Quantized TorchScript Model on Windows with libtorch · Issue #31684 · pytorch/pytorch · GitHub. If you still can't solve this, could you open a new issue in the forum and include your env information?
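A minimal sketch of the check/workaround pattern from that issue, reusing the subnet from your snippet (whether qnnpack shows up at all depends on how your wheel was built, so treat the fallback below as an assumption):

import torch

supported = torch.backends.quantized.supported_engines
print(supported)  # e.g. ['none', 'fbgemm'] on a typical x86 desktop build

# qnnpack is only usable if it appears in that list; on an x86 box you would
# typically either pick fbgemm here, or run the convert step with a build
# that ships qnnpack (e.g. a newer wheel or a mobile-oriented build)
backend = 'qnnpack' if 'qnnpack' in supported else 'fbgemm'
subnet.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend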
st183780
Hello, I'm wondering what the fastest way to convert from bytes to a PyTorch tensor is. I've found the reverse here: https://pytorch.org/docs/stable/generated/torch.Tensor.byte.html Thanks
st183781
Solved by HDCharles in post #2 You can use .to(torch.float) or .to(torch.uint8) to go in either direction.
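A quick round-trip illustration (minimal sketch):

import torch

x = torch.rand(4) * 255       # float32 tensor
b = x.to(torch.uint8)         # same as x.byte(): float -> uint8 (truncates)
f = b.to(torch.float)         # same as b.float(): uint8 -> float32
print(b.dtype, f.dtype)       # torch.uint8 torch.float32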
st183782
Hello! Since it seems there is no implementation of quantized embeddings available through the functional interface, I want to ask if there is a workaround for this. To be specific: how can I use a functional embedding with quantization? Thanks!
st183783
Solved by HDCharles in post #2 The only thing we support for quantized embeddings are modules. I think the only real workaround is converting to embedding modules unfortunately.
st183784
The only thing we support for quantized embeddings are modules. I think the only real workaround is converting to embedding modules unfortunately.
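A minimal sketch of that workaround: replace the functional call with an nn.Embedding module so eager-mode quantization can swap it. The qconfig name below is the weight-only embedding qconfig from recent releases; treat it as an assumption for your version.

import torch
import torch.nn as nn

class WithEmbedding(nn.Module):
    def __init__(self, num_embeddings, dim):
        super().__init__()
        # a module instead of F.embedding(idx, self.weight)
        self.emb = nn.Embedding(num_embeddings, dim)

    def forward(self, idx):
        return self.emb(idx)

m = WithEmbedding(1000, 64)
# weight-only quantization config for embeddings (name may differ by version)
m.emb.qconfig = torch.quantization.float_qparams_weight_only_qconfig
torch.quantization.prepare(m, inplace=True)
torch.quantization.convert(m, inplace=True)
print(m.emb)  # expected to show a quantized Embedding module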
st183785
I'm wondering about the behavior of quantization observers, e.g. HistogramObserver, when the model is wrapped with DistributedDataParallel. Will the estimated histograms (or min/max values) be synchronized across multiple GPUs?
st183786
Solved by supriyar in post #2 The observer min/max values are stored as buffers in the module. So they get broadcast from the rank 0 machine when run with DDP. This ensures the values are synchronized across all machines.
st183787
The observer min/max values are stored as buffers in the module. So they get broadcast from the rank 0 machine when run with DDP. This ensures the values are synchronized across all machines.
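A minimal sketch of where those buffers live (the DDP wrap at the end assumes an already-initialized process group and a defined local_rank, so it is left commented out):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

# observer statistics and fake-quant scale/zero_point show up as buffers,
# which is what DDP broadcasts from rank 0 when broadcast_buffers=True (the default)
for name, buf in model.named_buffers():
    print(name, tuple(buf.shape))

# ddp_model = torch.nn.parallel.DistributedDataParallel(
#     model.cuda(), device_ids=[local_rank], broadcast_buffers=True)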
st183788
Thanks for clarifying. Does this mean only a subset of the samples (the ones on GPU 0) is used to estimate the quantization parameters?
st183789
I am building an object detection faster rcnn model using the quantized mobilenet v as backbone. So far I have managed to run quantization aware training by running the following functions before the training loop model.backbone.fuse_model() -- (1) model.backbone.qconfig = torch.quantization.default_qat_qconfig -- (2) model = torch.quantization.prepare_qat(model, inplace=True) -- (3) model.train() During training, I run the model on CUDA. After the training I run the following to convert the model back to cpu, quantize it, evaluate and save the model weights. trained_model.cpu() trained_model_quantized = convert(trained_model, inplace=**False**) -- (4) trained_model_quantized.eval() test_model(trained_model_quantized, dataloader_test) torch.save(trained_model_quantized.state_dict(), "trained_model_quantized.pt") On another session, I managed to load the model weights back in by creating a new model, running the quantization steps (1), (2), (3) and (4) on the model and calling model.load(torch.load(PATH_TO_SAVE_WEIGHTS_DICT)) When I attempt to test the model on the same dataset I tested with, I get nonsensical results, as if the model has not been trained before. One example is as follows below: [{'boxes': tensor([[2.2284e+02, 6.9949e-01, 2.5754e+02, 3.5039e+01], [1.9123e+02, 1.4250e+01, 2.4345e+02, 6.1587e+01], [1.8320e+01, 6.3590e-01, 5.3564e+01, 3.5422e+01], [2.9484e+02, 2.3753e-01, 3.2000e+02, 1.2851e+01], [2.0365e+02, 5.0668e+00, 2.5801e+02, 5.1970e+01], [3.0826e+01, 7.2213e-01, 6.6313e+01, 3.4504e+01], [1.1139e+01, 2.5874e+02, 6.4464e+01, 3.0664e+02], [2.2778e+02, 4.8784e+00, 2.8297e+02, 5.0020e+01], [1.1499e+02, 1.8001e+02, 1.6611e+02, 2.2816e+02], [1.8020e+02, 2.9482e+02, 2.0485e+02, 3.2000e+02], [1.4158e+02, 4.6937e-01, 1.6599e+02, 2.5700e+01], [1.6759e+02, 4.9485e-01, 2.1854e+02, 2.4960e+01], [0.0000e+00, 2.7912e+02, 9.5221e+00, 3.1460e+02], [1.2910e+02, 4.4345e-01, 1.7992e+02, 2.4563e+01], [9.0239e+01, 5.0788e-01, 1.4100e+02, 2.3526e+01], [5.1181e+01, 2.8405e+02, 1.0670e+02, 3.2000e+02], [3.7199e+01, 1.5634e+02, 9.1446e+01, 2.0341e+02], [5.2274e+01, 9.1355e-01, 1.0492e+02, 3.7529e+01], [2.4848e+02, 6.8777e-01, 2.8421e+02, 3.4754e+01], [2.6224e+01, 2.4715e-01, 5.1620e+01, 1.3250e+01], [0.0000e+00, 2.1927e+02, 7.0992e+01, 2.9383e+02], [6.2371e+01, 6.7974e-01, 2.1039e+02, 3.4605e+01], [1.4141e+02, 2.2420e-01, 1.6636e+02, 1.2484e+01], [0.0000e+00, 2.6777e-01, 1.2510e+01, 1.2869e+01], [2.6803e+02, 3.6809e-01, 3.2000e+02, 1.8864e+01], [1.2125e+02, 6.3132e-01, 1.5740e+02, 3.4166e+01], [3.7071e+01, 1.8162e+02, 9.0445e+01, 2.2925e+02], [1.0853e+02, 7.0764e-01, 1.4476e+02, 3.4172e+01], [0.0000e+00, 6.8656e-01, 7.1893e+01, 3.4567e+01], [0.0000e+00, 2.9469e+02, 1.2800e+01, 3.2000e+02], [0.0000e+00, 2.4723e+02, 1.7331e+01, 3.1682e+02], [0.0000e+00, 2.4664e+02, 1.6348e+01, 2.6562e+02], [2.3170e+02, 4.9811e-01, 2.8183e+02, 2.4400e+01], [0.0000e+00, 2.9816e+02, 4.5395e+00, 3.1654e+02], [1.2854e+02, 2.3414e-01, 1.5374e+02, 1.2642e+01], [1.3461e+01, 2.5192e-01, 3.9521e+01, 1.2805e+01], [1.7189e+02, 2.7412e+02, 2.0622e+02, 3.2000e+02], [0.0000e+00, 2.4050e+02, 1.0271e+01, 2.7527e+02], [2.9492e+02, 4.7859e-01, 3.2000e+02, 2.5724e+01], [0.0000e+00, 1.0180e+02, 2.5666e+01, 1.4887e+02], [7.5661e+01, 2.0977e+02, 1.2900e+02, 2.5681e+02], [2.5359e+02, 9.9282e-01, 3.2000e+02, 5.0407e+01], [0.0000e+00, 2.5901e+02, 4.0501e+01, 3.0663e+02], [2.0364e+02, 6.6663e+01, 2.5525e+02, 1.1442e+02], [2.0547e+02, 4.2242e+01, 2.5756e+02, 8.8740e+01], [0.0000e+00, 4.0732e-01, 3.8736e+01, 1.9843e+01], [1.1571e+02, 
2.8211e-01, 1.4147e+02, 1.3143e+01], [0.0000e+00, 2.2975e+00, 1.5405e+02, 1.0976e+02], [2.9816e+02, 2.2668e+01, 3.1766e+02, 5.5445e+01], [5.8469e-01, 2.7204e-01, 2.5698e+01, 1.3041e+01], [0.0000e+00, 5.1108e+01, 1.3950e+01, 7.7417e+01], [1.4076e+02, 2.8269e+02, 1.9360e+02, 3.2000e+02], [2.3352e+01, 1.3232e+02, 7.7268e+01, 1.7938e+02], [2.2999e+02, 1.6494e+00, 3.2000e+02, 7.5008e+01], [0.0000e+00, 6.3699e+01, 1.3418e+01, 8.9596e+01], [6.2804e+01, 2.1912e+02, 1.1618e+02, 2.6685e+02], [0.0000e+00, 2.4379e+02, 2.6497e+01, 2.9244e+02], [1.7839e+02, 2.8245e+02, 2.3241e+02, 3.2000e+02], [8.9713e+01, 3.6764e+00, 1.4348e+02, 5.1272e+01], [5.8155e+00, 6.0722e-01, 4.1412e+01, 3.4975e+01], [0.0000e+00, 1.6554e+02, 2.6498e+01, 2.1379e+02], [2.5622e+02, 5.2321e-01, 3.0743e+02, 2.5142e+01], [2.6498e+02, 2.0179e+01, 3.2000e+02, 5.5930e+01], [7.4018e+01, 1.1732e+02, 1.3037e+02, 1.6463e+02], [2.7450e+02, 7.7239e-01, 3.1118e+02, 3.5411e+01], [2.1570e+02, 2.7996e+01, 2.6940e+02, 7.5383e+01], [0.0000e+00, 2.5685e+02, 1.3539e+01, 2.8232e+02], [1.0324e+02, 2.8183e+02, 1.5529e+02, 3.2000e+02], [5.7554e+01, 9.1042e+00, 2.8737e+02, 2.3098e+02], [2.4072e+01, 2.4965e+02, 7.9286e+01, 2.9569e+02], [1.0386e+02, 2.5957e+01, 1.2860e+02, 5.1814e+01], [0.0000e+00, 2.1812e+02, 2.7382e+01, 2.6644e+02], [0.0000e+00, 2.6893e+02, 1.3575e+01, 2.9465e+02], [2.4422e+02, 2.8380e+02, 2.9725e+02, 3.2000e+02], [0.0000e+00, 2.6925e+02, 7.4092e+01, 3.2000e+02], [0.0000e+00, 5.4974e+01, 3.8673e+01, 1.0265e+02], [0.0000e+00, 3.8855e+01, 1.3879e+01, 6.4523e+01], [2.1896e+02, 1.5610e+02, 2.6900e+02, 2.0563e+02], [1.3965e+02, 9.2438e+01, 1.9291e+02, 1.4087e+02], [0.0000e+00, 1.4937e+02, 5.0214e+01, 2.4080e+02], [1.1467e+02, 2.3391e+00, 1.6681e+02, 4.8360e+01], [0.0000e+00, 2.5561e+01, 2.7101e+01, 7.3610e+01], [1.3790e+02, 2.0647e+02, 1.9230e+02, 2.5392e+02], [9.0749e+01, 1.6114e+02, 3.2000e+02, 3.1993e+02], [1.3997e+02, 7.7576e-01, 1.9317e+02, 3.8217e+01], [2.2809e+02, 1.0366e+02, 2.8295e+02, 1.5036e+02], [0.0000e+00, 7.9377e+00, 4.7207e+01, 1.0263e+02], [2.4093e+02, 1.1915e+02, 2.9302e+02, 1.6518e+02], [1.0355e+02, 2.9433e+02, 1.2756e+02, 3.1988e+02], [0.0000e+00, 1.6336e+02, 1.4555e+02, 3.2000e+02], [2.8921e+02, 2.8545e+01, 3.2000e+02, 4.7252e+01], [2.0485e+02, 2.8252e+02, 2.5715e+02, 3.2000e+02], [0.0000e+00, 1.7163e+01, 1.8367e+02, 2.4122e+02], [1.1222e+02, 4.1794e+01, 1.6605e+02, 8.9092e+01], [0.0000e+00, 2.2235e+02, 1.7401e+01, 2.9148e+02], [1.7722e+02, 2.7057e+01, 2.3399e+02, 7.3110e+01], [2.4246e+02, 1.7376e+01, 2.9653e+02, 6.3945e+01], [4.5475e+01, 2.7231e+02, 1.8952e+02, 3.2000e+02], [1.8212e+02, 2.6980e+02, 3.2000e+02, 3.2000e+02], [1.7683e+02, 6.9158e-01, 2.3024e+02, 3.7515e+01]], grad_fn=<StackBackward>), 'labels': tensor([14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14]), 'scores': tensor([0.0700, 0.0661, 0.0660, 0.0657, 0.0652, 0.0650, 0.0650, 0.0649, 0.0646, 0.0645, 0.0645, 0.0645, 0.0644, 0.0644, 0.0643, 0.0643, 0.0642, 0.0642, 0.0642, 0.0642, 0.0641, 0.0640, 0.0640, 0.0640, 0.0638, 0.0637, 0.0636, 0.0635, 0.0635, 0.0635, 0.0633, 0.0633, 0.0633, 0.0632, 0.0631, 0.0630, 0.0629, 0.0629, 0.0628, 0.0628, 0.0628, 0.0627, 0.0626, 0.0625, 0.0625, 0.0625, 0.0623, 0.0623, 0.0621, 0.0621, 
0.0620, 0.0619, 0.0619, 0.0618, 0.0618, 0.0618, 0.0617, 0.0616, 0.0616, 0.0616, 0.0616, 0.0616, 0.0615, 0.0615, 0.0614, 0.0614, 0.0613, 0.0613, 0.0612, 0.0612, 0.0612, 0.0612, 0.0611, 0.0611, 0.0610, 0.0610, 0.0610, 0.0610, 0.0609, 0.0608, 0.0607, 0.0607, 0.0607, 0.0606, 0.0606, 0.0605, 0.0604, 0.0604, 0.0604, 0.0603, 0.0603, 0.0602, 0.0602, 0.0602, 0.0602, 0.0601, 0.0601, 0.0601, 0.0600, 0.0599], grad_fn=<IndexBackward>)}] Sometimes I do not receive any output from the model which is weird. [{‘boxes’: tensor([], size=(0, 4), grad_fn=), ‘labels’: tensor([], dtype=torch.int64), ‘scores’: tensor([], grad_fn=)}] I am not sure why there might be a discrepancy when I am loading the same weights trained earlier. Appreciate any advice and guidance.
st183790
Hi @yichong96 To clarify: does the first evaluation (right after running convert) work fine? I would try to inspect the model parameters and state_dict using both approaches (i.e. directly running eval vs. re-creating the quantized model and loading the weights). If you have a minimal example that can reproduce this issue, we are happy to take a look.
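For instance, a quick way to diff the two state_dicts (a minimal sketch; model_a is the model evaluated right after convert and model_b is the re-created one after load_state_dict):

import torch

sd_a = model_a.state_dict()
sd_b = model_b.state_dict()

print(set(sd_a.keys()) ^ set(sd_b.keys()))  # keys present in only one of them
for k in sd_a.keys() & sd_b.keys():
    ta, tb = sd_a[k], sd_b[k]
    if isinstance(ta, torch.Tensor) and isinstance(tb, torch.Tensor):
        ta = ta.dequantize() if ta.is_quantized else ta
        tb = tb.dequantize() if tb.is_quantized else tb
        if ta.shape != tb.shape or not torch.equal(ta, tb):
            print('mismatch:', k)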
st183791
Hey thanks for your reply. I realised that I had extra classifier layers in the backbone which were not used since I took the whole mobilenet model. I managed to remedy it by just taking the .features portion of the original backbone network.
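For anyone hitting the same thing, the pattern ends up roughly like the backbone example in the torchvision finetuning tutorial (a sketch; out_channels=1280 is the MobileNetV2 value):

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# take only the convolutional feature extractor, not the classifier head
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280  # FasterRCNN needs to know the channel count

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=['0'], output_size=7,
                                sampling_ratio=2)
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)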
st183792
I tried to understand the computation flow of the PyTorch MobileNet V2 int8 model and want to know how the bias, scale and zero-point are applied in a fused convolution layer. For instance, the layer below has 4 params in the state_dict: weight, bias, scale, zero-point. The weight is quantized from FP32 to INT8 with its own scale 0.106 and zero point, and the scale 0.0693 is supposed to convert the accumulated result from FP32 to INT8 for the next layer. But how is the bias applied? Is the bias added to the accumulated result after the multiplication? These bias values look pretty small compared to the accumulated results.

('features.1.conv.0.0.weight', tensor([[[[ -0.1069, -0.1069, -0.1069], [ -0.1069, 0.0000, 0.8550], [ -0.1069, -0.1069, 0.1069]]], … [[[ 0.9619, -0.4275, -0.7482], [ 4.3820, -0.3206, -3.9545], [ 0.9619, -0.2138, -0.5344]]]], size=(32, 1, 3, 3), dtype=torch.qint8, quantization_scheme=torch.per_tensor_affine, scale=0.10687889158725739, zero_point=0)),
('features.1.conv.0.0.bias', tensor([-1.1895e-02, 8.7035e-01, -6.8617e-02, 3.8501e-01, 3.2915e-01, … 8.4619e-01, -1.9708e-01], requires_grad=True)),
('features.1.conv.0.0.scale', tensor(0.0693)),
('features.1.conv.0.0.zero_point', tensor(0)),
('features.1.conv.1.weight',
st183793
For Conv and Linear operations the bias stored in the state_dict is in FP32. For FBGEMM the bias is not quantized and we add the bias to the result of the final quantized matrix multiplication. For QNNPACK the bias is quantized to int32 (internally in the operator) and then added to the intermediate quantized output.
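Roughly, the arithmetic for one output value then looks like this (a conceptual sketch of the FBGEMM case; the weight zero_point is 0 as in the layer above, and per-channel scales and exact clamping details are ignored):

import numpy as np

def one_output(x_q, zp_x, s_x, w_q, s_w, bias_fp32, s_y, zp_y):
    # x_q: uint8 input patch, w_q: int8 filter of the same shape
    acc_int32 = np.sum((x_q.astype(np.int32) - zp_x) * w_q.astype(np.int32))
    # bring the int32 accumulator back to real values and add the FP32 bias
    acc_fp32 = acc_int32 * (s_x * s_w) + bias_fp32
    # requantize with the output scale/zero_point (e.g. the 0.0693 above)
    return np.uint8(np.clip(np.round(acc_fp32 / s_y) + zp_y, 0, 255))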
st183794
What is the best way to handle different training and inference backends? In this blog it says: static quantization must be performed on a machine with the same architecture as your deployment target. If you are using FBGEMM, you must perform the calibration pass on an x86 CPU; if you are using QNNPACK, calibration needs to happen on an ARM CPU. But I can't find anything about this in the official tutorial. How accurate is this statement? Is it true for both options (post-training calibration and quantization-aware training), or only for the calibration-based one?
st183795
Hi, I meant to reply here, but forgot. There are some subtle caveats, but I don’t think the author’s description is completely accurate here. So the quantization configuration is backend specific and the operator coverage may differ. This means that it is likely preferable to use the same quantization backend, i.e. when I do this, I use QNNPACK for both the training/conversion (on x86) and on the ARM target. Except for bugs, I would expect that there is actually less potential for variation in the computation results for the quantized model, as the machine precision deviations originating from floating point non-commutativity should go away for the quantized (part of the) computation. However, there is no firm requirement to apply absolutely the same. If there were, we’d be in lots of trouble with Quantization Aware Training… Best regards Thomas P.S.: I have an ongoing four-part series on quantizing an audio model. In the part 2 posted today 1 I am trying to cover my world-view of what is going on with quantization and in the next part we’ll actually do the quantization.
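P.P.S.: To make the "same backend on both sides" point concrete, a minimal sketch (model being your float model, calibration elided):

import torch

torch.backends.quantized.engine = 'qnnpack'       # on the x86 host, too
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(model, inplace=True)
# ... run calibration data through model here ...
torch.quantization.convert(model, inplace=True)
torch.jit.script(model).save('model_qnnpack.pt')  # then run this on the ARM target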
st183796
Thanks @tom, this helps a lot! One more thing I'd like to confirm: once you train, calibrate, and quantize the model with QNNPACK, is it still fine to evaluate the quantized model on an x86 machine? Can we expect an accurate score, or should we use the target device (target CPU architecture) for the evaluation run?
st183797
Hi, I have been trying to implement a quantized mask rcnn for a project I am working on but I am not having much success. I have been following the Torchvision Object Detection Finetuning Tutorial here. I have changed some of the code but the majority of it is still the same. I have implemented a class to wrap the model in a quantise/dequantise block and added to the get model function to quantise using a post static method. I have also tried to quantise just the backbone of the model with another class instead. However, I have encountered the same error shown below. Can someone please help me, where am I going wrong? My code can be found here 1. Error code: File "/home/harry/Downloads/pendan/PennFudanPed/quantised_mask_rcnn_model.py", line 196, in main train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) File "/home/harry/Downloads/pendan/PennFudanPed/engine.py", line 30, in train_one_epoch loss_dict = model(images, targets) File "/home/harry/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() takes 2 positional arguments but 3 were given Get Segmentation model function: def get_model_instance_segmentation(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) model.train() quant_mod = MQuantise(model) quant_mod.qconfig = torch.quantization.get_default_qconfig('qnnpack') torch.backends.quantized.engine = "qnnpack" model_static_quantized = torch.quantization.prepare_qat(quant_mod, inplace=True) return model_static_quantized Quantise model class: class MQuantise(torch.nn.Module): def __init__(self, model): super(MQuantise, self).__init__() self.quant = torch.quantization.QuantStub() self.dequant = torch.quantization.DeQuantStub() self.model = model def forward(self, x): x = self.quant(x) x = self.model(x) x = self.dequant(x) return x Quantise backbone class: class MQuantise_backbone(torch.nn.Module): def __init__(self, model): super(MQuantise_backbone, self).__init__() self.quant = torch.quantization.QuantStub() self.dequant = torch.quantization.DeQuantStub() self.backbone = model.backbone self.rpn = model.rpn self.head = model.roi_heads def forward(self, x): x = self.quant(x) features_quant = self.backbone(x) features = self.dequant(features_quant) proposals = self.rpn(features) head_results = self.head(features, proposals) return head_results
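For context, the training loop calls model(images, targets), so I suspect the TypeError simply comes from the wrapper's forward only accepting x. A sketch of a signature that at least matches the call (whether wrapping the whole detection pipeline in quant/dequant stubs like this actually works end to end is a separate question):

class MQuantise(torch.nn.Module):
    def __init__(self, model):
        super(MQuantise, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()
        self.model = model

    def forward(self, images, targets=None):
        # quantize each input image; the loss dict / detections stay in float
        images = [self.quant(img) for img in images]
        return self.model(images, targets)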
st183798
I have trained u2net model. I have traced it into torchscript model to run on mobile. It’s all good. Results are the same as on PC. But model takes almost 200MB of storage space so I decided to quantize it using static quantization. I have added Quant and Dequant stubs into model and used them in start of forward() method (Quant) and in end (Dequant). Then I get the following error: ... return torch.add(src_x, x2) RuntimeError: Could not run ‘aten::add.Tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. ‘aten::add.Tensor’ is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. As already discussed here 3 I added FloatFunctional object into the module class self.ff = FloatFunctional() and replaced my addition instruction return torch.add(src_x, x2) with new one return self.ff.add(src_x, x2). After that trace() passed with no errors and resulting model decreased in size (45MB instead of nearly 200MB). I just replaced my old model on mobile with this new one and Its output is very bad; inference time increased from ~7sec to ~18sec. Code for tracing: import torch from torch.utils.mobile_optimizer import optimize_for_mobile from lib import U2NET_full model_select = 'checkpoints/checkpoint.pth' checkpoint = torch.load(model_select) model = U2NET_full() model = model.to('cpu') if 'model' in checkpoint: model.load_state_dict(checkpoint['model']) else: model.load_state_dict(checkpoint) model.eval() input = torch.rand(1, 3, 448, 448) backend = "qnnpack" model.qconfig = torch.quantization.get_default_qconfig(backend) torch.backends.quantized.engine = backend model_static_quantized = torch.quantization.prepare(model, inplace=False) model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False) torchscript_model = torch.jit.trace(model_static_quantized, input) optimized_torchscript_model = optimize_for_mobile(torchscript_model) optimized_torchscript_model.save("optimized_torchscript_model.pt") Would you suggest any ways how to fix this?
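One thing worth flagging in the tracing script above: convert() runs immediately after prepare(), so the observers never see any real data (which also matches the "must run observer before calling calculate_qparams" warning mentioned in an earlier post). A hedged sketch of a calibration pass in between, where calib_loader is a placeholder for a loader of representative inputs:

model_static_quantized = torch.quantization.prepare(model, inplace=False)

# calibration: run representative inputs through the prepared model so the
# observers can record activation ranges before convert()
model_static_quantized.eval()
with torch.no_grad():
    for images in calib_loader:   # placeholder DataLoader of 1x3x448x448 batches
        model_static_quantized(images)

model_static_quantized = torch.quantization.convert(model_static_quantized,
                                                    inplace=False)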
st183799
I followed the documentation and replaced my torch operations with their FloatFunctional alternatives (add and cat in my case) and applied fusion to the fusable modules (the conv2d->bn->relu sequences in my case), but the results of my model are still wrong.
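For reference, the pattern described above (FloatFunctional for the skip additions plus Conv-BN-ReLU fusion before prepare) might look roughly like this as a standalone sketch with made-up module names:

import torch
import torch.nn as nn
from torch.nn.quantized import FloatFunctional

class Block(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(ch)
        self.relu = nn.ReLU(inplace=True)
        self.skip_add = FloatFunctional()

    def forward(self, x):
        out = self.relu(self.bn(self.conv(x)))
        return self.skip_add.add(out, x)   # instead of torch.add(out, x)

m = Block(16).eval()
# fusion has to happen on the float model, before prepare()
torch.quantization.fuse_modules(m, [['conv', 'bn', 'relu']], inplace=True)
m.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'   # assuming the build supports it
torch.quantization.prepare(m, inplace=True)
# ... run representative data through m here (calibration) ...
torch.quantization.convert(m, inplace=True)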