st183600
This is the eager mode API, not FX graph mode. For an introduction to the APIs we have, please take a look at Quantization — PyTorch 1.9.0 documentation.
st183601
@jerryzh168 Hi, is there a solution to fix this issue? RuntimeError: Could not run 'aten::thnn_conv2d_forward' with arguments from the 'QuantizedCPU' backend. I am facing the same issue.
efficientNet_unet(
  (pretrained): Module(
    (layer1): Sequential(
      (0): Conv2dSameExport(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (1): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (2): QuantizedReLU6(inplace=True)
      (3): Sequential(
        (0): DepthwiseSeparableConv(
          (conv_dw): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.5102088451385498, zero_point=58, padding=(1, 1), groups=32, bias=False)
          (bn1): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (act1): QuantizedReLU6(inplace=True)
          .......
  (quant): Quantize(scale=tensor([0.0375]), zero_point=tensor([57]), dtype=torch.quint8)
  (dequant): DeQuantize()
The log shows the issue happens at layer1. Could this be related to Conv2dSameExport?
st183602
I have converted a fp32 model to an 8-bit model using post-training static quantization. I tried to save the model using torch.save() and torch.jit.save(), but neither method is working. Then I tried to just save the state_dict, but when I load it, the results are not consistent. Is there any other way to save a quantized model? If you need any more info please let me know. Thanks in advance.
st183603
Loading/saving the state_dict is the preferred method. Please save the state_dict and then, before loading it into a quantized model, make sure to follow the quantization steps, e.g. fusion. Also see Loading of Quantized Model.
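For reference, a minimal sketch of that flow in eager mode. MyModel, its fuse_model() method and calib_loader are hypothetical names, not from the thread:

import torch

# save side: fuse, prepare, calibrate, convert, then save only the state_dict
model = MyModel().eval()
model.fuse_model()                                         # assumed fusion helper defined on the model
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
for x, _ in calib_loader:                                  # calibration pass
    model(x)
torch.quantization.convert(model, inplace=True)
torch.save(model.state_dict(), "quantized_sd.pth")

# load side: rebuild the exact same quantized structure first, then load the weights
model2 = MyModel().eval()
model2.fuse_model()
model2.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model2, inplace=True)
torch.quantization.convert(model2, inplace=True)           # structure only; params come from the state_dict
model2.load_state_dict(torch.load("quantized_sd.pth"))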
st183604
I did exactly what you said, but the results are different. I tried with fusing and without fusing, but it's just not working. I can see that all the zero points and scales are the same, and all the weights are the same, but the results are not the same.
st183605
@flash87c could you share a small repro of what you did so that we can take a look?
st183606
cc @Vasiliy_Kuznetsov have we solved the serialization issue? Maybe we can make a post here if that is the case.
st183607
You can use the torch.jit.save() API to save quantized models, just as is done in the PyTorch quantization tutorial: (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.9.1+cu102 documentation.
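A short sketch of that approach, assuming quantized_model is the output of torch.quantization.convert and example_input is a float CPU tensor (both names are placeholders):

import torch

scripted = torch.jit.script(quantized_model)     # torch.jit.trace(quantized_model, example_input) also works
torch.jit.save(scripted, "quantized_scripted.pt")

loaded = torch.jit.load("quantized_scripted.pt")
out = loaded(example_input)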
st183608
Hi, I have replaced torch.cat with self.ff.cat as I encountered an error with torch.cat. After replacing, I got another error, which is: NotImplementedError: Could not run 'quantized::cat' with arguments from the 'QuantizedCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'quantized::cat' is only available for these backends: [QuantizedCPU, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, Tracer, Autocast, Batched, VmapMode]. And so, I tried adding x.to("cpu") since QuantizedCPU is available, but I got yet another error pointing to x.to("cpu"): NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
st183609
MrOCW: NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, CUDA, Meta, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. We do not have any operators for the QuantizedCUDA backend currently, although the other path (x.to("cpu")) should work. Which PyTorch version are you using? Or are you using master? cc @HDCharles could you take a look?
st183610
Sorry, it points to x.to('cpu') because I did not reassign the tensor. x = x.to('cpu') helped solve the aten::empty_strided issue.
st183611
In this case, would it be better to dequant → torch.cat on CUDA → quant, or to do the forward pass with ff.cat on QuantizedCPU?
st183612
I feel running everything on CPU would probably be better; running one op on CUDA sounds a bit weird… Not sure about your goal though. If you find running torch.cat on CUDA is faster and you need to squeeze every bit of perf, then maybe that is OK as well.
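For reference, one way to express the two options in eager mode (a sketch only; xs is assumed to be a list of activations and dim=1 is just an example):

import torch

class CatFloat(torch.nn.Module):
    # option 1: dequant -> torch.cat in float (CPU or CUDA) -> quant
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, xs):
        xs = [self.dequant(x) for x in xs]
        return self.quant(torch.cat(xs, dim=1))

class CatQuantized(torch.nn.Module):
    # option 2: stay quantized and use FloatFunctional.cat (runs on QuantizedCPU after convert)
    def __init__(self):
        super().__init__()
        self.ff = torch.nn.quantized.FloatFunctional()

    def forward(self, xs):
        return self.ff.cat(xs, dim=1)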
st183613
Hi, I am performing QAT and encounter the error AttributeError: 'Sequential' object has no attribute 'Conv2d' when running the line:
torch.quantization.fuse_modules(basic_block, [["Conv2d", "BatchNorm2d", "ReLU"]], inplace=True)
where basic_block is:
Sequential(
  (0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
  (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): Sequential(
    (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(2, 2), groups=256)
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
  )
)
Any suggestion on what I should modify to correct the error? Thanks a lot.
st183614
Solved by ynjiun_wang in post #2
st183615
Change torch.quantization.fuse_modules(basic_block, [["Conv2d", "BatchNorm2d", "ReLU"]], inplace=True) to torch.quantization.fuse_modules(basic_block, [["0", "1", "2"]], inplace=True). Then it works.
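The point is that fuse_modules takes the attribute names of the children inside the module you pass in (for an nn.Sequential those are "0", "1", "2", …), not the class names. A sketch, assuming the same basic_block as above, that also fuses the conv/bn/relu inside the nested Sequential via dotted paths:

import torch

torch.quantization.fuse_modules(
    basic_block,
    [["0", "1", "2"],        # outer conv + bn + relu
     ["3.0", "3.1", "3.2"]], # conv + bn + relu inside the nested Sequential
    inplace=True,
)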
st183616
I am trying quantization aware training on a GAN model and am quantizing the generator part. I use two approaches:
1. I quantize the entire network, including all the layers in my network.
2. I quantize only the first conv layer (conv2d, relu) and leave the rest of the layers as is. (I do set qconfig accordingly during training: None for the ones I'm not quantizing and qnnpack for the first conv layer.)
However, my model size is the same in both cases. Could anyone tell me how that is possible?
st183617
Hi all, I’m fairly new to model optimization and I’ve tried ONNX PTQ methods. However, I am required to explore QAT for YOLO pytorch models and I’m not sure what to start with. Should I use Eager Mode or FX Graph Mode Quantization? Which of them is easier and more general to different models? Thanks in advance!
st183618
What is the difference between using torch.quantization.quantize_qat and following the available tutorials, i.e. fusing modules, adding quant and dequant, etc.?
st183619
Hi @MrOCW, eager mode quantization is manual, as in you would have to change the modeling code to add quants/dequants and specify fusions. FX graph mode quantization is automatic, but it requires the model to be symbolically traceable. Usually for new models I'd recommend trying FX graph mode quantization first. If the model is not symbolically traceable, then you would have to either make it symbolically traceable (difficulty depends on the model) or use eager mode. I remember looking at yolov3 a few months ago and that model was challenging to make symbolically traceable.
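For context, a minimal FX graph mode post-training quantization sketch (API as of roughly the 1.9/1.10 releases; float_model and calib_loader are placeholder names):

import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

qconfig_dict = {"": get_default_qconfig("fbgemm")}
prepared = prepare_fx(float_model.eval(), qconfig_dict)   # symbolic trace + observer insertion
for x, _ in calib_loader:
    prepared(x)                                           # calibration
quantized = convert_fx(prepared)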
st183620
Guess I'll work with eager for now since it seems to be more general, just that more work has to be done. However, I am not too sure about where to add the Quant and DeQuant modules. Currently I'm working with a model that has Conv2d-BatchNorm2d-SiLU, and since SiLU is not supported… do I fuse conv+bn and then add a quant and dequant around every such module? e.g. Quant → ConvBn → DeQuant → SiLU ===> Quant → ConvBn → DeQuant → SiLU ===> …
st183621
Yes, quant/dequant control which areas of the model you want to be in which dtype (torch.float vs torch.quint8). Quant → ConvBn → DeQuant → SiLU ===> Quant → ConvBn → DeQuant → SiLU — yep, that sounds right. There is some example code here (Quantization — PyTorch 1.9.1 documentation) with similar toy examples.
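A sketch of that pattern for one block (module names here are made up, not from the YOLO code):

import torch

class ConvBnSiLU(torch.nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(cin, cout, 3, padding=1, bias=False)
        self.bn = torch.nn.BatchNorm2d(cout)
        self.dequant = torch.quantization.DeQuantStub()
        self.act = torch.nn.SiLU()          # stays in float

    def forward(self, x):
        x = self.quant(x)                   # float -> quint8 after convert
        x = self.bn(self.conv(x))           # conv+bn can be fused and quantized
        x = self.dequant(x)                 # back to float
        return self.act(x)                  # SiLU runs in float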
st183622
For my case, I just used Quant → model → Dequant with the SiLUs inside the model, and it still trains but with terrible accuracy. Is the SiLU being included inside the Quant/Dequant region the main reason?
st183623
QAT accuracy can depend on many things, including when you turn on fake_quant and how many batches you train the model for. Are you training from scratch or from pretrained weights? Also, could you paste the modified model here?
st183624
@jerryzh168 Here's a YOLOX_small model prepared for QAT. The YOLOX small pretrained weights were loaded first, then I applied fuse_modules with ["conv", "bn"] and then prepare_qat. QAT model: https://drive.google.com/file/d/1rUtNvHKzkR5mum3n9KHSwe11Ym-oQNR6/view?usp=sharing Converted model: https://drive.google.com/file/d/1q_OOxc3jmdowLXXU9P7bANYHRbzVYJAi/view?usp=sharing
st183625
The structure of the prepared and converted model looks OK, I think. How do you do QAT? Are you turning on the fake_quant at the very beginning? It's typically suggested to turn on fake_quant after a few batches so that the observers can be populated with correct stats first, I think.
st183626
Sorry, how do I control that? Currently, I am doing Quant → fused model → Dequant and then just running the training loop without any edits. Which way is recommended?
1. Train without QAT, load the trained weights, fuse and add quant/dequant, then repeat training.
2. Start QAT on my custom data right from the official pretrained weights.
What are some hyperparameters I should take note of when performing QAT? (e.g. epochs, learning rate, etc.)
st183627
OK, I tried training without QAT, then loading the pretrained model and training with QAT for 1 epoch, and the mAP is close to the non-QAT model. Is this workflow the "correct" way? Am I not supposed to train with QAT from scratch, since doing so doesn't increase my mAP from 0 at all?
st183628
The flow should be:
1. prepared = prepare_qat(model, ...)
2. Disable fake_quant, but enable observation: prepared.apply(torch.ao.quantization.disable_fake_quant); prepared.apply(torch.ao.quantization.enable_observer)
3. Run a few epochs
4. Enable fake_quant and do QAT: prepared.apply(torch.ao.quantization.enable_fake_quant)
5. Train a few epochs
6. Convert to a quantized model
How many epochs we train before turning on fake_quant and after turning on fake_quant are the hyperparameters we can experiment with.
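A minimal sketch of that flow (function names from torch.ao.quantization as in the steps above; train() and the epoch counts are stand-ins for your own training loop):

import torch

model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
prepared = torch.quantization.prepare_qat(model)

# observe only: collect stats without fake-quantizing
prepared.apply(torch.ao.quantization.disable_fake_quant)
prepared.apply(torch.ao.quantization.enable_observer)
train(prepared, epochs=observe_epochs)       # hypothetical training call

# turn fake_quant on and continue QAT
prepared.apply(torch.ao.quantization.enable_fake_quant)
train(prepared, epochs=qat_epochs)           # hypothetical training call

# convert for inference
prepared.eval()
quantized = torch.quantization.convert(prepared)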
st183629
We are attempting to replicate the logic of dynamic quantization for LSTMs. While the observer scheme for the weights is clear from the default_qconfig for dynamic quantization, the scheme used for the activations isn't clear to me. I have attempted to use the MinMaxObserver, MovingAverageMinMaxObserver, and the HistogramObserver, but the results I obtain do not match the ones I get from using the DynamicQuantizedLSTM directly. I would appreciate any input on the scheme used for activations. P.S. The overarching objective of this exercise is to be able to implement static quantization and QAT for LSTMs, which currently do not exist in PyTorch, and this would be a very welcome feature.
st183630
Hi all, I am new to PyTorch as well as quantization. I want to quantize a CNN model to custom bitwidths. Could anyone provide me a link to source code so that I can get some idea? Thank you all.
st183631
We do not support 4-bit currently, but contributions are welcome. Do you just want to try quantization aware training, or do you want to run 4-bit kernels, etc.?
st183632
Thank you for your time, Jerry. I want to perform quantization aware training for a CNN model at lower bit precision than int8, and I want to know the exact procedure. I found some articles on the internet, but they mostly explain how to calculate the scale and zero point and how to quantize and dequantize. I want to know how to perform quantization aware training.
st183633
Hi @Kai123, you can check this thread. Currently, there is also pytorch-quantization by NVIDIA, where you can change the number of bits.
st183634
Kai123: I want to perform quantization aware training for a CNN model at lower bit precision than int8 … I want to know how to perform quantization aware training. If you just need to do QAT, then you can try setting quant_min and quant_max in the FakeQuantize module. You can find the way we configure FakeQuantize here: pytorch/qconfig.py at master · pytorch/pytorch · GitHub. We just need to configure FakeQuantize with quant_min, quant_max for 4-bit, e.g. -8, 7, and then define the qconfig based on that.
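A sketch of such a 4-bit qconfig (simulation only, since there are no 4-bit kernels; the observer choice and ranges here are just one reasonable option, not an official recipe):

import torch
from torch.quantization import FakeQuantize, QConfig
from torch.quantization.observer import MovingAverageMinMaxObserver

act_fq = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=0, quant_max=15,                 # 4-bit unsigned range for activations
    dtype=torch.quint8, qscheme=torch.per_tensor_affine)

weight_fq = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    quant_min=-8, quant_max=7,                 # 4-bit signed range for weights
    dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)

model.qconfig = QConfig(activation=act_fq, weight=weight_fq)
# then torch.quantization.prepare_qat(model) as usual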
st183635
Dear Pytorch community, for my research I am recreating the convolution function with my own code. However, my manual conv2d result cannot match what's given by torch.nn.quantized.functional. I appreciate any advice on how a convolution works on 3-dimensional inputs and kernels under quantized parameters. For a specific example, I've been working on the first conv layer of a Resnet50 model. I have quantized input images of (1, 3, 224, 224). They are padded so that the input to the conv layer is (1, 3, 230, 230). The convolution has 64 kernels of (3, 7, 7) and a stride of 2, which gives an output of (64, 112, 112). Basically, I'm trying to compare my result, manual_res, with the element output_ref from the same location in the output tensor at every iteration of the kernel. The indexing shouldn't be a problem here because I tried qF.conv2d on the two tensors and it is able to match output_ref. Here's what I've written as the convolution loop:

import torch.nn.functional as F
from torch.nn.quantized import functional as qF

conv1_pad = (3, 3, 3, 3)
after_pad = F.pad(after_quant, conv1_pad, "constant", 0)  # input to the conv layer
print("after pad: ", after_pad.shape)  # 1, 3, 230, 230
my_conv1_result = torch.zeros(after_conv1.shape)

for c in range(0, conv1_weight.shape[0]):  # 64 output channels
    kernel = conv1_weight[c]  # 3x7x7 kernel
    target_y = 0  # index in result tensor
    for start_y in range(0, after_pad.shape[2] - 7, 2):  # 112
        target_x = 0
        # print(start_y, end=", ")
        for start_x in range(0, after_pad.shape[3] - 7, 2):  # 112
            input_tensor = after_pad[0, :, start_x:start_x + 7, start_y:start_y + 7]  # 3x7x7
            manual_res = torch.tensor(0, dtype=torch.int8)  # uint8 for activation
            output_ref = after_conv1[0, c, target_x, target_y]
            # print(input_tensor.int_repr())
            # print(kernel.int_repr())
            print("output_ref:", output_ref.int_repr(), end=" ===== ")
            for i in range(kernel.shape[0]):  # 3
                for j in range(kernel.shape[1]):  # 7
                    for k in range(kernel.shape[2]):  # 7
                        #####################
                        # Multiply and accumulate
                        temp = (input_tensor.int_repr()[i, j, k] - input_tensor.q_zero_point()) * \
                               (kernel.int_repr()[i, j, k] - kernel.q_zero_point())
                        manual_res = manual_res + temp
                        #####################
            manual_res = conv1.zero_point + (manual_res * (input_tensor.q_scale() * kernel.q_scale() / conv1.scale)).round()
            manual_res = 255 if manual_res > 255 else 0 if manual_res < 0 else manual_res
            print("manual_res:", manual_res, end=" ===== ")
            my_conv1_result[0, c, target_x, target_y] = manual_res
            qf_conv_res = qF.conv2d(input_tensor.reshape((1, 3, 7, 7)), kernel.reshape((1, 3, 7, 7)),
                                    bias=torch.tensor([0], dtype=torch.float),
                                    scale=conv1.scale, zero_point=conv1.zero_point)  # conv1 is the first conv layer with its scale and zp
            print("qF.conv2d ref:", qf_conv_res.int_repr())
            target_x += 1
        target_y += 1

The printout shows a mismatch between my manual_res and output_ref / qF.conv2d ref:
output_ref: tensor(66, dtype=torch.uint8) ===== manual_res: tensor(75.) ===== qF.conv2d ref: tensor([[[[66]]]], dtype=torch.uint8)
output_ref: tensor(66, dtype=torch.uint8) ===== manual_res: tensor(79.) ===== qF.conv2d ref: tensor([[[[66]]]], dtype=torch.uint8)
output_ref: tensor(66, dtype=torch.uint8) ===== manual_res: tensor(64.) ===== qF.conv2d ref: tensor([[[[66]]]], dtype=torch.uint8)
I think the problem lies in the handling of scales and zero_points during the loop. I am referring to this paper: https://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf
It states (figure in the original post) that q3 = Z3 + M * sum((q1 - Z1) * (q2 - Z2)), with M = S1*S2/S3. In my case, q3 would be manual_res; Z1-3 and S1-3 are the input's, kernel's and conv1's zero_points and scales; and q1, q2 are elements from the input and kernel. I am desperate to know why my implementation is not correct. (A side note: I tried my implementation with random 2-dimensional tensors with the same handling of scales and zero_points, and it seems to work fine.) Again, thanks for your help!
st183636
I think what we have is this (copying from our internal design doc):
z = qconv(wq, xq)  # z is at scale (weight_scale*input_scale) and at int32
# Convert the bias to int32 and perform a 32-bit add
bias_q = round(bias / (input_scale * weight_scale))
z_int = z + bias_q
# rounding to 8 bits
z_out = round(z_int * (input_scale * weight_scale) / output_scale) + z_zero_point
z_out = saturate(z_out)
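A plain-Python restatement of that pseudocode (a sketch, not the actual fbgemm/qnnpack kernel; it follows the usual affine convention of adding the output zero point, which the code in the question also uses):

import numpy as np

def requantize(acc_int32, bias_fp32, input_scale, weight_scale, out_scale, out_zero_point):
    # fold the float bias into the int32 accumulator
    bias_q = np.round(bias_fp32 / (input_scale * weight_scale))
    z_int = acc_int32 + bias_q
    # rescale to the output qparams, add the output zero point, clamp to uint8
    z_out = np.round(z_int * (input_scale * weight_scale) / out_scale) + out_zero_point
    return np.clip(z_out, 0, 255).astype(np.uint8)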
st183637
PyTorch quantization converts the INT32 MAC value into 8 bits in the backend. How can we access this compute layer's INT32 value prior to conversion?
st183638
unfortunately int32 value for fbgemm/qnnpack is not accessible from outside. Why do you need this?
st183639
I need to impose some limits on the accumulated values. How would that be feasible in the quantized model format?
st183640
can you describe the whole flow? Are you trying to impose the limit on the kernel level or when people train the model? If you need a kernel that imposes some limits on the int32 value, then I think the best thing to do is to reimplement the kernel (possibly by calling fbgemm implementations if you need high performance: GitHub - pytorch/FBGEMM: FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/)
st183641
Actually, you might be able to modify the operator implementation a little bit and implement your own version of ops like quantized::conv: pytorch/qconv.cpp at master · pytorch/pytorch · GitHub
st183642
Is there any sort of support for a method of getting the continuous ranked probability score (CRPS) for a mixture density network quickly in PyTorch, to use as a loss function? Thanks
st183643
Solved by jerryzh168 in post #2
st183644
This does not look like it is related to quantization; could you post in a different sub-category? Thanks
st183645
I have a model which is trained in Kaldi and I'm able to load the model parameters in PyTorch as tensors. I am trying to perform post-training quantization of the weight matrices and I've tried to use the quantize_per_tensor function. For example:
a = torch.rand(10)
b = torch.rand(10)
scale_a = (max_a - min_a) / (qmax - qmin)
zpt_a = qmin - min_a / scale_a
scale_b = (max_b - min_b) / (qmax - qmin)
zpt_b = qmin - min_b / scale_b
a_quant = torch.quantize_per_tensor(a, scale_a, -127, torch.qint8)
b_quant = torch.quantize_per_tensor(b, scale_b, -127, torch.qint8)
a_quant + b_quant
When I add the 2 quantized tensors, I get the below error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, Named, Autograd, Profiler, Tracer].
It seems that I can convert fp32 to int8 but not perform any integer arithmetic. Any help as to how to use this will be appreciated. Thanks!
st183646
@aprasad: If you are just looking to quantize the weights only and want to keep activations in fp32, please look into dynamic quantization. It does exactly that, and it will calculate the scales/zero_points automatically for you as well.
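A one-line sketch of that, assuming float_model (a placeholder name) contains nn.Linear/nn.LSTM layers:

import torch

quantized_model = torch.quantization.quantize_dynamic(
    float_model, {torch.nn.Linear, torch.nn.LSTM}, dtype=torch.qint8)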
st183647
BTW, if you want to use add (or other such operations) on quantized tensors, you can use it in the following way:
qfn = torch.nn.quantized.QFunctional()
qfn.add(a_quant, b_quant)
https://pytorch.org/docs/stable/quantization.html#qfunctional
st183648
@dskhudia: Thank you for the reply. The quantized functional works. I want to quantize both the weights and the activations and run inference in Pytorch with my custom class. Is there a function that calculates the scale/zero_points automatically for that?
st183649
@dskhudia: I tried using the qfn to multiply 2 matrices and I get the below error. For a of shape (1, 10) and b of shape (10, 20), if I do qfn.mul(a_quant, b_quant) I get:
r = ops.quantized.mul(x, y, scale=self.scale, zero_point=self.zero_point)
RuntimeError: The size of tensor a (10) must match the size of tensor b (20) at non-singleton dimension 1
Is there any function for matrix multiplication of quantized tensors?
st183650
Hi, I have a question about the quantization scheme. When I do xq = torch.quantize_per_tensor(x, scale=0.25, zero_point=15, dtype=torch.quint8), the result is a "quantization_scheme=torch.per_tensor_affine" tensor. I wonder how I can change the scheme to get a per_tensor_symmetric tensor?
st183651
We do not actually have a per_tensor_symmetric tensor in the backend, since per_tensor_symmetric can be represented by a per_tensor_affine tensor; e.g., a torch.per_tensor_symmetric, torch.qint8 tensor with a given scale would be the same as a torch.per_tensor_affine, torch.qint8 tensor with the same scale and a zero_point of 0. You can use per_tensor_symmetric in observers though.
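A sketch of that: let a symmetric observer compute the qparams (zero_point 0 for qint8) and then quantize with the resulting per_tensor_affine parameters; x stands for the float tensor from the question:

import torch
from torch.quantization.observer import MinMaxObserver

obs = MinMaxObserver(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)
obs(x)                                         # feed the tensor through the observer
scale, zero_point = obs.calculate_qparams()
xq = torch.quantize_per_tensor(x, float(scale), int(zero_point), torch.qint8)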
st183652
Thanks for your reply. I’m checking histogram observer recently. But I wonder, is the norm computation correct? Why there are 3 norms computed ? jerryzh168: We do not have per_tensor_symmetric tensor in the backend actually since per_tensor_symmetric can be represented by per_tensor_affine tensor, e.g. a torch.per_tensor_symmetric, torch.qint8 tensor with a scale would be the same as a torch.per_tensor_symmetric, torch.qint8 with the same scale and a zero_point of 0. You can use per_tensor_symmetric in observers though. def _compute_quantization_error(self, next_start_bin: int, next_end_bin: int): r""" Compute the quantization error if we use start_bin to end_bin as the min and max to do the quantization. """ # print('at _compute_quantization_error, next_start, next_end', next_start_bin, next_end_bin) bin_width = (self.max_val.item() - self.min_val.item()) / self.bins # compute new bin_width dst_bin_width = bin_width * (next_end_bin - next_start_bin + 1) / self.dst_nbins # divided by 256, quantized range is [-128, 127] if dst_bin_width == 0.0: return 0.0 src_bin = torch.arange(self.bins).to(self.device) # distances from the beginning of first dst_bin to the beginning and # end of src_bin src_bin_begin = (src_bin - next_start_bin) * bin_width src_bin_end = src_bin_begin + bin_width # which dst_bins the beginning and end of src_bin belong to? dst_bin_of_begin = torch.clamp(src_bin_begin // dst_bin_width, 0, self.dst_nbins - 1) dst_bin_of_begin_center = (dst_bin_of_begin + 0.5) * dst_bin_width dst_bin_of_end = torch.clamp(src_bin_end // dst_bin_width, 0, self.dst_nbins - 1) dst_bin_of_end_center = (dst_bin_of_end + 0.5) * dst_bin_width density = self.histogram / bin_width norm = torch.zeros(self.bins) delta_begin = src_bin_begin - dst_bin_of_begin_center delta_end = dst_bin_width / 2 # print('type of delta_begin, delta_end', type(delta_begin), type(delta_end)) delta_begin= delta_begin.to(self.device) ## compute norm from 3 parts: begin of each new bin, center of each new bin, end of each new bin norm += self._get_norm(delta_begin, torch.ones(self.bins) * delta_end, density) norm += (dst_bin_of_end - dst_bin_of_begin - 1) * self._get_norm( torch.tensor(-dst_bin_width / 2), torch.tensor(dst_bin_width / 2), density ) dst_bin_of_end_center = ( dst_bin_of_end * dst_bin_width + dst_bin_width / 2 ) delta_begin = -dst_bin_width / 2 delta_end = src_bin_end - dst_bin_of_end_center norm += self._get_norm(torch.tensor(delta_begin), delta_end, density) return norm.sum().item()
st183653
I am fusing the layers for quantization. This is the part of my model which I am going to fuse. My method is to use named_modules to go through each submodule and check if it is a Conv2d, BatchNorm or ReLU.
(scratch): Module(
  (layer1_rn): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (layer2_rn): Conv2d(40, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (layer3_rn): Conv2d(112, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (layer4_rn): Conv2d(320, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (activation): ReLU()
  (refinenet4): FeatureFusionBlock_custom(
    (out_conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (resConfUnit1): ResidualConvUnit_custom(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (activation): ReLU()
      (skip_add): FloatFunctional(
        (activation_post_process): Identity()
      )
    )
    (resConfUnit2): ResidualConvUnit_custom(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (activation): ReLU()
      (skip_add): FloatFunctional(
        (activation_post_process): Identity()
      )
    )
    (skip_add): FloatFunctional(
      (activation_post_process): Identity()
    )
  )
I iterate with:
for name, module in m.named_modules():
    print(name, module)
I found that it missed scratch.refinenet4.resConfUnit1.activation. The same thing happens for the activation in resConfUnit2. Is this a bug? The output is:
scratch.layer1_rn Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
scratch.layer2_rn Conv2d(40, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
scratch.layer3_rn Conv2d(112, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
scratch.layer4_rn Conv2d(320, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
scratch.activation ReLU()   <-- here
scratch.refinenet4 FeatureFusionBlock_custom( (out_conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (resConfUnit1): ResidualConvUnit_custom( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (resConfUnit2): ResidualConvUnit_custom( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) )
scratch.refinenet4.out_conv Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
scratch.refinenet4.resConfUnit1 ResidualConvUnit_custom( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) )
> scratch.refinenet4.resConfUnit1.conv1 Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
> scratch.refinenet4.resConfUnit1.conv2 Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
> scratch.refinenet4.resConfUnit1.skip_add FloatFunctional( (activation_post_process): Identity() )
scratch.refinenet4.resConfUnit1.skip_add.activation_post_process Identity()
scratch.refinenet4.resConfUnit2 ResidualConvUnit_custom( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) )
scratch.refinenet4.resConfUnit2.conv1 Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
scratch.refinenet4.resConfUnit2.conv2 Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
scratch.refinenet4.resConfUnit2.skip_add FloatFunctional( (activation_post_process): Identity() )
scratch.refinenet4.resConfUnit2.skip_add.activation_post_process Identity()
scratch.refinenet4.skip_add FloatFunctional( (activation_post_process): Identity() )
scratch.refinenet4.skip_add.activation_post_process Identity()
scratch.refinenet3 FeatureFusionBlock_custom( (out_conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)) (resConfUnit1): ResidualConvUnit_custom( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (resConfUnit2): ResidualConvUnit_custom( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (activation): ReLU() (skip_add): FloatFunctional( (activation_post_process): Identity() ) ) (skip_add): FloatFunctional( (activation_post_process): Identity() ) )
st183654
Can you post your code that does the fusion? It is virtually impossible to tell what is going wrong when looking at the model only.
st183655
named_modules skips duplicated modules I think, if you want to see all modules you can use named_modules(remove_duplicate=False)
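For example (a sketch on the model above):

# shows every registered module, including a module instance that is shared and
# therefore reachable under more than one name (named_modules de-duplicates by default)
for name, module in m.named_modules(remove_duplicate=False):
    print(name, type(module).__name__)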
st183656
I wrote logs to TensorBoard using PyTorch but want to generate customized visualizations. I know TensorBoard provides links to download files, but I want to know whether it is possible to directly read TensorBoard logs using PyTorch without installing the TensorBoard library.
st183657
Solved by jerryzh168 in post #2
st183658
This is probably mislabeled; are you planning to post in tensorboard - PyTorch Forums?
st183659
Will mark as solved since this is not related to quantization; please make a new post in the tensorboard subcategory.
st183660
Hi, I've tried to export a simple model using ONNX export and faced an error that asks me to report a bug.
import torch
import onnx
import io
import torch._C as _C
OperatorExportTypes = _C._onnx.OperatorExportTypes

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.cnn = torch.nn.Conv2d(1, 1, 1)

    def forward(self, x):
        x = self.quant(x)
        return self.cnn(x)

model = Net()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.backends.quantized.engine = 'fbgemm'
model = torch.quantization.prepare(model, inplace=False)
torch.quantization.convert(model, inplace=True)
print(model)

inputs = torch.ones((1, 10, 224, 224))
with torch.no_grad():
    with io.BytesIO() as f:
        torch.onnx.export(
            model,
            inputs,
            f,
            operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
            # verbose=True,  # NOTE: uncomment this for debugging
            # export_params=True,
        )
        onnx_model = onnx.load_from_string(f.getvalue())
The printed model is:
Net(
  (quant): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
  (cnn): QuantizedConv2d(1, 1, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
and the export fails with:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-42-9f9e68519c44> in <module>
    27 model, 28 inputs, ---> 29 f, 30 # operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK, 31 # verbose=True, # NOTE: uncomment this for debugging
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
    170 do_constant_folding, example_outputs, 171 strip_doc_string, dynamic_axes, keep_initializers_as_inputs, --> 172 custom_opsets, enable_onnx_checker, use_external_data_format) 173 174
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
    90 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs, 91 custom_opsets=custom_opsets, enable_onnx_checker=enable_onnx_checker, ---> 92 use_external_data_format=use_external_data_format) 93 94
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format)
    508 example_outputs, propagate, 509 _retain_param_name, val_do_constant_folding, --> 510 fixed_batch_size=fixed_batch_size) 511 512 # TODO: Don't allocate a in-memory string for the protobuf
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, example_outputs, propagate, _retain_param_name, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size)
    348 model.graph, tuple(in_vars), False, propagate) 349 else: --> 350 graph, torch_out = _trace_and_get_graph_from_model(model, args) 351 state_dict = _unique_state_dict(model) 352 params = list(state_dict.values())
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py in _trace_and_get_graph_from_model(model, args)
    305 306 trace_graph, torch_out, inputs_states = \ --> 307 torch.jit._get_trace_graph(model, args, _force_outplace=False, _return_inputs_states=True) 308 warn_on_static_input_change(inputs_states) 309
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py in _get_trace_graph(f, args, kwargs, _force_outplace, return_inputs, _return_inputs_states)
    275 if not isinstance(args, tuple): 276 args = (args,) --> 277 outs = ONNXTracedModule(f, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs) 278 return outs 279
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    556 result = self._slow_forward(*input, **kwargs) 557 else: --> 558 result = self.forward(*input, **kwargs) 559 for hook in self._forward_hooks.values(): 560 hook_result = hook(self, input, result)
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py in forward(self, *args)
    358 in_vars + module_state, 359 _create_interpreter_name_lookup_fn(), --> 360 self._force_outplace, 361 ) 362
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py in wrapper(*args)
    342 trace_inputs = _unflatten(args[:len(in_vars)], in_desc) 343 --> 344 ret_inputs.append(tuple(x.clone(memory_format=torch.preserve_format) for x in args)) 345 if self._return_inputs_states: 346 inputs_states.append(_unflatten(args[:len(in_vars)], in_desc))
~/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py in <genexpr>(.0)
    342 trace_inputs = _unflatten(args[:len(in_vars)], in_desc) 343 --> 344 ret_inputs.append(tuple(x.clone(memory_format=torch.preserve_format) for x in args)) 345 if self._return_inputs_states: 346 inputs_states.append(_unflatten(args[:len(in_vars)], in_desc))
RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1586761698468/work/aten/src/ATen/native/quantized/QTensor.cpp:190, please report a bug to PyTorch. clone for quantized Tensor only works for PerTensorAffine scheme right now
What do I do incorrectly?
st183661
Not sure if it helps, but here https://pytorch.org/docs/stable/tensor_attributes.html I have found this: "Quantized and complex types are not yet supported."
st183662
That is about torch.Tensor instantiation, i.e. which types are supported as the dtype parameter.
st183663
I forgot to mention that I used PyTorch version 1.6.0 from a nightly build via conda.
st183664
Hi @zetyquickly, it is currently only possible to convert a quantized model to Caffe2 using ONNX. The onnx file generated in the process is specific to Caffe2. If this is something you are still interested in, then you need to run a traced model through the onnx export flow. You can use the following code for reference:
class ConvModel(torch.nn.Module):
    def __init__(self):
        super(ConvModel, self).__init__()
        self.qconfig = torch.quantization.default_qconfig
        self.fc1 = torch.quantization.QuantWrapper(
            torch.nn.Conv2d(3, 5, 2, bias=True).to(dtype=torch.float))

    def forward(self, x):
        x = self.fc1(x)
        return x

torch.backends.quantized.engine = "qnnpack"
qconfig = torch.quantization.default_qconfig
model = ConvModel()
model.qconfig = qconfig
model = torch.quantization.prepare(model)
model = torch.quantization.convert(model)

x_numpy = np.random.rand(1, 3, 6, 6).astype(np.float32)
x = torch.from_numpy(x_numpy).to(dtype=torch.float)
outputs = model(x)
input_names = ["x"]
outputs = model(x)
traced = torch.jit.trace(model, x)
buf = io.BytesIO()
torch.jit.save(traced, buf)
buf.seek(0)
model = torch.jit.load(buf)
f = io.BytesIO()
torch.onnx.export(model, x, f, input_names=input_names, example_outputs=outputs,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
f.seek(0)
onnx_model = onnx.load(f)
st183665
@supriyar thank you very much for your answer. You're right, I am interested in conversion to Caffe2. There are some points in the example that confuse me. Could you please clarify them for us? torch.jit.trace and torch.onnx.export: I thought they were mutually exclusive functionalities, one for TorchScript and the other for ONNX conversion. While an ONNX model needs a backend to be executed, TorchScript is standalone. Why do we need TorchScript conversion here before the ONNX export? Previously I saw opinions like https://github.com/pytorch/pytorch/issues/27569#issuecomment-539738922 In general, how are PyTorch JIT, TorchScript and ONNX connected? Why do we still need to convert anything from PyTorch to Caffe2 if a TorchScript model is created?
st183666
The flow is slightly different for quantized ops (so the regular pytorch -> onnx conversion flow rule doesn't directly apply). We tried to re-use some of the existing functionality for converting traced ops from pytorch to onnx for quantized models, hence it is necessary to first trace the model. Similarly, it is also necessary to set operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK during the conversion flow for quantized ops. TorchScript models are not directly runnable on the Caffe2 backend, so we need to convert them to the expected backend using onnx.
st183667
Thanks for your reply @supriyar. Does it mean that we can convert scripted parts of a network (using torch.jit.script) to onnx? What if our network contains operators that aren't available in TorchScript but are available in Caffe2 (e.g. RoIAlign)? Optionally, is it possible to use quantized layers with the TorchScript backend on mobile (I mean without additional conversion to Caffe2 using ONNX)?
st183668
Does it mean that we can convert scripted parts of a network to onnx (using torch.jit.script)? I haven't tried torch.jit.script for a quantized pytorch network to onnx to Caffe2, but torch.jit.trace should work. What if our network contains operators that aren't available in TorchScript but are available in Caffe2 (e.g. RoIAlign)? At this point this is only limited to operators present in both quantized PyTorch and quantized Caffe2. Optionally, is it possible to use quantized layers with the TorchScript backend on mobile (without additional conversion to Caffe2 using ONNX)? You can directly run a quantized pytorch network on mobile using PyTorch Mobile, which is highly recommended over converting to Caffe2. Check out https://pytorch.org/mobile/home/.
st183669
@supriyar Does it now support converting a quantized model to ONNX, in the dev version or the stable version?
st183670
General export of quantized models to ONNX isn’t currently supported. We only support conversion to ONNX for Caffe2 backend. This thread has additional context on what we currently support - ONNX export of quantized model
st183671
Is generic onnx export support for quantized models (e.g. for import with onnx runtime) on the roadmap?
st183672
supriyar:
input_names = ["x"]
outputs = model(x)
@supriyar this workaround fails too in JIT while calling torch.onnx.export. Bug filed at https://github.com/pytorch/pytorch/issues/47204#issue-734716887
st183673
Experiencing the same issue. However, if qconfig is set to qnnpack (model.qconfig = torch.quantization.get_default_qconfig('qnnpack')), this error goes away, but another issue pops up. I am getting the following error for the same code, except with qconfig set to qnnpack. Is there a fix for this? Any way to export a quantized pytorch model to ONNX?
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-54-d1ee04c303f8> in <module>()
    24 example_outputs=outputs, 25 # opset_version=10, ---> 26 operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK) 27 # f.seek(0) 28 onnx_model = onnx.load(f)
C:\Users\mhamdan\AppData\Roaming\Python\Python37\site-packages\torch\onnx\__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
    228 do_constant_folding, example_outputs, 229 strip_doc_string, dynamic_axes, keep_initializers_as_inputs, --> 230 custom_opsets, enable_onnx_checker, use_external_data_format) 231 232
C:\Users\mhamdan\AppData\Roaming\Python\Python37\site-packages\torch\onnx\utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
    89 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs, 90 custom_opsets=custom_opsets, enable_onnx_checker=enable_onnx_checker, ---> 91 use_external_data_format=use_external_data_format) 92 93
C:\Users\mhamdan\AppData\Roaming\Python\Python37\site-packages\torch\onnx\utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
    637 training=training, 638 use_new_jit_passes=use_new_jit_passes, --> 639 dynamic_axes=dynamic_axes) 640 641 # TODO: Don't allocate a in-memory string for the protobuf
C:\Users\mhamdan\AppData\Roaming\Python\Python37\site-packages\torch\onnx\utils.py in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, example_outputs, _retain_param_name, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, use_new_jit_passes, dynamic_axes)
    419 fixed_batch_size=fixed_batch_size, params_dict=params_dict, 420 use_new_jit_passes=use_new_jit_passes, --> 421 dynamic_axes=dynamic_axes, input_names=input_names) 422 from torch.onnx.symbolic_helper import _onnx_shape_inference 423 if isinstance(model, torch.jit.ScriptModule) or isinstance(model, torch.jit.ScriptFunction):
C:\Users\mhamdan\AppData\Roaming\Python\Python37\site-packages\torch\onnx\utils.py in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, use_new_jit_passes, dynamic_axes, input_names)
    180 torch.onnx.symbolic_helper._quantized_ops.clear() 181 # Unpack quantized weights for conv and linear ops and insert into graph. --> 182 torch._C._jit_pass_onnx_unpack_quantized_weights(graph, params_dict) 183 # Insert permutes before and after each conv op to ensure correct order. 184 torch._C._jit_pass_onnx_quantization_insert_permutes(graph, params_dict)
RuntimeError: bad optional access
st183674
hi @amrmartini , we don’t have an update on this issue at the moment. We are not currently actively improving the ONNX export path for quantized models.
st183675
Hi, I'm just wondering if there is a way to export a model trained using quantisation aware training to onnx? There seem to be conflicting answers in various places, some saying that it's not supported and others that it is now supported. Is there a definitive answer for this? If it is supported, is there an example somewhere? Do we export the "prepared" model or the "converted" model? Thanks
st183676
Solved by Vasiliy_Kuznetsov in post #2
st183677
hi @kazimpal87 , currently we do not officially support exporting quantized models via ONNX. We would definitely welcome contributions in this area.
st183678
I have quantized resnet50; the quantize_per_channel_resnet50 model is giving good accuracy, the same as floating point. If I do torch.jit.save then I can torch.jit.load and do the inference. How can I use torch.save and torch.load on a quantized model? Will the entire state dict have the same scales and zero points? How can I get each layer's scale and zero point from the quantized model?
st183679
Tiru_B: How can I use torch.save and torch.load on a quantized model?
Currently we only support torch.save(model.state_dict()) and model.load_state_dict(…), I think. torch.save/torch.load of the model directly is not yet supported, I believe.
Will the entire state dict have the same scales and zero points?
No, they'll have scale/zero_point values calculated from the calibration step.
How can I get each layer's scale and zero point from the quantized model?
You can print the quantized model and it will show scale and zero_point, e.g.:
> print(torch.nn.quantized.Conv2d(3, 3, 3))
QuantizedConv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0)
st183680
Thank you @jerryzh168. I was able to save with model.state_dict() but not able to load the model with model.load_state_dict(); it was giving a KeyError. Secondly, if I save with torch.jit.save(torch.jit.script(pcqmodel), "quantization_per_channel_model.pth") I am not able to see the quantization info after loading the model. I referred to this issue: github.com/pytorch/pytorch, "How to save quantized model in PyTorch1.3 with quantization information".
st183681
Are you using the most recent version? Could you try again with the PyTorch nightly builds?
st183682
Also, check whether it is just the __repr__ that is not showing the info or whether the quant params are really missing – try getting the scale and zero_point directly.
st183683
Be sure you do the whole post-training preparation process (running layer fusion, torch.quantization.prepare() and torch.quantization.convert()) before loading the state_dict.
st183684
Has this been fixed? I’m unable to save and load quantized models even after following all the steps.
st183685
Has this been fixed? I’m unable to save and load quantized models even after following all the steps. do you have a reproducible example on a toy model?
st183686
When I use a hook to get the output, the output's type is Proxy.
def hook_get_output(module, input, output):
    print(output)

for name, layer in module.named_modules():
    layer.register_forward_hook(hook_get_output)
Note: the module is a quantization module.
st183687
Solved by Vasiliy_Kuznetsov in post #2
st183688
Are you using FX graph mode quantization to symbolically trace through the model? If yes, then you are expected to see Proxy objects in the process of symbolic tracing. If not, could you provide some additional context on what you are feeding through the model after you set up the hook?
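A tiny sketch of why that happens: during symbolic tracing the forward runs once with Proxy objects, so anything that prints or inspects values at that point sees Proxy rather than Tensor (toy model below, not the poster's code):

import torch
import torch.fx

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 4)

    def forward(self, x):
        print(type(x))                      # prints a Proxy type at trace time
        return self.lin(x)

gm = torch.fx.symbolic_trace(Tiny())        # forward executes once here with a Proxy input
out = gm(torch.randn(1, 4))                 # the recorded graph then runs on real Tensors
                                            # (the print above was trace-time only, so nothing prints here)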
st183689
I wonder whether GPU memory usage has a roughly linear relationship with the batch size used in training? I was fine-tuning ResNet152. With a batch size of 8, the total GPU memory used is around 4G, and when the batch size is increased to 16 for training, the total GPU memory used is around 6G. The model itself takes about 2G. It seems to me the GPU memory consumption of training ResNet152 is approximately 2G + 2G * batch_size / 8?
st183690
Solved by ptrblck in post #2
st183691
The batch size would increase the activation sizes during the forward pass, while the model parameters (and gradients) would still use the same amount of memory, as they do not depend on the batch size used. This post explains the memory usage in more detail.
st183692
Hello all, hope you are having a great day. I quantized a model using graph mode post-training static quantization and everything seems to have gone smoothly without a hitch. However, upon loading the newly quantized model and trying to do a forward pass, I get this error:
Evaluating data/angles.txt...
  0%|          | 0/6000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/mnt/internet/shishosama/embeder_moder_training/graph_quantizer_static.py", line 119, in <module>
    lfw_test(jit_model)
  File "/mnt/internet/shishosama/embeder_moder_training/lfw_eval.py", line 350, in lfw_test
    evaluate(model)
  File "/mnt/internet/shishosama/embeder_moder_training/lfw_eval.py", line 111, in evaluate
    output = model(imgs)
  File "/root/anaconda3/envs/shishosama/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/models_new/___torch_mangle_1853.py", line 23, in forward
    input_2_quant = torch.quantize_per_tensor(input, 0.037445519119501114, 57, 13)
    _0 = getattr(self, "quantized._jit_pass_packed_weight_0")
    _1 = ops.quantized.conv2d_relu(input_2_quant, _0, 0.0094706285744905472, 0)
         ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    _6_dequant = torch.dequantize(_1)
    input0 = torch.feature_dropout(_6_dequant, 0., False)
Traceback of TorchScript, original code (most recent call last):
graph(%a_quant, %packed_params, %r_scale, %r_zero_point, %r_dtype, %stride, %padding, %dilation, %groups):
    %r_quant = quantized::conv2d_relu(%a_quant, %packed_params, %r_scale, %r_zero_point)
               ~~~~~~~~~ <--- HERE
    return (%r_quant)
RuntimeError: Could not run 'quantized::conv2d_relu.new' with arguments from the 'QuantizedCUDA' backend. 'quantized::conv2d_relu.new' is only available for these backends: [QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
QuantizedCPU: registered at /pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:858 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
What am I missing here? Previously my dynamically quantized models didn't have this issue (they were also quantized using graph mode), so I'm not sure what's happening here.
I also get this exact error when I try to do a forward pass using the model quantized with eager mode (here is its own thread). In case the quantized model is of some use, here it is: https://gofile.io/d/zyDEaY
Any help is greatly appreciated.
st183693
Solved by Shisho_Sama in post #2
st183694
Thanks to dear God, I found the issue at last! As the error states (now obviously!), my model was on CPU but my input was on CUDA. Setting the data to be on CPU as well fixed this issue.
st183695
Is quantize_per_tensor not supported by ONNX? Will more ops (like PReLU) be supported by nn.quantized?
st183696
Solved by supriyar in post #18: How are you exporting the quantized model to ONNX? As previously mentioned, we currently only support a custom conversion flow through ONNX to Caffe2 for quantized models. The models aren't represented in native ONNX format, but a format specific to Caffe2. If you wish to export the model to caffe2, y…
st183697
It’s not yet supported, we are still figuring out the plan for quantization support in ONNX.
st183698
@supriyar has tested the quantization in onnx with one of our internal models, but I’m not sure about the long term plans for that. @supriyar can you comment?
st183699
The support that exists currently is for the Pytorch -> ONNX -> Caffe2 path. The intermediate onnx operators contain references to the C2 ops, so they cannot be executed standalone in ONNX. See https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_caffe2.py for more info.