st184500
Hi, I am trying to use the quantize_dynamic module, but the size of the quantized model is the same as the original model in my case.

import os
import torch
from collections import OrderedDict
from craft import CRAFT

trained_model = 'craft_mlt_25k.pth'

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p") / 1e6)
    os.remove('temp.p')

def copyStateDict(state_dict):
    if list(state_dict.keys())[0].startswith("module"):
        start_idx = 1
    else:
        start_idx = 0
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = ".".join(k.split(".")[start_idx:])
        new_state_dict[name] = v
    return new_state_dict

net = CRAFT()  # initialize
net.load_state_dict(copyStateDict(torch.load(trained_model, map_location='cpu')))
net.eval()
print_size_of_model(net)

quantized = torch.quantization.quantize_dynamic(net, dtype=torch.qint8)
print_size_of_model(quantized)

The size of both the original and the quantized model is 83.14 MB. Why doesn't the model size change? Any suggestions would be appreciated. Similarly, when I tried to quantize the pretrained ResNet18 model from torchvision, the size only changed from 46 MB to 45 MB.
st184501
Solved by raghuramank100 in post #2.
st184502
Hi, Dynamic quantization only helps in reducing the model size for models that use Linear and LSTM modules. For the case of resnet18, the model consists of conv layers which do not have dynamic quantization support yet. For your model, can you check if it has linear layers?
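For reference, a minimal sketch of what that check looks like (a toy module, not the CRAFT model from the question): only the module types passed to quantize_dynamic, by default nn.Linear and nn.LSTM, get swapped for dynamically quantized versions, so the size reduction depends on how much of the model lives in those layers.

import torch
import torch.nn as nn

toy = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Flatten(), nn.Linear(8 * 30 * 30, 10))
quantized = torch.quantization.quantize_dynamic(toy, {nn.Linear}, dtype=torch.qint8)

print(quantized)  # the conv stays fp32, the linear becomes DynamicQuantizedLinear
print(any(isinstance(m, nn.Linear) for m in toy.modules()))  # quick check for linear layers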
st184503
@raghuramank100 Thanks for your response. But dynamic quantization is able to reduce the size of VGG16 from 553 MB to 182 MB, and VGG16 is mostly convolution layers. Why such a drastic change then?
st184504
Yes, for the case of VGG16, the last two fc layers contain the bulk of the weights.
st184505
Hi, I am trying to quantize a BERT model but the size is not reducing, and I wonder why. Can someone help me? Here is the code snippet:

import os
import torch
from transformers.modeling_bert import BertConfig, BertForPreTraining, load_tf_weights_in_bert, BertModel

tf_checkpoint_path = "./distlang/"
bert_config_file = "./config.json"
pytorch_dump_path = "./distlangpytorch/"
device = "cpu"

torch.backends.quantized.engine = 'qnnpack'
qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(qconfig)

config = BertConfig.from_json_file(bert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = BertModel.from_pretrained("./distlangpytorch/")
model.to(device)

torch.quantization.prepare(model)
quantized_model = torch.quantization.convert(model)

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print('Size (MB):', os.path.getsize("temp.p") / 1e6)
    os.remove('temp.p')

print_size_of_model(model)
print_size_of_model(quantized_model)

quantized_output_dir = "./quantized_model"
if not os.path.exists(quantized_output_dir):
    os.makedirs(quantized_output_dir)
st184506
Hi @Sagar_Gupta, in this mode of quantization the model has to be calibrated (evaluate your model after prepare()) to capture the qparams (zero point & scale), which are needed to quantize the model, i.e. the weights and activations.
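For completeness, a rough self-contained sketch of that prepare / calibrate / convert flow (a toy module, not the BERT model from the question; with BERT you would feed calibration batches instead of random tensors):

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(16, 4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)

# calibration: run representative data through the prepared model so the
# observers can record activation ranges (scale & zero_point)
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(2, 16))

torch.quantization.convert(model, inplace=True)
print(model)  # fc is now a QuantizedLinear with calibrated scale/zero_point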
st184507
Does Graph Mode quantization suffers from this issue as well? I recently tried to quantize a jit saved model and did not see any difference, the model size is nearly the same, but the forward pass has gotten worse (nearly two times slower) What could be the underlying issue here?
st184508
Hi, I tried the QAT tutorial and saved the parameters as .pt files. The qat_model has modules like features.0.0:

(0): ConvBNReLU(
  (0): QuantizedConvReLU2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=0.027574609965085983, zero_point=0, padding=(1, 1))
  (1): Identity()
  (2): Identity()
)

When saving the params of this layer, I get these files: features.0.0.bias.pt, features.0.0.scale.pt, features.0.0.weight.pt. But I noticed that weight.pt is a quantized tensor, and it has its own scale (which is not equal to 0.027574609965085983) and zero_point. So can anyone tell me what the bias.pt and scale.pt files (i.e. the scale=0.027574609965085983 and zero_point=0 shown for features.0.0) mean? And is it the case that the bias wasn't quantized?
st184509
Solved by supriyar in post #7.
st184510
(1). bias is the bias argument of the quantized conv relu module, scale is the output_scale for the quantized conv relu module (2). yes, bias is not quantized.
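As a small illustration of the two kinds of qparams being discussed (the module path is hypothetical; the per-channel call assumes the default fbgemm weight config, otherwise use q_scale()):

qconv = qat_model.features[0][0][0]            # hypothetical path to a QuantizedConvReLU2d
print(qconv.scale, qconv.zero_point)           # output activation qparams (the ones in the printed repr)
print(qconv.weight().q_per_channel_scales())   # the weight tensor carries its own scales
print(qconv.bias())                            # bias stays as a regular fp32 tensor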
st184511
Thank you for your answer. So PyTorch QAT doesn't do full integer inference, is that right? PyTorch just uses an int input and int weight to do the matmul within a layer, and there is a dequantize/quantize pair between two layers? Does PyTorch support quantizing a model for full integer inference, which only quantizes the inputs at the start and dequantizes the output at the end? (see attached figure)
st184512
WZUCK_WONG: Does PyTorch support quantizing a model for full integer inference, which only quantizes the inputs at the start and dequantizes the output at the end? Yes, we support full integer inference (the graph on the right side of the figure).
st184513
Thank you for your reply. I find that PyTorch JIT does a requantize operation between two layers, is that right? I also find that multiplying a quantized input (int8) by a quantized weight (int8) gives a result outside the range of int8, so requantization is necessary. If I need my result quantized as int8, do I need to quantize my input and weight as int4, and does PyTorch support that?
st184514
requantization happens in the quantized operator itself, for example quantized::conv2d will requantize the intermediate result (in int32) to int8 with the quantization parameters output_scale/output_zero_point. int4 support is still in development, @supriyar has more context on that.
st184515
int4 support is still in development, @supriyar has more context on that. We support torch.quint4x2 dtype, which packs two 4bit values into a byte. In order to use this dtype in operators we need kernels that understand this underlying type and can optimally operate on it. But if you wish to use this dtype to save storage space, then it should be supported. You can find the dtype in the nightly versions.
st184516
Hi! I am trying to implement quantization in my model. In the case of post-training static quantization an interesting detail came up:

quantized_model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
# torch.backends.quantized.engine = 'qnnpack'  # gives error

works nearly perfectly according to the performance numbers. However, qnnpack is not available as an engine on my machine. Trying to use

quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

led to much worse performance numbers. Also, in my opinion the following should not work, but it performs very well:

quantized_model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'fbgemm'

Is this a bug? Shouldn't fbgemm outperform qnnpack on an x86 system?
st184517
Solved by pintonos in post #9.
st184518
pintonos: Shouldn't fbgemm outperform qnnpack on an x86 system? Yes, that would be expected. Does your system have AVX and AVX2 capabilities? Those are needed for the fast paths of the fbgemm kernels.
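Two quick ways to check that on the machine in question (the cpuinfo check is Linux-specific):

import torch
print(torch.backends.quantized.supported_engines)  # engines this build can actually use
# and from a shell: grep -o 'avx2\|avx512' /proc/cpuinfo | sort -u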
st184519
pintonos: Is this a bug? Shouldn't fbgemm outperform qnnpack on an x86 system? Yes, it sounds like it could be a bug. Would you be able to share the per-op profiling results for this model, using https://pytorch.org/docs/stable/autograd.html#profiler, on both fbgemm and qnnpack on your machine? Qnnpack only has fast kernels on ARM; on x86 it takes the slow fallback path.
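For the per-op numbers, something along these lines works (a small stand-in model here; substitute your own model and inputs, and switch the engine between runs):

import torch
import torch.nn as nn

# torch.backends.quantized.engine = 'fbgemm'  # or 'qnnpack'; must be in supported_engines
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64))
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
x = torch.randn(32, 256)

with torch.autograd.profiler.profile() as prof:
    for _ in range(100):
        qmodel(x)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=15))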
st184520
Profile for fbgemm for evaluation: ------------------------------------ --------------- --------------- --------------- --------------- --------------- --------------- Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls ------------------------------------ --------------- --------------- --------------- --------------- --------------- --------------- mul 64.88% 3.584s 65.20% 3.602s 13.341ms 270 sum 13.79% 761.666ms 15.01% 829.509ms 1.097ms 756 quantized::linear 12.68% 700.596ms 12.68% 700.596ms 19.461ms 36 _cat 3.06% 168.962ms 3.14% 173.683ms 6.433ms 27 relu 1.32% 73.125ms 1.34% 73.805ms 2.734ms 27 fill_ 1.17% 64.873ms 1.17% 64.876ms 82.855us 783 index_select 0.73% 40.152ms 1.22% 67.359ms 95.953us 702 copy_ 0.42% 23.189ms 0.42% 23.197ms 44.438us 522 empty 0.39% 21.815ms 0.39% 21.815ms 9.696us 2250 quantize_per_tensor 0.38% 20.759ms 0.38% 20.771ms 2.308ms 9 cat 0.16% 9.051ms 3.31% 182.734ms 6.768ms 27 embedding 0.15% 8.441ms 2.80% 154.721ms 110.200us 1404 ... Metrics: Size (MB): 3.466263 Loss: 1.093 (not good) Acc: 0.622 Elapsed time (seconds): 7.084 Avg execution time per forward(ms): 0.00363 Profile for qnnpack for evaluation: ------------------------------------ --------------- --------------- --------------- --------------- --------------- --------------- Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls ------------------------------------ --------------- --------------- --------------- --------------- --------------- --------------- mul 66.18% 3.379s 66.49% 3.395s 12.573ms 270 sum 12.98% 662.933ms 14.21% 725.287ms 959.374us 756 quantized::linear 12.45% 635.799ms 12.45% 635.799ms 17.661ms 36 _cat 3.14% 160.059ms 3.23% 164.724ms 6.101ms 27 relu 1.33% 67.692ms 1.34% 68.278ms 2.529ms 27 fill_ 1.17% 59.914ms 1.17% 59.917ms 76.522us 783 index_select 0.68% 34.661ms 1.11% 56.808ms 80.923us 702 empty 0.38% 19.191ms 0.38% 19.191ms 8.529us 2250 quantize_per_tensor 0.37% 18.920ms 0.37% 18.930ms 2.103ms 9 copy_ 0.35% 17.947ms 0.35% 17.954ms 34.394us 522 embedding 0.14% 7.034ms 2.52% 128.492ms 91.519us 1404 ... Metrics: Size (MB): 3.443591 Loss: 0.580 (very good) Acc: 0.720 Elapsed time (seconds): 6.978 Avg execution time per forward(ms): 0.00427
st184521
hmm, one hypothesis that would fit this data is that fbgemm is not enabled, and both fbgemm and qnnpack are taking the fallback paths. cc @dskhudia , any tips?
st184522
@Vasiliy_Kuznetsov How does such fallback path look like? What happens in such a case?
st184523
@pintonos By performance do you mean the loss? It should be the same (close enough) loss with both. I see the execution time similar with both fbgemm and qnnpack for quantized::linear.
st184524
@dskhudia Yes i mean loss. I tried to run it on a bigger dataset and it seems to work now…
st184525
Hi, I’m trying to quantize a trained model of Efficientnet-Lite0, following the architectural changes detailed in this blog post 28. I’m using the implementation from this repo 26 and I get a significant accuracy drop (5-10%) after quantizing the model. The full model after converting to 8-bit is: EfficientNet( (conv_stem): ConvReLU6( (0): QuantizedConv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=0.36273476481437683, zero_point=57, padding=(1, 1)) (1): QuantizedReLU6(inplace=True) ) (bn1): Identity() (act1): Identity() (blocks): Sequential( (0): Sequential( (0): DepthwiseSeparableConv( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_dw): ConvReLU6( (0): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.6822086572647095, zero_point=56, padding=(1, 1), groups=32) (1): QuantizedReLU6(inplace=True) ) (bn1): Identity() (act1): Identity() (conv_pw): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.7673127055168152, zero_point=65) (bn2): Identity() (act2): Identity() ) ) (1): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.5392391085624695, zero_point=60) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), scale=0.322853684425354, zero_point=57, padding=(1, 1), groups=96) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.7627326250076294, zero_point=63) (bn3): Identity() ) (1): InvertedResidual( (skip_add): QFunctional( scale=0.8407724499702454, zero_point=62 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=0.3213047683238983, zero_point=63) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), scale=0.267162948846817, zero_point=67, padding=(1, 1), groups=144) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), scale=0.6916980743408203, zero_point=53) (bn3): Identity() ) ) (2): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=0.30310994386672974, zero_point=62) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(144, 144, kernel_size=(5, 5), stride=(2, 2), scale=0.20994137227535248, zero_point=61, padding=(2, 2), groups=144) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(144, 40, kernel_size=(1, 1), stride=(1, 1), scale=0.6519036889076233, zero_point=65) (bn3): Identity() ) (1): InvertedResidual( (skip_add): QFunctional( scale=0.7288376092910767, zero_point=63 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), scale=0.20947812497615814, zero_point=52) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), scale=0.24765455722808838, zero_point=83, padding=(2, 2), groups=240) (1): 
QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(240, 40, kernel_size=(1, 1), stride=(1, 1), scale=0.4334663450717926, zero_point=61) (bn3): Identity() ) ) (3): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), scale=0.20177333056926727, zero_point=56) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), scale=0.22160769999027252, zero_point=61, padding=(1, 1), groups=240) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.5097917914390564, zero_point=64) (bn3): Identity() ) (1): InvertedResidual( (skip_add): QFunctional( scale=0.514493465423584, zero_point=64 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), scale=0.15477867424488068, zero_point=47) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), scale=0.19667555391788483, zero_point=82, padding=(1, 1), groups=480) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(480, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.2826884686946869, zero_point=64) (bn3): Identity() ) (2): InvertedResidual( (skip_add): QFunctional( scale=0.5448680520057678, zero_point=65 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), scale=0.12001236528158188, zero_point=67) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), scale=0.1878129243850708, zero_point=79, padding=(1, 1), groups=480) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(480, 80, kernel_size=(1, 1), stride=(1, 1), scale=0.23110872507095337, zero_point=61) (bn3): Identity() ) ) (4): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), scale=0.20795781910419464, zero_point=51) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(480, 480, kernel_size=(5, 5), stride=(1, 1), scale=0.2575533390045166, zero_point=81, padding=(2, 2), groups=480) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), scale=0.5269572138786316, zero_point=63) (bn3): Identity() ) (1): InvertedResidual( (skip_add): QFunctional( scale=0.5629716515541077, zero_point=65 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.16619464755058289, zero_point=58) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(672, 672, kernel_size=(5, 5), stride=(1, 1), scale=0.2228115200996399, zero_point=69, padding=(2, 2), groups=672) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), scale=0.3241402208805084, zero_point=63) (bn3): Identity() ) (2): 
InvertedResidual( (skip_add): QFunctional( scale=0.642544686794281, zero_point=67 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.13504581153392792, zero_point=60) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(672, 672, kernel_size=(5, 5), stride=(1, 1), scale=0.2062821239233017, zero_point=73, padding=(2, 2), groups=672) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), scale=0.25870615243911743, zero_point=63) (bn3): Identity() ) ) (5): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), scale=0.16723443567752838, zero_point=66) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), scale=0.22132091224193573, zero_point=61, padding=(2, 2), groups=672) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(672, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.4806938171386719, zero_point=63) (bn3): Identity() ) (1): InvertedResidual( (skip_add): QFunctional( scale=0.49192753434181213, zero_point=64 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), scale=0.1888679713010788, zero_point=51) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), scale=0.2976231873035431, zero_point=83, padding=(2, 2), groups=1152) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.34456929564476013, zero_point=60) (bn3): Identity() ) (2): InvertedResidual( (skip_add): QFunctional( scale=0.5567103624343872, zero_point=62 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), scale=0.19077259302139282, zero_point=47) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), scale=0.38248512148857117, zero_point=91, padding=(2, 2), groups=1152) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.2738204598426819, zero_point=65) (bn3): Identity() ) (3): InvertedResidual( (skip_add): QFunctional( scale=0.6205083727836609, zero_point=62 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), scale=0.15164275467395782, zero_point=59) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), scale=0.29384535551071167, zero_point=80, padding=(2, 2), groups=1152) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.24689887464046478, zero_point=63) (bn3): Identity() ) ) (6): Sequential( (0): InvertedResidual( (skip_add): QFunctional( scale=1.0, zero_point=0 (activation_post_process): Identity() ) (conv_pw): ConvReLU6( (0): QuantizedConv2d(192, 
1152, kernel_size=(1, 1), stride=(1, 1), scale=0.20717555284500122, zero_point=64) (1): QuantizedReLU6() ) (bn1): Identity() (act1): Identity() (conv_dw): ConvReLU6( (0): QuantizedConv2d(1152, 1152, kernel_size=(3, 3), stride=(1, 1), scale=0.3554805517196655, zero_point=68, padding=(1, 1), groups=1152) (1): QuantizedReLU6() ) (bn2): Identity() (act2): Identity() (conv_pwl): QuantizedConv2d(1152, 320, kernel_size=(1, 1), stride=(1, 1), scale=0.2588821351528168, zero_point=63) (bn3): Identity() ) ) ) (conv_head): ConvReLU6( (0): QuantizedConv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), scale=0.2839420437812805, zero_point=80) (1): QuantizedReLU6(inplace=True) ) (bn2): Identity() (act2): Identity() (global_pool): SelectAdaptivePool2d (output_size=1, pool_type=avg) (quant): Quantize(scale=tensor([0.0374]), zero_point=tensor([57]), dtype=torch.quint8) (dequant): DeQuantize() (classifier): QuantizedLinear(in_features=1280, out_features=1000, scale=0.14930474758148193, zero_point=34, qscheme=torch.per_channel_affine) ) Is there anything I’m missing? I can provide the conversion code and other information if needed. Thanks in advance!
st184526
We just released Numeric Suite as a prototype feature in PyTorch 1.6 to support quantization debugging; you can try it out to see which layer is problematic. The tutorial can be found at: https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html
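A condensed sketch of the weight-comparison part of that tutorial (a toy model here, not the EfficientNet-Lite model from this thread; the SQNR helper is the one the tutorial defines, higher dB means the quantized weights are closer to the float ones):

import torch
import torch.nn as nn
import torch.quantization._numeric_suite as ns

def compute_error(x, y):  # SQNR in dB
    return 20 * torch.log10(torch.norm(x) / torch.norm(x - y))

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.dequant = torch.quantization.DeQuantStub()
    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

float_model = Tiny().eval()
qmodel = Tiny().eval()
qmodel.load_state_dict(float_model.state_dict())
qmodel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(qmodel, inplace=True)
qmodel(torch.randn(1, 3, 32, 32))  # calibrate
torch.quantization.convert(qmodel, inplace=True)

wt_cmp = ns.compare_weights(float_model.state_dict(), qmodel.state_dict())
for key in wt_cmp:
    print(key, compute_error(wt_cmp[key]['float'], wt_cmp[key]['quantized'].dequantize()))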
st184527
Thanks! I tried checking the quantization error for each of the layers and got the following: conv_stem.0.weight tensor(44.4819) blocks.0.0.conv_dw.0.weight tensor(45.0884) blocks.0.0.conv_pw.weight tensor(42.8196) blocks.1.0.conv_pw.0.weight tensor(43.1310) blocks.1.0.conv_dw.0.weight tensor(46.9183) blocks.1.0.conv_pwl.weight tensor(43.1703) blocks.1.1.conv_pw.0.weight tensor(44.4646) blocks.1.1.conv_dw.0.weight tensor(45.7783) blocks.1.1.conv_pwl.weight tensor(39.9211) blocks.2.0.conv_pw.0.weight tensor(44.0625) blocks.2.0.conv_dw.0.weight tensor(45.3749) blocks.2.0.conv_pwl.weight tensor(41.9430) blocks.2.1.conv_pw.0.weight tensor(43.8883) blocks.2.1.conv_dw.0.weight tensor(42.4965) blocks.2.1.conv_pwl.weight tensor(40.5602) blocks.3.0.conv_pw.0.weight tensor(43.9803) blocks.3.0.conv_dw.0.weight tensor(47.7440) blocks.3.0.conv_pwl.weight tensor(41.9959) blocks.3.1.conv_pw.0.weight tensor(43.2630) blocks.3.1.conv_dw.0.weight tensor(45.7537) blocks.3.1.conv_pwl.weight tensor(41.7492) blocks.3.2.conv_pw.0.weight tensor(43.5795) blocks.3.2.conv_dw.0.weight tensor(45.5840) blocks.3.2.conv_pwl.weight tensor(41.2215) blocks.4.0.conv_pw.0.weight tensor(42.7768) blocks.4.0.conv_dw.0.weight tensor(41.5424) blocks.4.0.conv_pwl.weight tensor(41.2056) blocks.4.1.conv_pw.0.weight tensor(43.2486) blocks.4.1.conv_dw.0.weight tensor(43.3677) blocks.4.1.conv_pwl.weight tensor(41.5483) blocks.4.2.conv_pw.0.weight tensor(43.2695) blocks.4.2.conv_dw.0.weight tensor(43.2045) blocks.4.2.conv_pwl.weight tensor(41.8538) blocks.5.0.conv_pw.0.weight tensor(42.5763) blocks.5.0.conv_dw.0.weight tensor(46.0717) blocks.5.0.conv_pwl.weight tensor(41.6060) blocks.5.1.conv_pw.0.weight tensor(42.4102) blocks.5.1.conv_dw.0.weight tensor(44.6428) blocks.5.1.conv_pwl.weight tensor(40.9154) blocks.5.2.conv_pw.0.weight tensor(42.4992) blocks.5.2.conv_dw.0.weight tensor(44.1465) blocks.5.2.conv_pwl.weight tensor(40.3739) blocks.5.3.conv_pw.0.weight tensor(42.2826) blocks.5.3.conv_dw.0.weight tensor(44.1184) blocks.5.3.conv_pwl.weight tensor(40.7068) blocks.6.0.conv_pw.0.weight tensor(42.2656) blocks.6.0.conv_dw.0.weight tensor(47.4642) blocks.6.0.conv_pwl.weight tensor(41.3921) conv_head.0.weight tensor(42.7725) classifier._packed_params._packed_params tensor(39.3391) I’m not sure if these are large values or standard for this type of quantization, but it doesn’t seem that a specific layer is significantly worse than others. Maybe it is something regarding the AdaptiveAvgPool2d? I saw there were changes to it in the release notes.
st184528
I just ran the activation comparison suggested in the guide provided, and got: conv_stem.0.stats tensor(28.3666) conv_stem.1.stats tensor(28.3666) blocks.0.0.conv_dw.0.stats tensor(16.1361) blocks.0.0.conv_dw.1.stats tensor(16.1361) blocks.0.0.conv_pw.stats tensor(8.5438) blocks.1.0.conv_pw.0.stats tensor(7.0812) blocks.1.0.conv_pw.1.stats tensor(10.7929) blocks.1.0.conv_dw.0.stats tensor(10.3284) blocks.1.0.conv_dw.1.stats tensor(11.8796) blocks.1.0.conv_pwl.stats tensor(6.0492) blocks.1.1.conv_pw.0.stats tensor(9.7360) blocks.1.1.conv_pw.1.stats tensor(11.2618) blocks.1.1.conv_dw.0.stats tensor(8.9654) blocks.1.1.conv_dw.1.stats tensor(9.1349) blocks.1.1.conv_pwl.stats tensor(5.3888) blocks.2.0.conv_pw.0.stats tensor(8.9415) blocks.2.0.conv_pw.1.stats tensor(10.1787) blocks.2.0.conv_dw.0.stats tensor(12.5325) blocks.2.0.conv_dw.1.stats tensor(14.5331) blocks.2.0.conv_pwl.stats tensor(5.8452) blocks.2.1.conv_pw.0.stats tensor(10.9424) blocks.2.1.conv_pw.1.stats tensor(11.9166) blocks.2.1.conv_dw.0.stats tensor(10.7086) blocks.2.1.conv_dw.1.stats tensor(11.7042) blocks.2.1.conv_pwl.stats tensor(3.9516) blocks.3.0.conv_pw.0.stats tensor(7.9058) blocks.3.0.conv_pw.1.stats tensor(8.7798) blocks.3.0.conv_dw.0.stats tensor(13.6778) blocks.3.0.conv_dw.1.stats tensor(15.0221) blocks.3.0.conv_pwl.stats tensor(7.0661) blocks.3.1.conv_pw.0.stats tensor(11.2245) blocks.3.1.conv_pw.1.stats tensor(12.1855) blocks.3.1.conv_dw.0.stats tensor(10.3169) blocks.3.1.conv_dw.1.stats tensor(7.3186) blocks.3.1.conv_pwl.stats tensor(5.9016) blocks.3.2.conv_pw.0.stats tensor(10.9814) blocks.3.2.conv_pw.1.stats tensor(12.2782) blocks.3.2.conv_dw.0.stats tensor(11.5729) blocks.3.2.conv_dw.1.stats tensor(6.8540) blocks.3.2.conv_pwl.stats tensor(4.0227) blocks.4.0.conv_pw.0.stats tensor(9.5918) blocks.4.0.conv_pw.1.stats tensor(10.4552) blocks.4.0.conv_dw.0.stats tensor(11.8454) blocks.4.0.conv_dw.1.stats tensor(12.2951) blocks.4.0.conv_pwl.stats tensor(4.5780) blocks.4.1.conv_pw.0.stats tensor(9.8242) blocks.4.1.conv_pw.1.stats tensor(9.5439) blocks.4.1.conv_dw.0.stats tensor(12.6775) blocks.4.1.conv_dw.1.stats tensor(10.9211) blocks.4.1.conv_pwl.stats tensor(2.9198) blocks.4.2.conv_pw.0.stats tensor(9.9729) blocks.4.2.conv_pw.1.stats tensor(9.4751) blocks.4.2.conv_dw.0.stats tensor(14.5569) blocks.4.2.conv_dw.1.stats tensor(12.2109) blocks.4.2.conv_pwl.stats tensor(3.3256) blocks.5.0.conv_pw.0.stats tensor(10.7336) blocks.5.0.conv_pw.1.stats tensor(9.2929) blocks.5.0.conv_dw.0.stats tensor(19.4747) blocks.5.0.conv_dw.1.stats tensor(21.1074) blocks.5.0.conv_pwl.stats tensor(8.3158) blocks.5.1.conv_pw.0.stats tensor(12.8702) blocks.5.1.conv_pw.1.stats tensor(12.2446) blocks.5.1.conv_dw.0.stats tensor(14.1980) blocks.5.1.conv_dw.1.stats tensor(12.0078) blocks.5.1.conv_pwl.stats tensor(7.1764) blocks.5.2.conv_pw.0.stats tensor(13.4789) blocks.5.2.conv_pw.1.stats tensor(12.8941) blocks.5.2.conv_dw.0.stats tensor(15.1403) blocks.5.2.conv_dw.1.stats tensor(13.3021) blocks.5.2.conv_pwl.stats tensor(6.3677) blocks.5.3.conv_pw.0.stats tensor(13.3304) blocks.5.3.conv_pw.1.stats tensor(13.2739) blocks.5.3.conv_dw.0.stats tensor(16.0722) blocks.5.3.conv_dw.1.stats tensor(14.6379) blocks.5.3.conv_pwl.stats tensor(8.0309) blocks.6.0.conv_pw.0.stats tensor(12.9786) blocks.6.0.conv_pw.1.stats tensor(13.6662) blocks.6.0.conv_dw.0.stats tensor(16.3897) blocks.6.0.conv_dw.1.stats tensor(17.3638) blocks.6.0.conv_pwl.stats tensor(6.5583) conv_head.0.stats tensor(3.8746) conv_head.1.stats tensor(3.8746) quant.stats tensor(34.5170) 
classifier.stats tensor(6.9768) Is there anything suspicious here?
st184529
I also calculated the cosine similarity of the activations in each of the layers: conv_stem.0.stats tensor(0.9992) conv_stem.1.stats tensor(0.9992) blocks.0.0.conv_dw.0.stats tensor(0.9882) blocks.0.0.conv_dw.1.stats tensor(0.9882) blocks.0.0.conv_pw.stats tensor(0.9376) blocks.1.0.conv_pw.0.stats tensor(0.9126) blocks.1.0.conv_pw.1.stats tensor(0.9569) blocks.1.0.conv_dw.0.stats tensor(0.9549) blocks.1.0.conv_dw.1.stats tensor(0.9677) blocks.1.0.conv_pwl.stats tensor(0.8856) blocks.1.1.conv_pw.0.stats tensor(0.9488) blocks.1.1.conv_pw.1.stats tensor(0.9625) blocks.1.1.conv_dw.0.stats tensor(0.9364) blocks.1.1.conv_dw.1.stats tensor(0.9385) blocks.1.1.conv_pwl.stats tensor(0.8623) blocks.2.0.conv_pw.0.stats tensor(0.9364) blocks.2.0.conv_pw.1.stats tensor(0.9518) blocks.2.0.conv_dw.0.stats tensor(0.9711) blocks.2.0.conv_dw.1.stats tensor(0.9819) blocks.2.0.conv_pwl.stats tensor(0.8685) blocks.2.1.conv_pw.0.stats tensor(0.9585) blocks.2.1.conv_pw.1.stats tensor(0.9671) blocks.2.1.conv_dw.0.stats tensor(0.9565) blocks.2.1.conv_dw.1.stats tensor(0.9647) blocks.2.1.conv_pwl.stats tensor(0.7922) blocks.3.0.conv_pw.0.stats tensor(0.9168) blocks.3.0.conv_pw.1.stats tensor(0.9344) blocks.3.0.conv_dw.0.stats tensor(0.9773) blocks.3.0.conv_dw.1.stats tensor(0.9831) blocks.3.0.conv_pwl.stats tensor(0.8967) blocks.3.1.conv_pw.0.stats tensor(0.9597) blocks.3.1.conv_pw.1.stats tensor(0.9683) blocks.3.1.conv_dw.0.stats tensor(0.9532) blocks.3.1.conv_dw.1.stats tensor(0.8986) blocks.3.1.conv_pwl.stats tensor(0.8574) blocks.3.2.conv_pw.0.stats tensor(0.9549) blocks.3.2.conv_pw.1.stats tensor(0.9664) blocks.3.2.conv_dw.0.stats tensor(0.9599) blocks.3.2.conv_dw.1.stats tensor(0.8697) blocks.3.2.conv_pwl.stats tensor(0.7916) blocks.4.0.conv_pw.0.stats tensor(0.9387) blocks.4.0.conv_pw.1.stats tensor(0.9521) blocks.4.0.conv_dw.0.stats tensor(0.9650) blocks.4.0.conv_dw.1.stats tensor(0.9685) blocks.4.0.conv_pwl.stats tensor(0.8268) blocks.4.1.conv_pw.0.stats tensor(0.9460) blocks.4.1.conv_pw.1.stats tensor(0.9414) blocks.4.1.conv_dw.0.stats tensor(0.9698) blocks.4.1.conv_dw.1.stats tensor(0.9566) blocks.4.1.conv_pwl.stats tensor(0.7595) blocks.4.2.conv_pw.0.stats tensor(0.9490) blocks.4.2.conv_pw.1.stats tensor(0.9423) blocks.4.2.conv_dw.0.stats tensor(0.9809) blocks.4.2.conv_dw.1.stats tensor(0.9683) blocks.4.2.conv_pwl.stats tensor(0.7715) blocks.5.0.conv_pw.0.stats tensor(0.9567) blocks.5.0.conv_pw.1.stats tensor(0.9359) blocks.5.0.conv_dw.0.stats tensor(0.9930) blocks.5.0.conv_dw.1.stats tensor(0.9949) blocks.5.0.conv_pwl.stats tensor(0.9064) blocks.5.1.conv_pw.0.stats tensor(0.9649) blocks.5.1.conv_pw.1.stats tensor(0.9595) blocks.5.1.conv_dw.0.stats tensor(0.9741) blocks.5.1.conv_dw.1.stats tensor(0.9583) blocks.5.1.conv_pwl.stats tensor(0.8771) blocks.5.2.conv_pw.0.stats tensor(0.9700) blocks.5.2.conv_pw.1.stats tensor(0.9648) blocks.5.2.conv_dw.0.stats tensor(0.9810) blocks.5.2.conv_dw.1.stats tensor(0.9709) blocks.5.2.conv_pwl.stats tensor(0.8566) blocks.5.3.conv_pw.0.stats tensor(0.9687) blocks.5.3.conv_pw.1.stats tensor(0.9679) blocks.5.3.conv_dw.0.stats tensor(0.9842) blocks.5.3.conv_dw.1.stats tensor(0.9755) blocks.5.3.conv_pwl.stats tensor(0.8879) blocks.6.0.conv_pw.0.stats tensor(0.9637) blocks.6.0.conv_pw.1.stats tensor(0.9655) blocks.6.0.conv_dw.0.stats tensor(0.9802) blocks.6.0.conv_dw.1.stats tensor(0.9842) blocks.6.0.conv_pwl.stats tensor(0.8535) conv_head.0.stats tensor(0.6853) conv_head.1.stats tensor(0.6853) quant.stats tensor(0.9998) classifier.stats tensor(0.7695) it seems as if the 
conv_pwl layers are mostly different, along with the conv_head and classifier. What might cause that?
st184530
@kfir_goldberg Did you solve it? I'm planning to quantize an EfficientNet-Lite0 for smartphone deployment. I chose EfficientNet-Lite as it seems to be quantization friendly, or at least that's what Google claims. Thank you
st184531
Hi, Unfortunately, I didn’t solve it. One of the issues that bothered me is that fusing Conv-Bn-Act is not possible with Relu6 as of now so I had to implement it myself, and I’m unsure if it worked right. The best I managed to do is get about a 5% drop in accuracy (75% to 70%), and I eventually stopped trying.
st184532
kfir_goldberg: Unfortunately, I didn't solve it. One of the issues that bothered me is that fusing Conv-Bn-Act is not possible with Relu6 as of now so I had to implement it myself, and I'm unsure if it worked right. We typically replace relu6 with relu, e.g.: https://github.com/pytorch/vision/blob/master/torchvision/models/quantization/mobilenet.py#L75
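For reference, a hedged sketch of what such a replacement can look like; the torchvision helper linked above does something similar for MobileNet, but this generic version is not that exact code:

import torch.nn as nn

def replace_relu6_with_relu(module):
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU6):
            setattr(module, name, nn.ReLU(inplace=child.inplace))
        else:
            replace_relu6_with_relu(child)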
st184533
Also, did you try quantization-aware training? https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training
st184534
@jerryzh168 I don't see how replacing relu6 with relu wouldn't translate to a performance drop. It may work for MobileNetV2, but it isn't a general approach. Also, I don't understand why PyTorch doesn't offer modules similar to ConvReLU2d for relu6 and hardswish. They are a common pattern for nets that run on low-end or IoT devices like smartphones; at the very least a ConvReLU6_2d for relu6, since it behaves like a relu. Moreover, I think it's worth adding both patterns, as the RegNet paper says: "We find that Swish outperforms ReLU at low flops, but ReLU is better at high flops. Interestingly, if g is restricted to be 1 (depthwise conv), Swish performs much better than ReLU. This suggests that depthwise conv and Swish interact favorably, although the underlying reason is not at all clear." So, for lower-flop, quantized networks you may want to use hardswish instead of relu as the activation.
st184535
I see, I’m not sure why we don’t support ConvReLU6_2d, maybe @raghuramank100 knows.
st184536
Hi @kfir_goldberg, I'm about to quantize EfficientNet-Lite as well. Could you please share the conversion code? Maybe I'll find something. That would save me some time doing the conversion, so that I could concentrate on getting similar accuracy for both versions. Thanks, Tomek
st184537
This is the code I used to support ConvBnReLU6 and ConvReLU6:

def fuse_model(model):
    for m in model.modules():
        if type(m) == DepthwiseSeparableConv:
            torch.quantization.fuse_modules(m, ['conv_dw', 'bn1', 'act1'], inplace=True,
                                            fuser_func=fuse_known_modules_mod)
            # torch.quantization.fuse_modules(m, ['conv_pw', 'bn2', 'act2'], inplace=True,
            #                                 fuser_func=fuse_known_modules_mod)
            torch.quantization.fuse_modules(m, ['conv_pw', 'bn2'], inplace=True,
                                            fuser_func=fuse_known_modules_mod)
        elif type(m) == InvertedResidual:
            torch.quantization.fuse_modules(m, ['conv_pw', 'bn1', 'act1'], inplace=True,
                                            fuser_func=fuse_known_modules_mod)
            torch.quantization.fuse_modules(m, ['conv_dw', 'bn2', 'act2'], inplace=True,
                                            fuser_func=fuse_known_modules_mod)
            torch.quantization.fuse_modules(m, ['conv_pwl', 'bn3'], inplace=True,
                                            fuser_func=fuse_known_modules_mod)
    torch.quantization.fuse_modules(model, ['conv_head', 'bn2', 'act2'], inplace=True,
                                    fuser_func=fuse_known_modules_mod)
    torch.quantization.fuse_modules(model, ['conv_stem', 'bn1', 'act1'], inplace=True,
                                    fuser_func=fuse_known_modules_mod)


def fuse_known_modules_mod(mod_list):
    r"""Returns a list of modules that fuses the operations specified in the input module list.

    Fuses only the following sequences of modules:
    conv, bn
    conv, bn, relu
    conv, relu
    linear, relu
    For these sequences, the first element in the output module list performs the fused
    operation. The rest of the elements are set to nn.Identity().
    """
    OP_LIST_TO_FUSER_METHOD = {
        (torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn,
        (torch.nn.Conv2d, torch.nn.BatchNorm2d, torch.nn.ReLU): fuse_conv_bn_relu,
        (torch.nn.Conv2d, torch.nn.BatchNorm2d, torch.nn.ReLU6): fuse_conv_bn_relu6,
        (Conv2dSame, torch.nn.BatchNorm2d, torch.nn.ReLU6): fuse_conv_bn_relu6,
        (torch.nn.Conv2d, torch.nn.ReLU): torch.nn.intrinsic.ConvReLU2d,
        (torch.nn.Conv2d, torch.nn.ReLU6): ConvReLU6,
        (torch.nn.Linear, torch.nn.ReLU): torch.nn.intrinsic.LinearReLU,
    }
    types = tuple(type(m) for m in mod_list)
    fuser_method = OP_LIST_TO_FUSER_METHOD.get(types, None)
    if fuser_method is None:
        raise NotImplementedError("Cannot fuse modules: {}".format(types))
    new_mod = [None] * len(mod_list)
    new_mod[0] = fuser_method(*mod_list)
    for i in range(1, len(mod_list)):
        new_mod[i] = torch.nn.Identity()
        new_mod[i].training = mod_list[0].training
    return new_mod


class ConvReLU6(nn.Sequential):
    def __init__(self, conv, relu6):
        super(ConvReLU6, self).__init__(conv, relu6)


class ConvBnReLU6(torch.nn.Sequential):
    def __init__(self, conv, bn, relu6):
        super(ConvBnReLU6, self).__init__(conv, bn, relu6)


def fuse_conv_bn_relu6(conv, bn, relu6):
    assert (conv.training == bn.training == relu6.training), \
        "Conv and BN both must be in the same mode (train or eval)."
    if conv.training:
        return ConvBnReLU6(conv, bn, relu6)
    else:
        return ConvReLU6(
            torch.nn.utils.fusion.fuse_conv_bn_eval(conv, bn), relu6)
st184538
Thanks @kfir_goldberg, do I understand correctly that you created a new model class that inherited from timm.models.efficientnet_lite0 and 1) placed your fuse_model() there, 2) modified the forward method to add quantization support? Can you share the code of this class? I'm not sure where exactly to put quant/dequant. Then you ran:

model_.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model_, inplace=True)

and finally

evaluate(model_, criterion, train_loader, neval_batches=num_calibration_batches)
torch.quantization.convert(model_, inplace=True)

to calibrate the model with the training set and convert, right? I'm asking because when calibrating the model (evaluate) I got:

RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU].

and I found here that this is probably connected with the fact that QuantStub is not placed in the right place. Have you followed this tutorial? They say that when performance drops, per-channel quantization or quantization-aware training may be needed. Many thanks!
st184539
Tomek: RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend. 'quantized::conv2d.new' is only available for these backends: [QuantizedCPU]. yeah the error means the input of conv is not properly quantized, please make sure to place QuantStub correctly.
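For anyone hitting the same error: a minimal sketch of where the stubs usually go so the first quantized conv sees a quantized tensor (a generic wrapper, not the actual EfficientNet-Lite code; torch.quantization.QuantWrapper does essentially the same thing):

import torch
import torch.nn as nn

class MyQuantWrapper(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> quint8 at the input
        self.backbone = backbone
        self.dequant = torch.quantization.DeQuantStub()  # quint8 -> fp32 at the output

    def forward(self, x):
        return self.dequant(self.backbone(self.quant(x)))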
st184540
Context: In huggingface transformers, the pegasus and t5 models overflow during beam search in half precision. Models that were originally trained in fairseq work well in half precision, which leads me to believe that models trained in bfloat16 (on TPUs with tensorflow) will often fail to generate with less dynamic range. I was considering starting a project to further train the models with a penalty for having large activations (to discourage overflow in fp16), but was wondering whether this is duplicative with the pytorch team's current efforts.

Specific Questions:
(a) is the snippet below likely to work in a current nightly version?
(b) are the various kernel implementations "in the works" (and my proposed project won't be useful in a few months)?
(c) Is bfloat16 + cuda a possibility?

Failing Snippet: the following snippet tries to run a transformer forward pass in bfloat16:

from transformers import BartForConditionalGeneration
import torch

model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-12-3")
model = model.to(torch.bfloat16)
input_ids = torch.tensor([[0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2]], dtype=torch.long)
model(input_ids)
# RuntimeError: "LayerNormKernelImpl" not implemented for 'BFloat16'
st184541
Currently quantization supports int8/uint8/int32 and bfloat16 is not covered by pytorch quantization. @izdeby can you answer the above questions?
st184542
Hi, I wrote a C++ function that receives a quantized tensor and outputs an fp32 tensor. Everything compiled without errors and the Python binding also works, but when I try to use the function I get the following error:

RuntimeError: Could not run 'quantized::linear_my' with arguments from the 'QuantizedCPU' backend. 'quantized::linear_my' is only available for these backends: [].

So my question is: how does PyTorch determine which backends my function supports? How can I fix this state where my function doesn't support any backend? Thanks, Ofir
st184543
Solved by jerryzh168 in post #4.
st184544
I am not sure, I used the same method that was used in the torch library, I modified code inside the library. I resolved the problem in the end, it was a compilation problem, cleaning the project resolved it.
st184545
You can take a look at how we implement quantized ops. We need to declare the signature (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/library.cpp#L23):

extern template torch::class_<ConvPackedParamsBase<2>> register_conv_params<2>();
extern template torch::class_<ConvPackedParamsBase<3>> register_conv_params<3>();

torch::class_<EmbeddingPackedParamsBase> register_embedding_params();

TORCH_LIBRARY(quantized, m) {
  register_linear_params();
  register_conv_params<2>();
  register_conv_params<3>();
  register_embedding_params();
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add(Tensor qa, Tensor qb, float scale, int zero_point) -> Tensor qc"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add.out(Tensor qa, Tensor qb, Tensor(a!) out) -> Tensor(a!) out"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add.Scalar(Tensor qa, Scalar b) -> Tensor qc"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> Tensor(a!) out"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_relu(Tensor qa, Tensor qb, float scale, int zero_point) -> Tensor qc"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_relu.Scalar(Tensor qa, Scalar b) -> Tensor qc"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_relu.out(Tensor qa, Tensor qb, Tensor(a!) out) -> Tensor(a!) out"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_relu.Scalar_out(Tensor qa, Scalar b, Tensor(a!) out) -> Tensor(a!) out"));
  // deprecated functions, kept for backward compatibility
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_out(Tensor qa, Tensor qb, Tensor(a!) out) -> Tensor(a!) out"));
  m.def(TORCH_SELECTIVE_SCHEMA("quantized::add_relu_out(Tensor qa, Tensor qb, Tensor(a!) out) -> Tensor(a!) out"));

and then implement it (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/qadd.cpp#L268):

// `torch.jit.trace` will trace Scalar as Tensor
// This can be removed after broadcast is supported and
// all variations of `quantized::add` is merged into `quantized::add`
template <bool ReLUFused = false>
Tensor qadd_scalar_tensor_out(Tensor qa, Tensor b, Tensor out) {
  return qadd_scalar_out(qa, b.item(), out);
}

TORCH_LIBRARY_IMPL(quantized, QuantizedCPU, m) {
  m.impl(TORCH_SELECTIVE_NAME("quantized::add"), TORCH_FN(qadd</*ReLUFused=*/false>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add.out"), TORCH_FN(qadd_out</*ReLUFused=*/false>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add.Scalar"), TORCH_FN(qadd_scalar</*ReLUFused=*/false>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add.Scalar_out"), TORCH_FN(qadd_scalar_out</*ReLUFused=*/false>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add_relu"), TORCH_FN(qadd</*ReLUFused=*/true>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add_relu.out"), TORCH_FN(qadd_out</*ReLUFused=*/true>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add_relu.Scalar"), TORCH_FN(qadd_scalar</*ReLUFused=*/true>));
  m.impl(TORCH_SELECTIVE_NAME("quantized::add_relu.Scalar_out"), TORCH_FN(qadd_scalar_out</*ReLUFused=*/true>));
  // deprecated functions, kept for backward compatibility
  m.impl(TORCH_SELECTIVE_NAME("quantized::add_out"), TORCH_FN(qadd_out</*ReLUFused=*/false>));
st184546
I quantized my model and I want the bit values of all the weights in my model, but I don't know how to do it.
st184547
Solved by jerryzh168 in post #4.
st184548
Could you elaborate on what you mean by bit value of the weights? If you would like to access quantized weights you can try <quantized_module>.weight(). This returns a quantized tensor of weights and you can use int_repr() to get the int8_t values of weights.
st184549
Hi. Based on your advice, when I try to print the weight of the quantized model, I get FP32 weights, scaling, and zero_point, but I can’t get the int weight?
st184550
Yeah, you can try quantized_weight.int_repr() to get the int weights. For more details about quantized tensor APIs, please see: https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor
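A small end-to-end example (a throwaway linear layer, not the poster's model):

import torch
import torch.nn as nn

model = torch.quantization.quantize_dynamic(nn.Sequential(nn.Linear(4, 2)), {nn.Linear}, dtype=torch.qint8)
qw = model[0].weight()   # quantized weight tensor of the dynamically quantized linear
print(qw.int_repr())     # the underlying int8 values
print(qw.qscheme())      # tells you whether scale/zero_point are per tensor or per channel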
st184551
Hello everyone. Recently, we have been focusing on training with int8, not inference in int8. Considering the numerical limitations of int8, at first we keep all parameters in fp32 and only quantize the convolution layers (conduct int8 operations), as they are the most compute-intensive part of a model. During the past months we have achieved some progress (such as accuracy comparable to fp32 training, and faster than fp32 training), and our paper was accepted by CVPR 2020 (https://arxiv.org/pdf/1912.12607.pdf). Now, we want to further explore quantization of other layers like ReLU, Pooling and BatchNorm, and keep the dataflow in int8 in order to save memory. (1) However, we could not use an int8 tensor as the input and output of a layer in PyTorch due to the autograd mechanism. Would you consider supporting int8 tensors in the future? (2) Moreover, we want to pass quantization parameters like scale from layer to layer, but there are some problems at the shortcut connection. During backward, the autograd mechanism will add the gradient from the main path and the gradient from the shortcut connection automatically. As we map quantization parameters to the tensor, the quantization parameters are lost after the add operation. Could you provide some suggestions? Many thanks.
st184552
I think there are some work planned for int8 training, cc @raghuramank100 for more details.
st184553
Hi @raghuramank100 @hx89, I am also one of the authors of the INT8 training paper at CVPR 2020. Firstly, I want to clarify that we focus on using INT8 computation to speed up the training process, not on quantization-aware training. That means we need to quantize the gradient for the convolution backward pass, not only the forward pass. We now achieve nearly no accuracy drop (< 1% top-1) on ImageNet for ResNet/MobileNet/Inception/ShuffleNet, and even on detection tasks on Pascal VOC and COCO with RetinaNet and Faster R-CNN we get only ~1% mAP drop. You can check the accuracy tables in the paper (see tables 7, 8 and 9). We also consider the INT8 computation implementation and overhead reduction (see Section 3.6, General Purpose Training Framework). We implemented it with DP4A on a GTX 1080 Ti and get 1.6x (forward) and 1.9x (backward) speedup on convolution over cuDNN (see Figure 8). We are now planning INT8 Training V2, which quantizes all CNN layers including ReLU, Pooling and Add. But we found that PyTorch does not yet support INT8 gradient backward, so we need some help from the PyTorch team. We would be glad to hear your response!
st184554
Thanks, quantized training is one of our future directions. what do you mean by “pytorch not yet support int8 gradient backward”? can you elaborate in more detail?
st184555
I am using dynamic quantization on a fine-tuned BERT model. When I performed inference with the quantized model before saving it, I got almost similar results (accuracy score) between the unquantized and quantized models, and a reduction in inference time too. However, when I load the saved quantized model and do inference with it, there is a significant difference in the results (around a 30 to 40% decrease in accuracy). Is this because of the way the quantized model is loaded? Any leads will be appreciated. Thanks. Following is the code:

def load_model(args):
    config = BertConfig.from_pretrained(args.model_dir)
    tokenizer = BertTokenizer.from_pretrained(
        args.model_dir, do_lower_case=args.do_lower_case
    )
    model = BertForSequenceClassification.from_pretrained(args.model_dir, config=config)
    return model, tokenizer

def predict_label(model, inputs):
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs[0]
    logits = F.softmax(logits, dim=1)
    logits_label = torch.argmax(logits, dim=1)
    labels = logits_label.detach().cpu().numpy().tolist()
    label_confidences = [
        confidence[label].item() for confidence, label in zip(logits, labels)
    ]
    return labels, label_confidences

def predict(eval_dataloader, model, examples, device):
    index = 0
    labels_for_evaluations = []
    for batch in tqdm(eval_dataloader, desc="Evaluating"):
        input_ids = batch["input_ids"]
        mask_ids = batch["mask_ids"]
        token_type_ids = batch["token_type_ids"]
        input_ids = input_ids.to(device, dtype=torch.long)
        mask_ids = mask_ids.to(device, dtype=torch.long)
        token_type_ids = token_type_ids.to(device, dtype=torch.long)
        inputs = {"input_ids": input_ids, "attention_mask": mask_ids}
        predicted_labels, label_confidences = predict_label(model, inputs)
        for confidence, pred_label in zip(label_confidences, predicted_labels):
            labels_for_evaluations.append(str(pred_label))
    return labels_for_evaluations

if __name__ == "__main__":
    examples, labels = read_tsv_file(args.data_file)
    bert_model, tokenizer = load_model(args)
    bert_model.to(args.device)

    # perform quantization
    quantized_model = quantization.quantize_dynamic(bert_model, {nn.Linear}, dtype=torch.qint8)

    dataframe = pd.DataFrame({"text": examples})
    batch_size = 1
    print("quantized model ", quantized_model)
    eval_dataloader = create_dataloader(
        dataframe, tokenizer, args.max_seq_length, batch_size, test_data=True
    )

    # inference
    positive_predicted_sentences, labels_for_evaluations = predict(
        eval_dataloader, quantized_model, examples, args.device
    )

    # serialize the quantized model
    quantized_output_dir = args.model_dir + "_quantized_batch1"
    if not os.path.exists(quantized_output_dir):
        os.makedirs(quantized_output_dir)
    quantized_model.save_pretrained(quantized_output_dir)
    tokenizer.save_pretrained(quantized_output_dir)

    print("accuracy score ", accuracy_score(labels, labels_for_evaluations))

Update: I found many people are facing a similar issue: when you load the quantized BERT model there is a huge decrease in accuracy. Here are related issues on GitHub:
Dynamic Quantization on ALBERT (pytorch) #2542
Quantized model not preserved when imported using from_pretrained() #2556
st184556
hi Ramesh, would you be able to provide some more information? What is your model def, and what code are you using to save and load the model?
st184557
Hi @Vasiliy_Kuznetsov, thanks for your response. Please check the code. I am loading the quantized BERT model in a similar way as we load the pre-trained BERT model. When I convert the model to a quantized one and determine the accuracy, it is pretty much the same as for the unquantized model. However, when I load the quantized model after saving it, there is a lot of variation in the results. @Vasiliy_Kuznetsov I have updated the GitHub issue links too; many people are facing a similar issue.
st184558
Hi @Ramesh_Kumar, could you also provide the code used to load the quantized model? Are you using the same load_model function, and how are you calling it?
st184559
Hi @Vasiliy_Kuznetsov, thanks for your response. Yes, I am using the same load_model function to load the quantized model.
st184560
Ramesh_Kumar: BertForSequenceClassification.from_pretrained ah, I see. In that case, one place to check would be BertForSequenceClassification.from_pretrained - it might be assuming a floating point model. You would have to modify the loading code.
st184561
Hi, I’m facing a similar issue when quantizing Efficientnet. I opened a thread about it here 4, but i was wondering if you found any solutions for your problem
st184562
hi @Vasiliy_Kuznetsov could you please guide what modifications I have to do? I cannot find any leads regarding this. Thanks
st184563
does the solution posted in https://github.com/huggingface/transformers/issues/2542 6 solve your problem?
st184564
hi @jerryzh168, Thanks for your response. I have already checked that solution, but that is specifically for Albert. I am aiming for quantization of BERT.
st184565
Can you try the updated technique mentioned in https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html#serialize-the-quantized-model to save and load the quantized model?
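One approach that is often suggested for dynamically quantized models (a sketch reusing the names from the code earlier in this thread, not necessarily the exact recipe in that tutorial): save the quantized state_dict, then rebuild the float model, re-apply quantize_dynamic, and load the saved state_dict into the quantized skeleton.

import torch
import torch.nn as nn

torch.save(quantized_model.state_dict(), "quantized_bert_state_dict.pt")

# later / in another process:
float_model = BertForSequenceClassification.from_pretrained(args.model_dir)
reloaded = torch.quantization.quantize_dynamic(float_model, {nn.Linear}, dtype=torch.qint8)
reloaded.load_state_dict(torch.load("quantized_bert_state_dict.pt"))
reloaded.eval()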
st184566
I am doing int8 quantization and I need to exchange the mul operations of PyTorch:

answer = a_tensor * 0.2 * b_tensor

I tried to replace the multiplication operations like the ones below with FloatFunctional's:

self.ff = nn.quantized.FloatFunctional()
d = self.ff.mul_scalar(a_tensor, 0.2)
answer = self.ff.mul(d, b_tensor)

But when I call torch.jit.trace() I get the exception below:

answer = self.ff.mul(d, b_tensor)
File "/root/.pyenv/versions/3.7.1/lib/python3.7/site-packages/torch/nn/quantized/modules/functional_modules.py", line 160, in mul
    r = ops.quantized.mul(x, y, scale=self.scale, zero_point=self.zero_point)
RuntimeError: Mul operands should have same data type.

I printed out the dtypes:

print("### d.dtype", d.dtype)
print("### b_tensor.dtype", b_tensor.dtype)

and got:

### d.dtype torch.quint8
### b_tensor.dtype torch.float32

Any good solution for this situation?
st184567
Solved by jerryzh168 in post #2.
st184568
First, you will need to use one FloatFunctional instance for each invocation. The reason for the error, though, is that b_tensor is not quantized: you'll need to add a QuantStub after b_tensor to quantize it before feeding it to FloatFunctional.mul.
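A sketch of that suggestion put together (module and tensor names are illustrative, not the original code); after prepare/convert the QuantStub will quantize b_tensor so both mul operands are quint8:

import torch
import torch.nn as nn

class MulBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.ff_scalar = nn.quantized.FloatFunctional()  # handles the * 0.2
        self.ff_mul = nn.quantized.FloatFunctional()     # handles the elementwise mul
        self.quant_b = torch.quantization.QuantStub()    # quantizes the float b_tensor

    def forward(self, a_tensor, b_tensor):
        d = self.ff_scalar.mul_scalar(a_tensor, 0.2)
        return self.ff_mul.mul(d, self.quant_b(b_tensor))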
st184569
Hi, I installed the nightly build 1.6.0.dev20200607 today and ran my scripts that exercise quantization and JIT. Until v1.5, I was able to get all packed params of a top-level quantized module, say a quantized ResNet from torchvision, via the state_dict() method. But now with the nightly, I only get quant.scale and quant.zero_point from the same module. I also noticed that a packed param is now an instance of torch._C.ScriptObject, instead of a QTensor as was the case until v1.5. How do I get all parameters from a quantized + jitted model now? Can you point me to the GitHub issues/PRs that introduced the relevant changes? @jerryzh168 @raghuramank100
st184570
Solved by masahi in post #10.
st184571
yeah, https://github.com/pytorch/pytorch/pull/35923 9 and https://github.com/pytorch/pytorch/pull/34140 9 are relevant changes. We are using TorchBind object for the packed params now.
st184572
Hi @jerryzh168, now that v1.6 is out, I came back to this issue. As I mentioned, the state_dict() method on traced quantized networks like qresnet from torchvision no longer returns all quantized parameters. After some digging, and thanks to the ONNX export implementation below, I found that I can use torch._C._jit_pass_lower_graph(graph, model._c) to get at the quantized parameters I've been looking for. Is this the recommended way for a third-party package like TVM to get quantized parameters? Having to pass model._c seems like a very internal API…

https://github.com/pytorch/pytorch/blob/master/torch/onnx/utils.py#L357

    if isinstance(example_outputs, torch.Tensor):
        example_outputs = [example_outputs]

    torch_out = None
    if isinstance(model, torch.jit.ScriptModule):
        assert example_outputs is not None, "example_outputs must be provided when exporting a ScriptModule"
        try:
            graph = model.forward.graph
            torch._C._jit_pass_onnx_function_substitution(graph)
            method_graph, params = torch._C._jit_pass_lower_graph(graph, model._c)
            in_vars, in_desc = torch.jit._flatten(tuple(args) + tuple(params))
            graph = _propagate_and_assign_input_shapes(
                method_graph, tuple(in_vars), False, propagate)
        except AttributeError:
            raise RuntimeError('\'forward\' method must be a script method')
    elif isinstance(model, torch.jit.ScriptFunction):
        assert example_outputs is not None, "example_outputs must be provided when exporting a TorchScript ScriptFunction"
        method = model
        params = ()
        in_vars, in_desc = torch.jit._flatten(tuple(args))
st184573
Do the conv packed params / linear packed params appear in the state_dict? You can call unpack on these objects to get the parameters, I think: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/conv_packed_params.h#L17
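If the packed params do show up, pulling the tensors out could look roughly like this (conv1 is just an illustrative layer name, and _packed_params is an internal attribute, so details vary across releases):

# on the eager-mode quantized model, before tracing
qconv = qmodel.conv1
w = qconv.weight()   # quantized (e.g. qint8) weight tensor
b = qconv.bias()     # float bias tensor or None
# the packed-params object itself also exposes unpack():
# w, b = qconv._packed_params.unpack()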
st184574
OK, here is the test script.

import torch
from torchvision.models.quantization import mobilenet as qmobilenet


def quantize_model(model, inp):
    model.fuse_model()
    model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    torch.quantization.prepare(model, inplace=True)
    model(inp)
    torch.quantization.convert(model, inplace=True)


qmodel = qmobilenet.mobilenet_v2(pretrained=True).eval()
pt_inp = torch.rand((1, 3, 224, 224))
quantize_model(qmodel, pt_inp)

script_module = torch.jit.trace(qmodel, pt_inp).eval()
graph = script_module.graph
print(script_module.state_dict())
_, params = torch._C._jit_pass_lower_graph(graph, script_module._c)

The model is quantized MobileNet v2 from torchvision. The output of state_dict() from the above script is different between v1.6 and v1.5.1:

With v1.5.1, all packed quantized parameters (conv and linear) are returned. Unpacking them is also no problem.
With v1.6, I only get OrderedDict([('quant.scale', tensor([0.0079])), ('quant.zero_point', tensor([0]))]), so there is nothing that can be unpacked.

In both versions, the last line in the above script, torch._C._jit_pass_lower_graph(graph, script_module._c), returns all quantized parameters. So technically my original problem is solved. My question is whether this is expected behavior.
st184575
masahi: In both versions, the last line in the above script, torch._C._jit_pass_lower_graph(graph, script_module._c), returns all quantized parameters. So technically my original problem is solved. My question is whether this is expected behavior.

Probably not, I'll create an issue for this. Thanks for reporting.
st184576
Actually, I think you are not supposed to call state_dict on a torchscript model. Could you call state_dict before script/trace instead?
st184577
We (TVM) take jitted models as input, so we don’t get to see the original models. Fortunately, I found a workaround for this problem without the use of state_dict, so it is no longer a problem for us. Thanks.
st184578
Will quantization be supported for GPUs anytime soon? I have a project where evaluation speed is a very major concern and would love to use quantization to speed it up. I see the CPU quantization tutorial on the docs was written about 6 months ago, so I am really just curious if this is on the developers’ radar at all and if we can expect this eventually or in the near future.
st184579
We will be considering the GPU option in the second half of the year, but I think it probably won’t be a high priority item.
st184580
Is there a particular reason it is not a high priority? I am still a student, but I was under the impression that inference with large models was typically done on GPUs, and that quantization would be very beneficial there.
st184581
I'm not sure if there is a voting process, but we (as a company) are using PyTorch in our production process, and inference speed of our custom BERT model is critical for us. In my opinion, to get more adoption of PyTorch in production and commercial applications, inference speed is going to be critical, and this feature would be a huge step forward for that. My two cents.
st184582
I am looking to contribute some work in the area of quantization for multiple architectures, including FPGAs and GPUs. Any suggested guides on how to get started with contributing to PyTorch? Cheers!
st184583
I have a model that contains a custom permutation that I need to apply to the second dimension of a tensor. It is implemented as

def forward(self, x: torch.Tensor):
    return x[:, self.permutation]

where self.permutation is a LongTensor. When the model is not quantized (x is a FloatTensor) everything works correctly; when I quantize the model I get the following error:

RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer]

It seems that the operation is not implemented; I'm using PyTorch 1.6.0. Is there any alternative permutation operation that I can use? Thanks, Matteo
st184584
Solved by jerryzh168 in post #5 I'm not sure what operator is being used here, is it slice? Please open an issue in https://github.com/issues and provide a repro
st184585
Maybe you can work around it by adding dequantize before this module and quantize after so that it’s not quantized?
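A rough sketch of that workaround, assuming the permutation is wrapped in its own small module (the stubs still need to go through the usual qconfig/prepare/convert flow so they become real Quantize/DeQuantize ops):

import torch
import torch.nn as nn

class Permute(nn.Module):
    def __init__(self, permutation):
        super().__init__()
        # keep the index tensor with the module
        self.register_buffer('permutation', permutation)
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)           # back to float, where indexing is supported
        x = x[:, self.permutation]    # apply the permutation on the float tensor
        return self.quant(x)          # re-quantize for the next quantized layer

Each dequantize/quantize pair does add overhead, which is the concern raised below.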
st184586
Yes, it is a possible workaround, but because I need the permutation in many layers (I'm using a variant of ShuffleNet), I think the performance will suffer.
st184587
@jerryzh168 @raghuramank100 do you know if this is supported in PyTorch quantization?
st184588
I'm not sure what operator is being used here, is it slice? Please open an issue in https://github.com/issues and provide a repro
st184589
Hi all, I am trying resnet50 model quantization with PyTorch. I tried these 3 lines of code: the import, model = qn.resnet50(pretrain=true), and model.state_dict(). Why are the coefficients being shown all float values, if this is the quantized version of the model? I noticed this while trying to figure out how to save/load the coefficients for a quantized model. Is there anything special you need to do to convert model coefficients between float and int8? Please let me know. I appreciate any help/suggestions, thanks!
st184590
Solved by raghuramank100 in post #2 You need to set quantize=True when you load the model, i.e. do model = qn.resnet50(pretrained=True, quantize=True)
st184591
You need to set quantize=True when you load the model, i.e. do model = qn.resnet50(pretrained=True, quantize=True)
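For reference, a quick check along these lines should show quantized weights (this assumes qn is torchvision.models.quantization; module paths and exact output vary with the torchvision version):

import torchvision.models.quantization as qn

# quantize=True loads the pre-trained int8 weights and converts the modules
model = qn.resnet50(pretrained=True, quantize=True).eval()

print(type(model.conv1))            # a quantized conv module
print(model.conv1.weight().dtype)   # expected: torch.qint8 rather than torch.float32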
st184592
I am curious about disable_observer and freeze_bn_stats in quantization aware training, and I don't know when I should apply them. I have tried different combinations of the two, and it seems they have a big impact on accuracy. Is there any best practice for quantization aware training? For example, should I disable the observers first, and when should I disable them? Should I train from scratch or fine-tune a trained model?
st184593
Solved by Vasiliy_Kuznetsov in post #2 hi @eleflea, check out https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py for one example. One approach which has proven to work well is: start QAT training from a floating point pre-trained model and with observers and fake_quant enabled after a couple …
st184594
hi @eleflea, check out https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py for one example. One approach which has proven to work well is:

- start QAT training from a floating point pre-trained model and with observers and fake_quant enabled
- after a couple of epochs, freeze the BN stats if your network has any BNs (epoch == 3 in the example)
- after a couple of epochs, disable observers (epoch == 4 in the example)
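A condensed sketch of that schedule, modeled on the torchvision reference script (model, num_epochs, train_one_epoch, train_loader and optimizer are placeholders, and the epoch thresholds follow the example above):

import torch

model.train()   # QAT prepare expects a model in training mode
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)   # placeholder training step

    if epoch >= 3:
        # stop updating BN running stats, keep training with them frozen
        model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
    if epoch >= 4:
        # freeze the quantization ranges the observers have collected
        model.apply(torch.quantization.disable_observer)

model.eval()
int8_model = torch.quantization.convert(model, inplace=False)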
st184595
I worked with the PyTorch tutorial for static quantization, and when running the line

torch.quantization.convert(per_channel_quantized_model, inplace=True)

I receive the following warning:

.../torch/quantization/observer.py:845: UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point

I call the convert function within the following lines of code:

per_channel_quantized_model = load_model(..)
per_channel_quantized_model.eval()
per_channel_quantized_model.fuse_model()
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(per_channel_quantized_model.qconfig)

torch.quantization.prepare(per_channel_quantized_model, inplace=True)
evaluate(per_channel_quantized_model, ...)
torch.quantization.convert(per_channel_quantized_model, inplace=True)

Does somebody have an idea what the warning means and how I can avoid it? I appreciate any hints and suggestions!
st184596
Solved by FabianSchuetze in post #6 see here for a solution.
st184597
Facing the same issue. torch.quantization.convert is supposed to run the observers, right? This warning does not make sense.
st184598
The prepare step inserts the observers. After that, when the model's forward is run, it also runs the observers. If you call convert without calling prepare, then it can complain about not running the observers. Which model are you running this on? We can take a look if there is a repro.
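For context, the intended order is prepare, then run calibration data through the model so the observers record ranges, then convert. A minimal sketch (model and calibration_loader are placeholders):

import torch

model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)    # inserts observers

# calibration: every forward pass updates the observers' min/max statistics
with torch.no_grad():
    for images, _ in calibration_loader:           # placeholder data loader
        model(images)

torch.quantization.convert(model, inplace=True)    # uses the observed statistics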
st184599
Thanks for your replies, @khizar-anjum and @supriyar! After @khizar-anjum's comments, I also filed an issue on GitHub. The warning is thrown when running the static quantization tutorial. I also received the warning in an SSD-type model I wrote. The quantization led to a low accuracy, and I began asking myself if it was caused by the improper quantization the observer warns against.