st183500 | UPDATE : In the documentation 1 it is written: "At the moment PyTorch doesn’t provide quantized operator implementations on CUDA - this is the direction for future work. Move the model to CPU in order to test the quantized functionality."
That may be the reason why. |
st183501 | HarshRangwala:
I tried evaluate(model_name.to(device), val_dl) but it didn’t work. (I checked and the device is ‘cuda’)
If you want to try things out early, we also have a quantized CUDA test here: pytorch/quantized_resnet_test.py at master · pytorch/pytorch · GitHub 6 |
st183502 | I’m trying to run a simple deep learning model on an embedded system without any framework.
I have trained the model below with my own dataset and quantized it to int8:
class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output, quant=False):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(n_feature, n_hidden, bias=False)
        self.fc2 = torch.nn.Linear(n_hidden, n_output, bias=False)
        self.relu = torch.nn.ReLU()
        self.quant = quant
        if self.quant:
            self.quant = torch.quantization.QuantStub()
            self.dequant = torch.quantization.DeQuantStub()

    def forward(self, input):
        if self.quant:
            x = self.quant(input)
        else:
            x = input
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        if self.quant:
            x = self.dequant(x)
        return x
I can get each layer’s int8 weights as fc1.weight().int_repr().
But how do I use these parameters to reproduce the result of net.forward()?
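For example, ignoring activation quantization and assuming per-tensor symmetric weights (use q_per_channel_scales() otherwise), is something along these lines the right idea for fc1 (no bias)?
w_q = fc1.weight()                        # quantized weight of the converted layer
w_int = w_q.int_repr().to(torch.int32)    # raw int8 values
scale = w_q.q_scale()                     # weight zero_point is 0 for symmetric quantization
x = torch.randn(1, w_int.shape[1])
out_ref = (x @ w_int.t().float()) * scale  # should roughly match the float fc1(x) |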
st183503 | Hi,
I performed post-training static quantization on EfficientNet models by following the method in the PyTorch blog posts, but I was only able to reduce the model size by about 5 MB.
Also, I wasn’t able to perform the layer-fusion step on the prebuilt layers of this model while quantizing with the existing PyTorch techniques. How do I approach this problem? Or is there an alternative method to bring down the size of the model without affecting its accuracy much?
Can someone help me with this?
Thanks in advance! |
st183504 | Hi all, I have applied dynamic quantization to my T5-small model (built using the AllenNLP framework). However, I see inconsistent latency numbers on different hardware with the same dataset and batch size. On an m5 instance I observe a 10-15% speed-up, while on c5.9xlarge I actually see an increase in latency compared to the unquantized version. I chose dynamic quantization as I was expecting it to more or less work out of the box.
Has anyone run into this issue before? |
st183505 | On servers we are using fbgemm, which uses Intel SIMD intrinsics to speed up the runtime, I think. cc @dskhudia do we know if fbgemm always gives a speedup on AWS machines? |
st183506 | Hi,
I am currently analyzing a model’s reliability, and I wonder how qint8 data, such as a quantized model’s weights, are stored in a computer.
Are these values stored in true (sign-magnitude) form or in complement form?
Thank you |
st183507 | Solved by jerryzh168 in post #3
qint8 uses a signed int8 underneath, so it will be two’s complement, I think
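You can check the bit patterns yourself with something like:
import torch
x = torch.tensor([-1, -128, 127], dtype=torch.int8)
print(x.view(torch.uint8))  # tensor([255, 128, 127], dtype=torch.uint8) -> two's complement bit patterns |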
st183508 | I’m not good at English, so my words may differ from what I want to express.
But I’m glad to discuss and will reply soon if you comment.
When I say true form, I mean sign-magnitude signed binary numbers.
Does PyTorch save the data in this form, in one’s complement, or in two’s complement? |
st183509 | Hi,
I have a PyTorch-trained model that reached a test precision of 90+%.
Then I ran QAT for this model and converted it to a quantized model. When I tested the quantized model on the same test dataset, the precision dropped to 0%.
The framework for my QAT is as below:
model = create_model(num_classes=num_classes)
train_loader, test_loader = prepare_dataloader(config, num_workers=8, train_batch_size=128, eval_batch_size=256)

# Load a pretrained model.
model = load_model(model=model, model_filepath=model_filepath, device=cuda_device)
# Move the model to CPU since static quantization does not support CUDA currently.
model.to(cpu_device)
# Make a copy of the model for layer fusion
fused_model = copy.deepcopy(model)

model.train()
fused_model.train()

# Fuse the model in place rather than manually.
for module_name, module in fused_model.named_children():
    if "extras" in module_name:
        for basic_block_name, basic_block in module.named_children():
            torch.quantization.fuse_modules(basic_block, [["0", "1", "2"]], inplace=True)
            for sub_block_name, sub_block in basic_block.named_children():
                if sub_block_name == "Sequential":
                    torch.quantization.fuse_modules(sub_block, [["0", "1", "2"]], inplace=True)

# Model and fused model should be equivalent.
model.eval()
fused_model.eval()
assert model_equivalence(model_1=model, model_2=fused_model, device=cpu_device, rtol=1e-03, atol=1e-06, num_tests=100, input_size=(1,3,32,32)), "Fused model is not equivalent to the original model!"

quantized_model = QuantizedRSSD(model_fp32=fused_model)
quantization_config = torch.quantization.get_default_qconfig("fbgemm")
quantized_model.qconfig = quantization_config
prepared = torch.quantization.prepare_qat(quantized_model, inplace=True)
prepared.apply(torch.quantization.enable_observer)

# # Use training data for calibration.
if not os.path.exists(quantized_model_filepath):
    print("Training QAT Model...")
    quantized_model.train()
    prepared.apply(torch.quantization.enable_fake_quant)
    train_model(model=prepared, config=config, model_filename=quantized_model_filename,
                train_loader=train_loader, test_loader=test_loader, device=cuda_device,
                learning_rate=1e-2, num_epochs=3)
    prepared.to(cpu_device)

quantized_model = torch.quantization.convert(prepared, inplace=True)
quantized_model.eval()

if not os.path.exists(quantized_model_filepath):
    save_model(model=quantized_model, model_dir=model_dir, model_filename=quantized_model_filename)
What am I missing in my QAT that causes the precision to drop to 0? Please help. Thanks. |
st183510 | QAT needs careful tuning; my guess is your learning rate might be too large here.
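A much smaller fine-tuning learning rate, plus freezing the observers and batch-norm statistics for the last epochs before converting, is usually a safer recipe. Roughly, with the names from your script above:
optimizer = torch.optim.SGD(prepared.parameters(), lr=1e-4)  # a small fraction of the fp32 training lr
# after a few QAT epochs, freeze the quantization params and BN stats, then train a bit more
prepared.apply(torch.quantization.disable_observer)
prepared.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
quantized_model = torch.quantization.convert(prepared.eval(), inplace=False) |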
st183511 | Hello. I am not an expert in PyTorch; however, I need to quantize my model to fewer than 8 bits (e.g. 4 bits, 2 bits, etc.). I’ve seen that PyTorch does not officially support this “aggressive” quantization. Is there any way to do this? I’m asking if there is some sort of documentation with steps to follow (or something like that), because, as I’ve said, I’m not an expert. Plus, I need not only to evaluate the accuracy of the quantized model, but also to compress the model so that I can deploy it on a controller or a mobile device. Thanks in advance to whoever tries to help me. |
st183512 | @supriyar added the quint4x2 datatype to quantization; some tests can be found in pytorch/test_quantize_fx.py at master · pytorch/pytorch · GitHub 7, but we do not have int4 datatype support for any ops except embedding_bag.
We are currently working on a design doc to support custom backends, and I think it will be useful to support this as well. Please keep an eye out for design docs in GitHub issues; I may update here if I still remember this post. |
st183513 | What dtype do you need? We only have quint4x2 currently, but contributions are welcome. We don’t have other use cases right now, so it might not make much sense for us to add other types at the moment. |
st183514 | pytorch: 1.9.1
android: 11
android gradle:
implementation 'org.pytorch:pytorch_android_lite:1.9.0'
implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'
problem:
com.facebook.jni.CppException: Following ops cannot be found. Check fburl.com/missing_ops for
the fix.{prim::is_quantized, } ()
Test code:
class AnnotatedConvBnReLUModel(torch.nn.Module):
    def __init__(self):
        super(AnnotatedConvBnReLUModel, self).__init__()
        self.conv = torch.nn.Conv2d(3, 5, 3, bias=False).to(dtype=torch.float)
        self.bn = torch.nn.BatchNorm2d(5).to(dtype=torch.float)
        self.hs = torch.nn.Hardswish()
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.bn(x)
        x = self.hs(x)
        x = self.dequant(x)
        return x
Does the android lib “pytorch:pytorch_android_lite” not support torch.nn.Hardswish?
model = torchvision.models.quantization.mobilenet_v3_large(pretrained=False, quantize=True)
and the mobilenet_v3_large has the same problem, why? |
st183515 | Solved by cuberfan in post #7
The func “torch.nn.functional.hardsigmoid” can work instead of torch.nn.HardSigmoid |
st183516 | def convert_to_mobile(self):
    logging.info("mode: convert_to_mobile")
    self.net.eval()

    logging.info("step1: quantize model")
    model_quantizer = ModelQuantizer()
    quantized_model = model_quantizer.quantize_model(self.cfg, self.net, self.cur_device, "qnnpack")
    # quantized_model = self.net

    logging.info("step2: script/trace model")
    traced_script_model = torch.jit.script(quantized_model)

    logging.info("step3: model optimization")
    traced_script_model_optimized = optimize_for_mobile(traced_script_model)
    # traced_script_model_optimized = traced_script_model

    logging.info("step4: save mobile model")
    mobile_model_path = self.cfg['common']['convert_model_mobile']
    traced_script_model_optimized._save_for_lite_interpreter(mobile_model_path) |
st183517 | I’d try it with a hardswish module rather than a functional; it looks like pytorch_android_lite doesn’t like the quantization check in the quantized hardswish op.
alternatively you can just remove the line: pytorch/functional.py at 871a31b9c444e52ba0cc6667fb317c11802ac4de · pytorch/pytorch · GitHub 2
in your custom version of PyTorch and see if that works. |
st183518 | The func “torch.nn.functional.hardsigmoid” can work instead of torch.nn.HardSigmoid |
st183519 | Hello, everyone!
I get two warnings when I try to quantize a part of my model.
Do you have any idea where these come from?
Warnings:
Warning 1 : Happens when I run torch.quantization.prepare(model.l_lnrs, inplace=True)
UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.
Warning 2 : Happens when I run torch.quantization.convert(model.l_lnrs, inplace=True)
warnings.warn(
/home/thytu/Prog/Blackfoot/herding-cats-poc1-2l/venv/lib/python3.9/site-packages/torch/ao/quantization/observer.py:886: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
src_bin_begin // dst_bin_width, 0, self.dst_nbins - 1
/home/thytu/Prog/Blackfoot/herding-cats-poc1-2l/venv/lib/python3.9/site-packages/torch/ao/quantization/observer.py:891: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
src_bin_end // dst_bin_width, 0, self.dst_nbins - 1
Model Class:
class LSTM(nn.Module):
    def __init__(self, input_size, output_size, **kwargs):
        super(LSTM, self).__init__()
        self.lstm_num_layers = kwargs.get("lstm_num_layers", 1)
        self.lstm_hidden_size = kwargs.get("lstm_hidden_size", 1)
        self.hidden_size = kwargs.get("hidden_size", 1)
        self.l_lstm = nn.LSTM(
            input_size=input_size,
            hidden_size=self.lstm_hidden_size,
            num_layers=self.lstm_num_layers,
            batch_first=True
        )
        self.l_lnrs = nn.Sequential(
            nn.ReLU(),
            nn.Linear(self.lstm_hidden_size, self.hidden_size),
            nn.ReLU(),
            nn.Linear(self.hidden_size, self.hidden_size),
            nn.ReLU(),
            nn.Linear(self.hidden_size, output_size),
        )

    def forward(self, x):
        h_0 = torch.zeros(self.lstm_num_layers, x.size(0), self.lstm_hidden_size)
        c_0 = torch.zeros(self.lstm_num_layers, x.size(0), self.lstm_hidden_size)
        _, (hn, _) = self.l_lstm(x, (h_0, c_0))
        hn = hn[-1].view(-1, self.lstm_hidden_size)
        return self.l_lnrs(hn)
Quantization function:
def quantize_model(model: torch.nn.Module, sample: torch.Tensor) -> torch.nn.Module:
    SERVER_INFERENCE_CONFIG = 'fbgemm'

    model.l_lnrs = torch.nn.Sequential(
        torch.quantization.QuantStub(),
        *model.l_lnrs,
        torch.quantization.DeQuantStub()
    )
    model.eval()
    model.qconfig = torch.quantization.get_default_qconfig(SERVER_INFERENCE_CONFIG)

    pair_of_modules_to_fuze = []
    for name, layer in model.named_modules():
        if isinstance(layer, torch.nn.Linear):
            pair_of_modules_to_fuze.append([name.split('.')[-1]])
        elif isinstance(layer, torch.nn.ReLU) and len(pair_of_modules_to_fuze) > 0:
            pair_of_modules_to_fuze[-1].append(name.split('.')[-1])
    pair_of_modules_to_fuze = list(filter(lambda x: len(x) == 2, pair_of_modules_to_fuze))

    for i, _ in enumerate(model.l_lnrs):
        model.l_lnrs[i].qconfig = torch.quantization.get_default_qconfig(SERVER_INFERENCE_CONFIG)

    torch.quantization.fuse_modules(model.l_lnrs, pair_of_modules_to_fuze, inplace=True)
    torch.quantization.prepare(model.l_lnrs, inplace=True)

    for data in sample:
        model.forward(data)

    model = torch.quantization.convert(model)
    torch.quantization.convert(model.l_lnrs, inplace=True)

    return model |
st183520 | Solved by Vasiliy_Kuznetsov in post #2
Hi @Thytu , both of these warnings are ok and you are not doing anything wrong.
The first warning is noting that reduce_range may be deprecated in a future release. There are no specific plans yet to do the deprecation.
The second warning is technical debt on the PyTorch quantization team, we wil… |
st183521 | Hi @Thytu , both of these warnings are ok and you are not doing anything wrong.
The first warning is noting that reduce_range may be deprecated in a future release. There are no specific plans yet to do the deprecation.
The second warning is technical debt on the PyTorch quantization team, we will clean this up, thanks for reporting! |
st183522 | Hi @Vasiliy_Kuznetsov
Thank you for your answer!
Seeing a warning is never reassuring
Have a nice day! |
st183523 | I am trying to implement post-training quantization for, say, 8 bits. I saw some custom code using PyTorch, but I am really confused. I trained the model and tested it first. Then I quantized the model to 8 bits. But the weights are still floating point numbers. Should they not be in the range 0 to 255? Please clarify. |
st183524 | Solved by zetyquickly in post #5
I see, the problem is understood.
I suspect, but do not claim, that the numbers you see in a torch.qint8 tensor are just a representation of the integers actually stored there. I mean it implicitly applies dequantization with zero_point and scale to print out the results. |
st183525 | Hello @Kai123
Are you sure that the weights are floats, and not the outputs of the model?
Could you please provide a code snippet where you see this? |
st183526 | If I try this simple example from the docs:
import torch

# define a floating point model
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.fc = torch.nn.Linear(4, 4)

    def forward(self, x):
        x = self.fc(x)
        return x

# create a model instance
model_fp32 = M()

# create a quantized model instance
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32,          # the original model
    {torch.nn.Linear},   # a set of layers to dynamically quantize
    dtype=torch.qint8)   # the target dtype for quantized weights

# run the model
input_fp32 = torch.randn(4, 4, 4, 4)
res = model_int8(input_fp32)
I can check the weights of fc layer with
print(model_int8.fc.weight())
And the output will be:
tensor([[-0.0307, 0.1806, 0.3189, 0.0692],
[ 0.1306, -0.1114, 0.1306, -0.4227],
[-0.1575, -0.1729, -0.4841, -0.2997],
[ 0.2036, 0.0307, 0.4880, 0.3804]], size=(4, 4), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.0038423435762524605,
zero_point=0)
It says that weights are of qint8 type |
st183527 | Thank you for your time and response. You have shown the weights of the fc layer of the quantized model, and here is my question: the weights are float values. Why is that? Should they not be in the range 0 to 255? I think I misunderstood something. Please clarify.
If I run print(model_fp32.fc.weight):
Parameter containing:
tensor([[ 0.1790, -0.3491, -0.1060, 0.0618],
[ 0.4251, -0.0054, -0.4578, 0.0406],
[-0.2153, -0.1724, -0.1873, -0.2386],
[-0.3372, -0.2659, -0.4900, -0.2144]], requires_grad=True)
If I run this: print(model_int8.fc.weight())
tensor([[ 0.1806, -0.3497, -0.1076, 0.0615],
[ 0.4266, -0.0038, -0.4573, 0.0423],
[-0.2152, -0.1729, -0.1883, -0.2383],
[-0.3382, -0.2652, -0.4919, -0.2152]], size=(4, 4), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.0038432059809565544,
zero_point=0)
For both the fp32 and int8 models, the weights are almost the same, and both are printed as floating point numbers. |
st183528 | I see, the problem is understood.
I suspect, but do not claim, that the numbers you see in a torch.qint8 tensor are just a representation of the integers actually stored there. I mean it implicitly applies dequantization with zero_point and scale to print out the results.
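You can look at both views of the same tensor, e.g.:
w = model_int8.fc.weight()
print(w.int_repr())                   # the int8 values actually stored
print(w.q_scale(), w.q_zero_point())  # the quantization parameters
print(w.dequantize())                 # what printing the tensor shows: (int_repr - zero_point) * scale |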
st183529 | I quantized my CNN network with the static quantization module and per-channel quantization. Here is a part of my quantized network:
Sequential(
(0): QuantizedConv2d(2, 2, kernel_size=(1, 8), stride=(1, 2), scale=0.43247514963150024, zero_point=62, padding=(0, 3), groups=2)
(1): QuantizedConvReLU2d(2, 32, kernel_size=(1, 1), stride=(1, 1), scale=0.0933830738067627, zero_point=0)
(2): Identity()
(3): Identity()
(4): Dropout(p=0.5, inplace=False)
)
In the end, I tried to extract the weights and quantization parameters of each convolution kernel.
In [518]: layers[0].weight()
Out[518]:
tensor([[[[ 0.5521, -0.4270, -0.9423, -0.8687, -0.4932, -0.3313, -0.3755,
0.0221]]],
[[[-0.4360, -0.6763, -0.7154, -0.5980, -0.6372, -0.0447, -0.1733,
-0.2962]]]], size=(2, 1, 1, 8), dtype=torch.qint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.0074, 0.0056], dtype=torch.float64),
zero_point=tensor([0, 0]), axis=0)
I tried to read the weights, but I got this error:
In [562]: layers[0].weight().data[0,0,0,0]
Traceback (most recent call last):
File "<ipython-input-562-742a141c2263>", line 1, in <module>
layers[0].weight().data[0,0,0,0]
RuntimeError: Setting strides is possible only on uniformly quantized tensor
Also, I can get a single scale for the whole layers[0], but I do not know what it relates to compared to the per-channel scales (q_per_channel_scales)?
In [567]: layers[0].scale
Out[567]: 0.43247514963150024 |
st183530 | Solved by babak_hss in post #2
OK. It seems like we can use
layer[0].weight().int_repr().data[i,j,l,m]
to get the INT8 representation of the weight entries.
Also,
layer[0].weight().dequantize()
gives the tensor of the weights in FP format to have element-wise access to its contents. |
st183531 | OK. It seems like we can use
layer[0].weight().int_repr().data[i,j,l,m]
to get the INT8 representation of the weight entries.
Also,
layer[0].weight().dequantize()
gives the tensor of the weights in FP format, with element-wise access to its contents. As for the single layers[0].scale / layers[0].zero_point values, I believe those are the quantization parameters of the layer's output activations rather than of its weights. |
st183532 | Hi,
I tried adopting your advice and it works well.
But I have another question:
how can I write a changed value back into the model?
If I change the value directly, like I would in a normal model, it does not affect the model's values, e.g.:
number.int_repr()[0] = number.int_repr()[0] * 2
number.int_repr()[0]
# no change
number.dequantize() = number.dequantize() * 2
number.dequantize()
# no change either
Thank you |
st183533 | I am using a UNet model for semantic segmentation. I pass a batch of images to the model. The model is expected to output 0 or 1 for each pixel of the image (depending on whether the pixel is part of a person or not): 0 is for background and 1 is for foreground.
I am trying to quantize the UNet model with the PyTorch quantization APIs (static quantization). The model accuracy is good with the FBGEMM config. With QNNPACK, however, the model outputs all pixels as black (background) for all images: the model output is very high positive for the background channel and very high negative for the foreground channel (I apply softmax to get the final classification of each pixel). The same code that works fine for FBGEMM produces very different results for QNNPACK. Is there anything missing in the QNNPACK code? I am pasting my code below for reference.
# Static Quantization - FBGEMM/QNNPACK
import torch.quantization as Q
import torch
framework = 'qnnpack' # change to fbgemm for x86 architecture
per_channel_quantized_model = model
per_channel_quantized_model.eval()
per_channel_quantized_model.fuse_model()
per_channel_quantized_model.qconfig = Q.get_default_qconfig(framework)
torch.backends.quantized.engine = framework
print("Preparing . . .")
Q.prepare(per_channel_quantized_model, inplace=True)
print("Running model . . .")
eval_model_for_quantization(per_channel_quantized_model, 'cpu')
print("Converting . . .")
Q.convert(per_channel_quantized_model, inplace=True)
print("***************")
print(per_channel_quantized_model)
print("***************")
print("Checking Accuracy . . .")
accuracy = eval_model_for_quantization(per_channel_quantized_model, 'cpu')
print()
print('Evaluation accuracy after quantization', accuracy)
One issue that I did encounter is that the upsampling layers of UNet use nn.ConvTranspose2d, which is not supported for quantization. Hence, before this layer we need to dequantize tensors, apply nn.ConvTranspose2d, and then requantize for subsequent layers. Can this be the reason for the lower accuracy?
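(Concretely, in the forward pass I sandwich each ConvTranspose2d between a DeQuantStub and a QuantStub, roughly like this, with illustrative module names:)
x = self.dequant_up(x)   # back to fp32
x = self.up_conv(x)      # nn.ConvTranspose2d runs unquantized
x = self.quant_up(x)     # re-quantize for the following quantized layers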
NOTE - I did try QAT for QNNPACK. However, the model output does not change, i.e. it still gives all black pixels. |
st183534 | Here I have more details…
Below is the model output for some inputs. The value in brackets on the left is the target; the values on the right are the outputs for the 2 channels (background and foreground). The right-hand values are passed to a softmax function and the final result is obtained (…and compared with the target).
Model output before quantization:
..........(1.0) (1.42 16.16)
..........(1.0) (-40.55 42.14)
..........(0.0) (15.20 -19.15)
..........(1.0) (-21.16 25.58)
..........(1.0) (-43.54 41.77)
..........(0.0) (19.74 -23.29)
..........(1.0) (-29.66 33.56)
..........(1.0) (1.23 -7.96)
..........(1.0) (-35.54 42.13)
..........(0.0) (16.74 -19.38)
..........(0.0) (9.40 -2.54)
..........(0.0) (21.67 -27.59)
..........(1.0) (-52.96 53.53)
..........(0.0) (18.02 -20.90)
..........(1.0) (-19.79 22.51)
..........(0.0) (13.33 -20.11)
..........(0.0) (29.95 -31.26)
..........(0.0) (23.35 -29.38)
..........(1.0) (-15.23 9.97)
..........(0.0) (18.14 -24.80)
..........(0.0) (19.13 -26.98)
..........(1.0) (-18.12 22.96)
Model output after FBGEMM quantization - as you can see below, the output values did change, but only to a small extent:
..........(1.0) (0.00 18.41)
..........(1.0) (-45.41 50.32)
..........(0.0) (13.50 -14.73)
..........(1.0) (-24.55 30.69)
..........(1.0) (-39.28 38.05)
..........(0.0) (13.50 -17.18)
..........(1.0) (-22.09 25.78)
..........(1.0) (2.45 -7.36)
..........(1.0) (-23.32 29.46)
..........(0.0) (17.18 -23.32)
..........(0.0) (12.27 -6.14)
..........(0.0) (20.87 -23.32)
..........(1.0) (-45.41 49.10)
..........(0.0) (15.96 -18.41)
..........(1.0) (-17.18 20.87)
..........(0.0) (11.05 -18.41)
..........(0.0) (27.00 -27.00)
..........(0.0) (17.18 -23.32)
..........(1.0) (-2.45 1.23)
..........(0.0) (15.96 -20.87)
..........(0.0) (18.41 -24.55)
..........(1.0) (-15.96 20.87)
Now look at the following model output for QNNPACK quantization. The output is very different from the unquantized version. In particular, the values for all pixels are positive for the background channel and negative for the foreground channel.
..........(1.0) (14.06 -17.12)
..........(1.0) (11.61 -16.51)
..........(0.0) (20.17 -25.06)
..........(1.0) (18.34 -22.01)
..........(1.0) (15.89 -14.06)
..........(0.0) (20.17 -25.67)
..........(1.0) (22.62 -29.34)
..........(1.0) (24.45 -28.73)
..........(1.0) (14.06 -20.17)
..........(0.0) (22.62 -28.12)
..........(0.0) (27.51 -20.17)
..........(0.0) (21.40 -23.84)
..........(1.0) (20.78 -29.34)
..........(0.0) (17.12 -23.23)
..........(1.0) (28.73 -31.18)
..........(0.0) (18.34 -20.78)
..........(0.0) (20.17 -23.84)
..........(0.0) (20.78 -23.84)
..........(1.0) (13.45 -16.51)
..........(0.0) (17.12 -21.40)
..........(0.0) (21.40 -25.67)
..........(1.0) (21.40 -26.90)
Any thoughts by anyone? |
st183535 | Did you set the qengine to qnnpack before you evaluated the model? You can set the qengine with https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_quantized.py#L110 10 |
st183536 | Hi Jerry - thanks for the reply.
Yes, I have done this already. You can see the following code line in my first post…
amitdedhia:
torch.backends.quantized.engine = framework
Is the code okay? Am I doing anything wrong? |
st183537 | I see. The quantization code looks correct. cc @supriyar @dskhudia could you take a look? |
st183538 | Update - Earlier, when I worked on QAT, I was using the wrong config. After correcting it, QAT helps improve the accuracy. However, I am still interested in knowing why (in the case of static quantization with the qnnpack config) the output value for all pixels is positive for the background channel and negative for the foreground channel.
The model I am using is available here:
source - https://github.com/thuyngch/Human-Segmentation-PyTorch 15
model file - https://drive.google.com/file/d/17GZLCi_FHhWo4E4wPobbLAQdBZrlqVnF/view 3 |
st183539 | amitdedhia:
I am still interested in knowing (in case of static quantization for qnnpack config) why the output value for all pixels is positive for background channel and negative for foreground channel.
Some models are more sensitive to quantization than others, and you can try selectively quantizing only part of the model, e.g. keeping the first conv unquantized, to mitigate the problem for post-training static quantization. Typically QAT will help in these cases as well.
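In eager mode you can skip a module by clearing its qconfig before prepare, e.g. (module name illustrative; you will also need a DeQuantStub/QuantStub pair around the skipped part):
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
model.first_conv.qconfig = None   # this submodule and its children stay fp32
torch.quantization.prepare(model, inplace=True) |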
st183540 | We also have the eager mode numeric suite to help debug which layer is more sensitive to quantization: https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html?highlight=transformer 7
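For example, comparing the float and quantized weights layer by layer looks roughly like this (float_model / qmodel being your fp32 and quantized models):
import torch
import torch.quantization._numeric_suite as ns

wt_compare_dict = ns.compare_weights(float_model.state_dict(), qmodel.state_dict())
for key in wt_compare_dict:
    w_fp32 = wt_compare_dict[key]['float']
    w_int8 = wt_compare_dict[key]['quantized'].dequantize()
    print(key, (torch.norm(w_fp32 - w_int8) / torch.norm(w_fp32)).item())  # relative weight error per layer |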
st183541 | We are landing https://github.com/pytorch/pytorch/pull/46077 20, which fixes a bug where some qnnpack activations had incorrect numerical values if the preceding layer was in NHWC. If your model uses one of the activations patched by that PR, it might help. |
st183542 | Hi @amitdedhia,
I am also trying to use a quantized UNet model (static quantization) for semantic segmentation (10 classes). I am facing issues with the fbgemm configuration itself, as mentioned in the forum post 3. If possible, can you point out where I’m going wrong in my quantization? Meanwhile, I’m trying to debug using the link shared by @Vasiliy_Kuznetsov |
st183543 | Working on Windows with Pytorch 1.9.1+cu102
I have an object detector that runs with a Torchvision FP32 ResNet-18 backbone.
Added QuantStub and DequantStub to my model, calling the quant stub on the input and the dequant stub on the output (in the model’s forward method)
Quantized the model using fbgemm
When the ResNet model tries to perform out += identity (From Torchvision implementation, BasicBlock class), I get the following error:
File "d:\anaconda3\lib\site-packages\torchvision\models\resnet.py", line 80, in forward
out += identity
NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::add.out' is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, SparseCsrCPU, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
How can I overcome it? |
st183544 | Solved by Vasiliy_Kuznetsov in post #2
Check out here (Quantization — PyTorch 1.9.1 documentation) for a description of this error, it means you are passing a quantized tensor to a fp32 kernel.
For eager mode quantization, you could edit the model code to replace the addition with FloatFunctional.
For a backbone such as ResNet18, you c… |
st183545 | Check out here (Quantization — PyTorch 1.9.1 documentation) for a description of this error, it means you are passing a quantized tensor to a fp32 kernel.
For eager mode quantization, you could edit the model code to replace the addition with FloatFunctional.
For a backbone such as ResNet18, you could also use FX graph mode quantization, which will do the syntax transforms for you automatically. You can check out here (Quantization — PyTorch 1.9.1 documentation 3) for an API example (scroll down to FX Graph Mode Quantization).
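For the eager mode route, the usual pattern for the residual add is roughly:
# in BasicBlock.__init__
self.skip_add = torch.nn.quantized.FloatFunctional()
# in BasicBlock.forward, instead of out += identity
out = self.skip_add.add(out, identity) |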
st183546 | Hello,
To my knowledge, reusing networks in Pytorch typically requires a network class definition and a weights file (i.e., .pth), which is saved and loaded using the state_dict mechanism.
In quantization, the problem is that the quantization process (e.g., post-training quantization) modifies the network class instance. This means that in order to reproduce the quantized model, either a programmer needs to define a new class that will be compatible with the modified instance, or the quantization process must be repeated on every new instance of the original FP32 model.
On Linux machines, it might be a reasonable workaround to post-training-quantize every new instance of the network. However, this scenario is not possible on Windows machines, since performing quantization is not currently supported on them.
What is therefore a recommended practice for really saving and loading an already-quantized network?
For example, is using Python’s pickle mechanism going to do the job? Can I quantize a network on Linux, save it using pickle, and reload it on a Windows machine? Is this a recommended approach? |
st183547 | you might want to save the quantized_model using
save_torchscript_model(model=quantized_model, model_dir=model_dir, model_filename=quantized_model_filename)
and load the script in the other place for inference:
quantized_jit_model = load_torchscript_model(model_filepath=quantized_model_filepath, device=cpu_device)
quantized_jit_model.eval()
outputs = quantized_jit_model(inputs)
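(For reference, these two helpers are just thin wrappers around TorchScript; roughly:)
import os
import torch

def save_torchscript_model(model, model_dir, model_filename):
    torch.jit.save(torch.jit.script(model), os.path.join(model_dir, model_filename))

def load_torchscript_model(model_filepath, device):
    return torch.jit.load(model_filepath, map_location=device) |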
st183548 | Quantization previously didn’t work on Windows because fbgemm wasn’t supported on Windows. This is no longer the case, so there should be no issue (Is this available on windows? · Issue #150 · pytorch/FBGEMM · GitHub 1). Additionally, the issue wasn’t that you couldn’t do quantization on Windows; it’s that Windows didn’t have kernels for quantized ops, so you could still do the quantization, you just couldn’t run the quantized model.
note: torchscript wouldn’t get around this, but it is a good solution to not wanting to go through the quantization process each time you want to load a quantized model. |
st183549 | I tried to post-training quantize a model on Windows using Pytorch version 1.9.1+cu102, following a tutorial on pytorch.org.
The application crashes when I run torch.backends.quantized.engine = 'qnnpack'. I concluded that it is just not supported on Windows.
Is there a way to overcome it? |
st183550 | I was sure it also crashed when I used fbgemm, but somehow it hasn’t crashed now as I executed torch.backends.quantized.engine = 'fbgemm'. Thanks! |
st183551 | So I still cannot run a quantized model on Windows? I was finally able to quantize it, but now cannot inference it. I was able to inference the quantized models from Torchvision. |
st183552 | I have a QAT quantized_model which runs with no problem:
quantized_model.eval()
_ = quantized_model(torch.rand(1,3,300,300))
and it also can be traced successfuly:
trace_model = torch.jit.trace(quantized_model, torch.Tensor(1,3,300,300))
but when I tried to run the trace_model as below:
trace_model.eval()
_ = trace_model(torch.rand(1,3,300,300))
I encountered the following error message:
_ = trace_model(torch.rand(1,3,300,300))
File "/home/paul/rknn2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<string>", line 63, in <foward op>
dilation: List[int],
ceil_mode: bool):
output, indices = torch.max_pool2d_with_indices(self, kernel_size, stride, padding, dilation, ceil_mode)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
def backward(grad_output):
grad_self = torch.max_pool2d_with_indices_backward(grad_output, self, kernel_size, stride, padding, dilation, ceil_mode, indices)
RuntimeError: Could not run 'aten::max_pool2d_with_indices' with arguments from the 'QuantizedCPU' backend. 'aten::max_pool2d_with_indices' is only available for these backends: [CPU, CUDA, Named, Autograd, Profiler, Tracer].
What am I missing? How can I fix this error? Thank you for your help. |
st183553 | looks like the quantized torchscript is not playing well with the autograd, try running without gradient
with torch.no_grad():
_ = trace_model(torch.rand(1,3,300,300))
otherwise not sure, if you can give a minimal repro that would help. |
st183554 | @HDCharles
Thanks for the help. It works now!
The reason I created this test script was to reproduce the same “QuantizedCPU” backend error I encountered when I saved the trace_model into a file, say “trace_model.pt”, and then tried to load it using
rknn.load_pytorch(model="trace_model.pt", input_size_list=input_size_list)
where rknn is a toolkit for the RockChip NPU.
Now my question is: is there a way to save the trace_model with torch.no_grad(), such that rknn.load_pytorch would not encounter the “QuantizedCPU” backend error?
Or does one have to modify the rknn.load_pytorch module to add
with torch.no_grad():
in it before parsing the model script?
Thank you again for your help. |
st183555 | You can try just shoving that into the forward method of the model, not sure if that would work.
Note: don't do something like:
if train_flag:
    code
else:
    with torch.no_grad():
        code
because the if statement will break the tracing.
Alternatively you can try:
with torch.no_grad():
    rknn.load_pytorch(…) |
st183556 | x=self.quant(x)
x=self.conv(x)
x=self.bn(x)
x=self.act(x)
x=self.dequant(x)
I trained a QAT model, and when I tried evaluating it I got the error:
Could not run ‘quantized::conv2d.new’ with arguments from the ‘QuantizedCUDA’ backend … … ‘quantized::conv2d.new’ is only available for these backends: [QuantizedCPU, …].
When I added x = x.to('cpu') before x = self.quant(x), to make it a QuantizedCPU backend (note that doing so, I am unable to train the model again, as I will get:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
which is another problem…?), I then get:
Could not run ‘aten::silu.out’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. ‘aten::silu.out’ is only available for these backends: [CPU, CUDA,…
So I changed the position of dequant to
x = x.to('cpu')
x = self.quant(x)
x = self.conv(x)
x = self.bn(x)
x = self.dequant(x)
x = self.act(x)
and I get an error pointing to x = self.quant(x):
Could not run ‘aten::quantize_per_tensor’ with arguments from the ‘QuantizedCPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. ‘aten::quantize_per_tensor’ is only available for these backends: [CPU, CUDA,
and if I remove x = self.quant(x), I get back:
Could not run ‘quantized::conv2d.new’ with arguments from the ‘CPU’ backend. This could be because the operator doesn’t exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. ‘quantized::conv2d.new’ is only available for these backends: [QuantizedCPU,
Please help, as I’ve been encountering error after error even after searching online for solutions. |
st183557 | Hi, I’m also getting the same problem as you. If possible, could you give me some solutions to fix this error? |
st183558 | MrOCW:
Could not run ‘quantized::conv2d.new’ with arguments from the ‘QuantizedCUDA’ backend … … ‘quantized::conv2d.new’ is only available for these backends: [QuantizedCPU, …].
this error occurs when you try to run a quantized op with weight or input on cuda. For example if you were to take a correctly quantized model and then do .to(‘cuda’) and then run the model, you’d get this error.
based on the second error message, your weight is on cuda. Note: changing where x.to(‘cpu’) is located will not fix this problem if the actual op weight is on cuda.
my guess is that somewhere in your code you have model.to(‘cuda’) (likely during training) and you are not converting it back to cpu i.e. model.to(‘cpu’) before trying to do quantization.
Additionally, it looks like your self.act op is aten::silu, which isn’t being converted to a quantized op (looks like it doesn’t have a quantized implementation: pytorch/activation.py at master · pytorch/pytorch · GitHub 2). You can either implement it yourself or change it to something along the lines of
y = sigmoid(x)
x = y * x
I would also maybe start with a less weird model and make sure the flow works for you before iterating on that. Something like (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.9.1+cu102 documentation 6 could be a good starting point.
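For eager mode, a sketch of a quantization-friendly SiLU (using the quantized sigmoid plus FloatFunctional for the multiply) could look like:
class QuantSiLU(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sigmoid = torch.nn.Sigmoid()                 # has a quantized kernel
        self.mul = torch.nn.quantized.FloatFunctional()   # quantization-aware multiply

    def forward(self, x):
        return self.mul.mul(x, self.sigmoid(x)) |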
st183559 | my guess is that somewhere in your code you have model.to(‘cuda’) (likely during training) and you are not converting it back to cpu i.e. model.to(‘cpu’) before trying to do quantization.
Strange because I have done model.to(‘cpu’) before torch.quantization.convert(model) |
st183560 | you can inspect the model and identify whether the weight is stored correctly, its possible its not transfering over or something, though usually modules move over their attributes by default. |
st183561 | I tried to use torch.autograd.grad() to calculate gradients for a quantized model, just as what we usually do on full precision models:
for idx, (inputs, targets) in enumerate(data_loader):
    with torch.enable_grad():
        inputs.requires_grad = True
        outputs = quantized_model(inputs)
        loss = criterion(outputs, targets)
        grads = torch.autograd.grad(loss, inputs)[0]
But I got a RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Do models quantized with PyTorch quantization currently not support backpropagation? Is there some method I can use to calculate gradients for PyTorch quantized models? |
st183562 | Hi @yyl-github-1896
quantized models currently run only during inference, so you can only call forward on them. If you are trying out quantization aware training (Quantization Recipe — PyTorch Tutorials 1.9.1+cu102 documentation 4), we do support back-propagation in that case during training.
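The QAT flow is roughly (eager mode; you also need QuantStub/DeQuantStub in the model):
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
prepared = torch.quantization.prepare_qat(model)
# fine-tune `prepared` as usual: the fake-quant ops are differentiable via the straight-through estimator
prepared.eval()
model_int8 = torch.quantization.convert(prepared) |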
st183563 | Thank you for the reply. I know that quantization aware training uses fake quantization during training, which simulates quantization with fp32. I want to know what the difference is between fake quantization and real quantization, especially when we do back-propagation through them. |
st183564 | fake quantization simulates quantization but uses high precision data types
so for example imagine if you were trying to quantize to integers.
mathematically a quantized linear op would be:
X = round(X).to(int)
weight = round(weight).to(int)
out = X*weight
whereas a fake_quantized linear would be
X = round(X).to(fp32)
weight = round(weight).to(fp32)
out = X*weight
In practice quantized weights are stored as quantized tensors which are difficult to interact with in order to make them able to perform quantized operations quickly.
fake_quantized weights are stored as floats so you can interact with them easily in order to do gradient updates.
most quantized ops do not have a gradient function, so you won't be able to take a gradient of them. Note: even quantization aware training doesn't really give gradients of the model, see: [1308.3432] Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation 2
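You can see the "float, but snapped to the quantization grid" behavior directly, e.g.:
import torch
x = torch.randn(4)
x_fq = torch.fake_quantize_per_tensor_affine(x, scale=0.1, zero_point=0, quant_min=-128, quant_max=127)
print(x_fq.dtype)  # torch.float32 -- still float, just rounded to multiples of the scale |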
st183565 | Hello, I am trying to inspect the scale and zero_point values of a “QFunctional” in a TorchScript model.
But I always get the errors “'RecursiveScriptModule' object has no attribute 'scale'” and the same for “zero_point”.
For “Conv” I can directly get the “scale” and “zero_point” values with conv.scale / conv.zero_point.
And the “QFunctional” is actually an “Add”. |
st183566 | this works, can you give a repro?
model = nn.Sequential(torch.quantization.QuantStub(), torch.nn.Conv2d(2,2,2))
model.qconfig = torch.quantization.get_default_qconfig()
modelp = torch.quantization.prepare(model)
modelp(torch.randn(1,2,32,32))
modelq = torch.quantization.convert(modelp)
modelj = torch.jit.script(modelq)
print(modelq[1].scale)
print(modelj[1].scale)
also it looks like QFunctionals should still work in the same way (pytorch/functional_modules.py at 22ec6250283568fc475d030dc0b15eda5d13b00b · pytorch/pytorch · GitHub 1)
so you should be able to inspect them directly. |
st183567 | Hi,
I quantized a pre-trained FP32 model to INT8 using PTQ in PyTorch, but when I print out the model parameters, they are not integers (INT8). Why? And how can I obtain the INT8 model?
PS. My goal is to port the PyTorch model to an FPGA and run it in INT8 - is there any suggestion?
Thanks |
st183568 | Solved by HDCharles in post #2
qtensor.int_repr() gives the integer weights but I think there might be a misunderstanding.
in general quantized tensors contain 3 components: int weights, scale and zero_points.
if your original tensor is something like: 0, .1, .2, .31
if you were to quantize it, you might get something with wei… |
st183569 | qtensor.int_repr() gives the integer weights but I think there might be a misunderstanding.
in general quantized tensors contain 3 components: int weights, scale and zero_points.
if your original tensor is something like: 0, .1, .2, .31
if you were to quantize it, you might get something with weight=[0,1,2,3] scale=.1, zero_point=0.
we aren't actually replacing the weights with bare integer values; for a to-be-quantized tensor T we are trying to find s, z and T_int such that s * (T_int - z) ≈ T, so what you are seeing printed is the value of s * (T_int - z) (note: the exact quant/dequant equation depends on the quantization scheme).
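A quick check of the round trip, e.g.:
import torch
t = torch.tensor([0.0, 0.1, 0.2, 0.31])
q = torch.quantize_per_tensor(t, scale=0.1, zero_point=0, dtype=torch.qint8)
print(q.int_repr())    # tensor([0, 1, 2, 3], dtype=torch.int8)
print(q.dequantize())  # tensor([0.0000, 0.1000, 0.2000, 0.3000]) -- what printing the tensor shows |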
st183570 | Hi,
I have finished QAT on my model seemingly successfully, but when I run
_ = model(x)
I got the following error:
_ = model(x)
File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/paul/rknn2/PytorchProject/ssd/r-ssd8.py", line 255, in forward
x = self.dequant(x)
File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/paul/pytorch/lib/python3.6/site-packages/torch/nn/quantized/modules/__init__.py", line 74, in forward
return Xq.dequantize()
AttributeError: 'tuple' object has no attribute 'dequantize'
The model class is defined as below:
class QuantizedRSSD(nn.Module):
    def __init__(self, model_fp32):
        super(QuantizedRSSD, self).__init__()
        # QuantStub converts tensors from floating point to quantized.
        # This will only be used for inputs.
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point.
        # This will only be used for outputs.
        self.dequant = torch.quantization.DeQuantStub()
        # FP32 model
        self.model_fp32 = model_fp32

    def forward(self, x):
        x = self.quant(x)
        x = self.model_fp32(x)
        x = self.dequant(x)
        return x
Can anyone help to identify why the QuantizedRSSD seems to work well during QAT, but complains about a missing dequantize attribute when I run it? Does torch.quantization.DeQuantStub() work here, or what is missing?
Thanks a lot for your help. |
st183571 | Solved by ynjiun_wang in post #4
the torch.jit.trace works when I manually split the map() into
x1,x2=self.dequant(x1),self.dequant(x2) |
st183572 | DeQuantStub only works for Tensors, if you have a tuple output (and both of them needs to be dequantized), you can write something like: x = map(self.dequant, x) |
st183573 | @jerryzh168
Thank you so much for your help. Your solution works!
Then I tried to port the resulting quantized_model to RKNN hardware, which requires running
trace_model = torch.jit.trace(quantized_model, torch.Tensor(1,3,300,300))
before it can convert the quantized_model into an rknn model. But when running torch.jit.trace, I encountered the following error:
RuntimeError: Tracer cannot infer type of <map object at 0x7f308b270cc0>
:Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type map.
I believe it is the x = map(self.dequant, x) we just introduced to solve the previous problem.
I wonder, is there a way to get torch.jit.trace to work in this case? Thanks a lot for your help again. |
st183574 | the torch.jit.trace works when I manually split the map() into
x1,x2=self.dequant(x1),self.dequant(x2) |
st183575 | I tried to fine-tune a ResNet-18 model on GPU. The training code ran OK. However, I decided to further train the produced model using a different loss function (switched from L1 loss to L2 loss) and got the error below. No other code or data changed.
RuntimeError: Found dtype Double but expected Float
Any idea what might have caused the issue and how to fix it?
Below is the information of my environment.
OS: Ubuntu 20.04 in Docker
python 3.8.10
torch 1.9.1+cu111
torchaudio 0.9.1
torchvision 0.10.1+cu111 |
st183576 | It doesn’t work even if I call loss = loss.to(torch.float32) to convert loss to torch.float32 before backward propagation. |
st183577 | you shouldnt do it that way, you need to change the target/label to float, here is an example:
loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target.float()) # the floating can happen here
output.backward() |
st183578 | I'm not sure; one reason could be that the backprop through MSE cannot handle the double format since it is the squared L2 norm, whereas for L1 it is not an issue. |
st183579 | It should not raise an issue in the backward, if the forward was successfully executed.
As mentioned in the other thread: could you post an executable code snippet which we could use to reproduce and debug this issue, please? |
st183580 | Hello,
I have built a simple convolutional net and would like to quantize it for deployment.
Here is the code used for quantization.
def quantize_net(net):
    for module_name, module in net.named_children():
        if module_name in ['conv1', 'conv3_1', 'conv4_1', 'conv5_pa']:
            torch.quantization.fuse_modules(module, ['conv', 'bn', 'activation'], inplace=True)
        elif module_name in ['conv2', 'conv3_2', 'conv4_2']:
            for submodule_name, submodule in module.named_children():
                if submodule_name in ['depthwise', 'pointwise']:
                    torch.quantization.fuse_modules(submodule, ['conv', 'bn', 'activation'], inplace=True)
        if module_name in ['conv5_pb']:
            torch.quantization.fuse_modules(module, ['conv', 'bn'], inplace=True)

    net.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    torch.quantization.prepare(net, inplace=True)
    net(torch.randint(low=0, high=255, size=(1000, 1, 40, 40), dtype=torch.float32))
    torch.quantization.convert(net, inplace=True)
All the convolution weights are quantized to quint8, but the biases stay in fp32. I assume that during inference the biases are cast to int32 and added to the output of the conv operator. However, I would like to quantize the biases to quint8.
Is this supported at the moment? Is there a way I could achieve it?
Thanks! |
st183581 | Hi @dalnoguer
We currently don’t support quantizing biases to quint8. The main reason being our quantized backend (fbgemm) supports fp32 biases.
If you wish to update this to use quint8 bias, you’d have to write your own custom operator that accepts biases in this format along with the necessary kernel to do the computation in int8. |
st183582 | I am trying to do post-training quantization.
I added QuantStub() and DeQuantStub() to the model:
super(ConvTasNet, self).__init__()
# Hyper-parameter
self.N, self.L, self.B, self.Sk, self.H, self.P, self.X, self.R, self.C = N, L, B, Sk, H, P, X, R, C
self.norm_type = norm_type
self.causal = causal
self.mask_nonlinear = mask_nonlinear
self.skip_conn = []
# quantization
self.quant = QuantStub()
self.dequant = DeQuantStub()
# Components
self.encoder = Encoder(L, N)
self.separator = TemporalConvNet(N, B, Sk, H, P, X, R, C, norm_type, causal, mask_nonlinear)
self.decoder = Decoder(N, L)
# init
for p in self.parameters():
    if p.dim() > 1:
        nn.init.xavier_normal_(p)

def forward(self, mixture):
    """
    Args:
        mixture: [M, T], M is batch size, T is #samples
    Returns:
        est_source: [M, C, T]
    """
    mixture = self.quant(mixture)
    mixture_w = self.encoder(mixture)
    est_mask = self.separator(mixture_w)
    est_source = self.decoder(mixture_w, est_mask)
    est_source = self.dequant(est_source)
    return est_source
Now, I want to evaluate the model both without quantization and with the quantized, converted model:
if __name__ == '__main__':
    args = parser.parse_args()
    print(args)
    # evaluate(args)
    num_calibration_batches = 10

    model = ConvTasNet.load_model('E:\\Project\\MVNS_EC\\quantization\\Simple-Neural-Nets-with-PyTorch-master\\Eager_Post_quant\\MVNS_model\\Quantization\\model\\final.pth.tar')
    model.eval()
    model.qconfig = torch.quantization.default_qconfig
    torch.quantization.prepare(model, inplace=True)
    evaluate(model, args)
    torch.quantization.convert(model, inplace=True)
    evaluate(model, args)
Evaluation without quantization works, and the model is converted to a quantized model.
I am receiving an error on the last line, when I try to evaluate with the quantized model.
Please guide me: what am I doing wrong? |
st183583 | Solved by jerryzh168 in post #13
I think this error typically means that QuantStub/DeQuantStub is not placed correctly, probably a missing DeQuantStub before conv2d. or conv2d is not quantized. |
st183584 | I added model.to(‘cpu’) before model.eval(), still facing the same issue.
is it because I didn’t fuse the model? or is it the placement of QuantStub()? |
st183585 | I am using Conv1d, PReLU and LayerNorm layers.
Does PyTorch support quantization of these layers,
or is it possible only for the following layers?
(Actual layer : quantized layer)
nn.Linear: nnq.Linear,
nn.ReLU: nnq.ReLU,
nn.ReLU6: nnq.ReLU6,
nn.Conv2d: nnq.Conv2d,
nn.Conv3d: nnq.Conv3d,
nn.BatchNorm2d: nnq.BatchNorm2d,
nn.BatchNorm3d: nnq.BatchNorm3d,
QuantStub: nnq.Quantize,
DeQuantStub: nnq.DeQuantize
# Wrapper Modules:
nnq.FloatFunctional: nnq.QFunctional
# Intrinsic modules:
nni.ConvReLU2d: nniq.ConvReLU2d,
nni.ConvReLU3d: nniq.ConvReLU3d,
nni.LinearReLU: nniq.LinearReLU,
nniqat.ConvReLU2d: nniq.ConvReLU2d,
nniqat.LinearReLU: nniq.LinearReLU,
nniqat.ConvBn2d: nnq.Conv2d,
nniqat.ConvBnReLU2d: nniq.ConvReLU2d,
# QAT modules:
nnqat.Linear: nnq.Linear,
nnqat.Conv2d: nnq.Conv2d, |
st183586 | I built a PyTorch model based on Conv1d. I have gone through quantization and implemented some cases as well, but all of those work on Conv2d, BN and ReLU, whereas my model is built on Conv1d and PReLU. Is this quantization valid for these network layers? When I did the quantization, only the layers included in the mapping were quantized. Let me show you the layers for which quantization is valid (i.e. which are included in the mapping).
Please find the list of modules it supports (according to the source code I went through):
(Actual layer : quantized layer)
nn.Linear: nnq.Linear,
nn.ReLU: nnq.ReLU,
nn.ReLU6: nnq.ReLU6,
nn.Conv2d: nnq.Conv2d,
nn.Conv3d: nnq.Conv3d,
nn.BatchNorm2d: nnq.BatchNorm2d,
nn.BatchNorm3d: nnq.BatchNorm3d,
QuantStub: nnq.Quantize,
DeQuantStub: nnq.DeQuantize,
Wrapper Modules:
nnq.FloatFunctional: nnq.QFunctional,
Intrinsic modules:
nni.ConvReLU2d: nniq.ConvReLU2d,
nni.ConvReLU3d: nniq.ConvReLU3d,
nni.LinearReLU: nniq.LinearReLU,
nniqat.ConvReLU2d: nniq.ConvReLU2d,
nniqat.LinearReLU: nniq.LinearReLU,
nniqat.ConvBn2d: nnq.Conv2d,
nniqat.ConvBnReLU2d: nniq.ConvReLU2d,
QAT modules:
nnqat.Linear: nnq.Linear,
nnqat.Conv2d: nnq.Conv2d,
Does this mean that quantization can’t be done on Conv1d and PReLU? |
st183587 | Conv1d support is being added, https://github.com/pytorch/pytorch/pull/38438 20. We don't support fusion with PReLU and LayerNorm currently, so those ops will have to execute separately. |
st183588 | Okay @supriyar.
Apart from fusing with PReLU and LayerNorm, can these two be quantized at all? These two layers are not present in the default mapping config, as you can see in my last comment.
Thank you @supriyar |
st183589 | We currently do not have support to quantize pReLU, but it should be very similar to leakyReLU which we do support.
Quantization of LayerNorm is supported as seen in https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/normalization.py#L9 20
We will add it to our documentation. Thanks for bringing it up! |
st183590 | Hi @supriyar
Thanks for that. I’m able to see quantized layer mappings for LayerNorm, ReLU & ReLU6 but LeakyReLU is not included here. Is LeakyReLU going to be added?
Thanks @supriyar,
Aravind |
st183591 | You should be able to use the functional one torch.nn.functional.leaky_relu
cc @Zafar |
st183592 | @Sasank_Kottapalli Hi, did you solve the problem? I’m facing exactly the same one. |
st183593 | I am facing a similar problem. I did model.to(torch.device("cpu")) before model.eval(). |
st183594 | I think this error typically means that QuantStub/DeQuantStub is not placed correctly, probably a missing DeQuantStub before conv2d. or conv2d is not quantized. |
st183595 | Hi jerryzh, hope you are fine.
I am facing almost the same issue, so kindly help me with this.
I have trained the model using the fastai and timm libraries.
Currently, I am doing the following:
effb3_model=learner_effb3.model.eval()
backend = "qnnpack"
effb3_model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(effb3_model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_size_of_model(model_static_quantized)
But I am facing the following error when calling the model for inference:
RuntimeError: Could not run 'aten::thnn_conv2d_forward' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::thnn_conv2d_forward' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
This is my model:
Sequential(
(0): Sequential(
(0): Conv2dSame(3, 40, kernel_size=(3, 3), stride=(2, 2), bias=False)
(1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
(3): Sequential(
(0): Sequential(
(0): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(40, 40, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=40)
(bn1): QuantizedBatchNorm2d(40, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(40, 10, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(10, 40, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(40, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
(1): DepthwiseSeparableConv(
(conv_dw): QuantizedConv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=24)
(bn1): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(24, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pw): QuantizedConv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): QuantizedBatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): Identity()
)
)
(1): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(144, 144, kernel_size=(3, 3), stride=(2, 2), groups=144, bias=False)
(bn2): QuantizedBatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(144, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(6, 144, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=192)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(192, 192, kernel_size=(5, 5), stride=(2, 2), groups=192, bias=False)
(bn2): QuantizedBatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(192, 8, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(8, 192, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(288, 288, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=288)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(288, 288, kernel_size=(3, 3), stride=(2, 2), groups=288, bias=False)
(bn2): QuantizedBatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(288, 12, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(12, 288, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(288, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=576)
(bn2): QuantizedBatchNorm2d(576, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(576, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(24, 576, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(576, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(816, 816, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=816)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 136, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(136, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(136, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): Conv2dSame(816, 816, kernel_size=(5, 5), stride=(2, 2), groups=816, bias=False)
(bn2): QuantizedBatchNorm2d(816, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(816, 34, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(34, 816, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(816, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(2): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(3): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(4): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(5): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(5, 5), stride=(1, 1), scale=1.0, zero_point=0, padding=(2, 2), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 232, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(232, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): Sequential(
(0): InvertedResidual(
(conv_pw): QuantizedConv2d(232, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(1392, 1392, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=1392)
(bn2): QuantizedBatchNorm2d(1392, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(1392, 58, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(58, 1392, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(1392, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): InvertedResidual(
(conv_pw): QuantizedConv2d(384, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn1): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act1): SiLU(inplace=True)
(conv_dw): QuantizedConv2d(2304, 2304, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=2304)
(bn2): QuantizedBatchNorm2d(2304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(act2): SiLU(inplace=True)
(se): SqueezeExcite(
(conv_reduce): QuantizedConv2d(2304, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(act1): SiLU(inplace=True)
(conv_expand): QuantizedConv2d(96, 2304, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(conv_pwl): QuantizedConv2d(2304, 384, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn3): QuantizedBatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(4): QuantizedConv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(5): QuantizedBatchNorm2d(1536, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(6): SiLU(inplace=True)
)
(1): Sequential(
(0): AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=1)
(mp): AdaptiveMaxPool2d(output_size=1)
)
(1): Flatten(full=False)
(2): BatchNorm1d(3072, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Dropout(p=0.25, inplace=False)
(4): QuantizedLinear(in_features=3072, out_features=512, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
(5): ReLU(inplace=True)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Dropout(p=0.5, inplace=False)
(8): QuantizedLinear(in_features=512, out_features=73, scale=1.0, zero_point=0, qscheme=torch.per_tensor_affine)
)
)
Thanks for any help. |
st183596 | To use eager mode quantization you will need to explicitly place QuantStub/DeQuantStub at the points where quantization needs to happen. Could you follow (beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.9.0+cu102 documentation to quantize your model?
Alternatively, you can also try out our fx graph mode quantization: (prototype) FX Graph Mode Post Training Static Quantization — PyTorch Tutorials 1.9.0+cu102 documentation |
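If you would rather not edit the timm source to place the stubs by hand, a rough eager-mode sketch for your setup could look like this. It reuses your learner_effb3 and print_size_of_model, assumes a calib_loader you would supply for calibration, and uses torch.quantization.QuantWrapper to add the QuantStub/DeQuantStub around the whole model; layers without quantized implementations (e.g. SiLU or Conv2dSame here) may still need to be skipped or wrapped with their own dequant/quant.
import torch

effb3_model = learner_effb3.model.eval()
wrapped = torch.quantization.QuantWrapper(effb3_model)   # adds quant/dequant around the model

backend = "qnnpack"
wrapped.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend

prepared = torch.quantization.prepare(wrapped)
with torch.no_grad():
    for xb, _ in calib_loader:                            # hypothetical calibration loader
        prepared(xb)
model_static_quantized = torch.quantization.convert(prepared)
print_size_of_model(model_static_quantized)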
st183597 | Hi @jerryzh168, thanks for your help.
Actually my model is efficientnet_b3_ns, and I think there are some layers in it (e.g. SiLU) which cannot be quantized.
What should I do in this case?
Thanks a lot |
st183598 | You can explicitly set the qconfig for the module/functional to None to skip quantizing it in fx graph mode quantization.
For example:
qconfig_dict = {"silu": None}; the full docs for the qconfig_dict API can be found here: pytorch/quantize_fx.py at master · pytorch/pytorch · GitHub |
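In the documented qconfig_dict format, that shorthand would be spelled with the "object_type" key. A rough end-to-end sketch (1.9-era API; effb3_model and print_size_of_model are from the post above, example_batch is an assumed calibration batch, and FX tracing may still trip over custom layers such as Conv2dSame, which would need extra configuration):
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

backend = "qnnpack"
torch.backends.quantized.engine = backend

qconfig_dict = {
    "": get_default_qconfig(backend),        # global qconfig
    "object_type": [
        (torch.nn.SiLU, None),               # leave SiLU modules in fp32
        (torch.nn.functional.silu, None),    # and the functional form
    ],
}

prepared = prepare_fx(effb3_model.eval(), qconfig_dict)
prepared(example_batch)                      # calibration pass
model_static_quantized = convert_fx(prepared)
print_size_of_model(model_static_quantized)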
st183599 | Thanks a lot @jerryzh168, but sorry, I couldn't figure it out myself.
Can you suggest where to pass it in the following code:
effb3_model = learner_effb3.model.eval()
backend = "qnnpack"
effb3_model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(effb3_model, inplace=False)
model_static_quantized = torch.quantization.convert(model_static_quantized, inplace=False)
print_size_of_model(model_static_quantized)
Thanks a lot…