st184600 | Hello, I also encountered this problem. Is there a recent solution? Thank you! |
st184601 | Replace self.skip_add.add with torch.add
class InvertedResidual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        hidden_channel = int(round(in_channel * expand_ratio))
        self.shortcut = stride == 1 and in_channel == out_channel
        layers = []
        if expand_ratio != 1:
            # 1x1 pointwise conv
            layers.append(ConvBnRelu(in_channel, hidden_channel, kersize=1))
        layers.extend([
            # 3x3 depthwise conv
            ConvBnRelu(hidden_channel, hidden_channel, stride=stride, groups=hidden_channel),
            nn.Conv2d(hidden_channel, out_channel, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channel),
        ])
        self.conv = nn.Sequential(*layers)
        # self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        if self.shortcut:
            # return self.skip_add.add(x, self.conv(x))
            return torch.add(x, self.conv(x))
        else:
            return self.conv(x)
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::add.Tensor' is only available for these backends: [CPUTensorId, MkldnnCPUTensorId, SparseCPUTensorId, VariableTensorId].
Where did I go wrong? Thanks! |
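The error above means aten::add has no kernel for quantized tensors; for a statically quantized model, the add in the skip connection normally goes through nn.quantized.FloatFunctional so that an observer can be attached and the quantized add kernel is used after convert(). A minimal sketch of that pattern (illustrative only; ConvBnRelu and the rest of the block are omitted):
import torch
import torch.nn as nn

class SkipAddBlock(nn.Module):
    def __init__(self, channels):
        super(SkipAddBlock, self).__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        # FloatFunctional records quantization stats and dispatches to the quantized add
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        return self.skip_add.add(x, self.conv(x))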
st184602 | I am still facing the same issue even after following the instructions present in the link you shared. Are there any further updates to it? |
st184603 | When training a 3D CNN model on a 4-Tesla-V100 node with mixed precision, I got a weird result:
The mixed-precision (O1 and O3) result is slower than the FP32 result when the batch size is 1, using time.time() to record the execution time.
Using torch.profiler(), it shows that mixed precision indeed speeds up the training in terms of CPU time and CUDA time.
Notably, the problem only exists when the batch size is 1 (batch size = 4 is accelerated as expected), and I tried two sizes of the 3D CNN model. (The large model can only be trained with batch size = 1 because it is too large.)
Questions:
It seems that a large portion of the execution time is not related to computation.
Do you have any idea about it?
Why is the total execution time of the 3D CNN in mixed precision slower than FP32 when batch size = 1?
Env:
apex:0.1
pytorch:1.5 && 1.3
hardware: DGX |
st184604 | If you are manually timing CUDA operations, you would need to synchronize the code before starting and stopping the timer via torch.cuda.synchronize().
Also, the first CUDA operation will create the CUDA context etc. and will be slower than the following calls, so you should add some warmup iterations. |
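A rough sketch of such a measurement loop (the model and input names are placeholders):
import time
import torch

def time_forward(model, inp, warmup=10, iters=50):
    for _ in range(warmup):          # exclude CUDA context creation / autotuning from the measurement
        model(inp)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(inp)
    torch.cuda.synchronize()         # wait for all queued kernels before stopping the timer
    return (time.time() - start) / iters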
st184605 | Thanks.
I have used torch.cuda.synchronize(), and there are hundreds of iterations in one epoch, all showing the same problem.
I will provide you the detailed results soon. |
st184606 | FP32 training without apex: (profiler screenshot)
FP32 training with apex O0: (profiler screenshots)
mixed precision training with apex O1: (profiler screenshots) |
st184607 | Both the forward and backward passes seem to see a speedup between O0 and O1.
Are you seeing that the complete training time in O1 is still higher than in O0? |
st184608 | The main concern is that these two kinds of profiling methods produce very different results. |
st184609 | I downloaded the code and tried to run it, but it exits after the backward function. Can anyone tell me why and how to solve it? The project is https://github.com/ice-tong/pytorch-captcha, thank you. (error screenshot omitted) |
st184610 | Solved by Disp41r_QAQ in post #2
I think it may be a problem with my CUDA setup; I used the CPU to train it and it works. |
st184611 | How does quantization for object detection models differ from that for classification models?
Since detection models need to handle the bbox coordinates (multiple objects in an input), there must be some scaling trick in quantization.
Are there any implementation sources? |
st184612 | We have usually quantized the backbone part for detection models while leaving the rest in fp32 and gotten good speedups. For the other part, @Zafar has tried quantizing but the accuracy is usually bad. |
st184613 | If we employ a MinMax observer for calibrating the floating-point model for quantization, how are the bounding box coordinates quantized? Do they follow the same scheme as the feature extraction outputs? |
st184614 | Yes, MinMax observer will operate the same way if you’re using it for bounding box co-ordinates. It calculates the scale and zero-point of the tensor based on min and max values. |
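As a rough illustration of what an affine MinMax observer does with those min/max values (simplified; the real observer also handles reduce_range, per-channel mode, and other corner cases):
def calc_qparams(min_val, max_val, qmin=0, qmax=255):
    min_val = min(min_val, 0.0)      # the quantization range must include zero
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = int(round(qmin - min_val / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, zero_point

# e.g. box-regression outputs observed in roughly [-1.0, 3.0]
print(calc_qparams(-1.0, 3.0))       # -> approximately (0.0157, 64)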
st184615 | PyTorch/Libtorch 1.4.0
ABOUT CNN:
Make a model just like MobileNetV3
Do post-training static quantization with fbgemm
The model size is reduced to a quarter of the original, the inference speed drops to about half of the original, and the CPU usage is about 2400%, which means the default OMP_NUM_THREADS is 24
With "export OMP_NUM_THREADS=1", the inference speed increases to 3 times the original
With "export OMP_NUM_THREADS=6", the inference speed is close to the original
After more testing, I found that the problem is in depth-wise conv where groups is not 1.
My question is “Is this normal?”
ABOUT RNN:
Make a model with 2 LSTMs
Do post-training dynamic quantization
The model size is reduced to a quarter of the original, but the inference speed is not significantly changed
My question is “Is this normal?” |
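For the RNN part, dynamic quantization mainly shrinks the weights and speeds up the int8 matrix multiplies inside the LSTM, so the wall-clock gain depends on the layer sizes and sequence length. A minimal sketch of the API with toy sizes (not the poster's model):
import torch
import torch.nn as nn

class TwoLSTMs(nn.Module):
    def __init__(self):
        super(TwoLSTMs, self).__init__()
        self.lstm1 = nn.LSTM(input_size=64, hidden_size=256, batch_first=True)
        self.lstm2 = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)

    def forward(self, x):
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        return x

model = TwoLSTMs().eval()
# only the LSTM weights are quantized; activations stay in float
qmodel = torch.quantization.quantize_dynamic(model, {nn.LSTM}, dtype=torch.qint8)
out = qmodel(torch.randn(1, 32, 64))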
st184616 | Solved by dskhudia in post #5
@wizardk: Thanks for reporting it. It’s an issue in our backend library, FBGEMM, for certain specific depthwise shapes (i_c != o_c ). In your case, depthwise convolution goes through a slower path. https://github.com/pytorch/FBGEMM/issues/347 is tracking the progress on improving performance for suc… |
st184617 | Hi @dskhudia,
I tested it with 1 and 10 threads. Let's keep it simple: test with 1 thread and limit OMP to 1. Here are the details of the experiment.
1.Install Pytorch 1.4, download Libtorch 1.4
2.Prepare JIT model in Pytorch
import torch
from torch import nn
import torch.quantization as Q
class TestConv(nn.Module):
def __init__(self, q, dw, i_c, o_c):
super(TestConv, self).__init__()
self.lyr = nn.Sequential(
nn.Conv2d(in_channels=i_c, out_channels=i_c, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=False),
nn.BatchNorm2d(num_features=i_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
nn.Conv2d(in_channels=i_c, out_channels=o_c, kernel_size=3, stride=1, padding=1, dilation=1, groups=i_c if dw else 1, bias=False),
nn.BatchNorm2d(num_features=o_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
nn.Conv2d(in_channels=o_c, out_channels=o_c, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=False),
nn.BatchNorm2d(num_features=o_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
)
def forward(self, x):
return self.lyr(x)
class TestCNN(nn.Module):
def __init__(self, q, dw):
super(TestCNN, self).__init__()
self.q = q
self.quant = Q.QuantStub()
self.dequant = Q.DeQuantStub()
i_c = 1
self.cnn = []
for _ in range(8):
self.cnn.append(TestConv(q=q, dw=dw, i_c=i_c, o_c=i_c*2))
i_c *= 2
self.cnn = nn.Sequential(*self.cnn)
def fuse_model(self):
for m in self.modules():
if type(m) == TestConv:
Q.fuse_modules(m.lyr, ['0', '1', '2'], inplace=True)
Q.fuse_modules(m.lyr, ['3', '4', '5'], inplace=True)
Q.fuse_modules(m.lyr, ['6', '7', '8'], inplace=True)
def forward(self, x):
if self.q:
x = self.quant(x)
x = self.cnn(x)
if self.q:
x = self.dequant(x)
return x
def q_test(dw):
def _eval(m):
m.eval()
with torch.no_grad():
for batch_idx in range(10):
x = torch.randn(10, 1, 100, 100)
y = m(x)
print('\nno quantization\n')
fm = TestCNN(q=False, dw=dw)
torch.save(fm.state_dict(), 'float.{}.pt'.format('dw' if dw else 'cmn'))
torch.jit.save(torch.jit.script(fm), 'jit.f.{}.pt'.format('dw' if dw else 'cmn'))
print('\npost-training static quantization\n')
qm = TestCNN(q=True, dw=dw)
qm.load_state_dict(torch.load('float.{}.pt'.format('dw' if dw else 'cmn'), map_location='cpu'))
qm.eval()
qm.fuse_model()
qm.qconfig = Q.get_default_qconfig('fbgemm')
Q.prepare(qm, inplace=True)
_eval(qm) # calibration
Q.convert(qm, inplace=True)
torch.jit.save(torch.jit.script(qm), 'jit.q.{}.pt'.format('dw' if dw else 'cmn'))
q_test(dw=False) # dump float and quant model without depthwise
q_test(dw=True) # dump float and quant model with depthwise
3.Run JIT model in Libtorch
#include <torch/script.h>
#include <torch/torch.h>
#include <pthread.h>
#include <omp.h>
#include <algorithm>
#include <iostream>
#include <chrono>
#include <vector>
#include <numeric>
typedef struct t_s_param {
torch::jit::script::Module * sess;
int loop_cnt;
int * ms, * min_ms, * max_ms;
} s_param;
torch::TensorOptions g_options = torch::TensorOptions().dtype(torch::kFloat32).requires_grad(false).device(torch::kCPU);
torch::jit::script::Module load(const char * model_file_name)
{
torch::NoGradGuard no_guard;
torch::jit::script::Module module = torch::jit::load(model_file_name);
module.to(torch::kCPU);
module.eval();
torch::Tensor x = torch::randn({ 1, 1, 32, 100 }, g_options);
std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
torch::Tensor y = module.forward({x}).toTensor();
std::chrono::milliseconds elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - start);
std::cout << "warmup " << elapsed.count() << std::endl;
return module;
}
void * working_thread(void * param)
{
torch::init_num_threads();
int * ms = ((s_param *)param)->ms;
int * min_ms = ((s_param *)param)->min_ms;
int * max_ms = ((s_param *)param)->max_ms;
for (int idx = 0; idx < ((s_param *)param)->loop_cnt; ++idx) {
torch::NoGradGuard no_guard;
torch::Tensor x = torch::randn({ 1, 1, 32, 1000 }, g_options);
std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
torch::Tensor y = ((s_param *)param)->sess->get_method("forward")({x}).toTensor();
std::chrono::milliseconds elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - start);
int elapsed_ms = elapsed.count();
*ms += elapsed_ms;
if (*min_ms == 0 || *min_ms > elapsed_ms) { *min_ms = elapsed_ms; }
if (*max_ms == 0 || *max_ms < elapsed_ms) { *max_ms = elapsed_ms; }
}
*ms /= ((s_param *)param)->loop_cnt;
std::cout << "thread quit" << std::endl;
return 0;
}
int main(int argc, char ** argv)
{
if (argc != 2) { return 0; }
omp_set_num_threads(1);
torch::set_num_threads(1);
torch::set_num_interop_threads(1);
torch::jit::script::Module module = load(argv[1]);
// create thread
std::vector<int> ms(thread_cnt, 0);
std::vector<int> min_ms(thread_cnt, 0);
std::vector<int> max_ms(thread_cnt, 0);
std::vector<s_param> param(thread_cnt);
std::vector<pthread_t> thread_handle;
for (int idx = 0; idx < thread_cnt; ++idx) {
param[idx].sess = &module;
param[idx].op_thread_cnt = op_thread_cnt;
param[idx].loop_cnt = loop_cnt;
param[idx].ms = &ms[idx];
param[idx].min_ms = &min_ms[idx];
param[idx].max_ms = &max_ms[idx];
pthread_t sub_handle;
pthread_create(&sub_handle, 0, working_thread, &param[idx]);
thread_handle.push_back(sub_handle);
}
for (int idx = 0; idx < thread_cnt; ++idx) {
pthread_join(thread_handle[idx], 0);
}
float mean_time = std::accumulate(ms.begin(), ms.end(), 0) / ms.size();
float min_time = *std::min_element(min_ms.begin(), min_ms.end());
float max_time = *std::max_element(max_ms.begin(), max_ms.end());
std::cout << "mean time : " << mean_time << std::endl;
std::cout << "min time : " << min_time << std::endl;
std::cout << "max time : " << max_time << std::endl;
return 0;
}
4.Experiment result
Run float model without depthwise:
mean time : 648
min time : 642
max time : 805
Run quant model without depthwise:
mean time : 478
min time : 474
max time : 533
Run float model with depthwise:
mean time : 422
min time : 376
max time : 608
Run quant model with depthwise:
mean time : 1731
min time : 1725
max time : 1828 |
st184618 | @wizardk: Thanks for reporting it. It’s an issue in our backend library, FBGEMM, for certain specific depthwise shapes (i_c != o_c ). In your case, depthwise convolution goes through a slower path. https://github.com/pytorch/FBGEMM/issues/347 19 is tracking the progress on improving performance for such cases. If your use case doesn’t need i_c != o_c, please proceed with using i_c == o_c for depthwise convolutions.
Meanwhile I see the following results for your 4 cases, if I make depthwise to have the same i_c and o_c.
Self CPU time total: 64.826ms
Self CPU time total: 31.913ms
Self CPU time total: 50.317ms
Self CPU time total: 17.530ms
The following is the code I used for benchmarking.
import torch
from torch import nn
import torch.quantization as Q
torch.set_num_threads(1)
class TestConv(nn.Module):
def __init__(self, q, dw, i_c, o_c):
super(TestConv, self).__init__()
self.lyr = nn.Sequential(
nn.Conv2d(in_channels=i_c, out_channels=i_c, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=False),
nn.BatchNorm2d(num_features=i_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
nn.Conv2d(in_channels=i_c, out_channels=o_c, kernel_size=3, stride=1, padding=1, dilation=1, groups=i_c if dw else 1, bias=False),
nn.BatchNorm2d(num_features=o_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
nn.Conv2d(in_channels=o_c, out_channels=2*o_c, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=False),
nn.BatchNorm2d(num_features=2*o_c),
nn.ReLU(inplace=False) if q else nn.ReLU6(inplace=True),
)
def forward(self, x):
return self.lyr(x)
class TestCNN(nn.Module):
def __init__(self, q, dw):
super(TestCNN, self).__init__()
self.q = q
self.quant = Q.QuantStub()
self.dequant = Q.DeQuantStub()
i_c = 1
self.cnn = []
for _ in range(8):
self.cnn.append(TestConv(q=q, dw=dw, i_c=i_c, o_c=i_c))
i_c *= 2
self.cnn = nn.Sequential(*self.cnn)
def fuse_model(self):
for m in self.modules():
if type(m) == TestConv:
Q.fuse_modules(m.lyr, ['0', '1', '2'], inplace=True)
Q.fuse_modules(m.lyr, ['3', '4', '5'], inplace=True)
Q.fuse_modules(m.lyr, ['6', '7', '8'], inplace=True)
def forward(self, x):
if self.q:
x = self.quant(x)
x = self.cnn(x)
if self.q:
x = self.dequant(x)
return x
def q_test(dw):
def _eval(m):
m.eval()
with torch.no_grad():
for batch_idx in range(10):
x = torch.randn(10, 1, 100, 100)
y = m(x)
print('\nno quantization\n')
fm = TestCNN(q=False, dw=dw)
fm.eval()
torch.save(fm.state_dict(), 'float.{}.pt'.format('dw' if dw else 'cmn'))
scriptModel = torch.jit.script(fm)
x = torch.randn(1, 1, 32, 100)
with torch.autograd.profiler.profile(record_shapes=True) as prof:
scriptModel(x)
print("autograd prof:\n {} \n".format(prof.key_averages(group_by_input_shape=False)))
#print("autograd prof table:\n {} \n".format(prof.table(row_limit=-1)))
torch.jit.save(scriptModel, 'jit.f.{}.pt'.format('dw' if dw else 'cmn'))
print('\npost-training static quantization\n')
qm = TestCNN(q=True, dw=dw)
#print(qm)
qm.load_state_dict(torch.load('float.{}.pt'.format('dw' if dw else 'cmn'), map_location='cpu'))
qm.eval()
qm.fuse_model()
qm.qconfig = Q.get_default_qconfig('fbgemm')
Q.prepare(qm, inplace=True)
_eval(qm) # calibration
Q.convert(qm, inplace=True)
qscriptModel = torch.jit.script(qm)
with torch.autograd.profiler.profile(record_shapes=True) as prof:
qscriptModel(x)
print("autograd prof:\n {} \n".format(prof.key_averages(group_by_input_shape=False)))
#print("autograd prof table:\n {} \n".format(prof.table(row_limit=-1)))
torch.jit.save(qscriptModel, 'jit.q.{}.pt'.format('dw' if dw else 'cmn'))
q_test(dw=False) # dump float and quant model without depthwise
q_test(dw=True) # dump float and quant model with depthwise |
st184619 | Hello, I ran into a similar slowdown, but this time it happens when I'm using depthwise convolution with a non-square kernel (e.g., 3x1).
When I change the kernel size to (3, 1) in the script above, I get the following timings:
Self CPU time total: 48.013ms
Self CPU time total: 29.187ms
Self CPU time total: 24.026ms
Self CPU time total: 85.271ms
Int8 operation with non-square depthwise convolution significantly increases the elapsed time.
Is there any way I can deal with this issue?
Thank you. |
st184620 | I am a little bit confused about the randomness of the pytorch model.
I used the following code to fix the random seed so that the training results of the model can be repeated on the same device:
def seed_torch(seed=2020):
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
But what is confusing is that when the code is run on another device with the same hardware configuration and software environment, it will produce different results.
I reinstalled the virtual environment to ensure that the versions of the libraries are consistent, but this problem still puzzles me.
Feel free to ask if more code is needed to explain the problem. |
st184621 | Solved by briankosw in post #2
Refer to Reproducibility over Different Machines for the same discussion! |
st184622 | Sorry if the question has been answered somewhere; I couldn't find a similar question on the forum, so I'm asking here and hoping for your answer.
So we have a simple trained model, and we applied static quantization to get a quantized model using 'fbgemm' as the qconfig:
myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
After this, we have a quantized model with the weights (int_repr()) exported.
I expected that if I create a similar architecture and import the int-represented weights, I could generate the same result per layer as the quantized model, but it turns out the results are different.
Below are the detailed flows:
#Notes: x_batch and x_quant were exported previously with quant model eval to pickle file and reload here for comparison
#Flow 1
#using x as input, calculate results through loaded quantized model
#forward: x--> x_quant = self.quant(x) --> f = self.featExt(x_quant)
# featExt definition: self.featExt = nn.Sequential(nn.Conv2d(1, 8,
# kernel_size=5, stride=5, bias=False), nn.ReLU())
x_quant_new, f, x_conv, y_hat = quant_net.forward(x_batch[0])
print('using saved quantized model: ')
print('x_quant to compare(int): ', x_quant_new.int_repr())
print('filter to compare(int): ', quant_net.featExt[0].weight().int_repr())
print('output to compare(int): ', f.int_repr())
#Flow 2
#using x_quant as input, calculate conv 2d using pytorch function
conv2d = nn.Conv2d(1, 8, kernel_size=5, stride=5, bias=False)
conv2d.weight.data = my_debug_net.featConv.weight.data
with torch.no_grad():
conv2d.eval()
res1 = conv2d(x_quant[0].type(torch.CharTensor))
print('*********using F.conv2d***********')
print('x_quant: ', x_quant[0])
print('filter: ', conv2d.weight.data)
print('F.conv2d Output ', res1)
print('F.relu Output ', F.relu(res1)) |
st184623 | Giang_Dang:
I expected that if I create a similar architecture and import the int-represented weights, I could generate the same result per layer as the quantized model, but it turns out the results are different.
This should be possible, if the weights are copied correctly. Would you have a reproducible toy example of this behavior? |
st184624 | Thanks for confirming my thinking. I can't upload the quantized model and architecture we are currently working on here, but for the purpose of demonstration I will create a toy example to share for the investigation.
For now I can share the log from the 2 flows I put in my question, to show that the weights are the same. Perhaps with this log you will find something that I had missed.
I added the log here to avoid messing up the conversation: https://drive.google.com/drive/folders/1O7A96jJIWbqS_5uYL1tmp__N6LJHMh9k?usp=sharing |
st184625 | Hi @Giang_Dang,
Unfortunately it’s hard to spot what could be missing in your code without seeing it. Here is a toy example representing the expected behavior:
import torch
import torch.nn as nn
# toy model
class M(nn.Module):
def __init__(self):
super().__init__()
self.quant = torch.quantization.QuantStub()
self.fc = nn.Linear(2, 2)
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.fc(x)
x = self.dequant(x)
return x
m1 = M()
m2 = M()
def static_quant(m):
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
# toy calibration
data = torch.rand(4, 2)
m(data)
torch.quantization.convert(m, inplace=True)
static_quant(m1)
static_quant(m2)
# m1 and m2 now have different weights, because of different
# initialization, and different calibration data
# verify that same inputs do not lead to same outputs
data = torch.rand(16, 2)
print('outputs match', torch.allclose(m1(data), m2(data)))
# set m2's weights to be equal to m1's weights
m2.quant.load_state_dict(m1.quant.state_dict())
m2.fc.load_state_dict(m1.fc.state_dict())
# verify that same inputs lead to same outputs
data = torch.rand(16, 2)
print('outputs match', torch.allclose(m1(data), m2(data)))
One thing you could try is to use the state dict to transfer weights between modules of the same type, instead of manually copying attributes over. However, if you manually transfer all the attributes correctly, it should work as well. |
st184626 | Hi @Vasiliy_Kuznetsov: thank you for taking the time to create the toy example.
The approach of saving the state_dict and reloading it into the same architecture, as you described, works as expected, and I don't have an issue with that.
To clarify, my goal is: trained PyTorch model (M) -> quantized trained PyTorch model (M1) -> port to run on ARM Cortex-M4 with CMSIS-NN (M3).
In order to do so, I am doing the intermediate steps:
quantized trained PyTorch model (M2) -> export the weight parameters as integers -> load them into a brand new PyTorch architecture without quantization info (M2_int) -> this model will be close to what is developed on the embedded device (M3).
I will update your example to show the above steps. What I am not clear about is some normalization steps done in PyTorch internal functions that would differ between the quantized and non-quantized models. |
st184627 | The state dicts don’t have to be used on the whole model, you can do it module by module, something like model2.conv3.load_state_dict(model1.conv3.state_dict()). But in any case, loading a state dict is the same thing as transferring all the attributes manually, it’s just easier.
load to a brand new Pytorch architecture without quantized info(M2_int)
If you are still seeing different results after transferring the weights, there could be other differences. Some things to debug would be:
are there other parameters you need to transfer (conv bias, etc.)
is the input data coming in exactly the same (are you modeling quant/dequant correctly, etc.)
st184628 | Giang_Dang:
I expected that if I create a similar architecture and import the int-represented weights, I could generate the same result per layer as the quantized model
Unless the two architectures are the same, you cannot expect to get the same result as your network output. You are guaranteed to get the same result for the very same layers, with the same input, but anything other than that will cause the result to change. |
st184629 | Vasiliy_Kuznetsov:
If you are still seeing different results after transferring the weights, there could be other differences. Some things to debug would be:
are there other parameters you need to transfer (conv bias, etc.)
is the input data coming in exactly the same (are you modeling quant/dequant correctly, etc.)
Hi Both,
I am thankful for your time looking into the issue.
I totally agree with you both on the logic. I modified the program from @Vasiliy_Kuznetsov to demonstrate what I am trying to achieve. If this can be explained, I would be thankful, since this is an essential step in converting a PyTorch model to a C model:
import torch
import torch.nn as nn
# toy model
class M(nn.Module):
def __init__(self):
super().__init__()
self.quant = torch.quantization.QuantStub()
self.conv = nn.Conv2d(1,1,kernel_size=2,stride=2,padding=0,bias=False)
self.dequant = torch.quantization.DeQuantStub()
def forward(self, x):
x_quant = self.quant(x)
x_conv = self.conv(x_quant)
y = self.dequant(x_conv)
return x_quant, x_conv, y
class M_int(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv2d(1,1,kernel_size=2,stride=2,padding=0,bias=False)
def forward(self, x):
# get x_quant as input
x_conv = self.conv(x)
return x_conv
m1 = M()
m2 = M()
def static_quant(m):
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
# toy calibration
data = torch.rand(4, 1, 2, 2)
m(data)
torch.quantization.convert(m, inplace=True)
static_quant(m1)
static_quant(m2)
# m1 and m2 now have different weights, because of different
# initialization, and different calibration data
# verify that same inputs do not lead to same outputs
data = torch.rand(4, 1, 2, 2)
print('outputs match', torch.allclose(m1(data)[2], m2(data)[2]))
# set m2's weights to be equal to m1's weights
m2.quant.load_state_dict(m1.quant.state_dict())
m2.conv.load_state_dict(m1.conv.state_dict())
# verify that same inputs lead to same outputs
data = torch.rand(4, 1, 2, 2)
print('outputs match', torch.allclose(m1(data)[2], m2(data)[2]))
m3 = M_int()
with torch.no_grad():
m3.conv.weight.data = m1.conv.state_dict()['weight'].int_repr().type(torch.ByteTensor)
m3.eval()
data = torch.rand(4, 1, 2, 2)
x_quant, x_conv, y = m1(data)
x_conv3 = m3(x_quant.int_repr().type(torch.ByteTensor))
print('weight match', torch.allclose(m1.conv.state_dict()['weight'].int_repr().type(torch.ByteTensor), m3.conv.weight.data))
print('outputs match', torch.allclose(x_conv.int_repr(), x_conv3))
M_int model is the fresh model with integer weight loaded-in.
I expect to have the result after conv layer to be the same for m1 and m3.
I changed from linear to conv just because I am debugging for convolution2D currently. |
st184630 | Hi @Giang_Dang,
I’m not sure if it makes sense conceptually to try to put weights from a quantized layer directly into a floating point layer. Consider the translation between the quantized and floating point domain:
x_quant = round(x_fp / scale + zero_point)
x_fp = (x_quant - zero_point) * scale
For the weights of the quantized conv, even though they are stored in the quantized domain, they represent the floating point domain. To use them in non-quantized layers you’d need to convert back to the floating point domain. |
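A tiny numeric illustration of those two formulas (made-up scale and zero_point):
scale, zero_point = 0.05, 10
x_fp = 1.23
x_quant = round(x_fp / scale + zero_point)   # 35
x_back = (x_quant - zero_point) * scale      # 1.25, i.e. a quantization error of 0.02
print(x_quant, x_back)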
st184631 | Hi @Vasiliy_Kuznetsov,
For the quant() layer, yes, I managed to figure out the formula and it is fine to apply.
For the weights of the convolution layer, the same formula is used to calculate the int_repr() values from float with scale and zero_point.
The purpose of quantization is to have the parameters in integers and hence reduce the computation cost during convolution. If we can't produce the same result with a plain network using these weights, it seems the task of porting successfully to a C model is not feasible, or at least not well supported by PyTorch currently.
Cheers,
Giang |
st184632 | with torch.no_grad():
m3.conv.weight.data = m1.conv.state_dict()['weight'].int_repr().type(torch.ByteTensor)
This line doesn’t seem to be applying the dequantization. If you want m3.conv to match m1.conv when m3 is floating point and m1 is quantized, you would need to convert the weights back to floating point. Int_repr() returns the integer weights but it does not dequantize them.
One other thing you could consider is to run quantization on m3 directly. |
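In the toy example above, that dequantization step could look roughly like this (a sketch; it makes m3.conv a float approximation of m1.conv rather than reproducing the integer arithmetic exactly):
with torch.no_grad():
    # m1.conv.weight() returns the quantized weight tensor; dequantize() maps it back to float
    m3.conv.weight.copy_(m1.conv.weight().dequantize())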
st184633 | When I was reading the source code of torchvision.models.quantization.inception_v3, I found that self.myop.cat is used 3 times in QuantizableInceptionE, so when I finished training there was only one group of quantization params (min_val/max_val/scale/zero_point). If I understand correctly, we need 3 different groups of quantization params, one for each concat operation.
Can anyone help explain whether it's a bug here or I misunderstood it? |
st184634 | Solved by hx89 in post #2
You are right, looks like we need 3 different self.myop.cat. Could you file an issue for it? We will take a look. |
st184635 | You are right, looks like we need 3 different self.myop.cat. Could you file an issue for it? We will take a look. |
st184636 | I want to perform dynamic quantization on a FwFM model (paper). Unfortunately, I did not manage to make it work!
I get the following error:
AttributeError: 'function' object has no attribute 't'
corresponding to the line in my forward function:
outer_fwfm = torch.einsum('klij,kl->klij', outer_fm,
(self.field_cov.weight.t() + self.field_cov.weight) * 0.5)
Corresponding model layer before quantization:
(field_cov): Linear(in_features=39, out_features=39, bias=False)
After quantization:
(field_cov): DynamicQuantizedLinear(in_features=39, out_features=39, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
Quantization line:
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
Am I missing something out here? Thanks for your help!
GitHub Code 3 |
st184637 | Solved by Vasiliy_Kuznetsov in post #5
hi @pintonos, currently we don’t have a quantized kernel for einsum, we would be happy to review a PR if someone is interested in implementing. In the meanwhile, a workaround could be to dequantize -> floating point einsum -> quantize. |
st184638 | It seems that DynamicQuantizedLinear replaces the weight attribute with a method:
lin = torch.nn.quantized.dynamic.Linear(1, 1)
print(lin.weight())
So you might need to call self.field_cov.weight().t() + self.field_cov.weight().
Note that, while this might work functionality-wise, I’m not familiar enough with your use case or the dynamic quantization to claim it’s the right approach to use when quantization is applied. |
st184639 | Thanks!
Code now looks like this:
if self.dynamic_quantization or self.static_quantization or self.quantization_aware:
q_func = QFunctional()
q_add = q_func.add(self.field_cov.weight().t(), self.field_cov.weight())
q_add_mul = q_func.mul_scalar(q_add, 0.5)
outer_fwfm = torch.einsum('klij,kl->klij', outer_fm, q_add_mul)
Error Traceback:
...
return _VF.einsum(equation, operands)
RuntimeError: Could not run 'aten::mul.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::mul.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Named, Autograd, Profiler, Tracer, Batched].
Can torch.einsum(...) be quantized? Would there be a workaround, since it consists of multiplication and addition? |
st184640 | Unfortunately, I’m not experienced enough using the quantization package, so we would need to wait for an expert. |
st184641 | pintonos:
einsum
hi @pintonos, currently we don’t have a quantized kernel for einsum, we would be happy to review a PR if someone is interested in implementing. In the meanwhile, a workaround could be to dequantize -> floating point einsum -> quantize. |
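A rough sketch of that workaround (names and the output scale/zero_point are illustrative; in practice the requantization parameters would come from an observer or QuantStub rather than being hard-coded):
import torch

def einsum_fallback(outer_fm_q, field_cov_q, out_scale=0.1, out_zero_point=0):
    outer_fm = outer_fm_q.dequantize()
    field_cov = field_cov_q.dequantize()
    sym = (field_cov.t() + field_cov) * 0.5
    out = torch.einsum('klij,kl->klij', outer_fm, sym)   # floating point einsum
    return torch.quantize_per_tensor(out, out_scale, out_zero_point, torch.quint8)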
st184642 | I have quantized and saved the model using this code -
model = Net()
model.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu')))
model.qconfig = torch.quantization.default_qconfig
model.eval()
torch.quantization.prepare(model, inplace=True)
evaluate(model)
torch.quantization.convert(model, inplace=True)
model.eval()
x = evaluate(model)
torch.save(model.state_dict(), 'model_q.pth')
and loading the model like this -
model2 = Net()
torch.qconfig = torch.quantization.default_qconfig
model2.eval()
torch.quantization.prepare(model2, inplace=True)
torch.quantization.convert(model2, inplace=True)
model2.eval()
model2.load_state_dict(torch.load('model_q.pth'))
xQ = evaluate(model2)
Now x and xQ are different. I checked the parameters of both 'model' and 'model2'. The parameters are the same.
for i in range(len(list(model.parameters()))):
print(np.equal(list(model.parameters())[i].detach().numpy(), list(model2.parameters())[i].detach().numpy()))
All parameters are equal.
Is there anything incorrect with my method of loading or saving, or is this a bug in PyTorch? |
st184643 | Solved by deepak_mangla in post #2
I compared output of all layers of original model and loaded model. I found output of BatchNorm2d was different.
The problems is Pytorch wasn’t saving ‘scale’ and ‘zero_point’ of unfused QuantizedBatchNorm in checkpoints. Two solutions -
Save these values as pickle when saving model. While load… |
st184644 | I compared the output of all layers of the original model and the loaded model. I found the output of BatchNorm2d was different.
The problem is that PyTorch wasn't saving the 'scale' and 'zero_point' of the unfused QuantizedBatchNorm in checkpoints. Two solutions -
Save these values as a pickle when saving the model. While loading the model, load the pickle and add the scale and zero point to the QuantizedBatchNorm layers.
Fuse BatchNorm with Convolution. |
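A minimal sketch of the second solution, assuming the model has adjacent submodules named conv / bn / relu; fuse_modules has to be called on the float model before prepare()/convert():
model = Net()
model.load_state_dict(torch.load('model.pth', map_location='cpu'))
model.eval()
torch.quantization.fuse_modules(model, [['conv', 'bn', 'relu']], inplace=True)
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
evaluate(model)   # calibration
torch.quantization.convert(model, inplace=True)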
st184645 | hi @deepak_mangla, thanks for the report. I created https://github.com/pytorch/pytorch/issues/43774 17 to verify correct behavior. Please let us know if you have a repro on a toy model. |
st184646 | Dear Users,
I would like to ask for pointers for how to extend nn.Linear and nn.Conv2d for post-training static quantization or quantization-aware training without rewriting a lot of stuff, such that it can still be used with operator fusion etc… An example change could be to apply an affine transformation to the weights prior to calling the linear operation. Could someone help please? Thanks! |
st184647 | Solved by jerryzh168 in post #2
the change in eager mode quantization will require inheriting https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L103 and also related fusion modules under https://github.com/pytorch/pytorch/tree/master/torch/nn/intrinsic folder, and pass a white_list(https://github.… |
st184648 | the change in eager mode quantization will require inheriting https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L103 24 and also related fusion modules under https://github.com/pytorch/pytorch/tree/master/torch/nn/intrinsic 7 folder, and pass a white_list(https://github.com/pytorch/pytorch/blob/master/torch/quantization/quantize.py#L178 12) extended with the new module. It will require familiarity of the whole eager mode quantization flow. |
st184649 | jerryzh168:
https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L103
Thanks Jerry, this is what I initially thought, but I wanted to double-check if my assumption was right. Thank you! |
st184650 | Hi all,
I apologize if this question is covered elsewhere.
I would like to perform quantization-aware training, but with the model initialized according to the pre-trained, post-training-quantized quantization parameters (e.g., a torchvision quantized model with layers initialized with the same scale, zero_point, etc. as in the pre-trained quantization model that is initialized with quantize=True).
That is, I’d like the initial model used for QAT to produce the same output as a pre-trained model that has been quantized using a post-training method (e.g., static quantization).
Is there an easy way to achieve this? I tried hacking around by manually setting some of the QAT model's FakeQuantize parameters, but was unable to get it working properly.
I appreciate any help! Please let me know if my question is unclear and I will rephrase it.
Thanks! |
st184651 | Solved by jerryzh168 in post #2
I don’t think that is supported right now, instead if you have access to the original floating point model you can just do qat with that. you’ll get the model with same accuracy as the post training model if you call prepare_qat for the original model and calibrate it with the same data. |
st184652 | I don’t think that is supported right now, instead if you have access to the original floating point model you can just do qat with that. you’ll get the model with same accuracy as the post training model if you call prepare_qat for the original model and calibrate it with the same data. |
st184653 | I’m trying to quantize BERT to 4 bits or mixed precision, and I don’t see available methods to to quantization aware training on BERT for any precision other than torch.uint8. This is given in the dynamic quantization tutorial.
I want to use both post training quantization and dynamic quantization for lower than 8 bits.
Will I have to rewrite the modeling_bert.py (transformers/modeling_bert.py) layers with fake quantization added? How can lower than 8bit precision and mixed precision be implemented on BERT? |
st184654 | The difficulty there is that PyTorch inherently assumes that things are at least 1 byte when dealing with memory.
I'd probably convert to TVM and see what can be done there.
(QAT with fake quantization probably could work for 4 bits, too.) |
st184655 | It’s not an issue even if the weights are stored as FP32 values in memory.
I’m trying to evaluate post training quantization or fine tune the model with quantization aware training, but do this all under under fake quantization to any bit width of my choosing. |
st184656 | While I don’t think it works out of the box, you could try to adapt the observers and fake quant layers to be more flexible. For example, there are some obvious 8 bit hard coded values here:
https://github.com/pytorch/pytorch/blob/a414bd69de8d01af44751bfe327703ec997dafd9/torch/quantization/observer.py#L146 (truncated code excerpt from the linked observer.py omitted) |
st184657 | pkadambi:
I’m trying to evaluate post training quantization or fine tune the model with quantization aware training, but do this all under under fake quantization to any bit width of my choosing.
we do have support for lower bits in https://github.com/pytorch/pytorch/blob/master/torch/quantization/observer.py#L185 now; one of our interns just added this recently. |
st184658 | I have a module like this -
self.conv = nn.Sequential(nn.ConstantPad2d((1,2,1,2)), nn.Conv2d(...))
Model converts successfully into quantized form but when I try to evaluate it, I get this error -
RuntimeError: Could not run ‘aten::empty.memory_format’ with arguments from the ‘QuantizedCPU’ backend. ‘aten::empty.memory_format’ is only available
for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer].
I think this is because quantization of nn.ConstantPad2d is not supported. So, is there any solution around it?
I cannot merge ConstantPad2d and Conv2d because Conv2d doesn't support asymmetric padding (the equivalent of nn.ConstantPad2d((1,2,1,2))). |
st184659 | I think @Zafar is working on supporting constant pad right now: https://github.com/pytorch/pytorch/pull/43304 17 |
st184660 | Hi.
I’m trying to use Pytorch’s quantization scheme.
I’d like to quantize only weight with fake-quantization(QAT), not activation.
I tried this:
import torch.quantization as Q
model = load_model(my_config) # currently I'm using resnet architecture
qat_model = Q.fuse_modules(model, my_modules_to_fuse)
qat_model.qconfig = Q.QConfig(activation=Q.NoopObserver, weight=Q.FakeQuantize)
and this process from pytorch quantization tutorial
for nepoch in range(8):
    train_one_epoch(qat_model, criterion, optimizer, data_loader, torch.device('cpu'), num_train_batches)
    if nepoch > 3:
        # Freeze quantizer parameters
        qat_model.apply(torch.quantization.disable_observer)
    if nepoch > 2:
        # Freeze batch norm mean and variance estimates
        qat_model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
    # Check the accuracy after each epoch
    quantized_model = torch.quantization.convert(qat_model.eval(), inplace=False)
    quantized_model.eval()
    top1, top5 = evaluate(quantized_model, criterion, data_loader_test, neval_batches=num_eval_batches)
    print('Epoch %d: Evaluation accuracy on %d images, %2.2f' % (nepoch, num_eval_batches * eval_batch_size, top1.avg))
But the program gives this error:
calculate_qparams should not be called for NoopObserver
The reason why I used NoopObserver was to avoid calculate_qparams for the activations… but this result is confusing.
How can I solve this problem? Any suggestion will be appreciated.
Thanks. |
st184661 | You can’t skip quantizing just by setting the observer to NoopObserver. I don’t think weight only quantization is support in convert stage. You can evaluate the accuracy of the qat module directly without convert. |
st184662 | I am using an AlexNet model where 7 layers are binarized (input and weight) and the 1st layer is not binarized (input and weight are 32-bit floating point). I want only the 1st layer's input and weight to be converted to 8 bit before being sent into the convolution function, without affecting the others.
I am using pretrained weights here |
st184663 | Solved by Zafar in post #2
Just to make it clear – when you say “convert to 8bit” are you using quantization or are you just casting the types down? Also, we don’t support quantization lower than 8 bits, so binarization of the layers might not be supported without custom hacks.
Lastly, if you already have the weights, and yo… |
st184664 | Just to make it clear – when you say “convert to 8bit” are you using quantization or are you just casting the types down? Also, we don’t support quantization lower than 8 bits, so binarization of the layers might not be supported without custom hacks.
Lastly, if you already have the weights, and you just need an 8-bit model, you can follow these steps:
Make sure your model is quantizable – all layers in your network must be stateful and unique, that is, no “implied” layers in the forward and no inplace computation
Prepare the model using prepare function
Calibrate the prepared model by running through your data AT LEAST once
Convert your model to the quantized version.
You can follow the PTQ tutorial here: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 15
On the first point:
This model cannot be quantized:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.relu = nn.ReLU(inplace=True)
def forward(self, a, b):
ra = self.relu(a)
rb = self.relu(b)
return ra + rb
To make the model quantizable, you need to make sure there are no inplace operations, and every operation can save the state:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.relu_a = nn.ReLU(inplace=False)
self.relu_b = nn.ReLU(inplace=False)
self.F = nn.quantized.FloatFunctional()
def forward(self, a, b):
ra = self.relu_a(a)
rb = self.relu_b(b)
return self.F.add(ra, rb)
If you want to have the model take FP input and return the FP output you will need to insert the QuantStub/DequantStub at the appropriate locations:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.quant_stub_a = torch.quantization.QuantStub()
self.quant_stub_b = torch.quantization.QuantStub()
self.relu_a = nn.ReLU(inplace=False)
self.relu_b = nn.ReLU(inplace=False)
self.F = nn.quantized.FloatFunctional()
self.dequant_stub = torch.quantization.DeQuantStub()
def forward(self, a, b):
qa = self.quant_stub_a(a)
qb = self.quant_stub_b(b)
ra = self.relu_a(qa)
rb = self.relu_b(qb)
return self.dequant_stub(self.F.add(ra, rb))
Similarly, if you would like to only quantize a single layer, you would need to place the quant/dequant only where you want to quantize. Please, note that you would need to specify the quantization parameters appropriately:
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.quant_stub_a = torch.quantization.QuantStub()
        self.dequant_stub = torch.quantization.DeQuantStub()
        self.relu_a = nn.ReLU(inplace=False)
        self.relu_b = nn.ReLU(inplace=False)

    def forward(self, a, b):
        qa = self.quant_stub_a(a)
        ra = self.relu_a(qa)
        a = self.dequant_stub(ra)
        rb = self.relu_b(b)
        return a + rb
The model above will be partially quantizable, and you would need to give the qconfig to the quant_stub and the relu only. |
st184665 | How do we perform layer-wise quantization in PyTorch, i.e., I want to quantize only the third and fourth layers. How can I do it?
When we prepare the model for quantization using prepare, all the modules present in the whitelist are quantized. But I didn't find a way to quantize a single layer. Any kind of help is appreciated. |
st184666 | Solved by snehaoladri in post #2
I am finally able to achieve it by setting different config for the layers i want to quantize, rather than using model.qconfig (which does for all the layers in the model).
For example:
I quantized the first layer by accessing the first layer and assigning it the qconfig.
per_channel_quantized_mo… |
st184667 | I am finally able to achieve it by setting the qconfig on the individual layers I want to quantize, rather than using model.qconfig (which does it for all the layers in the model).
For example, I quantized the first layer by accessing it and assigning it the qconfig:
per_channel_quantized_model.features[0][0].qconfig = torch.quantization.get_default_qconfig('fbgemm')
Hope this helps. And if there is any other, more efficient approach to achieve this, please do let me know. |
st184668 | yeah, this is how we do it in eager mode. We have a prototype graph mode that works on torchscript models: https://pytorch.org/tutorials/prototype/graph_mode_static_quantization_tutorial.html which can configure layers with a qconfig_dict. Although we might move away from this soon, it should still generally work if you need to use it now. Note that to use the prototype you will need the nightly build. |
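In eager mode, the same idea can be written by attaching a qconfig only to the submodules that should be quantized (made-up layer names; note that the quantized region still needs QuantStub/DeQuantStub or manual quantize/dequantize around it):
model.eval()
model.qconfig = None   # leave everything float by default
model.layer3.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model.layer4.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
calibrate(model)       # run representative data through the model
torch.quantization.convert(model, inplace=True)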
st184669 | I’ve tried to quantize a simple model with conv+bn+relu combination but it performs much slower in int8.
Am I missing something here?
Code To Reproduce
import os
import time
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub
backend = 'qnnpack'
# backend = 'fbgemm'
import torch
torch.backends.quantized.engine = backend
class DownBlockQ(nn.Module):
def __init__(self, in_ch, out_ch):
super().__init__()
self.quant_input = QuantStub()
self.dequant_output = DeQuantStub()
self.conv1 = nn.Conv2d(in_ch, in_ch, 4, stride=2, padding=1, groups=in_ch)
self.bn1 = nn.BatchNorm2d(in_ch)
self.relu1 = nn.ReLU()
self.conv2 = nn.Conv2d(in_ch, out_ch, 1)
self.bn2 = nn.BatchNorm2d(out_ch)
self.relu2 = nn.ReLU()
def forward(self, x):
# x = self.quant_input(x)
x = self.conv1(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu2(x)
# x = self.dequant_output(x)
return x
def fuse_model(self):
torch.quantization.fuse_modules(self, ['conv1', 'bn1', 'relu1'], inplace=True)
torch.quantization.fuse_modules(self, ['conv2', 'bn2', 'relu2'], inplace=True)
class Model(nn.Module):
def __init__(self, filters=22):
super().__init__()
self.quant_input = QuantStub()
self.dequant_output = DeQuantStub()
self.db1 = DownBlockQ(filters * 1, filters * 2) # 128
self.db2 = DownBlockQ(filters * 2, filters * 4) # 64
self.db3 = DownBlockQ(filters * 4, filters * 8) # 32
def forward(self, x):
x = self.quant_input(x)
x = self.db1(x)
x = self.db2(x)
x = self.db3(x)
x = self.dequant_output(x)
return x
def fuse_model(model):
if hasattr(model, 'fuse_model'):
model.fuse_model()
for p in list(model.modules())[1:]:
fuse_model(p)
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p") / 1e6)
os.remove('temp.p')
def benchmark(func, iters=10, *args):
t1 = time.time()
for _ in range(iters):
res = func(*args)
print(f'{((time.time() - t1) / iters):.6f} sec')
return res
def quantize():
dummy = torch.rand(1, 22, 256, 256)
# model = DownBlockQ(22 * 1, 22 * 2)
model = Model(filters=22)
model = model.eval()
print("Before quantization")
print_size_of_model(model)
benchmark(model, 20, dummy)
# print(model)
fuse_model(model)
model.qconfig = torch.quantization.get_default_qconfig(backend)
# print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
# print(model)
print("After quantization")
print_size_of_model(model)
benchmark(model, 20, dummy)
# torch.jit.script(model).save('models/model_scripted.pt')
if __name__ == '__main__':
quantize()
Expected behavior
Int8 model to be 2-3 times faster than float32.
Environment
PyTorch version: 1.7.0.dev20200727
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 20.04 LTS
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
CMake version: version 3.16.3
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 440.100
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] torch==1.7.0.dev20200727
[pip3] torchvision==0.8.0.dev20200727
[conda] Could not collect |
st184670 | Thanks for flagging, the input sizes to the conv layers seem a bit unconventional so I’m wondering if that is causing a slowdown. Are these sizes part of an actual model?
cc @dskhudia
I tried printing the model
Model(
(quant_input): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant_output): DeQuantize()
(db1): DownBlockQ(
(quant_input): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant_output): DeQuantize()
(conv1): QuantizedConvReLU2d(22, 22, kernel_size=(4, 4), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), groups=22)
(bn1): Identity()
(relu1): Identity()
(conv2): QuantizedConvReLU2d(22, 44, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): Identity()
(relu2): Identity()
)
(db2): DownBlockQ(
(quant_input): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant_output): DeQuantize()
(conv1): QuantizedConvReLU2d(44, 44, kernel_size=(4, 4), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), groups=44)
(bn1): Identity()
(relu1): Identity()
(conv2): QuantizedConvReLU2d(44, 88, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): Identity()
(relu2): Identity()
)
(db3): DownBlockQ(
(quant_input): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant_output): DeQuantize()
(conv1): QuantizedConvReLU2d(88, 88, kernel_size=(4, 4), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), groups=88)
(bn1): Identity()
(relu1): Identity()
(conv2): QuantizedConvReLU2d(88, 176, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
(bn2): Identity()
(relu2): Identity()
)
) |
st184671 | I think kernel size = 4 and depthwise conv is the culprit here. Quantized depthwise is optimized mainly for the common kernel sizes 3 and 5. Just to reiterate @supriyar's question: Is there any reason to use kernel size 4? |
st184672 | @dskhudia @supriyar Thanks for your replies! Yes, it is a part of an actual model, so it is very undesirable to change it. I've tried convs with kernels 3 and 5, but even with such a config int8 is slower than float32. |
st184673 | Please, take a look:
import os
import time
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub
# backend = 'qnnpack'
backend = 'fbgemm'
import torch
torch.backends.quantized.engine = backend
class DownBlockQ(nn.Module):
def __init__(self, in_ch, out_ch):
super().__init__()
self.conv1 = nn.Conv2d(in_ch, in_ch, 3, stride=2, padding=1, groups=in_ch)
self.bn1 = nn.BatchNorm2d(in_ch)
self.relu1 = nn.ReLU()
self.conv2 = nn.Conv2d(in_ch, out_ch, 1)
self.bn2 = nn.BatchNorm2d(out_ch)
self.relu2 = nn.ReLU()
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu2(x)
return x
def fuse_model(self):
torch.quantization.fuse_modules(self, ['conv1', 'bn1', 'relu1'], inplace=True)
torch.quantization.fuse_modules(self, ['conv2', 'bn2', 'relu2'], inplace=True)
class Model(nn.Module):
def __init__(self, filters=22, quant=True):
super().__init__()
self.quant = quant
self.quant_input = QuantStub()
self.dequant_output = DeQuantStub()
self.db1 = DownBlockQ(filters * 1, filters * 2) # 128
self.db2 = DownBlockQ(filters * 2, filters * 4) # 64
self.db3 = DownBlockQ(filters * 4, filters * 8) # 32
def forward(self, x):
if self.quant:
x = self.quant_input(x)
x = self.db1(x)
x = self.db2(x)
x = self.db3(x)
if self.quant:
x = self.dequant_output(x)
return x
def fuse_model(model):
if hasattr(model, 'fuse_model'):
model.fuse_model()
for p in list(model.modules())[1:]:
fuse_model(p)
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p") / 1e6)
os.remove('temp.p')
def benchmark(func, iters=10, *args):
t1 = time.time()
for _ in range(iters):
res = func(*args)
print(f'{((time.time() - t1) / iters):.6f} sec')
return res
def quantize():
dummy = torch.rand(1, 22, 256, 256)
model = Model(filters=22, quant=False).eval()
print("Before quantization")
print_size_of_model(model)
benchmark(model, 20, dummy)
# print(model)
model = Model(filters=22, quant=True).eval()
fuse_model(model)
model.qconfig = torch.quantization.get_default_qconfig(backend)
# print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
print("After quantization")
print_size_of_model(model)
benchmark(model, 20, dummy)
if __name__ == '__main__':
quantize() |
st184674 | Hi @dklvch,
Int8 depthwise convolution is very slow when filters is not a multiple of 8. Could you try with filters = 16 or 24?
dummy = torch.rand(1, 22, 256, 256) => dummy = torch.rand(1, 24, 256, 256)
model = Model(filters=22, quant=False).eval() => model = Model(filters=24, quant=False).eval()
model = Model(filters=22, quant=True).eval() => model = Model(filters=24, quant=True).eval() |
st184675 | I need to train a quantized model which has 0 offset due to limitations of my inference framework.
I’m following the flow described in https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 1
So the model is prepared with prepare_qat which adds FakeQuantize layers
The problem is that both scale and zero_point are being trained. I need the zero_point to be fixed at 0. |
st184676 | Solved by supriyar in post #4
You can follow any of the observers defined in [https://github.com/pytorch/pytorch/blob/master/torch/quantization/observer.py] as a starting point.
To enable it in the qconfig you can do
FakeQuantize.with_args(observer=MyObserver, quant_min=0, quant_max=255, dtype=torch.qint8, qscheme=torch.per_te… |
st184677 | The scale and zero_point aren’t trained - they are calculated by observers inserted in the network. You can implement an observer specific to your use-case which will fix the zero_point at 0. For reference the zero_point calculation happens in https://github.com/pytorch/pytorch/blob/master/torch/quantization/observer.py#L187 5
Observers are set when you initialize the qconfig (in this case you seem to be using the default. i.e. https://github.com/pytorch/pytorch/blob/master/torch/quantization/qconfig.py#L90 2 |
st184678 | Thanks for your input.
Do you know a good example of applying custom observers?
qconfig = QConfig(activation=FakeQuantize.with_args(observer=<custom observer>,
                                                    quant_min=0,
                                                    quant_max=255,
                                                    reduce_range=True),
                  weight=default_per_channel_weight_fake_quant)
Is this enough for setting a custom observer, or are there some nuances? |
st184679 | You can follow any of the observers defined in [https://github.com/pytorch/pytorch/blob/master/torch/quantization/observer.py 5] as a starting point.
To enable it in the qconfig you can do
FakeQuantize.with_args(observer=MyObserver, quant_min=0, quant_max=255, dtype=torch.qint8, qscheme=torch.per_tensor_affine) |
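One possible shape of such an observer, as a sketch (the internals of the base class vary a bit between PyTorch versions; also note that for signed int8 a per_tensor_symmetric qscheme already yields zero_point = 0):
import torch
from torch.quantization.observer import MinMaxObserver

class ZeroPointZeroObserver(MinMaxObserver):
    def calculate_qparams(self):
        # keep the scale derived from the observed min/max, but pin zero_point to 0
        scale, _ = super().calculate_qparams()
        zero_point = torch.zeros_like(scale, dtype=torch.int64)
        return scale, zero_point

# used roughly as: FakeQuantize.with_args(observer=ZeroPointZeroObserver, ...)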
st184680 | I have a quantized model which works on an Intel CPU and can be traced, but it fails to run on Android. The float32 model works fine on mobile, though. Unfortunately, I cannot share the model. I get the following error:
java.lang.IllegalArgumentException: at::Tensor scalar type is not supported on java side
Environment
PyTorch version: 1.7.0.dev20200727
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 20.04 LTS
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
CMake version: version 3.16.3
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 440.100
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] torch==1.7.0.dev20200727
[pip3] torchvision==0.8.0.dev20200727
[conda] Could not collect
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo |
st184681 | Solved by dzhulgakov in post #2
Most likely you’re trying to return a quantized tensor (we should improve the error message ). We don’t have Java binding for quantized tensor yet. You can try to dequantize the tensor within your model (something like result.dequantize()) or return individual components of the tensor (result.int_r… |
st184682 | Most likely you’re trying to return a quantized tensor (we should improve the error message ). We don’t have Java binding for quantized tensor yet. You can try to dequantize the tensor within your model (something like result.dequantize()) or return individual components of the tensor (result.int_repr(), result.q_scale(), result.q_zero_point()) |
st184683 | #onnx #jit #quantization
Hi, I am very confused.
While exporting my quantized model to ONNX via tracing, I ran into an error. It happens with the fused QuantizedConvReLU2d module. I use OperatorExportTypes.ONNX_ATEN_FALLBACK.
Pytorch version is 1.6.0.dev20200520
Traceback (most recent call last):
File "./tools/caffe2_converter.py", line 115, in <module>
caffe2_model = export_caffe2_model(cfg, model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 144, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 63, in export_onnx_model
export_params=True,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 284, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 372, in forward
self._force_outplace,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 358, in wrapper
outs.append(self.inner(*trace_inputs))
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/root/some_detectron2/detectron2/export/caffe2_modeling.py", line 319, in forward
features = self._wrapped_model.backbone(images.tensor)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/DensePose_ADASE/densepose/modeling/quantize_caffe2.py", line 166, in new_forward
p5, p4, p3, p2 = self.bottom_up(x) # top->down
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/timm/models/efficientnet.py", line 350, in forward
x = self.conv_stem(x)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 71, in forward
input, self._packed_params, self.scale, self.zero_point)
RuntimeError: Tried to trace <__torch__.torch.classes.quantized.Conv2dPackedParamsBase object at 0x5600474e9670> but it is not part of the active trace. Modules that are called during a trace must be registered
as submodules of the thing being traced.
Could the presence of forward pre-hooks on self.bottom_up(x) (but not on self.conv_stem(x)) affect tracing in this way?
The model went through QAT with hooks preserved, following https://github.com/pytorch/pytorch/pull/37233 8
Also, the PT -> ONNX -> Caffe2 export works on this very model without the quantization patching |
st184684 | Solved by jerryzh168 in post #19
we are not working on onnx conversions, feel free to submit PRs to add the support. |
st184685 | P.S. here’s also a warning
/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/quantized/modules/utils.py:10: UserWarning: 0quantize_tensor_per_tensor_affine current rounding mode is not set to round-to-nearest-ties-to-e
ven (FE_TONEAREST). This will cause accuracy issues in quantized models. (Triggered internally at /opt/conda/conda-bld/pytorch_1589958443755/work/aten/src/ATen/native/quantized/affine_quantizer.cpp:25.)
float(wt_scale), int(wt_zp), torch.qint8) |
st184686 | @jerryzh168 @James_Reed
I’ve prepared a repro. It might not be minimal, but it mocks the pipeline I use.
Two files:
# network.py
import torch

class ConvModel(torch.nn.Module):
    def __init__(self):
        super(ConvModel, self).__init__()
        self.conv_stem = torch.nn.Conv2d(
            3, 5, 2, bias=True
        ).to(dtype=torch.float)
        self.bn1 = torch.nn.BatchNorm2d(5)
        self.act1 = torch.nn.ReLU()

    def forward(self, x):
        x = self.conv_stem(x)
        x = self.bn1(x)
        x = self.act1(x)
        return x
# actions.py
import torch
import io
import onnx
from torch.onnx import OperatorExportTypes

def ConvModel_decorate(cls):
    def fuse(self):
        torch.quantization.fuse_modules(
            self,
            ['conv_stem', 'bn1', 'act1'],
            inplace=True
        )
    cls.fuse = fuse
    return cls

def fuse_modules(module):
    module_output = module
    if callable(getattr(module_output, "fuse", None)):
        module_output.fuse()
    for name, child in module.named_children():
        new_child = fuse_modules(child)
        if new_child is not child:
            module_output.add_module(name, new_child)
    return module_output

def create_and_update_model():
    import network
    network.ConvModel = ConvModel_decorate(network.ConvModel)
    model = network.ConvModel()
    backend = 'qnnpack'
    model = fuse_modules(model)
    model.qconfig = torch.quantization.get_default_qat_qconfig(backend)
    torch.backends.quantized.engine = backend
    torch.quantization.prepare_qat(model, inplace=True)
    model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
    return model

def QAT(model):
    N = 100
    for idx in range(N):
        input_tensor = torch.rand(1, 3, 6, 6)
        model(input_tensor)
    return model

if __name__ == '__main__':
    model = create_and_update_model()
    model = QAT(model)
    torch.quantization.convert(model, inplace=True)
    model.eval()
    inputs = torch.rand(1, 3, 6, 6)
    # Export the model to ONNX
    with torch.no_grad():
        with io.BytesIO() as f:
            torch.onnx.export(
                model,
                inputs,
                f,
                opset_version=11,
                operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
                verbose=True,  # NOTE: uncomment this for debugging
                export_params=True,
            )
            onnx_model = onnx.load_from_string(f.getvalue())
Error:
(pytorch-gpu) root@ca7d6f51c4c7:~/some_detectron2# /root/anaconda2/envs/pytorch-gpu/bin/python /root/some_detectron2/min_repro/actions.py
/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/quantized/modules/utils.py:10: UserWarning: 0quantize_tensor_per_tensor_affine current rounding mode is not set to round-to-nearest-ties-to-even (FE_TONEAREST). This will cause accuracy issues in quantized models. (Triggered internally at /opt/conda/conda-bld/pytorch_1589958443755/work/aten/src/ATen/native/quantized/affine_quantizer.cpp:25.)
float(wt_scale), int(wt_zp), torch.qint8)
/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py:243: UserWarning: `add_node_names' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `add_node_names` argument will be ignored.
"`{}` argument will be ignored.".format(arg_name, arg_name))
/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py:243: UserWarning: `do_constant_folding' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `do_constant_folding` argument will be ignored.
"`{}` argument will be ignored.".format(arg_name, arg_name))
Traceback (most recent call last):
File "/root/some_detectron2/min_repro/actions.py", line 65, in <module>
export_params=True,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 284, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 372, in forward
self._force_outplace,
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 358, in wrapper
outs.append(self.inner(*trace_inputs))
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/some_detectron2/min_repro/network.py", line 14, in forward
x = self.conv_stem(x)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/intrinsic/quantized/modules/conv_relu.py", line 71, in forward
input, self._packed_params, self.scale, self.zero_point)
RuntimeError: Tried to trace <__torch__.torch.classes.quantized.Conv2dPackedParamsBase object at 0x564c572bd980> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced. |
st184687 | Hi @zetyquickly,
First, your model does not run with the given inputs. The quantized model expects a quantized input, but inputs in your script is float-valued. QuantWrapper can be used to force quantization/dequantization for inputs/outputs of the model, respectively:
@@ -31,7 +31,7 @@ def fuse_modules(module):
 def create_and_update_model():
     import network
     network.ConvModel = ConvModel_decorate(network.ConvModel)
-    model = network.ConvModel()
+    model = torch.quantization.QuantWrapper(network.ConvModel())
Second, there’s a strange difference in behavior here between when ONNX is tracing the model and when we use the standalone TorchScript tracer. Tracing the model works fine when we use the standalone tracer. To work around this issue, you can do this:
@@ -54,16 +54,19 @@ if __name__ == '__main__':
     model.eval()
     inputs = torch.rand(1, 3, 6, 6)
+    traced = torch.jit.trace(model, (inputs,))
+
     # Export the model to ONNX
     with torch.no_grad():
         with io.BytesIO() as f:
             torch.onnx.export(
-                model,
+                traced,
                 inputs,
                 f,
                 opset_version=11,
                 operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
                 verbose=True,  # NOTE: uncomment this for debugging
                 export_params=True,
+                example_outputs=traced(inputs)
             )
             onnx_model = onnx.load_from_string(f.getvalue())
We will investigate this difference in tracing |
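Put together in plain form, the two fixes amount to something like this (a sketch that only restates the diffs above; the QAT steps in between are elided):

import io

import onnx
import torch
from torch.onnx import OperatorExportTypes

import network  # the repro module from this thread

# Wrap the model so inputs/outputs are quantized/dequantized automatically.
model = torch.quantization.QuantWrapper(network.ConvModel())
# ... fuse, set qconfig, prepare_qat, run the QAT loop, convert, and eval() as before ...

inputs = torch.rand(1, 3, 6, 6)
traced = torch.jit.trace(model, (inputs,))  # trace with the standalone tracer first

with torch.no_grad():
    with io.BytesIO() as f:
        torch.onnx.export(
            traced,
            inputs,
            f,
            opset_version=11,
            operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
            export_params=True,
            example_outputs=traced(inputs),  # needed when exporting a traced ScriptModule
        )
        onnx_model = onnx.load_from_string(f.getvalue())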
st184688 | Thank you @James_Reed
I knew that we should trace a model before passing it to ONNX export, but now I know it for sure.
Could you please help me figure out what is going on with the traced model during export when I see the following:
Traceback (most recent call last):
File "./tools/caffe2_converter.py", line 115, in <module>
caffe2_model = export_caffe2_model(cfg, model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 147, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 66, in export_onnx_model
example_outputs=traced(inputs[0])
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 530, in _export
fixed_batch_size=fixed_batch_size)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 384, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 171, in _optimize_graph
torch._C._jit_pass_onnx_unpack_quantized_weights(graph, params_dict)
RuntimeError: quantized::conv2d_relu expected scale to be 7th input
How could it be that the layer has lost its parameters? |
st184689 | zetyquickly:
expected scale to be 7th input
we did some refactor in this PR: https://github.com/pytorch/pytorch/pull/35923/files 6 that removed some arguments from quantized::conv2d related ops. Does it work for quantized::conv2d? |
st184690 | @jerryzh168 thank you, this allowed us to take a step forward!
I changed the fusing configuration: every QuantizedConvReLU2d is now QuantizedConv2d + QuantizedReLU.
I don’t know whether that is correct, but it now produces a graph, and the graph is inconsistent. It causes an error that I have seen already.
Something is wrong with the produced ONNX graph:
Traceback (most recent call last):
File "./tools/caffe2_converter.py", line 115, in <module>
caffe2_model = export_caffe2_model(cfg, model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 147, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 66, in export_onnx_model
example_outputs=traced(inputs[0])
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/__init__.py", line 172, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 92, in export
use_external_data_format=use_external_data_format)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/onnx/utils.py", line 557, in _export
_check_onnx_proto(proto)
RuntimeError: Attribute 'kernel_shape' is expected to have field 'ints'
==> Context: Bad node spec: input: "735" input: "98" input: "99" output: "743" op_type: "Conv" attribute { name: "dilations" ints: 1 ints: 1 type: INTS } attribute { name: "group" i: 1 type: INT } attribute { na
me: "kernel_shape" type: INTS } attribute { name: "pads" ints: 1 ints: 1 ints: 1 ints: 1 type: INTS } attribute { name: "strides" ints: 1 ints: 1 type: INTS }
This is the very location where the quantized output is dequantized and fed into the Conv of the RPN.
Here are the relevant bits of the graph output:
...
%98 : Long(1:1),
%99 : Long(1:1),
...
%620 : QUInt8(1:1638400, 64:25600, 128:200, 200:1) = _caffe2::Int8Relu[Y_scale=0.045047003775835037, Y_zero_point=119](%619), scope: __module._wrapped_model.backbone/__module._wrapped_model.backbone.p2_out/__module._wrapped_model.backbone.p2_out.2 # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/quantized/functional.py:381:0
...
%735 : Float(1:1638400, 64:25600, 128:200, 200:1) = _caffe2::Int8Dequantize(%620), scope: __module._wrapped_model.backbone/__module._wrapped_model.backbone.dequant_out # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/quantized/modules/__init__.py:74:0
...
%743 : Float(1:1638400, 64:25600, 128:200, 200:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=annotate(List[int], []), pads=[1, 1, 1, 1], strides=[1, 1]](%735, %98, %99), scope: __module._wrapped_model.proposal_generator/__module._wrapped_model.proposal_generator.rpn_head/__module._wrapped_model.proposal_generator.rpn_head.conv # /root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/conv.py:374:0
Importantly, the eager-mode model without quantization passes through this conversion pipeline smoothly, and the failure is not data dependent.
If you are interested, this is the detectron2 export to Caffe2 pipeline 1 |
st184691 | zetyquickly:
RuntimeError: quantized::conv2d_relu expected scale to be 7th input
Please re-try with pytorch nightly build, we recently fixed this so you shouldn’t be seeing this error anymore.
zetyquickly:
RuntimeError: Attribute 'kernel_shape' is expected to have field 'ints'
Seems like the conv layer is not quantized, so it produces onnx::Conv as opposed to the _caffe2::Int8Conv operator. Currently the ONNX export path to Caffe2 does not support partially quantized models, so it expects the entire PyTorch model to be quantized. |
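A quick way to check for this before exporting is to list the submodules that are still float convs after convert(); a hedged sketch, with a made-up helper name:

import torch.nn as nn

def find_float_convs(model):
    # Modules still of type nn.Conv2d / nn.ConvTranspose2d after
    # torch.quantization.convert() will show up as onnx::Conv in the
    # exported graph instead of _caffe2::Int8Conv.
    return [
        (name, type(module).__name__)
        for name, module in model.named_modules()
        if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d))
    ]

# for name, kind in find_float_convs(converted_model):
#     print(name, kind)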
st184692 | Thank you very much @supriyar,
I am still eager to find a solution. I tried your suggestions, installed a fresh build, and tried again. I re-ran QAT on the model (just to make sure) and then the export.
Now it says that the MaxPool operator cannot be created.
Traceback (most recent call last):
File "./tools/caffe2_converter.py", line 114, in <module>
caffe2_model = export_caffe2_model(cfg, model, first_batch)
File "/root/some_detectron2/detectron2/export/api.py", line 157, in export_caffe2_model
return Caffe2Tracer(cfg, model, inputs).export_caffe2()
File "/root/some_detectron2/detectron2/export/api.py", line 95, in export_caffe2
predict_net, init_net = export_caffe2_detection_model(model, inputs)
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 151, in export_caffe2_detection_model
onnx_model = export_onnx_model(model, (tensor_inputs,))
File "/root/some_detectron2/detectron2/export/caffe2_export.py", line 53, in export_onnx_model
traced = torch.jit.trace(model, inputs, strict=False)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 900, in trace
check_tolerance, strict, _force_outplace, _module_class)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/jit/__init__.py", line 1054, in trace_module
module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, strict, _force_outplace)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/contextlib.py", line 74, in inner
return func(*args, **kwds)
File "/root/some_detectron2/detectron2/export/caffe2_modeling.py", line 319, in forward
features = self._wrapped_model.backbone(images.tensor)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/DensePose_ADASE/densepose/modeling/quantize.py", line 205, in new_forward
return {"p2": p2_out, "p3": p3_out, "p4": p4_out, "p5": p5_out, "p6": self.top_block(p5_out)[0]}
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 575, in __call__
result = self._slow_forward(*input, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 561, in _slow_forward
result = self.forward(*input, **kwargs)
File "/root/some_detectron2/detectron2/modeling/backbone/fpn.py", line 177, in forward
return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)]
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/_jit_internal.py", line 210, in fn
return if_false(*args, **kwargs)
File "/root/anaconda2/envs/pytorch-gpu/lib/python3.7/site-packages/torch/nn/functional.py", line 576, in _max_pool2d
input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1590649859799/work/aten/src/ATen/native/quantized/cpu/qpool.cpp":313, please report a bug to PyTorch. failed to create QNNPACK MaxPool operator
Path to the source
Looks weird; why didn’t it happen earlier?
UPD: it looks like the nested F.max_pool2d does not get quantized. A test showed that it works with float32 input after convert |
st184693 | supriyar:
Seems like the conv layer is not quantized so it produces onnx::Conv as opposed to the _caffe2::Int8Conv operator.
Is it possible to find a workaround for now? Do I understand correctly that it is impossible to have a network with quantize/dequantize during inference in the Caffe2 export?
UPD: what if we just make all Convs quantized so that the ONNX export does not fail? |
st184694 | zetyquickly:
UPD: what if we just make all Convs quantized so that the ONNX export does not fail?
If all convs in the network are quantized it should work and you will see _caffe2::Int8Conv ops in the converted network.
zetyquickly:
Do I understand correctly that it is impossible to have a network with quantize/dequantize during inference in the Caffe2 export?
You would have quantize/dequantize at the start and end of the network, which implies all the network ops are quantized. |
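For reference, an in-network dequantize/re-quantize island in eager mode looks roughly like the sketch below (module name made up); as stated above, the Caffe2 export path does not accept such islands, so this only helps for plain PyTorch inference:

import torch
import torch.nn.functional as F

class FloatMaxPool(torch.nn.Module):
    # Runs the pool in float by dequantizing its input and re-quantizing its output.
    # QuantStub/DeQuantStub are swapped for real (de)quantize ops during prepare/convert.
    def __init__(self):
        super().__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)
        x = F.max_pool2d(x, kernel_size=1, stride=2, padding=0)
        return self.quant(x)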
st184695 | Thank you @supriyar, I see.
I have another question, about non-quantized operations in the network.
What do you think: if we register a C10 export like it is done here 1,
would it be possible to patch non-quantized operators from torch.nn.ConvTranspose2d to torch.ops._caffe2.ConvTranspose2d and use them as-is? Or is it better to implement a quantized version of nn.ConvTranspose2d?
@jerryzh168 I could implement a PR with such functionality if it is valid. |
st184696 | I think we are already working on a quantized version of conv2d transpose, cc @Zafar |
st184697 | Hello, can you now export the quantized model to Caffe2, and then export Caffe2 to ncnn? Thank you! |
st184698 | Hello, @blueskywwc
First of all, I haven’t managed to export the quantized network to Caffe2, and I do not know whether it is possible to export it to ncnn. Just a suggestion: maybe it is better to export the model to ONNX and then to ncnn. |
st184699 | Hello @jerryzh168, when will the conversion of a quantized model to ONNX be supported? I hope there is a rough timeline. Thank you! |