st183200 | Do you observe a memory leak during training or during testing? Also, are you running the latest version of Opacus? |
st183201 | I see. Leaks can happen if you do more forward passes than backward passes as the activations do not get deleted. Is that the case for you? Also is it possible to get a minimal reproducing code sample? |
st183202 | Thanks! I think that was the issue, there was an extra forward pass. However, I am not sure why this does not cause any issues when I train without the privacy engine. |
st183203 | I was facing a similar problem when I was using GradSampleModule. The problem was that I needed to call model.zero_grad() instead of optim.zero_grad() to clear the accumulated grad_sample attributes. |
st183204 | Hi!
I’m using Opacus to train my model. The version before 1.0.0 works fine without a dataloader. However, it seems that the new version of the privacy engine requires one at initialization. Is there any way to avoid this?
Thx! |
st183205 | Hi!
Thanks for using opacus.
There’s certainly a way to use opacus without a DataLoader - please refer to our Migration Guide, section “No DataLoader”.
Feel free to ask any further questions here |
st183206 | thx for reply!
However, as I’m following the guide, something seems to get wrong:
optimizer.attach_step_hook(…
However, there’s an error claim that SGD has no such attribute. I use SGD from torch.optim.SGD. Any idea about the situation? |
st183207 | Whoops, thanks for pointing this out - that’s actually a mistake in the migration guide. You need to do
dp_optimizer.attach_step_hook(...
See PR #307 |
st183208 | Thx, the guide fixes that problem. However, there’s a new one. I attached the privacy engine to the optimizer, but when I ran the code, it reported a CUDA out-of-memory error.
RuntimeError: CUDA out of memory. Tried to allocate 6.10 GiB (GPU 0; 23.70 GiB total capacity; 13.27 GiB already allocated; 3.20 GiB free; 18.51 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So I then checked my GPUs using nvidia-smi. I have 2 RTX 3090s on the server, both available with 24GB of memory. Then I ran the code with the older Opacus - basically the same, except for the way the privacy engine is attached. The older one works.
The traceback reported something about backward() having trouble. Any idea? |
st183209 | One potential reason this could be happening is Poisson sampling. If you’re using the default setup with Opacus 1.0, then your dataloader is using a non-standard sampler (UniformWithReplacementSampler). (It was available before, but had to be enabled manually.)
With this sampler, batches are of variable size: the average size is still the same, but some batches are larger than average. This leads to an increased memory requirement, as your memory is limited by the peak batch size, not the average.
We have a utility to handle this: BatchMemoryManager (see example here).
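A minimal sketch of how BatchMemoryManager is typically used (train_loader, optimizer, model and criterion are placeholders, and the physical cap of 64 is a hypothetical value; assumes the Opacus 1.x API):
from opacus.utils.batch_memory_manager import BatchMemoryManager

with BatchMemoryManager(
    data_loader=train_loader,
    max_physical_batch_size=64,  # cap on what actually fits on the GPU
    optimizer=optimizer,
) as memory_safe_loader:
    for data, target in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()  # takes a logical step only once a full sampled batch is accumulated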
If it’s not that, then it’s interesting and we’ll need to double check and see what could’ve caused it. |
st183210 | Hi,
I run my computations on a server cluster where computation jobs have a time limit, but my learning process of multiple epochs typically takes longer than this time limit. Therefore, I regularly store the state of my computations (i.e., after every epoch), and then resume the computations in a new job after the previous one finishes.
This especially includes using the .load_state_dict() function of the model and the optimizer.
Now I would like to integrate differential privacy with Opacus into my computations, and I am asking myself whether and how I can store the states of Opacus, and resume it? Specifically, my questions are:
Can I just torch.save() the PrivacyEngine object?
If the answer to 1) is no, is there a .state_dict property and .load_state_dict() function for the PrivacyEngine?
Can I alternatively just run the model.load_state_dict() and optimizer.load_state_dict(), then update these by running privacy_engine.make_private(), and use the objects as normal?
In the case of 3), can I store the epsilon = privacy_engine.get_epsilon(DELTA) after every epoch (along with the model state and the optimizer state), and add the previous epsilon value to the epsilon value obtained after resuming the calculations? Or is this not an additive relation and I would end up with a wrong epsilon value when resuming the calculations?
Thank you very much in advance! |
st183211 | Solved by ffuuugor in post #2 |
st183212 | Hi,
Thanks for your question - ability to save/load checkpoints is an important feature, and it’s good for us to have some input on how people could be using it.
While we’re considering how to include this into Opacus API here’s what you need to know to make it work today.
PrivacyEngine doesn’t maintain links to the model, optimizer, or data_loader. The only important state maintained by PrivacyEngine is the accountant. The accountant’s state is just a list of numerical tuples, so torch.save() or any other pickle mechanism should do. That said, you should save the accountant (privacy_engine.accountant), not the privacy engine itself. That’s because the engine also maintains a link to the dataset used in the first call to the make_private() method - to sanity-check that the dataset is not being swapped in the middle (accounting is performed on a per-dataset basis).
While GradSampleModule functionally is just a wrapper around nn.Module, saving/loading probably doesn’t work for them out of the box. I’d say your best bet is to save/load underlying model and then wrap it with GradSampleModule every time you’re restoring from the checkpoint.
To summarise, here are the steps you need to take
On saving:
Save the accountant: torch.save(privacy_engine.accountant, path)
Save the model: torch.save(model._module.state_dict(), path)
If your optimizer has state (e.g. dynamic lr or momentum) - save the wrapped optimizer’s state: torch.save(optimizer.original_optimizer.state_dict(), path)
On loading:
Initialize empty PrivacyEngine
Load the accountant and replace the brand-new one in the engine you’ve just initialized: privacy_engine.accountant = accountant_you_have_just_loaded_from_checkpoint
Load your non-private nn.Module as normal
Load your non-private optimizer as normal
Pass the loaded model and optimizer to privacy_engine.make_private()
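A condensed sketch of the steps above (the paths, MyModel, and the SGD hyperparameters are hypothetical; assumes the Opacus 1.0 API):
# -- saving --
torch.save(privacy_engine.accountant, "accountant.pt")
torch.save(model._module.state_dict(), "model.pt")
torch.save(optimizer.original_optimizer.state_dict(), "optimizer.pt")
# -- loading --
privacy_engine = PrivacyEngine()
privacy_engine.accountant = torch.load("accountant.pt")
module = MyModel()  # your plain nn.Module
module.load_state_dict(torch.load("model.pt"))
optim = torch.optim.SGD(module.parameters(), lr=0.05)
optim.load_state_dict(torch.load("optimizer.pt"))
model, optimizer, train_loader = privacy_engine.make_private(
    module=module,
    optimizer=optim,
    data_loader=train_loader,  # same dataset as before checkpointing
    noise_multiplier=1.0,      # hypothetical values
    max_grad_norm=1.0,
) |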
st183213 | Hi,
Given a training process, I need, at each step, to get all gradient tensors associated with each individual sample of the batch; then I need to perform some operations on each of these gradients, and finally collect them together and perform the optimizer.step().
I found that Opacus could fit my problem, but:
I need neither the gradient clipping nor the noise addition;
on the other hand, I need to access the set of gradients associated with each batch element and perform some operations on them; after these operations I’ll get a single gradient that will be used for the weight update (optimizer.step()).
Is it possible to do this with Opacus library? |
st183214 | Hi @Torcione
Yes, you can use opacus for that; Take a look at GradSampleModule (opacus/grad_sample/grad_sample_module.py)
It’s a wrapper around nn.Module that encapsulates per sample gradient computation. When you wrap your model with GradSampleModule, each trainable parameter will get .grad_sample attribute containing per-sample gradients. No noise or clipping is performed.
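A minimal sketch (MyModel, criterion, x and y are placeholders; assumes the Opacus 1.x GradSampleModule):
from opacus import GradSampleModule

model = GradSampleModule(MyModel())
loss = criterion(model(x), y)
loss.backward()
for p in model.parameters():
    print(p.grad_sample.shape)  # [batch_size, *p.shape]: one gradient per sample
# combine the per-sample gradients as needed, write the result into p.grad,
# then call your optimizer's step(); model.zero_grad() also clears .grad_sample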
Hope this helps |
st183215 | Hi @ffuuugor,
Is it possible to use GradSampleModule on loss functions which themselves contain derivatives? I’ve managed to calculate per-sample gradients using a combination of forward_pre_hook and full_backward_hook; however, I’ve noticed that if your loss function contains terms that are derivatives, using hooks naively fails. Could opacus be a solution for this?
For clarity, this was briefly discussed in this thread: Per-sample gradient, should we design each layer differently? - #22 by AlphaBetaGamma96, and I wrote a small example snippet which highlights where it fails here: per-sample-gradient-limitation/example.py at main · AlphaBetaGamma96/per-sample-gradient-limitation · GitHub
Do you think opacus could solve this issue?
Thank you! |
st183216 | I’ll take a closer look at the code a bit later, but my first hunch is that opacus won’t make much of a difference. We also use hooks (regular backward hooks rather than full_backward_hook - the regular ones are being deprecated in newer PyTorch versions), so I don’t see a reason why we wouldn’t face the same problem in opacus |
st183217 | Hey everyone, I attempt to accumulate gradients in training to save GPU memory.
The training loop works quite well without privacy_engine
opt.zero_grad()
for i, (input, target) in enumerate(dataset):
    pred = net(input)
    loss = crit(pred, target)
    # one graph is created here
    loss.backward()
    # graph is cleared here
    if (i+1) % 10 == 0:
        # every 10 iterations of batches of size 10
        opt.step()
        opt.zero_grad()
However, when I attach the privacy_engine to the optimizer, CUDA runs out of memory.
Does anyone know how to solve this problem? Thanks in advance. |
st183218 | Solved by ffuuugor in post #2 |
st183219 | Hi!
It is expected that Opacus has a certain memory overhead. At the very least, we have to store per-sample gradients for all model parameters - that alone increases the memory required to store gradients by a factor of the batch size.
To address that, I suggest using the virtual_step() method on the optimizer.
It does gradient clipping and accumulation (thus saving memory), but doesn’t do the actual optimizer step.
Your code would look something like this:
opt.zero_grad()
for i, (input, target) in enumerate(dataset):
    pred = net(input)
    loss = crit(pred, target)
    # one graph is created here
    loss.backward()
    # graph is cleared here
    if (i+1) % 10 == 0:
        # every 10 iterations of batches of size 10
        opt.step()
        opt.zero_grad()
    else:
        opt.virtual_step()
For more examples using virtual_step see our CIFAR10 tutorial: opacus/cifar10.py at main · pytorch/opacus · GitHub |
st183220 | Hey there, we are trying to train a private version of a particular model which uses nn.Parameter, and we are getting the error torchdp.dp_model_inspector.IncompatibleModuleException.
In particular, the parameters are defined for the model here and used in the forward function here. To the best of our knowledge these parameters and the associated operations preserve privacy, because they don’t compute any aggregate batch statistics. What would be the recommended way to train with this model definition?
Is there some sort of workaround we could do to wrap these lines in a valid module? Do we need to wait for the team to add an accepted module to opacus.SUPPORTED_LAYERS? |
st183221 | Hi Chris!
Luckily you don’t need to fork nor wait for the team to change that for you. You will however have the responsibility of writing a function that computes grad_sample for your layer. Once you write it, you simply register it for your layer using @register_grad_sampler and you should be good to go!
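For illustration, a hedged sketch of such a registration (MyLayer and the einsum are hypothetical and depend on your layer's forward; assumes a recent Opacus version where grad samplers return a dict - older versions differ slightly):
import torch
from opacus.grad_sample import register_grad_sampler

@register_grad_sampler(MyLayer)
def compute_my_layer_grad_sample(layer, activations, backprops):
    # Return per-sample gradients, shaped [batch_size, *param.shape].
    # The einsum below assumes the layer computes y = x @ W.T, so the
    # per-sample gradient of W is an outer product of backprops and activations.
    return {layer.weight: torch.einsum("n...i,n...j->nij", backprops, activations)} |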
st183222 | Hi! I’ve been running into a similar issue, @ChrisWaites did you manage to get this working? |
st183223 | Hi all,
I have followed tutorials regards DP Image Classification using Resnet18.
I have some questions:
When a model has many layers, it wasn’t able to converge under DP. Are there any recommended approaches to overcome this problem for large models with many fully connected layers?
When I decreased the batch_size using the same model (due to 8GB memory), the loss goes up to 80-200. That means it is challenging to use a large batch with DP, yet a large batch size sometimes helps to improve model accuracy.
Trying different DL models, training the non-private model takes a reasonable time (e.g., 6 minutes) compared to the private model (19 minutes), and with lower accuracy.
Is DP-SGD very slow due to per-sample gradients? Is there any way to speed up processing time?
Can we achieve a comparable accuracy with the baseline model under a modest privacy budget?
Is the DP deep learning model more sensitive to hyperparameters like batch size and noise level, or to the structure of the NN?
Thanks, |
st183224 | Hello @NBu,
I really don’t know how this post went unattended. Sorry for the delay.
When a model has many layers, it wasn’t able to converge under DP. Are there any recommended approaches to overcome this problem for large models with many fully connected layers?
Do you mind sharing an example notebook of this? Hyper-parameter tuning should typically help with these cases. Some tips: FAQ · Opacus
When I decreased the batch_size using the same model (due to 8GB memory), the loss goes up to 80-200. That means it is challenging to use a large batch with DP, yet a large batch size sometimes helps to improve model accuracy.
The memory requirement is an unfortunate consequence of maintaining per-sample gradients. Opacus provides a concept of virtual_batch to overcome this issue. Please try it out if you haven’t already. FAQ · Opacus
Is DP-SGD very slow due to per-sample gradients? Is there any way to speed up processing time?
There is some overhead to computing per-sample gradients. The speed degradation depends on the layers as well (eg Linear vs LSTM). We can take a look at your notebook to make some suggestions.
Can we achieve a comparable accuracy with the baseline model under a modest privacy budget?
Depends a lot on the task. As you can see on https://opacus.ai, the answer is a resounding yes for MNIST with a small network. However, there is a ~20pp gap in accuracy for CIFAR-10 with ResNet18.
Bridging this gap is an active area of research, and we wholeheartedly welcome your ideas and contributions. One of the main goals of open-sourcing opacus is to help push this research. |
st183225 | Hello! How can I compute per-sample gradients for a usual model using Opacus? In this case, we don’t care about the privacy aspect. |
st183226 | Hello,
Excellent question. We recently wrote a blog post about this: Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus | by PyTorch | PyTorch | Sep, 2021 | Medium |
st183227 | With the latest versions:
opacus==0.14.0
torch==1.9.0
I’m getting the following deprecation warnings when running the CIFAR10 example:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-4ed4226e-cb69-4f21-bbff-ba9e9ab8fc55/lib/python3.7/site-packages/torch/nn/modules/module.py:974: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
Is this the expected behavior? Should I be concerned with the warning saying This hook will be missing some grad_input?
Thank you! |
st183228 | Solved by ffuuugor in post #2
Hi!
Yeah, that’s somewhat expected - we never migrated to register_full_backward_hook since it was released in PyTorch 1.8.
We’ll fix that soon (thanks for raising this!). But in the meantime, as far as I’m aware, this shouldn’t create any problems (at least with 1.9) - all our tests are passing… |
st183229 | Hi!
Yeah, that’s somewhat expected - we never migrated to register_full_backward_hook since it was released in PyTorch 1.8.
We’ll fix that soon (thanks for raising this!). But in the meantime, as far as I’m aware, this shouldn’t create any problems (at least with 1.9) - all our tests are passing and we never received any reports of this causing any issues |
st183230 | Hi,
I’m using Opacus to make CTGAN (GitHub - sdv-dev/CTGAN: Conditional GAN for generating synthetic tabular data) differentially private.
There is already an implementation that does this: (smartnoise-sdk/dpctgan.py at main · opendp/smartnoise-sdk · GitHub).
However, it uses an older version of Opacus (v0.9) and of CTGAN (v0.2.2.dev1).
I used their method to make the newest version of CTGAN differentially private with the newest Opacus version. Unfortunately I run into the following error:
.../CTGAN_DP/DP_CTGAN.py", line 309, in fit
    loss_d.backward()
File ".../lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
File ".../lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
File ".../python3.7/site-packages/opacus/grad_sample/grad_sample_module.py", line 197, in capture_backprops_hook
    module, backprops, loss_reduction, batch_first
File ".../python3.7/site-packages/opacus/grad_sample/grad_sample_module.py", line 234, in rearrange_grad_samples
    A = module.activations.pop()
IndexError: pop from empty list
The function in question is:
def rearrange_grad_samples(
    self,
    module: nn.Module,
    backprops: torch.Tensor,
    loss_reduction: str,
    batch_first: bool,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Rearrange activations and grad_samples based on loss reduction and batch dim

    Args:
        module: the module for which per-sample gradients are computed
        backprops: the captured backprops
        loss_reduction: either "mean" or "sum" depending on whether backpropped
            loss was averaged or summed over batch
        batch_first: True if batch dimension is first
    """
    if not hasattr(module, "activations"):
        raise ValueError(
            f"No activations detected for {type(module)},"
            " run forward after add_hooks(model)"
        )

    batch_dim = 0 if batch_first or type(module) is LSTMLinear else 1
    if isinstance(module.activations, list):
        A = module.activations.pop()
    else:
        A = module.activations

    if not hasattr(module, "max_batch_len"):
        # For packed sequences, max_batch_len is set in the forward of the model (e.g. the LSTM)
        # Otherwise we infer it here
        module.max_batch_len = _get_batch_size(module, A, batch_dim)

    n = module.max_batch_len
    if loss_reduction == "mean":
        B = backprops * n
    elif loss_reduction == "sum":
        B = backprops
    else:
        raise ValueError(
            f"loss_reduction = {loss_reduction}. Only 'sum' and 'mean' losses are supported"
        )

    # No matter where the batch dimension was, .grad_samples will *always* put it in the first dim
    if batch_dim != 0:
        A = A.permute([batch_dim] + [x for x in range(A.dim()) if x != batch_dim])
        B = B.permute([batch_dim] + [x for x in range(B.dim()) if x != batch_dim])

    return A, B
This does not happen with Opacus v0.9.
I investigated and found that module.activations is popped until only an empty list is left for the first module, which then produces this error.
I hacked my way around this issue as follows:
if isinstance(module.activations, list):
    # print(len(module.activations))
    if len(module.activations) > 1:
        A = module.activations.pop()
    else:
        A = module.activations[0]
else:
    A = module.activations
Meaning, if module.activations is left with one element instead of an empty list, at least training works.
My question is: am I breaking anything important by doing this, and could this be a potential subcase which was not accounted for?
Or do I have to change something else in the model?
The model I try to train with Opacus is basically just a composition of n*(nn.Linear, nn.ReLU, nn.Dropout), which should be fine I think.
Thanks for a reply.
Have a great day. |
st183231 | Solved by sayanghosh in post #12 |
st183232 | Thanks for flagging this. Your fix will probably lead to incorrect gradient computations (and probably break privacy guarantees as well). Normally, at each forward the activations gets pushed to the module.activations list, and they get popped in the backward. Popping from an empty list indicates that there is one more backward pass than forward pass. I am not sure why the list gets empty in your case, do you mind sharing a minimal reproducing example? |
st183233 | I am trying to do the exact same thing (run CTGAN in a differentially private manner using opacus). I have the original code of CTGAN (same as shown above) but just changed the optimizerD by attaching PrivacyEngine to it (no other changes).
And I get the exact same error at loss_d.backward().
When I print loss_d just before the backward call, I get this tensor(-0.0344, grad_fn=<NegBackward>).
I do not see any other backward call before this step, so am puzzled as to why this is happening. |
st183234 | @shaanchandra and @knilox It would be helpful if you could share with us a minimal reproducible example (for example, in Colab) with CTGAN so we can take a closer look at why your module.activations gets empty. We’ll be happy to take a look at this. |
st183235 | @shaanchandra @alexandresablayrolle @sayanghosh,
Thanks for all your replies and sorry for not responding in a timely manner.
I will begin creating a minimal example now and post it here soon. |
st183236 | @sayanghosh @alexandresablayrolle
here is my colab example: Google Colab.
Again, thanks for looking into this. Hope this is enough. |
st183237 | Hi @knilox thanks for the example, we are looking at it now and will provide an update soon. |
st183238 | Hello again,
After digesting the tip from @alexandresablayrolle about multiple backward passes, I think I have now localized the root cause of the problem. Not hard to find after that tip, tbh.
Nevertheless, I was a little confused, because it indicated a problem with the unmodified model as well. But training the unmodified model works just fine.
The problem is the gradient penalty, which is added to the loss function of the discriminator.
The penalty is added to enforce a soft 1-Lipschitz constraint on the gradient norms to stabilize convergence with the Wasserstein distance. Meaning, the model is regularized to have gradient norms of 1. The other way of enforcing a 1-Lipschitz constraint is to clip the gradient norms at 1.
Usually the gradient penalty is preferred, because it performs much better.
However, as you know, for DP we have to clip the gradient norms anyway.
So the easiest fix for the problem is just removing pen.backward(retain_graph=True) and setting max_grad_norm = 1, and we should be fine, except for the potential heavy loss in utility.
However, I am wondering if the gradient penalty still makes sense in this situation if we set max_grad_norm > 1, e.g. 2, and let the model enforce grad norms of 1 itself.
The reason is that I feel uncomfortable removing the features of CTGAN. But if that is the price for privacy, I guess we have to pay it.
# pen = discriminator.calc_gradient_penalty(
#     real_cat, fake_cat, self._device, self.pac)
loss_d = -(torch.mean(y_real) - torch.mean(y_fake))
optimizerD.zero_grad()
# pen.backward(retain_graph=True)
loss_d.backward()
optimizerD.step()
In addition to that, as far as I understood, using retain_graph=True should be the same as
loss_d = -(torch.mean(y_real) - torch.mean(y_fake)) + pen
optimizerD.zero_grad()
loss_d.backward()
optimizerD.step()
But this also produces the same error, and I don’t understand why.
If the regularization is truly the problem, my intuition tells me that this should be supported. However, maybe this is intended, so that silly constraints are not enforced.
PS: after finding this I reviewed the code of smartnoise-sdk (second link) again and found that they also removed the gradient penalty. But I had missed it. Also, they do not mention it in their paper. |
st183239 | For the case you tried out with pen not undergoing a backward pass, we still add it to the total loss function and thus to the computation graph. Generation of y_fake and y_real always takes two forward passes, and thus two stack pushes for each layer. During computation of the gradient penalty, we have a backward pass (because of calling autograd.grad) and a forward pass (note the self(interpolates) which calls the network). When backward() is called on loss_d, backward() is then expected to be called three times, as three corresponding forward passes were observed. This did not take into account the fact that an additional backward() got called during the loss computation. We subsequently have more backward passes than forward passes, and so the activation stack empties out for each layer. This might explain why the error arises if you add pen or just call pen.backward() - it is due to an additional grad computation which was not taken into account.
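To illustrate the push/pop invariant with a toy sketch (model is assumed to be wrapped so that the Opacus hooks are attached):
out1 = model(x1)  # forward hook pushes activations -> stack size 1
out2 = model(x2)  # push -> stack size 2
(out1.sum() + out2.sum()).backward()  # backward hooks pop twice -> stack size 0
# any extra backward through the same modules (e.g. autograd.grad inside a
# gradient penalty) now pops from an empty stack -> IndexError: pop from empty list |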
st183240 | For the problem of supporting gradient-regularization-style techniques (for example in CTGAN or Wasserstein GAN), or more specifically in your case: the autograd.grad is done for the purpose of estimating the gradient regularizer, not for training, so we should not need to go the route of per-sample gradient estimation, clipping and noise addition which DP would do here. In contrast, we do need that during pen.backward(). |
st183241 | Hi @knilox and @sayanghosh ,
Thank you for the discussion. The workings are very clear to me now.
However, I am not clear on what consensus was reached here to solve this specific problem.
Do we remove the gradient penalty term completely from the code?
If we keep it, then how do we do DP in this case using opacus?
Again, I understand what was discussed and it all makes sense. But the prescribed way forward is not clear to me. |
st183242 | @shaanchandra From the discussion above, it seems that there are two potential solutions:
1. As @knilox mentioned above, remove the gradient regularization loss entirely - this may seem like we are not faithfully reproducing the original CTGAN; however, DP eventually clips the gradients to bound SGD sensitivity, automatically introducing some regularization. Further, it appears that the smartnoise-sdk implementation being followed here already does this.
2. Keep the gradient regularization loss but do not compute per-sample gradients during the call to autograd.grad (basically keep the backward hook disabled during that time), as it is for estimation of gradients and not for updating the network. We’ve tested this out and it works initially; however, it seems to create a new issue during the .step(), where it complains that the norms are of unequal length.
So our recommendation is to do (1). Option (2)'s error is probably unrelated and is also a fix. |
st183243 | @sayanghosh I have one final question w.r.t. differential privacy that is unrelated to the original post.
CTGAN adds an additional penalty term to the generator loss, which is the cross-entropy loss of how many conditions are fulfilled by the generator in a batch.
To calculate this loss, we need the univariate distribution of all categorical attributes in the training data.
Thus, the generator technically depends directly on the training data, making it not fully differentially private. The other input depends on the discriminator, but that is differentially private because of opacus.
I assume that’s why the smartnoise-sdk simply removes it. But I currently think that this is not necessary.
And my question here is: are there flaws in the following arguments?
1. We could make the calculation of the penalty term differentially private, and then have a differentially private discriminator and a differentially private penalty term. Thus, the generator loss is differentially private and we have a differentially private generator after training.
2. If we assume the univariate distribution of categorical attributes to be public knowledge, which is commonly assumed, we don’t have to make the penalty term satisfy differential privacy and can still call the final generator differentially private.
Thanks again.
Your help is much appreciated!! |
st183244 | for p, clip_value in zip(params, clip_values):
noise = self._generate_noise(clip_value, p)
if self.loss_reduction == "mean":
noise /= batch_size
if self.rank == 0:
p.grad += noise
the noise is added to the averaged gradient, why should it be divided by the batch size?
since the output is the final grad, should it be added with noise (without being divided) directly? |
st183245 | Compare the two cases. If there’s no reduction (self.loss_reduction == "sum"), then we want to add noise calibrated to the clipping norm C. Indeed, let the gradients be g_1,…,g_B. The sum is g_1 + … + g_B, and to make it private the additive noise is sampled from the Gaussian distribution N(0, sigma^2 * C^2) so that it masks the presence or absence of any one gradient vector.
If the reduction function is mean, then the output is (g_1 + … + g_B) / B. What should the additive noise be in this case? I think it’s pretty obvious that it must be the noise from before, scaled down by a factor of B. The only difference between the two cases is the scaling parameter, and it should be applied equally both to the sensitive inputs and the noise. Right?
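An illustrative sketch of the two cases (not the Opacus internals; sigma is the noise multiplier, C the clipping norm, B the batch size):
summed = clipped_per_sample_grads.sum(dim=0)    # sensitivity C w.r.t. any one sample
noise = torch.randn_like(summed) * (sigma * C)  # N(0, sigma^2 * C^2)
if loss_reduction == "sum":
    grad = summed + noise
else:  # "mean": the 1/B scaling applies to the sensitive sum and the noise alike
    grad = (summed + noise) / batch_size |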
st183246 | Hi All,
For an imbalanced dataset, can we first use oversampling to balance the two classes, and then use Opacus for DP training over the artificially balanced dataset? |
st183247 | Hi @PeterCheng, and thanks for your question.
You can certainly do that, but it would affect how you interpret privacy for the trained model.
Opacus provides privacy guarantees with respect to each individual sample in the dataset. We also use privacy amplification by subsampling - i.e., the epsilon guarantees are directly tied to the probability of a sample being included in a minibatch.
When you oversample, you essentially increase the sampling rate for that particular data record.
To get an epsilon estimate for oversampled instances, you need to modify the PrivacyEngine.get_privacy_spent method and use an adjusted sample rate.
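A hedged sketch of such an adjusted computation, assuming the opacus.privacy_analysis module from Opacus 0.x (k, the duplication factor, and the other values are hypothetical):
from opacus import privacy_analysis

k = 4  # record duplicated k times -> sampled roughly k times as often
alphas = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
q = batch_size / sample_size
rdp = privacy_analysis.compute_rdp(k * q, noise_multiplier, steps, alphas)
eps, best_alpha = privacy_analysis.get_privacy_spent(alphas, rdp, delta) |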
st183248 | Hi @AliSh, thank you for your questions. I’m presuming you are looking for an example of a linear regression task. While we do not have an example in the Opacus repo at the moment, it should still work out of the box, and the workflow looks the same as that of the other examples. (DP-SGD clips and noises gradients rather than the outputs.)
Please let us know if you are running into any issues there.
P.S.: We do welcome and appreciate pull requests if you’d like to contribute an example |
st183249 | Hi!
I was running the tutorial on text classification, exactly as in opacus/building_text_classifier.ipynb at master · pytorch/opacus · GitHub, but I get the following error when I try to train:
AttributeError: The following layers do not have gradients: [‘module.bert.encoder.layer.11.attention.self.query.weight’, ‘module.bert.encoder.layer.11.attention.self.query.bias’, ‘module.bert.encoder.layer.11.attention.self.key.weight’, ‘module.bert.encoder.layer.11.attention.self.key.bias’, ‘module.bert.encoder.layer.11.attention.self.value.weight’, ‘module.bert.encoder.layer.11.attention.self.value.bias’, ‘module.bert.encoder.layer.11.attention.output.dense.weight’, ‘module.bert.encoder.layer.11.attention.output.dense.bias’, ‘module.bert.encoder.layer.11.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.11.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.11.intermediate.dense.weight’, ‘module.bert.encoder.layer.11.intermediate.dense.bias’, ‘module.bert.encoder.layer.11.output.dense.weight’, ‘module.bert.encoder.layer.11.output.dense.bias’, ‘module.bert.encoder.layer.11.output.LayerNorm.weight’, ‘module.bert.encoder.layer.11.output.LayerNorm.bias’, ‘module.bert.pooler.dense.weight’, ‘module.bert.pooler.dense.bias’, ‘module.classifier.weight’, ‘module.classifier.bias’]. Are you sure they were included in the backward pass?
Could someone help me understand why this is happening?
I’m on ubuntu and am using python 3.8.5
cheers! |
st183250 | Based on cell 8 it seems you are freezing some layers and train only others:
trainable_layers = [model.bert.encoder.layer[-1], model.bert.pooler, model.classifier]
total_params = 0
trainable_params = 0
for p in model.parameters():
p.requires_grad = False
total_params += p.numel()
for layer in trainable_layers:
for p in layer.parameters():
p.requires_grad = True
trainable_params += p.numel()
print(f"Total parameters count: {total_params}") # ~108M
print(f"Trainable parameters count: {trainable_params}") # ~7M
so I would assume that the frozen parameters do not have valid gradients.
I’m however unsure where this message is raised from and if it’s an error etc. so could you explain the issue a bit more? |
st183251 | Hmm, makes sense. The issue arises when virtual_step() is called:
... in
    optimizer.virtual_step()
... line 282, in virtual_step
    self.privacy_engine.virtual_step()
... line 435, in virtual_step
    self.clipper.clip_and_accumulate()
... line 179, in clip_and_accumulate
    named_params=self._named_grad_samples(),
... line 263, in _named_grad_samples
where the error is thrown |
st183252 | I’m unsure what virtual_step() does and assume it’s coming from a 3rd party library?
Do you know, if this method expects all .grad attributes to be set and if so, could you filter the frozen parameters out while passing them to the optimizer? |
st183253 | Hi @anna_l !
Thanks for your question and for taking interest in opacus.
I’d need some more info to be able to help, as I wasn’t able to reproduce the issue in my setup.
Can you please share which versions of transformers and opacus are you using?
Does the error happen on the first training iteration or later?
To comment on some of the discussion points above:
virtual_step() is a method defined in PrivacyEngine in opacus. It’s a way to simulate large batches without a heavy memory footprint.
In our tutorial we indeed freeze some layers, as correctly pointed out. However, the error above lists trainable layers as not having gradients, which is not what should happen. (e.g. bert.encoder.layer.11 is bert.encoder.layer[-1]) |
st183254 | Hi @ffuuugor, pardon the slow reply. It happens on the first training iteration; my transformers version is 4.6.1 |
st183255 | Hey
Sorry, but I’m still having trouble reproducing the issue.
I’ve tried multiple package versions (opacus 0.13, 0.14, master), but none produce the error you’ve described.
Can you maybe share a Colab notebook with the error to help find the reason?
PS: While investigating this we’ve found and fixed a quite bad memory inefficiency, so thanks for pointing us that way |
st183256 | Hello,
I wonder if there is an option in Opacus to access the per-example gradients before and after clipping during training?
Thanks,
Ali |
st183257 | Hi
Accessing per-sample gradients before clipping is easy - they’re available between the loss.backward() and optimizer.step() calls. The backward pass calculates per-sample gradients and stores them in the parameter.grad_sample attribute. The optimizer step then does the clipping and aggregation, and cleans up the gradients.
For example:
m = Module()
optimizer = optim.SGD(m.parameters(), <...>)
privacy_engine = PrivacyEngine(<...>)
privacy_engine.attach(optimizer)
<...>
output = m(data)
loss = criterion(data, labels)
loss.backward()
print(m.fc.weight.grad_sample) # print per sample gradients
optimizer.step()
Post-clip values are more tricky - it’s not something we support out of the box.
optimizer.step() does three things at once:
clips per sample gradients
accumulates per sample gradients into parameter.grad
adds noise
Which means that there’s no easy way to access intermediate state after clipping, but before accumulation and noising.
I suppose, the easiest way to get post-clip values would be to take pre-clip values and do the clipping yourself, outside of opacus code.
All you need to do is replicate a small bit from opacus/opacus/per_sample_gradient_clip.py:clip_and_accumulate():
# step 0: calculate the layer norms
all_norms = calc_sample_norms(
    named_params=self._named_grad_samples(),
)
# step 1: calculate the clipping factors based on the noise
clipping_factor = self.norm_clipper.calc_clipping_factors(all_norms)
You can then simply multiply your pre-clipping per-sample gradients p.grad_sample by clipping_factor tensor to get the same clipping that’s happening inside opacus.
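For example, assuming the default flat clipping, a hand-rolled sketch (not the Opacus internals; m is the model and max_norm your clipping threshold):
grad_samples = [p.grad_sample for p in m.parameters() if p.requires_grad]
per_param = [g.reshape(g.size(0), -1).norm(2, dim=1) for g in grad_samples]
per_sample_norms = torch.stack(per_param, dim=1).norm(2, dim=1)  # shape [B]
factors = (max_norm / (per_sample_norms + 1e-6)).clamp(max=1.0)  # shape [B]
clipped = [g * factors.view(-1, *([1] * (g.dim() - 1))) for g in grad_samples]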
Hope this helps |
st183258 | Thanks @ffuuugor!
I wonder, when each layer has both weights and biases, how does opacus compute the per-example l2 norm of the gradients for the clipping? Does it compute it separately for biases and weights? |
st183259 | Actually, by default (and as per Abadi et al.) we clip the L2 norm of the entire gradient vector for a given sample, i.e. the vector consisting of the gradients for all trainable parameters in the model stacked together.
We’ve tried experimenting with different approaches to clipping, e.g. dynamic per-layer thresholds (see the implementations in clipping.py), but it wasn’t too fruitful |
st183260 | Hi, I am enjoying using the opacus package to apply differential privacy to the training process of my models. I am struggling to get it to work with my TVAE implementation, though - could someone let me know why I get an IncompatibleModuleException? I am using modules similar to those in all my other generative models. See my code below:
import numpy as np
import torch
from torch.nn import Linear, Module, Parameter, ReLU, Sequential
from torch.nn.functional import cross_entropy
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset
from models.CTGAN import DataTransformer, BaseSynthesiser
from .utils import GeneralTransformer, DPSynthesiser
import opacus
from opacus import autograd_grad_sample, PrivacyEngine, utils
import dill
class Encoder(Module):
    def __init__(self, data_dim, compress_dims, embedding_dim):
        super(Encoder, self).__init__()
        dim = data_dim
        seq = []
        for item in list(compress_dims):
            seq += [Linear(dim, item), ReLU()]
            dim = item
        self.seq = Sequential(*seq)
        self.fc1 = Linear(dim, embedding_dim)
        self.fc2 = Linear(dim, embedding_dim)

    def forward(self, input):
        feature = self.seq(input)
        mu = self.fc1(feature)
        logvar = self.fc2(feature)
        std = torch.exp(0.5 * logvar)
        return mu, std, logvar

class Decoder(Module):
    def __init__(self, embedding_dim, decompress_dims, data_dim):
        super(Decoder, self).__init__()
        dim = embedding_dim
        seq = []
        for item in list(decompress_dims):
            seq += [Linear(dim, item), ReLU()]
            dim = item
        seq.append(Linear(dim, data_dim))
        self.seq = Sequential(*seq)
        self.sigma = Parameter(torch.ones(data_dim) * 0.1)

    def forward(self, input):
        return self.seq(input), self.sigma
def loss_function(recon_x, x, sigmas, mu, logvar, output_info, factor):
    st = 0
    loss = []
    for column_info in output_info:
        for span_info in column_info:
            if len(column_info) != 1 or span_info.activation_fn != "softmax":
                ed = st + span_info.dim
                std = sigmas[st]
                loss.append(((x[:, st] - torch.tanh(recon_x[:, st])) ** 2 / 2 / (std ** 2)).sum())
                loss.append(torch.log(std) * x.size()[0])
                st = ed
            else:
                ed = st + span_info.dim
                loss.append(cross_entropy(recon_x[:, st:ed], torch.argmax(x[:, st:ed], dim=-1), reduction="sum"))
                st = ed
    assert st == recon_x.size()[1]
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return sum(loss) * factor / x.size()[0], KLD / x.size()[0]
class DPTVAE(BaseSynthesiser):
    def __init__(
        self,
        embedding_dim=128,
        compress_dims=(128, 128),
        decompress_dims=(128, 128),
        l2scale=1e-5,
        batch_size=500,
        disabled_dp=False,
        delta=1e-5,
        noise_multiplier=3.5,
        max_per_sample_grad_norm=1.0,
        epsilon=1.0,
        iterations=300,
        verbose=True,
    ):
        self.embedding_dim = embedding_dim
        self.compress_dims = compress_dims
        self.decompress_dims = decompress_dims
        self.l2scale = l2scale
        self.batch_size = batch_size
        self.loss_factor = 2
        self.iterations = iterations
        self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        # opacus parameters
        self.noise_multiplier = noise_multiplier
        self.disabled_dp = disabled_dp
        self.delta = delta
        self.max_per_sample_grad_norm = max_per_sample_grad_norm
        self.epsilon = epsilon
        self.epsilon_list = []
        self.alpha_list = []
        self.loss_list = []
        self.verbose = verbose
    def train(self, data, categorical_columns=None, ordinal_columns=None, update_epsilon=None, verbose=False):
        self.transformer = DataTransformer()
        self.transformer.fit(data, discrete_columns=categorical_columns)
        data = self.transformer.transform(data)
        dataset = TensorDataset(torch.from_numpy(data.astype("float32")).to(self.device))
        loader = DataLoader(dataset, batch_size=self.batch_size, shuffle=True, drop_last=True)
        data_dim = self.transformer.output_dimensions
        self.encoder = Encoder(data_dim, self.compress_dims, self.embedding_dim).to(self.device)
        self.decoder = Decoder(self.embedding_dim, self.compress_dims, data_dim).to(self.device)
        self.optimizerAE = Adam(list(self.encoder.parameters()) + list(self.decoder.parameters()), weight_decay=self.l2scale)
        privacy_engine = opacus.PrivacyEngine(
            self.decoder,
            batch_size=self.batch_size,
            sample_size=data.shape[0],
            alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),
            target_delta=self.delta,
            noise_multiplier=self.noise_multiplier,
            max_grad_norm=self.max_per_sample_grad_norm,
            clip_per_layer=True,
        )
        if not self.disabled_dp:
            privacy_engine.attach(self.optimizerAE)
        if hasattr(self, "privacy_engine"):
            epsilon, best_alpha = self.optimizerAE.privacy_engine.get_privacy_spent(self.delta)
        else:
            epsilon = 0
        for i in range(self.iterations):
            if not self.disabled_dp:
                if self.epsilon < epsilon:
                    break
            for id_, data_ in enumerate(loader):
                self.optimizerAE.zero_grad()
                real = data_[0].to(self.device)
                mu, std, logvar = self.encoder(real)
                eps = torch.randn_like(std)
                emb = eps * std + mu
                rec, sigmas = self.decoder(emb)
                loss_1, loss_2 = loss_function(rec, real, sigmas, mu, logvar, self.transformer.output_info_list, self.loss_factor)
                loss = loss_1 + loss_2
                loss.backward()
                self.optimizerAE.step()
                self.decoder.sigma.data.clamp_(0.01, 1.0)
                if not self.disabled_dp:
                    for p in self.decoder.parameters():
                        if hasattr(p, "grad_sample"):
                            del p.grad_sample
                    epsilon, best_alpha = self.optimizerAE.privacy_engine.get_privacy_spent(self.delta)
                    self.epsilon_list.append(epsilon)
                    self.alpha_list.append(best_alpha)
                if verbose:
                    print("eps: {:f} \t alpha: {:f} \t Loss: {:f}".format(epsilon, best_alpha, loss.detach().cpu()))
            if not self.disabled_dp:
                if self.epsilon < epsilon:
                    break
            self.loss_list.append(loss)
        privacy_engine.detach()
        self.privacy_engine = privacy_engine
        self.state_dict = self.optimizerAE.state_dict()
        return self.loss_list, self.epsilon_list, self.alpha_list
    def sample(self, samples):
        self.decoder.eval()
        steps = samples // self.batch_size + 1
        data_ = []
        for _ in range(steps):
            mean = torch.zeros(self.batch_size, self.embedding_dim)
            std = mean + 1
            noise = torch.normal(mean=mean, std=std).to(self.device)
            fake, sigmas = self.decoder(noise)
            fake = torch.tanh(fake)
            data_.append(fake.detach().cpu().numpy())
        data_ = np.concatenate(data_, axis=0)
        data_ = data_[:samples]
        return self.transformer.inverse_transform(data_, sigmas.detach().cpu().numpy())

    def set_device(self, device):
        self.device = device
        self.decoder.to(self.device)

    def save(self, path):
        assert hasattr(self, "data_sampler")
        # always save a cpu model.
        device_bak = self.device
        self.device = torch.device("cpu")
        self.encoder.to(self.device)
        self.decoder.to(self.device)
        torch.save(self, path, pickle_module=dill)
        self.device = device_bak
        self.encoder.to(self.device)
        self.decoder.to(self.device)

    @classmethod
    def load(cls, path):
        model = torch.load(path)
        model.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        model.encoder.to(model.device)
        model.decoder.to(model.device)
        return model
def DPTVAE_runner(seed, run_name, epsilon, delta, n_s, df, batch_size, save, load, categorical_columns, ordinal_columns, iterations, noise_multiplier):
    if not load:
        gan = DPSynthesiser(
            DPTVAE(
                batch_size=batch_size,
                iterations=iterations,
                delta=delta,
                noise_multiplier=noise_multiplier,
            ),
            GeneralTransformer(),
            epsilon=epsilon,
        )
        gan.fit(df, categorical_columns=categorical_columns, ordinal_columns=ordinal_columns, verbose=True, seed=seed)
    else:
        gan = DPTVAE.load(load)
    df_synth = gan.sample(n_s)
    if save:
        gan.save(save)
    return df_synth, None
Any help would be appreciated, thanks! |
st183261 | Could you post the error message you are receiving?
I’m not familiar with Opacus so I don’t know, if you are facing a PyTorch error or if it’s related to Opacus. |
st183262 | This is the error I see, I believe it is something to do with Opacus’ choices in terms of supported modules.
File "main.py", line 351, in <module>
main()
File "main.py", line 225, in main
synth_data, log_model_iw = model_map[args.model_class](**params)
File "/Users/harrisonwilde/Library/Mobile Documents/com~apple~CloudDocs/PhD/Holmes/WeightedDP/models/CUSTOM/dptvae.py", line 247, in DPTVAE_runner
gan.fit(df, categorical_columns=categorical_columns, ordinal_columns=ordinal_columns, verbose=True, seed=seed)
File "/Users/harrisonwilde/Library/Mobile Documents/com~apple~CloudDocs/PhD/Holmes/WeightedDP/models/CUSTOM/utils/synthesiser.py", line 68, in fit
self.gan.train(preprocessed_data, categorical_columns=categorical_columns, ordinal_columns=ordinal_columns, update_epsilon=self.epsilon, verbose=verbose)
File "/Users/harrisonwilde/Library/Mobile Documents/com~apple~CloudDocs/PhD/Holmes/WeightedDP/models/CUSTOM/dptvae.py", line 138, in train
privacy_engine.attach(self.optimizerAE)
File "/usr/local/Caskroom/miniconda/base/envs/dp/lib/python3.8/site-packages/opacus/privacy_engine.py", line 161, in attach
self.validator.validate(self.module)
File "/usr/local/Caskroom/miniconda/base/envs/dp/lib/python3.8/site-packages/opacus/dp_model_inspector.py", line 113, in validate
raise IncompatibleModuleException(message)
opacus.dp_model_inspector.IncompatibleModuleException: Model contains incompatible modules.
Some modules are not valid.: ['Main'] |
st183263 | The error unfortunately doesn’t tell which modules are incompatible or any workarounds.
I would thus generally recommend to create an issue in their repository, so that Opacus devs could check the error. |
st183264 | Hi @HarrisonWilde!
Damn, it’s impossible to beat @ptrblck’s responsiveness, but I know the answer to this one!
For a module to be supported by Opacus, the following conditions apply:
Modules with no trainable parameters (eg nn.ReLU, nn.Tanh)
Modules which are frozen. A nn.Module can be frozen in PyTorch by unsetting requires_grad in each of its parameters, ie for p in module.parameters(): p.requires_grad = False.
Explicitly supported modules (we keep a dictionary in opacus.SUPPORTED_LAYERS), eg nn.Conv2d.
Any complex nn.Module that contains only supported nn.Modules. This means that most models will be compatible, given that we support most of the common building blocks. This however also means that Opacus support depends on how a specific nn.Module is implemented. For example, nn.LSTM could be written by using nn.Linear (which we support), but its actual implementation does not use it (so that it can fuse operators and be faster). Any layer that needs a rewrite to be supported is in the /layers folder.
So in this case, the issue is that your Decoder class has an nn.Parameter of its own, the self.sigma!
Judging by how you use it, it looks like you are basically using it as InstanceNorm (correct me if I’m wrong!). We do support nn.InstanceNorm, nn.LayerNorm and nn.GroupNorm, so maybe you could normalize using one of those modules instead? |
st183265 | Thank you for the detailed response! That is super useful information; sorry if I missed it somewhere, but it is the first I am seeing of it. Yes, it does seem to be the sigma then. I am not familiar with LayerNorms, and frankly still getting my head around PyTorch (but loving it) - do you mind helping me figure out how I can swap out the sigma in this case for something that is supported? Reading the docs I am not sure how it would swap out, but I do believe that an InstanceNorm or LayerNorm is what I am after. You can see in the main training loop that I output sigmas, though - would this be a problem if we were to change it as you describe? The implementation of TVAE I am using comes from this paper, if you have seen it, alongside the CTGAN presented: https://arxiv.org/pdf/1907.00503.pdf |
st183266 | I’m not a GAN expert but hopefully we can figure this out
In PyTorch, all normalization layers have the same formula (subtract the mean, divide by the STD). The only difference is in what you are normalizing against. See more here: In-layer normalization techniques for training very deep neural networks | AI Summer
The extra twist here is that you are trying to learn the normalization STD as a network parameter, which is something I have never seen before. You shouldn’t need a gradient for it! The STD is normally calculated by keeping a buffer, remembering the values you saw and computing it from there (like a running mean). Exposing it as a parameter means that you want to compute the gradient of the loss w.r.t. the STD, which is odd to me - but I’m not a GAN expert, so I don’t know if this is something people somehow do.
According to this part:
std = sigmas[st]
loss.append(((x[:, st] - torch.tanh(recon_x[:, st])) ** 2 / 2 / (std ** 2)).sum())
loss.append(torch.log(std) * x.size()[0])
it seems to me you are normalizing against the last dimension, so this seems akin to LayerNorm? I’m not sure I read that right though, so I’d recommend investigating deeper rather than just trusting me
Another thing to say is that in my experience, deep nets care that you do normalize, but what layer you choose tends to have little to no impact on the eventual results. So while you dig deeper into the formula to figure out the right layer to use, I do recommend you just put a LayerNorm there and let your computer crunch while you are looking at the paper. Don’t use Opacus for this! Just replace that part and keep everything the same, run it and see if you get similar performance. |
st183267 | Hi @HarrisonWilde
I’m also now dealing with the same issue that you were experiencing with converting TVAE to a differentially private tabular VAE. I’m interested to know whether you managed to get a fully working implementation after following the advice in this posting? |
st183268 | Hello!
I have a question about Gradient Clipping, that arises from the following principles of privacy accounting and DP-SGD:
The RDP calculation for each step in training is based on the ratio between the maximum norm bound of the gradients and the std. deviation of the noise being added to them. This ratio is known as the noise multiplier. As long as the ratio stays the same, the privacy guarantee for a given real-valued function does not change. So if I want to increase the maximum norm bound (sensitivity) of a real-valued function, the noise std. dev. just has to be scaled by the same amount to satisfy the same privacy. (See also Opacus Issue #11, as well as Proposition 7 / Corollary 3 in https://arxiv.org/pdf/1702.07476.pdf)
Given this, I want to discuss the following example:
Suppose (for the sake of simplicity) that I have chosen a norm bound of B = 1, and that the corresponding noise std. dev. sigma is also 1. The noise multiplier z = sigma/B = 1, and this real-valued function then satisfies (alpha, alpha / (2*z^2))-RDP.
Consider then the following two cases during training:
If the gradient is of size 1 at a particular step in training, the noise fits the exact sensitivity of the gradient and the privacy is accounted for in a reasonable way.
However, if the gradient is less than the norm bound, let’s say of size 0.5, the noise of scale 1 is suddenly too big for the now smaller sensitivity. As stated above, B and sigma could both be scaled down to 0.5 to satisfy the same privacy guarantee as before. Worded differently, if for this step we changed B = 0.5 (which is just as valid a clipping bound as 1 and yields the same update to the gradient) but kept sigma = 1, this would satisfy a different privacy guarantee while providing the same update to the parameters (as having B=1, sigma=1). More specifically, the guarantee should be equivalent to doubling z, resulting in (alpha, alpha / (2*(2*z)^2))-RDP = (alpha, alpha / (8*z^2))-RDP.
My question now is: is there an obvious reason why this is not considered in privacy accounting? It does not seem that the accountant takes into consideration the actual scale of the gradients or scales the noise accordingly. The clipping threshold and noise multiplier are constant hyperparameters to be freely chosen by the user of Opacus. Because these are constant, the noise that is added to the gradients is also always constant. As the sizes of the gradients most definitely are not, this leads me to believe that the privacy calculation would sometimes yield the first case, for which the noise is correctly scaled, and at other times the second case listed above, for which the noise is not accurate (or rather always pessimistic), so that we add too much noise for a given guarantee, hurting the model’s utility.
Could you address this concern and whether it is possible to mitigate this using something like an adaptive clipping bound/noise during training? Or is there something I am missing?
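Restating the scaling argument in formulas: the Gaussian mechanism with sensitivity B and noise std. dev. sigma satisfies (alpha, alpha * B^2 / (2 * sigma^2))-RDP, so B = sigma = 1 gives (alpha, alpha/2)-RDP, while accounting with the actual sensitivity B = 0.5 under the same sigma = 1 would give (alpha, alpha/8)-RDP.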
Thanks in advance! |
st183269 | Hi!
You are right: indeed the accountant does not consider the scale of gradients of each batch, but this is intentional. The TL;DR answer is that if we allow ourselves to look at the norm of each gradient, we would know when a batch contains an outlier. Even if we do eventually clip it, this knowledge is a further source of information leakage that would have to be addressed by our analysis.
If it’s helpful, let’s look at this from another angle: our guarantees will always have to be pessimistic, because by nature we need to protect the privacy of every sample, therefore we gotta make sure that whatever we do, we protect the privacy of each and every outlier. Let’s consider the DP definition and go back to the canonical example of an adversary that sees some outputs and needs to understand if they are seeing dataset D or dataset D’ (prime). They differ by a single example, so let’s say all examples are “easy” except from that one, which gets a huge gradient. This procedure of looking first and clipping later would output the same model, but now an adversary that sees the training loop as it trains would know who’s the outlier and beat us at the D vs D’ game.
That being said, this is potentially one of the things that we could do if we relaxed our guarantees by making less pessimistic assumptions on the capabilities of our adversary. It would be interesting to think about it, but to my knowledge it’s never been done |
st183270 | Is there any literature/documentation on the necessity of cryptographically-secure RNG in Opacus? As far as I can tell it’s only used for noise generation so would like to understand the underlying risks/compromises. It looks to me like TF-Privacy isn’t using a CSPRNG for its noise generation so am curious about the reasoning for the divergence. |
st183271 | Excellent question! There are two separate uses of CSPRNG in Opacus - for batch composition (shuffling or sampling) and for noise generation. If batches are formed using an insecure RNG, the algorithm is still differentially private, but we don’t get to claim amplification by subsampling.
It looks to me like TF-Privacy isn’t using a CSPRNG for its noise generation so am curious about the reasoning for the divergence.
It’s a pragmatic choice. Basically, TF does not natively support CSPRNG; Pytorch’s decision to implement it is in no small part due to its commitment to supporting Opacus.
I’ll add parenthetically, that, strictly speaking, DP is an information-theoretic notion. As such, it cannot be satisfied with “just” cryptographically secure PRNG. For purists (or complexity theorists) out there, Opacus meets the definition of Computational Differential Privacy. In practice, all randomness available on computers is computational, and I don’t believe drawing the distinction is productive at this point of technology development. |
st183272 | Hi,
Thanks for this great project! I want to adjust the learning rate in opacus. When I create scheduler before attaching optimizer just like the following code.
import torch
from torch.optim import SGD
from opacus import PrivacyEngine

model = Net()
optimizer = SGD(model.parameters(), lr=0.05)
# the scheduler is created before the privacy engine is attached
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)
privacy_engine = PrivacyEngine(
    model,
    batch_size,
    sample_size,
    alphas=[10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)
It raises "UserWarning: Seems like optimizer.step() has been overridden after learning rate scheduler initialization". What's the potential risk in the above code?
I also tried attaching the optimizer before learning rate scheduler initialization, and then the warning goes away. Is that necessary?
Any help is appreciated! |
st183274 | Hello @hkz,
Thank you for surfacing this quirk.
Opacus does indeed override the optimizer.step() when the privacy engine is attached to it. In the current implementation, we perform the per-sample gradient clipping and subsequently call the original step() function of the optimizer (opacus/privacy_engine.py at master · pytorch/opacus · GitHub). Therefore, your code should work correctly regardless of when you initialize the scheduler.
However, this is risky as it might not work correctly should the implementation of optimizer.step() change in a manner inconsistent with the scheduler’s expectations; although, I don’t see this happening.
Bottom line: heeding the warning is ideal and recommended. |
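Concretely, the recommended ordering looks like this (a sketch reusing the code from the question):
model = Net()
optimizer = SGD(model.parameters(), lr=0.05)

privacy_engine = PrivacyEngine(
    model,
    batch_size,
    sample_size,
    alphas=[10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)

# create the scheduler only after attaching, so it captures the
# already-wrapped optimizer.step() and the warning goes away
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)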
st183275 | Hi all.
I applied the opacus package to Unet training, but I don't know how to judge the privacy guarantee level of the Unet. Is it related to the output ε? Does an ε below a certain value mean that the privacy guarantee is good?
However, ε keeps increasing during the training process. Does this mean that the longer the training time, the lower the privacy protection level?
Sincerely looking forward to receiving your reply.
st183276 | Hi @liuwenshuang0211.
First of all, thanks for trying out opacus, we do appreciate it.
To better understand the concept of (ε, 𝛿)-differential privacy, I suggest starting with the FAQ section on our website; we have a paragraph on that: FAQ · Opacus
tl;dr - Epsilon defines the multiplicative difference between two output distributions based on two datasets, which differ in a single example. In other words, epsilon is a measure of how much difference a single example can make under the worst possible circumstances.
Delta, in turn, defines the probability of failure, in which we don’t uphold privacy guarantee defined by epsilon.
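Formally (the standard definition, stated here for reference): a mechanism M is (ε, 𝛿)-differentially private if for all neighboring datasets D, D′ and every set S of possible outputs,
Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + 𝛿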
Now, the question about threshold values for “good privacy guarantee” is tricky and hard.
For delta you typically want it to be orders of magnitude less than 1/N, where N is the number of entries in your dataset. Otherwise you don’t protect against full release of one entry.
For epsilon, the answer is “it depends”.
I'd suggest looking at papers to see what's considered reasonable for your task.
For example, on MNIST we’re able to get acceptable results with ε=1.19 and good results with ε=7.
In another example, the paper "Tempered Sigmoid Activations for Deep Learning with Differential Privacy" by Papernot et al. uses ε=3 for MNIST and ε=7 for CIFAR10.
There is also some evidence that the epsilon values reported by DP-SGD are on the very pessimistic end, and that the privacy observed in practice is much better (roughly shrinking epsilon by a factor of 10).
To summarise this part, you’d want to set your 𝛿 << 1/N, and to have your ε in single digits (but it depends)
As for the training process, the short answer is yes - the more training iterations you perform, the weaker privacy guarantee is. Bear in mind, however, that the relation is sub-linear.
For more math behind it I would suggest looking into the original DP-SGD paper (https://arxiv.org/pdf/1607.00133.pdf), specifically section 3.1 Differentially Private SGD Algorithm, subsection "Privacy Accounting".
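For completeness, here is how you would query the accumulated guarantee during training with the attach-style API used elsewhere in this thread (the exact call differs slightly across Opacus versions, so treat this as a sketch):
delta = 1e-5  # pick delta << 1/N for a dataset of N entries
epsilon, best_alpha = privacy_engine.get_privacy_spent(delta)
print(f"achieved (ε = {epsilon:.2f}, 𝛿 = {delta}) at α = {best_alpha}")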
st183277 | About losing privacy the longer you train, maybe an example can help internalize this concept.
Let's leave ML aside for now and imagine we are building something like Google Maps's feature that tells you how busy a store is depending on the time of day. To make this privacy-safe, you can use differential privacy: instead of submitting the actual position of a user down to the centimeter, you add some noise and randomize each user's position within a 3-meter radius. This will not alter your aggregate significantly, because if you have many users you will still be able to count correctly: the noise tends to cancel out over many, many users.
Now the problem is that the noise will also cancel out if we keep asking for your position over and over. This is what happens with DP-SGD too: the longer you keep training, the more the noise averages out.
st183278 | The fact that the more times you release differentially private estimates, the larger the accumulated privacy loss becomes is a common phenomenon in Differential Privacy and is generally attributed due to a very useful property in the differential privacy literature known as composition. Many of the well known differentially private algorithms rely on various composition theorems to actually limit the accumulated privacy loss. If you are interested, you can take a look at this paper : http://proceedings.mlr.press/v37/kairouz15.pdf 2 for more information on composition theorems. |
st183279 | Hi Opacus community,
I am looking for experiences / best practices for using DP with transfer learning. Let’s say a hospital decides to build a DP image classification model based on patient data.
Three scenarios come to mind:
1. baseline network pre-trained on a public dataset (from a similar domain, e.g. x-rays)
2. baseline network pre-trained on existing data of the hospital (classic, non-private way)
3. baseline network pre-trained with DP on existing data of the hospital
I would assume that generally all approaches could make sense, because fewer epochs are probably required compared to training from scratch (hence spending less privacy budget).
I would appreciate learning your experiences / thoughts about these scenarios.
Thanks
Andreas |
st183280 | Hello @Andreas_Kopp !
With transfer learning, there are two separate datasets to consider: one used for training the baseline model and another for fine-tuning/training your final model.
In scenario 1, the privacy of the public dataset is not preserved, but the privacy of the data used to train your final model is preserved if you train with DP.
In scenario 2, just as in scenario 1, the privacy of the dataset used to train your baseline model is NOT preserved; only that of the data used to train your final model is. If preserving the privacy of the data used to train your baseline model is not important for your use case, this might result in better accuracy than scenario 1 (depending, of course, on your task as well as on the size and distribution of this data compared to the public dataset).
In scenario 3, the privacy of all the data is preserved, but the accuracy will be very low.
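To make scenario 1 concrete, here is a rough sketch with the attach-style PrivacyEngine API used elsewhere on this forum (the dataset size and classification head are hypothetical; note also that DP-SGD cannot handle BatchNorm, so a real baseline's BatchNorm layers would first need to be replaced, e.g. with GroupNorm):
import torch
from torchvision import models
from opacus import PrivacyEngine

model = models.resnet18(pretrained=True)             # baseline trained on public data
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new head for the hospital task

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
privacy_engine = PrivacyEngine(
    model,
    batch_size=64,
    sample_size=50000,  # hypothetical size of the hospital's fine-tuning set
    alphas=[10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)  # DP now protects only the fine-tuning data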
Hope this adds some clarity. |
st183281 | Hi all,
I am wondering if anyone has figured out any “standard” hyperparameter configurations for obtaining the best accuracy at a given epsilon privacy level on CIFAR-10?
If so, it might be a good idea to include them in the tutorial scripts?
Thanks in advance! |
st183282 | I don’t think these experiments are published yet, but am sure that any contribution is welcome.
Would you be interested in working on such a tutorial? |
st183283 | I have done some hp sweeps and am planning to do some more over the holidays. I don’t expect them to be entirely exhaustive but they’d be a good starting place. I’ll share in early 2021. Thanks! |
st183284 | Hi @tginart! Sorry for the delay in getting back to you, we are still getting used to the forums ourselves and figuring out how to setup notifications
We are still in the process of figuring it out ourselves. Getting >70% top-1 accuracy for reasonable privacy levels (let's say under ε=10 at 𝛿=1e-5) is quite hard and an area of active development.
One shortcut could be to pretrain without DP: in this case, you can get there quite easily. See this colab: bit.ly/opacus-dev-day.
In another experiment, we were able to get to 68% without any pretraining using a smaller model, but we are still in the process of cleaning up the results before we publish them.
Looking forward to seeing more experiments and results! We would indeed welcome PRs, or tutorials. Even if you want to write it externally, we’d still be happy to link to it in our resources page |
st183285 | Hello, @Darktex!
Sorry for the late reply. Somehow this got buried in my notifications before the new year.
With regards to using smaller models — since the privacy budget can depend on the parameter count, it does stand to reason that a smaller model might give you more “bang for buck.” I actually tested this out a little bit by comparing MobileNetV2 to ResNet18 on a few runs but I wasn’t able to get a major improvement (granted, I could’ve tried much harder).
I’m looking forward to seeing your results on this! I’ll keep you posted on anything from my end as well. I think it would be useful/cool to maintain a sort of “state-of-the-art” leaderboard for provably DP models at a given (eps,delta). I could help kick this off with some of my experiments, although 68% is already significantly better than anything I’ve obtained. My best runs don’t even make it to 60%. |
st183286 | This category is for questions, discussion and issues related to PyTorch’s quantization feature.
For more information, see: https://github.com/pytorch/pytorch/issues/18318
st183287 | PyTorch version: 1.10
OS: Ubuntu x64
Problem:
I used the static quantization method from the docs, trying to speed up model inference time:
(beta) Static Quantization with Eager Mode in PyTorch — PyTorch Tutorials 1.10.1+cu102 documentation
But the results were strange: the inference time did not decrease but instead increased about 4x (the model size reduction is fine).
Below are the original model, the quantized model, and my quantization code.
Original model
QuantizedDDRNet(
(conv1): Stem(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(32, 64, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer5): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer3_): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer4_): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(layer5_): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
)
(compression3): ConvBN(
(0): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(compression4): ConvBN(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(down3): ConvBN(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(down4): Conv2BN(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(spp): DAPPM(
(scale1): Scale(
(0): AvgPool2d(kernel_size=5, stride=2, padding=2)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(scale2): Scale(
(0): AvgPool2d(kernel_size=9, stride=4, padding=4)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(scale3): Scale(
(0): AvgPool2d(kernel_size=17, stride=8, padding=8)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(scale4): Scale(
(0): AvgPool2d(kernel_size=1, stride=1, padding=0)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(scale0): ConvModule(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(process1): ConvModule(
(0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
(process2): ConvModule(
(0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
(process3): ConvModule(
(0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
(process4): ConvModule(
(0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
(compression): ConvModule(
(0): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(shortcut): ConvModule(
(0): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
(seghead_extra): SegHead(
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 6, kernel_size=(1, 1), stride=(1, 1))
)
(final_layer): SegHead(
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 6, kernel_size=(1, 1), stride=(1, 1))
)
(quant): QuantStub()
(dequant): DeQuantStub()
(add): FloatFunctional(
(activation_post_process): Identity()
)
)
========================================= PERFORMANCE =============================================
Size of the model(MB): 23.121037
Elapsed time = 134.1410 milliseconds
====================================================================================================
Quantized model
QuantizedDDRNet(
(conv1): Stem(
(0): QuantizedConv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1))
(1): QuantizedBNReLU2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): Identity()
(3): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1))
(4): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(layer1): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): QuantizedConv2d(32, 64, kernel_size=(1, 1), stride=(2, 2), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): QuantizedConv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): QuantizedConv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer5): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(bn1): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): QuantizedConv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(bn3): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): QuantizedConv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer3_): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer4_): Sequential(
(0): BasicBlock(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(1): BasicBlock(
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(layer5_): Sequential(
(0): Bottleneck(
(conv1): QuantizedConv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): QuantizedConv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(bn3): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): QuantizedConv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
)
(compression3): ConvBN(
(0): QuantizedConv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(compression4): ConvBN(
(0): QuantizedConv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(down3): ConvBN(
(0): QuantizedConv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(down4): Conv2BN(
(0): QuantizedConv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): QuantizedConv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(4): QuantizedBatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(spp): DAPPM(
(scale1): Scale(
(0): AvgPool2d(kernel_size=5, stride=2, padding=2)
(1): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(scale2): Scale(
(0): AvgPool2d(kernel_size=9, stride=4, padding=4)
(1): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(scale3): Scale(
(0): AvgPool2d(kernel_size=17, stride=8, padding=8)
(1): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(scale4): Scale(
(0): AvgPool2d(kernel_size=1, stride=1, padding=0)
(1): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(scale0): ConvModule(
(0): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(process1): ConvModule(
(0): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
)
(process2): ConvModule(
(0): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
)
(process3): ConvModule(
(0): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
)
(process4): ConvModule(
(0): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
)
(compression): ConvModule(
(0): QuantizedBatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(shortcut): ConvModule(
(0): QuantizedBatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU()
(2): QuantizedConv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
)
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
(seghead_extra): SegHead(
(bn1): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv1): QuantizedConv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(final_layer): SegHead(
(bn1): QuantizedBatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv1): QuantizedConv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(bn2): QuantizedBatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): QuantizedConv2d(64, 6, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(quant): Quantize(scale=tensor([1.]), zero_point=tensor([0]), dtype=torch.quint8)
(dequant): DeQuantize()
(add): QFunctional(
scale=1.0, zero_point=0
(activation_post_process): Identity()
)
)
========================================= PERFORMANCE =============================================
Size of the model(MB): 6.170369
Elapsed time = 420.7537 milliseconds
====================================================================================================
Moreover, I also tried torch.fx's static quantization and got the same inference time increase. What happened?
I also tried the method from the thread below, but it didn't work:
INT8 quantized model is much slower than fp32 model on CPU - quantization - PyTorch Forums
Hope someone can help, thanks!
st183288 | Quantization code:
modules_to_fuse = [['conv1.0', 'conv1.1', 'conv1.2'],
['conv1.3', 'conv1.4', 'conv1.5'],
['layer1.0.conv1', 'layer1.0.bn1'],
['layer1.0.conv2', 'layer1.0.bn2'],
['layer1.1.conv1', 'layer1.1.bn1'],
['layer1.1.conv2', 'layer1.1.bn2'],
['layer2.0.conv1', 'layer2.0.bn1'],
['layer2.0.conv2', 'layer2.0.bn2'],
['layer2.0.downsample.0', 'layer2.0.downsample.1'],
['layer2.1.conv1', 'layer2.1.bn1'],
['layer2.1.conv2', 'layer2.1.bn2'],
['layer3.0.conv1', 'layer3.0.bn1'],
['layer3.0.conv2', 'layer3.0.bn2'],
['layer3.0.downsample.0', 'layer3.0.downsample.1'],
['layer3.1.conv1', 'layer3.1.bn1'],
['layer3.1.conv2', 'layer3.1.bn2'],
['layer4.0.conv1', 'layer4.0.bn1'],
['layer4.0.conv2', 'layer4.0.bn2'],
['layer4.0.downsample.0', 'layer4.0.downsample.1'],
['layer4.1.conv1', 'layer4.1.bn1'],
['layer4.1.conv2', 'layer4.1.bn2'],
['layer5.0.conv1', 'layer5.0.bn1'],
['layer5.0.conv2', 'layer5.0.bn2'],
['layer5.0.conv3', 'layer5.0.bn3'],
['layer5.0.downsample.0', 'layer5.0.downsample.1'],
['layer3_.0.conv1', 'layer3_.0.bn1'],
['layer3_.0.conv2', 'layer3_.0.bn2'],
['layer3_.1.conv1', 'layer3_.1.bn1'],
['layer3_.1.conv2', 'layer3_.1.bn2'],
['layer4_.0.conv1', 'layer4_.0.bn1'],
['layer4_.0.conv2', 'layer4_.0.bn2'],
['layer4_.1.conv1', 'layer4_.1.bn1'],
['layer4_.1.conv2', 'layer4_.1.bn2'],
['layer5_.0.conv1', 'layer5_.0.bn1'],
['layer5_.0.conv2', 'layer5_.0.bn2'],
['layer5_.0.conv3', 'layer5_.0.bn3'],
['layer5_.0.downsample.0', 'layer5_.0.downsample.1'],
['compression3.0', 'compression3.1'],
['compression4.0', 'compression4.1'],
['down3.0', 'down3.1'],
['down4.0', 'down4.1', 'down4.2'],
['down4.3', 'down4.4'],
['spp.scale1.1', 'spp.scale1.2'],
['spp.scale2.1', 'spp.scale2.2'],
['spp.scale3.1', 'spp.scale3.2'],
['spp.scale4.1', 'spp.scale4.2'],
['spp.scale0.0', 'spp.scale0.1'],
['spp.process1.0', 'spp.process1.1'],
['spp.process2.0', 'spp.process2.1'],
['spp.process3.0', 'spp.process3.1'],
['spp.process4.0', 'spp.process4.1'],
['spp.compression.0', 'spp.compression.1'],
['spp.shortcut.0', 'spp.shortcut.1'],
['seghead_extra.conv1', 'seghead_extra.bn2'],
['final_layer.conv1', 'final_layer.bn2']
]
fused_model = torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False)
fused_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_prepared = torch.quantization.prepare(fused_model)
# Calibrate
# ...
quantized_model = torch.quantization.convert(model_prepared) |
st183289 | Yeah, probably. Here is the current backend support: Quantization — PyTorch 1.10 documentation
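If it helps to rule the backend in or out, a quick check might look like this (fbgemm is the x86 server backend, qnnpack the ARM/mobile one):
import torch

print(torch.backends.quantized.supported_engines)  # e.g. ['none', 'fbgemm', 'qnnpack']
# the runtime engine must match the backend the qconfig was created for
torch.backends.quantized.engine = 'fbgemm'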
st183290 | Hi,
I'm trying to quantize the DETR model but I ran into this error:
/usr/local/lib/python3.7/dist-packages/torch/nn/quantized/modules/embedding_ops.py in from_float(cls, mod)
    150         dtype = weight_observer.dtype
    151
--> 152         assert dtype == torch.quint8, 'The only supported dtype for nnq.Embedding is torch.quint8'
    153
    154         # Run the observer to calculate qparams.
AssertionError: The only supported dtype for nnq.Embedding is torch.quint8
st183291 | Can you set the dtype of the weight observer to torch.quint8? I think the default is torch.qint8. Can you paste the code that's used to quantize your model?
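In case it's useful, the usual fix is to give the embedding layers their own qconfig whose weight observer uses quint8. A sketch (adjust to wherever the embeddings live in your model):
import torch
from torch.quantization import get_default_qconfig, float_qparams_weight_only_qconfig

model.qconfig = get_default_qconfig('fbgemm')
# nnq.Embedding only supports quint8 weights, so override the qconfig
# for every nn.Embedding before calling prepare()/convert()
for module in model.modules():
    if isinstance(module, torch.nn.Embedding):
        module.qconfig = float_qparams_weight_only_qconfig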
st183292 | I want to quantize segmentation models like u2net (u2net model GitHub).
u2net structure is like below:
...
...
...
def forward(self, x):
    hx = x
    # stage 1
    hx1 = self.stage1(hx)
    hx = self.pool12(hx1)
    # stage 2
    hx2 = self.stage2(hx)
    hx = self.pool23(hx2)
    # stage 3
    hx3 = self.stage3(hx)
    hx = self.pool34(hx3)
    # stage 4
    hx4 = self.stage4(hx)
    hx = self.pool45(hx4)
    # stage 5
    hx5 = self.stage5(hx)
    hx = self.pool56(hx5)
    # stage 6
    hx6 = self.stage6(hx)
    hx6up = _upsample_like(hx6, hx5)
    # -------------------- decoder --------------------
    hx5d = self.stage5d(torch.cat((hx6up, hx5), 1))
    hx5dup = _upsample_like(hx5d, hx4)
    hx4d = self.stage4d(torch.cat((hx5dup, hx4), 1))
    hx4dup = _upsample_like(hx4d, hx3)
    hx3d = self.stage3d(torch.cat((hx4dup, hx3), 1))
    hx3dup = _upsample_like(hx3d, hx2)
    hx2d = self.stage2d(torch.cat((hx3dup, hx2), 1))
    hx2dup = _upsample_like(hx2d, hx1)
    hx1d = self.stage1d(torch.cat((hx2dup, hx1), 1))
    # side output
    d1 = self.side1(hx1d)
    d2 = self.side2(hx2d)
    d2 = _upsample_like(d2, d1)
    d3 = self.side3(hx3d)
    d3 = _upsample_like(d3, d1)
    d4 = self.side4(hx4d)
    d4 = _upsample_like(d4, d1)
    d5 = self.side5(hx5d)
    d5 = _upsample_like(d5, d1)
    d6 = self.side6(hx6)
    d6 = _upsample_like(d6, d1)
    d0 = self.outconv(torch.cat((d1, d2, d3, d4, d5, d6), 1))
    return F.sigmoid(d0), F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6)
It has 7 outputs!
I defined the Quantizedu2net class like below:
class Quantizedu2net(nn.Module):
    def __init__(self, model_fp32):
        super(Quantizedu2net, self).__init__()
        # QuantStub converts tensors from floating point to quantized.
        # This will only be used for inputs.
        self.quant = torch.quantization.QuantStub()
        # DeQuantStub converts tensors from quantized to floating point.
        # This will only be used for outputs.
        self.dequant = torch.quantization.DeQuantStub()
        # FP32 model
        self.model_fp32 = model_fp32

    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.model_fp32(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x
but when I want to save the quantized model, I get this error:
forward(__torch__.torch.nn.quantized.modules.DeQuantize self, Tensor Xq) -> (Tensor):
Expected a value of type 'Tensor (inferred)' for argument 'Xq' but instead found type 'Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]'.
Inferred 'Xq' to be of type 'Tensor' because it was not annotated with an explicit type.
:
File "/home/segmentation_u2net/func_u2_net_v1.py", line 1086
# manually specify where tensors will be converted from quantized
# to floating point in the quantized model
x = self.dequant(x)
~~~~~~~~~~~~ <--- HERE
return x
Can anyone help me?
st183293 | Yeah, you would need to modify the original model instead of wrapping it inside self.model_fp32 here, I think. Or, if you don't have access to the original model, you could unpack the output of self.model_fp32(x) and add a dequant stub for each output that needs to be dequantized.
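For the second option, the wrapper's forward could unpack the 7-tuple and dequantize each element instead of passing the tuple itself through the DeQuantStub. A minimal sketch, reusing the names from the code above:
def forward(self, x):
    x = self.quant(x)
    # self.model_fp32 returns a 7-tuple of tensors
    outputs = self.model_fp32(x)
    # dequantize each output individually
    return tuple(self.dequant(out) for out in outputs)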
st183294 | Hi,
I'm trying to quantize an EfficientDet model but I ran into this error:
    400         if self.padding_mode == 'zeros':
    401             self._packed_params = torch.ops.quantized.conv2d_prepack(
--> 402                 w, b, self.stride, self.padding, self.dilation, self.groups)
    403         else:
    404             self._packed_params = torch.ops.quantized.conv2d_prepack(
RuntimeError: stride should contain 2 elements for 2D convolution.
Git repo: GitHub - zylo117/Yet-Another-EfficientDet-Pytorch: The pytorch re-implement of the official efficientdet with SOTA performance in real time and pretrained weights.
st183296 | Haven’t found much for conv2d_prepack, but have you tried passing a tuple as stride?
self._packed_params = torch.ops.quantized.conv2d_prepack(w, b, (self.stride, self.stride), self.padding, self.dilation, self.groups)
If that is the issue, you might need to pass padding as a tuple as well. |
st183297 | I want to implement a quantized network in pure C. One of my purposes is to get a full understanding of how the operations on quantized tensors work.
I did PTQ with a ResNet-18 architecture and got good accuracy with the fbgemm backend.
Now I am struggling to replicate the operations. I decided that the simplest one to start with is the addition in the ResNet block.
self.skip_add = nn.quantized.FloatFunctional()
And during inference I can add two tensors via
out1 = self.skip_add.add(x1, x2)
where x1 and x2 are tensors of torch.Tensor type, quantized with the fbgemm backend during the post-training quantization procedure.
I expected out2_int = x1.int_repr() + x2.int_repr() to be the same as out1.int_repr() (probably with clamping to the needed range). However, that is not the case.
Can anyone please provide me with any information on how to implement operations with quantized tensors?
Below I dump the example outputs.
print(x1)
...,
[-0.0596, -0.0496, -0.1390, ..., -0.0596, -0.0695, -0.0099],
[-0.0893, 0.0000, -0.0695, ..., 0.0596, -0.0893, -0.0298],
[-0.1092, 0.0099, 0.0000, ..., -0.0397, -0.0794, -0.0199]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.009925744496285915,
zero_point=75)
print(x2)
...,
[ 0.1390, -0.1669, -0.0278, ..., -0.2225, -0.0556, -0.1112],
[ 0.0000, -0.1669, -0.0556, ..., 0.0556, 0.1112, -0.2781],
[ 0.1390, 0.1669, 0.0278, ..., 0.2225, 0.4171, 0.0834]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.02780967578291893,
zero_point=61)
print(x1.int_repr())
...,
[69, 70, 61, ..., 69, 68, 74],
[66, 75, 68, ..., 81, 66, 72],
[64, 76, 75, ..., 71, 67, 73]]]], dtype=torch.uint8)
print(x2.int_repr())
...,
[66, 55, 60, ..., 53, 59, 57],
[61, 55, 59, ..., 63, 65, 51],
[66, 67, 62, ..., 69, 76, 64]]]], dtype=torch.uint8)
print(self.skip_add.add(x1, x2))
...,
[ 0.0904, -0.2109, -0.1808, ..., -0.2712, -0.1205, -0.1205],
[-0.0904, -0.1808, -0.1205, ..., 0.1205, 0.0301, -0.3013],
[ 0.0301, 0.1808, 0.0301, ..., 0.1808, 0.3314, 0.0603]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.03012925386428833,
zero_point=56)
print(self.skip_add.add(x1, x2).int_repr())
...,
[59, 49, 50, ..., 47, 52, 52],
[53, 50, 52, ..., 60, 57, 46],
[57, 62, 57, ..., 62, 67, 58]]]], dtype=torch.uint8)
print(x1.int_repr() + x2.int_repr())
[135, 125, 121, ..., 122, 127, 131],
[127, 130, 127, ..., 144, 131, 123],
[130, 143, 137, ..., 140, 143, 137]]]], dtype=torch.uint8) |
st183298 | Solved by HDCharles in post #7
It looks to me like it's just dequantizing, adding together, and then requantizing:
import torch
x = torch.randn(10,10)
y = torch.randn(10,10)
#arbitrary scales and zero_points
zp_x = 1
zp_y = 2
zp_z = 3
s_x = .1
s_y = .2
s_z = .3
#quantize tensors
xq = torch.quantize_per_tensor(x, s_x, zp_x, to… |
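For reference, a self-contained version of that dequantize-add-requantize check might look like this (arbitrary qparams; torch.ops.quantized.add is the underlying kernel with explicit output qparams, so the two results should agree up to rounding at the clamp boundaries):
import torch

x, y = torch.randn(10, 10), torch.randn(10, 10)
s_x, zp_x = 0.1, 1
s_y, zp_y = 0.2, 2
s_z, zp_z = 0.3, 3  # output qparams, normally chosen by the observer

xq = torch.quantize_per_tensor(x, s_x, zp_x, torch.quint8)
yq = torch.quantize_per_tensor(y, s_y, zp_y, torch.quint8)

# PyTorch's quantized add with explicit output qparams
ref = torch.ops.quantized.add(xq, yq, s_z, zp_z).int_repr()

# manual path: dequantize, add in float, requantize with the output qparams
manual = torch.quantize_per_tensor(
    xq.dequantize() + yq.dequantize(), s_z, zp_z, torch.quint8
).int_repr()

print(torch.equal(ref, manual))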
st183299 | I found that I had a misconception about quantized values: a byte value is only a quantized representation within its own tensor's specific domain (its scale and zero point). I used the following paper as a guide: https://arxiv.org/pdf/1712.05877.pdf (it is a little superior to pytorch's implementation because it provides integer-arithmetic-only quantization, whereas many operations in pytorch quantization stay float).
With that I was able to successfully reimplement the convolution and fully connected layers, but nn.quantized.FloatFunctional().add() still resists implementation. I thought it would be the easiest layer, heh.
At the moment I tried the following reasoning:
S_3(q_3 - Z_3) = S_1(q_1 - Z_1) + S_2(q_2 - Z_2)
from which
q_3 = [S_1(q_1 - Z_1) + S_2(q_2 - Z_2)] / S_3 + Z_3
In that case almost half of the elements come out the same as in pytorch, but the other half can change drastically. A lot of misses come from values that need to be clipped.
import numpy as np

def manual_addition(x1, x2, add_layer):
    # add_layer is of type nn.quantized.FloatFunctional()
    q1 = x1.int_repr().numpy()
    q2 = x2.int_repr().numpy()
    z1 = x1.q_zero_point()
    z2 = x2.q_zero_point()
    s1 = x1.q_scale()
    s2 = x2.q_scale()
    z3 = add_layer.zero_point
    s3 = add_layer.scale
    # affine arithmetic: dequantize both inputs, add, requantize with output qparams
    q3_1 = s1 * (q1 - z1)
    q3_2 = s2 * (q2 - z2)
    qres = q3_1 + q3_2
    qres = qres / s3 + z3
    q3_int32 = qres.round()
    q3 = q3_int32.clip(0, 255).astype(np.uint8)  # many misses are from clipped values

    gt_res = add_layer.add(x1, x2)
    gt_res_int = gt_res.int_repr().numpy()
    calc_hit_rate(q3, gt_res_int)  # hit rate = 0.51
If anyone has any clue about how the clipping should be done here, I would appreciate your help so much.