st81168
I don’t think this is possible using a tensor, since e.g. the strides would be hard (impossible) to handle. Python containers (e.g. list) are quite flexible regarding different types of elements, but you won’t be able to apply e.g. a matrix multiplication on this “array”. If you are running out of memory, you could have a look at torch.utils.checkpoint to trade compute for memory or e.g. apex/amp for mixed precision training.
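For reference, here is a minimal sketch of what checkpointing looks like (the module and shapes are made up for illustration):

import torch
from torch.utils.checkpoint import checkpoint

# Activations inside `block` are not stored during the forward pass and are
# recomputed during backward, trading compute for memory.
block = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU())
x = torch.randn(4, 128, requires_grad=True)
out = checkpoint(block, x)
out.sum().backward()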
st81169
Thanks for the answer. I’m back here to share my progress on this problem. Back to the problem: element-wise (pixel-wise) different precision in a tensor (image). For example, look at this picture: (image: light-4297386_1920.jpg, a lamp in focus against a blurred background). Although the background of the image is not well focused, a human can still recognize that the object in this image is a lamp. On a similar principle, can object-detection networks recognize objects from an image that has different precision (resolution) pixel-wise? Moreover, can we get a memory-reduction effect from this? This is why I suggested this problem. Here is my progressed idea. I made a helper function from this forum:

# returns the actual allocated memory size of the given tensor
def get_actual_memsize(tensor):
    return tensor.element_size() * tensor.nelement()

To test my idea, I made a sample tensor:

B, C, H, W = 1, 10, 224, 224
tensor = torch.randn(B, C, H, W)
print(tensor.type())
print(get_actual_memsize(tensor))

This returns:

torch.FloatTensor
2007040

I decomposed the tensor based on each element’s value, like this:

threshold = 0.2
important_region = torch.ge(tensor, threshold)
unimportant_region = ~important_region  # just the complement
unimportant_values = torch.masked_select(tensor, unimportant_region).to(torch.float16)  # precision reduction
important_values = torch.masked_select(tensor, important_region)
print(unimportant_values.type())
print(important_values.type())
print(get_actual_memsize(unimportant_values)
      + get_actual_memsize(important_values)
      + get_actual_memsize(unimportant_region))

and the memory was reduced, 2007040 -> 1927504:

torch.HalfTensor
torch.FloatTensor
1927504

Since the sample tensor is small (its size is [1, 10, 224, 224]), I think the reduction effect would be much larger for bigger tensors. Now I’m trying to restore the tensor from these decomposed components, which is quite confusing. I’m looking for a suitable PyTorch API (like masked_fill), but everything I found is unsuitable. This is my current idea; if you have a similar idea or know a suitable PyTorch API, please tell me how to restore the tensor.
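A sketch of one possible restoring step, using boolean mask assignment with the tensors defined above (this rebuilds a plain float32 tensor, so the half-precision values come back only up to fp16 rounding):

restored = torch.empty_like(tensor)
restored[important_region] = important_values
restored[unimportant_region] = unimportant_values.to(torch.float32)
print(torch.allclose(restored, tensor, atol=1e-2))  # True up to fp16 rounding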
st81170
FruitVinegar: Now I’m trying to restore the tensor from these decomposed components. It’s quite confusing. I’m looking for a suitable PyTorch API (like masked_fill), but everything I found is unsuitable. That’s the pitfall I tried to mention in my last post. I’m not sure there is a clean way to reconstruct an array (tensor) with mixed-precision data types, as the strides would change based on the current element type. Let’s think about a plain tensor:

x = torch.tensor([[0., 1., 2., 3.],
                  [4., 5., 6., 7.]])
print(x.size())
> torch.Size([2, 4])
print(x.stride())
> (4, 1)
print(x.nelement())
> 8
print(x.element_size())
> 4

The stride shows how many steps of element_size() bytes you would have to skip in the contiguous tensor to get to the next index in the corresponding dimension. E.g. to index the 5 in x, you would use x[1, 1], which corresponds to an offset of 4+1 elements from the beginning of the data. Also, as you can see, element_size() is defined per tensor, which is what makes indexing (and shape changes) via strides possible. If you used different data types within one tensor, you would need to know the size of each element to index any value in it. That being said, this approach won’t work with the strided layout using contiguous memory chunks. There might be some other data layout I’m not aware of, so please let me know if you find something.
st81171
I have been using ResNet for transfer learning, but wanted to try Inception to see if I’d get better results. My way of doing this for ResNet is as follows:

class ResNet18(nn.Module):
    def __init__(self, orig_model):
        super(ResNet18, self).__init__()
        self.drop = nn.Dropout2d(0.5).to(device)
        self.bn = nn.BatchNorm2d(512).to(device)
        self.bn2 = nn.BatchNorm1d(256)
        self.bn3 = nn.BatchNorm1d(100)
        self.orig = nn.Sequential(*(list(orig_model.children())[:-1])).to(device)
        for param in self.orig.parameters():
            param.requires_grad = True
        # Replace the last fully-connected layer
        # Parameters of newly constructed modules have requires_grad=True by default
        self.fc = nn.Linear(512, 256).to(device)
        self.fc2 = nn.Linear(256, 100).to(device)
        self.fc3 = nn.Linear(100, 2).to(device)

    def forward(self, x):
        x = self.orig(x)
        x = self.bn(x)
        x = x.view(x.size(0), -1)
        x = F.relu(self.bn2(self.fc(x)))
        x = self.drop(x)
        x = F.relu(self.bn3(self.fc2(x)))
        x = self.drop(x)
        x = F.relu(self.fc3(x))
        p = F.softmax(x, dim=1)
        return x, p

However, when I try this for Inception V3, I get errors. I have seen other solutions that show how to manipulate the last FC layer, but I want to add more layers, and I haven’t seen a thread with a concrete example of this.
st81172
The forward method of Inception uses some functional API calls, which will be missed if you wrap all submodules in an nn.Sequential container. The better approach would be to derive your own class using Inception as the parent class and add your layers there.
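A rough, untested sketch of that approach (layer sizes are illustrative; loading pretrained weights would be a separate step):

import torch
import torch.nn as nn
from torchvision import models

class MyInception(models.Inception3):
    def __init__(self, num_classes=2):
        super().__init__(aux_logits=False)
        # forward() calls self.fc at the end, so the extra layers can live here
        self.fc = nn.Sequential(
            nn.Linear(2048, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

model = MyInception()
out = model(torch.randn(2, 3, 299, 299))  # Inception v3 expects 299x299 inputs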
st81173
Thank you. In this class, would I have to add the layers within the container, similar to this?
st81174
I’m training my model, and at the third epoch I got a RuntimeError: CUDA error: device-side assert triggered. Here is the code where the error occurred:

# epoch loop
for i, (batch_z_16, batch_z_32, batch_z_48, batch_label) in enumerate(data_loader_validation):
    batch_z_16 = batch_z_16.to(device=device, dtype=torch.float)  # Exception here

And I also got messages in the terminal (using VS Code) as below:

C:/w/1/s/tmp_conda_3.6_035809/conda/conda-bld/pytorch_1556683229598/work/aten/src/THCUNN/BCECriterion.cu:57: block: [0,0,0], thread: [7,0,0] Assertion `*input >= 0. && *input <= 1.` failed.
(the same assertion repeated for threads [12,0,0], [15,0,0], [27,0,0] and [30,0,0])

My questions are: Why doesn’t it occur at the same epoch, and why doesn’t it occur at the first epoch? (I tried several times and found it occurs at a different epoch each time; sometimes it doesn’t occur at all.) It is a device-side error, so can I fix it as a user? (What does device-side mean?) What exactly is it: a problem of CUDA, of PyTorch, or of my Python code? How can I make sure it never happens?
st81175
Hi, It looks like an error because some inputs to your BCECriterion are wrong. This may not happen in all epochs if you drop the last partial batch, for example, or if you generate these inputs on the fly and some of them are wrong. It’s a device-side assert that says that what you gave as input does not satisfy some condition. So yes, you can fix it by giving correct inputs to that function. This kind of error is raised when a CUDA kernel detects a problem. CUDA is asynchronous, so unless you start your code with the CUDA_LAUNCH_BLOCKING=1 environment variable, the Python stack trace is wrong. Unfortunately, because of how CUDA works, we can’t make these errors much more user-friendly. The error is in the file BCECriterion.cu, so it comes from a call to BCELoss (or a criterion that uses it). You can see the assertion *input >= 0. && *input <= 1.; you might want to check that your values are in [0, 1] as expected from the docs.
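For illustration, a minimal sketch of the usual fix (tensor names are made up):

import torch
import torch.nn as nn

logits = torch.randn(8, 1)                     # raw network outputs
targets = torch.randint(0, 2, (8, 1)).float()

# BCELoss requires its inputs to be in [0, 1]; a sigmoid guarantees that.
loss = nn.BCELoss()(torch.sigmoid(logits), targets)

# More numerically stable: let the loss apply the sigmoid internally.
loss = nn.BCEWithLogitsLoss()(logits, targets)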
st81176
Hi, I installed the latest stable version of PyTorch but I kept getting the error below, so I asked my admin to upgrade my NVIDIA driver to the latest stable version, 430. (screenshot of the original error) Now I am getting a new error: (screenshot of the new error) I need some help fixing this, or downgrading my PyTorch.
st81177
Solved by jmandivarapu1 in post #9.
st81178
Which driver does nvidia-smi show? Could you reinstall PyTorch after updating the driver? I’m not sure if that helps in this use case, but it could be an easy solution.
st81179
ptrblck: nvidia-smi It didn’t work. I uninstalled torch and torchvision and installed them again. My NVIDIA driver version is 430.26. (screenshot of the nvidia-smi output)
st81180
I analyzed it a bit and figured out that torch.cuda.is_available() is False after the upgrade to the latest PyTorch (1.2.0) and to the stable NVIDIA driver 430.26. Is the latest PyTorch version not compatible with this NVIDIA driver? Should I downgrade and ask all my team members to stop upgrading to the latest PyTorch?
st81181
Fixed it. The main reason is that once you update your NVIDIA GPU drivers, you need to reinstall CUDA. We need to uninstall CUDA and PyTorch and install them again; then it works.
st81182
There is an implementation of logsumexp in PyTorch, but there is no logcumsumexp. How can it be implemented efficiently and stably in PyTorch?
st81183
Solved by agadetsky in post #3.
st81184
Hello Artyom! agadetsky: There is an implementation of logsumexp in pytorch, but there is no logcumsumexp. How can it be implemented efficiently and stable in pytorch? As it stands, it cannot, other than someone writing it. You presumably understand the numerical issues with calculating logsumexp. (See, for example, the discussion in Wikipedia’s LogSumExp article.) So if pytorch had, for example, a cummax tensor function, you could implement logcumsumexp using pytorch tensor functions. But this doesn’t exist (yet). See:
https://stackoverflow.com/questions/55665624/vectorized-implementation-of-cumulative-maximum-in-pytorch-with-requires-grad-tr
https://github.com/pytorch/pytorch/issues/20240
and https://discuss.pytorch.org/t/sliding-max-over-dimension/49799
So, short of writing the logcumsumexp (or related) tensor function “from scratch,” you would have to use a loop to get the “running maximum” (cummax) part, thus forgoing some of the efficiency provided by using just tensor functions. Good luck. K. Frank
st81185
@KFrank Thank you for your answer. I created a PR; maybe someone will add cummax and logcumsumexp to ATen so they are efficient. For the moment I ended up with the following implementation; maybe someone will need it in the future.

import torch
import numpy as np

def cummax(x, dim):
    x_np = x.detach().cpu().numpy()
    ret = np.maximum.accumulate(x_np, axis=dim)
    return torch.from_numpy(ret).to(x)

def logcumsumexp(x, dim=-1):
    # transpose only if dim is not already the last dimension
    if (dim != -1) and (dim != x.ndimension() - 1):
        x = x.transpose(dim, -1)
    init_size = x.size()
    last_dim_size = init_size[-1]
    x_resized = x.contiguous().view(-1, last_dim_size)
    d1, d2 = x_resized.size()
    x_cummax = cummax(x_resized, -1).view(d1, d2, 1)
    x_expand = x_resized.unsqueeze(1).expand(d1, d2, last_dim_size)
    mask = torch.tril(torch.ones(last_dim_size, last_dim_size)).unsqueeze(0)
    ret = torch.log(torch.sum(torch.exp(x_expand - x_cummax) * mask, dim=-1)) + x_cummax.view(d1, d2)
    ret = ret.view(*init_size)
    if (dim != -1) and (dim != x.ndimension() - 1):
        ret = ret.transpose(-1, dim)
    return ret
st81186
Also, what is going on in memory when torch.roll is called? Does it simply take an element off one end and concatenate it at the other or does it require linear time to restructure the whole array?
st81187
Hi all, I would like to use the RMSE loss instead of MSE. From what I saw in the PyTorch documentation, there is no built-in function. Any ideas how this could be implemented?
st81188
Wouldn’t it work if you just call torch.sqrt() on nn.MSELoss?

x = Variable(torch.randn(5, 10), requires_grad=True)
y = Variable(torch.randn(5, 10))

criterion = nn.MSELoss()
loss = torch.sqrt(criterion(x, y))
loss.backward()
print(x.grad)
st81189
The solution of @ptrblck is the best, I think (because it is the simplest one). Just for fun, you can also do the following:

# create a function (this is my favorite choice)
def RMSELoss(yhat, y):
    return torch.sqrt(torch.mean((yhat - y) ** 2))

criterion = RMSELoss
loss = criterion(yhat, y)

# create an nn class (just-for-fun choice :-)
class RMSELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, yhat, y):
        return torch.sqrt(self.mse(yhat, y))

criterion = RMSELoss()
loss = criterion(yhat, y)
st81190
You should be careful with NaN, which will appear if mse = 0. Something like this would probably be better:

class RMSELoss(nn.Module):
    def __init__(self, eps=1e-6):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps

    def forward(self, yhat, y):
        loss = torch.sqrt(self.mse(yhat, y) + self.eps)
        return loss
st81191
Damien_Lancry: torch.sqrt(torch.zeros(1)) Of course; the issue is during the backward pass, as you multiply 0 by infinity (the derivative of sqrt at 0).

>>> mse = nn.MSELoss()
>>> yhat = torch.zeros(1, requires_grad=True)
>>> y = torch.zeros(1)
>>> loss = torch.sqrt(mse(yhat, y))
>>> loss.backward()
>>> yhat.grad
tensor([nan])

Using the simple module I wrote above:

>>> rmse = RMSELoss()
>>> yhat = torch.zeros(1, requires_grad=True)
>>> y = torch.zeros(1)
>>> loss = rmse(yhat, y)
>>> loss.backward()
>>> yhat.grad
tensor([0.])
st81192
Hi, I wonder if that’s exactly the same as RMSE when dealing with a batch size of more than 1, i.e. target and prediction are [2,0,256,256] tensors:

MSE_0 = MSE(prediction[0,:,:,:], target[0,:,:,:])
MSE_1 = MSE(prediction[1,:,:,:], target[1,:,:,:])

The RMSE we want is SQRT(MSE_0) + SQRT(MSE_1), but torch.sqrt(nn.MSELoss(x, y)) will give SQRT(MSE_0 + MSE_1), and sqrt(M1 + M2) is not equal to sqrt(M1) + sqrt(M2). Even with reduction off, what we want is Mean[ Mean(sqrt(MSE_0)) + Mean(sqrt(MSE_1)) ], but what we will get with reduction='mean' instead, I think, is sqrt(Mean(MSE_0) + Mean(MSE_1)); so [sqrt(M1)/N + sqrt(M2)/N] / 2 is not equal to sqrt(M1/N + M2/N). Please correct me if my understanding is wrong. Thanks
st81193
How can I build a view layer in PyTorch for sequential models? Is this ok:

class View(nn.Module):
    def forward(self, input, shape):
        return input.view(*shape)

I tried it based on the flatten layer, but I couldn’t even make the flatten layer work:

import torch
import torch.nn as nn

## Q: why doesn't the flatten layer work?
class Flatten(nn.Module):
    def forward(self, input):
        print(input.size())
        out = input.view(input.size(0), -1)
        print(out.size())
        return out

class View(nn.Module):
    def forward(self, input, shape):
        return input.view(*shape)

def main():
    x = torch.arange(0, 6).view(3, 2)
    print(x)
    print(x.size())
    flatten = Flatten()
    flt_x = flatten(x)
    print(flt_x)
    print(flt_x.size())

if __name__ == '__main__':
    main()

Look at the weird output; it doesn’t change the size:

tensor([[0, 1],
        [2, 3],
        [4, 5]])
torch.Size([3, 2])
torch.Size([3, 2])
torch.Size([3, 2])
tensor([[0, 1],
        [2, 3],
        [4, 5]])
torch.Size([3, 2])
st81194
Solved by pinocchio in post #12.
st81195
Hi, This seems to work, no? You keep the first dimension and collapse all the others, but your tensor had only 2 dimensions to begin with. By the way, for use within a Sequential, you can define a custom __init__() function on your View module that takes the shape as input.
st81196
ok perhaps this clarifies my confusion:

class Flatten(nn.Module):
    def forward(self, input):
        '''
        Note that input.size(0) is usually the batch size.
        So what it does is that given any input with input.size(0) # of batches,
        will flatten to be 1 * nb_elements.
        '''
        batch_size = input.size(0)
        out = input.view(batch_size, -1)
        return out  # (batch_size, *size)

class View(nn.Module):
    def forward(self, input, shape):
        '''
        TODO: the first dimension is the data batch_size so we need to decide
        how the input shape should be like
        '''
        return input.view(*shape)
st81197
So this is correct:

class View(nn.Module):
    def __init__(self, shape):
        self.shape = shape

    def forward(self, input):
        '''
        TODO: the first dimension is the data batch_size so we need to decide
        how the input shape should be like
        '''
        return input.view(*self.shape)
st81198
If you want to use the View in a Sequential, yes, you have to do this, because the Sequential only passes along the output of the previous layer. Your Flatten layer seems to work fine, no?

import torch
from torch import nn

class Flatten(nn.Module):
    def forward(self, input):
        '''
        Note that input.size(0) is usually the batch size.
        So what it does is that given any input with input.size(0) # of batches,
        will flatten to be 1 * nb_elements.
        '''
        batch_size = input.size(0)
        out = input.view(batch_size, -1)
        return out  # (batch_size, *size)

print("2D input")
foo = torch.rand(10, 20)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())

print("3D input")
foo = torch.rand(10, 20, 30)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())

print("8D input")
foo = torch.rand(10, 2, 3, 4, 5, 6, 7, 8)
print("Input size:")
print(foo.size())
bar = Flatten()(foo)
print("Output size:")
print(bar.size())
st81199
correction:

class View(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.shape = shape

    def forward(self, input):
        '''
        Reshapes the input according to the shape saved in the view data structure.
        '''
        batch_size = input.size(0)
        shape = (batch_size, *self.shape)
        out = input.view(shape)
        return out
st81200
I made my layer but I get a weird error:

File "/Users/pinocchio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 488, in __call__
    for hook in self._forward_pre_hooks.values():
File "/Users/pinocchio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __getattr__
    type(self).__name__, name))
AttributeError: 'View' object has no attribute '_forward_pre_hooks'

Do you know why?
st81201
Make sure to properly call the parent __init__ function when creating your own nn.Module(). Also make sure that you don’t have any other class/function called View that could conflict with your new one.
st81202
Hmmm, I will try debugging a little longer. Meanwhile, let me paste the code I ran (that had the error) for reference:

batch_size = 1
CHW = (3, 32, 32)
out = torch.randn(batch_size, *CHW)
print(f'{out.size()}')

conv2d_shape = (-1, 8, 8)
view = View(shape=(batch_size, *conv2d_shape))

out = view(out)
print(f'{out.size()}')
st81203
pinocchio: super().__init__() Darn it! I was using a wrong version of my View layer. Oops! Fixed.
st81204
class View(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.shape = shape

    def __repr__(self):
        return f'View{self.shape}'

    def forward(self, input):
        '''
        Reshapes the input according to the shape saved in the view data structure.
        '''
        batch_size = input.size(0)
        shape = (batch_size, *self.shape)
        out = input.view(shape)
        return out
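A usage sketch for this View module inside an nn.Sequential (shapes are illustrative):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    View(shape=(8 * 32 * 32,)),   # flattens everything after the batch dim
    nn.Linear(8 * 32 * 32, 10),
)
out = model(torch.randn(2, 3, 32, 32))
print(out.size())  # torch.Size([2, 10])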
st81205
I am trying to train my network on a cluster of machines without GPUs using DistributedDataParallel, and I've read the associated tutorials, but they are kinda terrible and many details are not explained. For instance, what do the following lines do?

os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'

Can I just copy them? Also, what is a rank, when do we call it, and why is it even a variable? None of these apparently important concepts are clearly explained. Can somebody recommend me a good tutorial on how to do parallel computing with PyTorch? The lack of resources is driving me crazy (I have no knowledge of parallel computing). Any help will be appreciated.
st81206
I am trying to train a network distributedly on 2 computing nodes. Our lab uses Slurm for task scheduling. I am not sure if I am using the correct Slurm script for running the training code. I tried: srun python train.py Is this the right way to launch the job? I am using the gloo backend. How should I verify that DistributedDataParallel exchanges and averages gradients between the two nodes? I just want to make sure they are communicating correctly. Thanks!
st81207
@heilaw did you have any success with this? Found a link here from someone who claimed to get this working on Slurm: https://www.glue.umd.edu/hpcc/help/software/pytorch.html#distrib
st81208
I use GroupNorm in PyTorch instead of BatchNorm and keep everything else fixed. On the ImageNet dataset, GroupNorm is 40% slower than BatchNorm and consumes 33% more GPU memory. I am really confused, because GroupNorm shouldn’t need more computation than BatchNorm. The details are listed below. For BatchNorm, one minibatch takes 12.8 seconds with 7.51 GB of GPU memory; for GroupNorm, one minibatch takes 17.9 seconds with 10.02 GB of GPU memory.
st81209
Hi, I’m running the code snippet below.

class LFBlock(nn.Module):
    def __init__(self, kernel_size=(1,1,1), block_shape=(32,32,32)):
        super(LFBlock, self).__init__()
        self.block_shape = block_shape
        self.conv = nn.Conv3d(1, 1, kernel_size=kernel_size, stride=1, padding=0, bias=True)

    def forward(self, x):
        x1 = x[:,0][:,None]
        x2 = x[:,1][:,None]
        x3 = x[:,2][:,None]
        x4 = x[:,3][:,None]

        x1 = self.conv(x1)
        print('x1 | convweight=', self.conv.weight.data)
        x2 = self.conv(x2)
        print('x2 | convweight=', self.conv.weight.data)
        x3 = self.conv(x3)
        print('x3 | convweight=', self.conv.weight.data)
        x4 = self.conv(x4)
        print('x4 | convweight=', self.conv.weight.data)

The print output of this code snippet is:

x1 | convweight= tensor([[[[[-0.2251]]]]])
x2 | convweight= tensor([[[[[-0.2251]]]]])
x3 | convweight= tensor([[[[[-0.2251]]]]])
x4 | convweight= tensor([[[[[-0.2251]]]]])

where the weights of each conv are the same. This is a smaller-scale test of a model where I’m running lots of small convolutions (~16) in a single pass. Instead of defining 16 individual conv layers, I was thinking I could just reuse one. Where am I going wrong?
st81210
Solved by albanD in post #2.
st81211
Hi, The weight is associated to the Conv2D class instance. So if you reuse one instance, then you reuse its weights. If you want more than one instance to train different weights, you will need to create more than one instance of the Conv2D class.
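For the snippet above, that could look like this sketch (kernel size and count are illustrative): an nn.ModuleList holds separate instances so each convolution trains its own weights.

import torch
import torch.nn as nn

convs = nn.ModuleList(nn.Conv3d(1, 1, kernel_size=1) for _ in range(4))
x = torch.randn(2, 4, 8, 8, 8)                                # (B, C, D, H, W)
outs = [conv(x[:, i:i + 1]) for i, conv in enumerate(convs)]  # one conv per channel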
st81212
So if the weights were random, e.g. 0.327 before the conv, then after the conv they became -0.2251. Then I do something with those weights, and I re-run the conv. Wouldn’t it re-initialize with another random value and then change based on what I feed it? Even if it were to randomly re-initialize to 0.327, I thought it would turn out different because my inputs to it are different. Or does it freeze the kernel weights after a single pass? For my particular use case, I’m fine with overriding the conv weights because I just want to use the weights straight after the conv for some equations.
st81213
Hi, You should check the doc about nn.Module’s role. That should give you a good introduction to what nn.Modules (nn.Module is the parent class of Conv2D) are for. Feel free to ask more questions here afterwards.
st81214
Hi, I tried to compare the optimization of a neural network model using: 1. Keras; 2. PyTorch (using the weight_decay argument in torch.optim to add the regularizer); 3. PyTorch (creating a function to calculate the L2 norm of the weights, with a coefficient, to add the regularizer). I trained the network and looked at the result after 1 step of stochastic gradient descent to compare the models. By setting the regularization coefficient to zero, all 3 methods above should yield the same result (weight update, prediction value) given the same training data and initial weights, and this is the case most of the time. However, sometimes the weight update is different, along with some strangeness in the program. I will first explain the strangeness I found. I wrote code to print something when there is a huge difference in prediction between methods 1 vs. 2 and 1 vs. 3. It sometimes reported this:

Gap of output with builtin pytorch 18.459202
Gap of output with manual pytorch 18.459202

However, when I checked the quantity myself, the gap disappeared, as follows:

In [410]: np.average(np.abs(keras_output - builtin_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor)).data.numpy()))
Out[410]: 4.091505e-06

It seems like something was updated simultaneously (like updating the gradient and projection) after the first printing. I later tried to see what is going on by looking at the gradient calculated on one of the weights of both methods 2 and 3; they contain the same gradient:

In [412]: manual_model.hidden[1].weight.grad
Out[412]: tensor([[-0.5983]])
In [413]: builtin_model.hidden[1].weight.grad
Out[413]: tensor([[-0.5983]])

I can manually calculate the updated weight to see which one differs from my expectation. At command [414] I calculated the updated weight = old weight - learning rate x gradient, and [415] shows that this is the same value as method 3:

In [414]: temp_model.hidden[1].weight.data - learn_rate * manual_model.hidden[1].weight.grad
Out[414]: tensor([[1.1240]])
In [415]: manual_model.hidden[1].weight.data
Out[415]: tensor([[1.1240]])

The updated weight of method 2 is as follows:

In [417]: builtin_model.hidden[1].weight.data
Out[417]: tensor([[1.1257]])

I included the major components of the code for reference:

# Construct PyTorch models
temp_n_hidden = copy.deepcopy(n_hidden)
manual_model = MultilayerPerceptron(input_dim, output_dim, temp_n_hidden, dropout)
temp_n_hidden = copy.deepcopy(n_hidden)
builtin_model = MultilayerPerceptron(input_dim, output_dim, temp_n_hidden, dropout)
temp_n_hidden = copy.deepcopy(n_hidden)
temp_model = MultilayerPerceptron(input_dim, output_dim, temp_n_hidden, dropout)

# Construct Keras model
temp_n_hidden = copy.deepcopy(n_hidden)
keras_model = MultilayerPerceptron_Keras(input_dim, output_dim, temp_n_hidden, dropout)

# Copy weights from Keras to PyTorch
keras_to_pyt(keras_model, builtin_model)
# Copy weights from PyTorch to PyTorch
manual_model.load_state_dict(builtin_model.state_dict())
temp_model.load_state_dict(builtin_model.state_dict())

# Set up data for training
dataset = Data_prep(X_train, y_train_normalized)
train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, num_workers=0)

# Training process
builtin_optimizer = torch.optim.SGD(list(builtin_model.parameters()), lr=learn_rate)
manual_optimizer = torch.optim.SGD(list(manual_model.parameters()), lr=learn_rate)

count = 0
for epoch in range(n_epochs):
    for i, data in enumerate(train_loader, 0):
        # data for pytorch
        inputs, y = data
        inputs, y = to_variable(var=(inputs, y))

        # Manual training
        manual_outputs = manual_model.forward(inputs)
        MSE = loss_func(manual_outputs, y)
        Weight_Regularization = l2_regularization(reg / 2.0, manual_model)
        manual_loss = MSE + Weight_Regularization
        manual_optimizer.zero_grad()
        manual_loss.backward()
        manual_optimizer.step()

        # Built-in opt training
        builtin_outputs = builtin_model.forward(inputs)
        MSE = loss_func(builtin_outputs, y)
        builtin_loss = MSE
        builtin_optimizer.zero_grad()
        builtin_loss.backward()
        builtin_optimizer.step()

        temp = (manual_model.hidden[1].weight.data - builtin_model.hidden[1].weight.data).abs().sum().data.numpy().item()
        if count % 100 == 0:
            print("Weight Regularization ", Weight_Regularization.data)
            print("Weight difference %.8f" % temp)
            print("Built-in MSE ", builtin_loss.data, "\n")
            print("Manual MSE ", loss_func(manual_outputs.data, y), "\n")
        count += 1

# Keras training
# data for keras
y_train_normalized = np.array(y.numpy(), ndmin=2).T
adam = keras.optimizers.Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0, amsgrad=False)
sgd = keras.optimizers.SGD(lr=learn_rate)
keras_model.compile(loss='mean_squared_error', optimizer=sgd)
keras_model.fit(X_train, y_train_normalized, batch_size=batch_size, nb_epoch=n_epochs, verbose=0)

# Comparison
keras_output = keras_model.predict(X_train, batch_size=500)
pytorch_builtin_output = builtin_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor))
pytorch_manual_output = manual_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor))
print("Gap of output with manual pytorch ", np.average(np.abs(keras_output - manual_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor)).data.numpy())))
print("Gap of output with builtin pytorch ", np.average(np.abs(keras_output - builtin_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor)).data.numpy())))
if np.average(np.abs(keras_output - builtin_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor)).data.numpy())) > 1:
    print("Stop")
    time.sleep(5.5)
st81215
Hi, Could this be due to the fact that the weight_decay is computed during the gradient step and the .grad field is updated in place during that? See here for SGD, for example.
st81216
But the gradients of both methods 2 and 3 are equal, as expected (weight_decay was set to 0), so I am not sure that causes the problem.
st81217
Hi, Sorry, your post is a bit hard to follow. Is the problem that after n_epochs of training, PyTorch and Keras yield models that do not predict the same result?
st81218
Sorry, it’s a bit difficult to explain. Thanks for your answer. There are 3 models: 1. Keras; 2. PyTorch (using the weight_decay argument in torch.optim to add the regularizer); 3. PyTorch (creating a function to calculate the L2 norm of the weights, with a coefficient, to add the regularizer). All 3 models showed the same result most of the time, except when it printed:

Gap of output with builtin pytorch 18.459202 (model 1 vs model 2)
Gap of output with manual pytorch 18.459202 (model 1 vs model 3)

which showed a gap between Keras and PyTorch. However, when the program ends, I still have those variables in the workspace (as I work from an IDE), so I printed out the value, and the gap disappeared (the exact same expression as above). It goes from 18.459202 to 4.091505e-06, as below:

In [410]: np.average(np.abs(keras_output - builtin_model.forward(torch.from_numpy(X_train).type(torch.FloatTensor)).data.numpy()))
Out[410]: 4.091505e-06

There’s no code in between these steps, though, since the program ends after it prints the gap.
st81219
From your code it’s quite confusing, as you don’t actually pass the weight_decay parameter to the optimizer in case 2. Is that just a typo? Or do you actually always have weight_decay at 0, so this has no link with weight decay?
st81220
Yeah, after I found this strangeness, I dropped the weight decay to zero in all of the models (since it might otherwise make it more difficult to find the cause).
st81221
Then I would say that you should rerun your experiment in a clean environment. Most likely something between the end of your script and your new check changed the value of some things. Also, for the PyTorch code: Do not use the .forward() method on nn.Modules; simply call the module: manual_model(inputs). Do not use .data. If you want to modify a tensor without registering the change in the autograd, use with torch.no_grad(). If you want a new tensor with the same values that does not share history, use .detach().
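A quick illustration of those three points on a toy module:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(3, 4)

out = model(x)               # call the module, not model.forward(x)

with torch.no_grad():        # in-place change not recorded by autograd
    model.weight.add_(0.01)

w = model.weight.detach()    # same values, no autograd history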
st81222
From the PyTorch documentation, I know that when one uses DistributedDataParallel, gradients on multiple GPUs will be averaged (all-reduced), and each model’s parameters will be updated based on the gradients, so there is no need to sync the model’s parameters. But we know float tensor addition might incur precision error. As training goes on (e.g. over millions of minibatches), will the precision error affect the consistency of the model’s parameters on multiple GPUs? Thank you!
st81223
Hi, I don’t think any extra step is taken on our side to avoid this. So you need to make sure that your values are well formed and can be accumulated with no problem (which should be the case if your inputs are correctly preprocessed and you use a regular initialization method for your weights).
st81224
Is there any way to run PyTorch programs in the background so that I can log out from the terminal?
st81225
This question is not related to PyTorch; it’s about Linux in general. See this Stack Overflow question: “How to run Node.js as a background process and never die?” (stackoverflow.com, tags node.js, linux, background-process, asked by murvinlai on 25 Jan 2011).
st81226
This is a Linux-related issue, but it is simple, and I often forget the nohup:

nohup python ... &
st81227
Solved by ptrblck in post #8.
st81228
Normalizing the data usually is beneficial for training the model. It’s sometimes not necessary, but might stabilize and accelerate the training.
st81229
Thank you @ptrblck. In order to have all the data sets range between 0 and 1, what parameters should I use while normalizing? Or even most of them 0 and some of them 1?
st81230
If you are loading PIL.Images, torchvision.transforms.ToTensor() will create a normalized tensor with values in the range [0, 1]. torchvision.transforms.Normalize() will create standardized tensors with zero mean and unit variance. Usually you would use the mean and standard deviation from the training set. However, often the ImageNet statistics are used for RGB images, especially if you are using a pretrained model (same input statistics) and if your dataset is similar to ImageNet.
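A typical pipeline would then look like this (the statistics below are the commonly used ImageNet values):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                            # PIL image -> [0, 1] tensor
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet mean
                         std=[0.229, 0.224, 0.225]),  # ImageNet std
])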
st81231
Thank you @ptrblck. If I understood correctly, there is no need for normalization, since it will automatically be between 0 and 1?
st81232
Normalizing the data additionally to zero mean and unit variance could further help in training.
st81233
@ptrblck yes, but using 0 as the mean will create some negative values (since I am tied to positive values only)?
st81234
Yes, this would produce positive and negative values. Have a look at this wiki for more information.
st81235
Suddenly I noticed that the virtual memory usage is huge during my training. The size grows when the first tensor is passed to the GPU. Also, I noticed that using more GPUs is much slower than training on one GPU; I am almost certain that training on 2 GPUs used to be almost 2 times faster than using one GPU. I am using Ubuntu, PyTorch 1.0.1 and CUDA 10. Here is the htop output of training on one GPU: (screenshot of htop output) Does anyone have an idea what’s going on?
st81236
PyTorch claims all the memory of your GPU, and even when it is not using that memory, running nvidia-smi would show that no memory is free. Refer to the memory management docs for more details.
st81237
Excuse me, I am not able to understand the answer. nvidia-smi behaves normally; my chosen GPUs are occupied at the same level as before in terms of RAM. The simplest code that creates 3 visible lines with more than 20 GB of virtual memory taken in htop is here:

import torch
import time

cuda = torch.device('cuda')
A = torch.empty((100, 100), device=cuda).normal_(0.0, 1.0)
time.sleep(5)
st81238
I’d like to bump this up because it seems strange to me as well. Why so much virtual memory?
st81239
I want to add SN to DenseNet (the pretrained one). I copied the script from torchvision here:

import re
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from collections import OrderedDict

__all__ = ['DenseNet', 'densenet121', 'densenet169', 'densenet201', 'densenet161']

model_urls = {
    'densenet121': 'https://download.pytorch.org/models/densenet121-a639ec97.pth',
}

def densenet121(pretrained=False, **kwargs):
    r"""Densenet-121 model from
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 24, 16), **kwargs)
    if pretrained:
        # '.'s are no longer allowed in module names, but previous _DenseLayer
        # has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
        # They are also in the checkpoints in model_urls. This pattern is used
        # to find such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet121'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict, strict=False)
    return model

class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(_DenseLayer, self).__init__()
        self.add_module('norm1', nn.BatchNorm2d(num_input_features)),
        self.add_module('relu1', nn.ReLU()),
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size * growth_rate,
                                           kernel_size=1, stride=1, bias=False)),
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate)),
        self.add_module('relu2', nn.ReLU()),
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                                           kernel_size=3, stride=1, padding=1, bias=False)),
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        return torch.cat([x, new_features], 1)

class _DenseBlock(nn.Sequential):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.add_module('denselayer%d' % (i + 1), layer)

class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU())
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))

class DenseNet(nn.Module):
    r"""Densenet-BC model class, based on
    `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_

    Args:
        growth_rate (int) - how many filters to add each layer (`k` in paper)
        block_config (list of 4 ints) - how many layers in each pooling block
        num_init_features (int) - the number of filters to learn in the first convolution layer
        bn_size (int) - multiplicative factor for number of bottleneck layers
            (i.e. bn_size * k features in the bottleneck layer)
        drop_rate (float) - dropout rate after each dense layer
        num_classes (int) - number of classification classes
    """

    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
                 num_init_features=64, bn_size=4, drop_rate=0, num_classes=1000):
        super(DenseNet, self).__init__()

        # First convolution
        self.features = nn.Sequential(OrderedDict([
            ('conv0', nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ('norm0', nn.BatchNorm2d(num_init_features)),
            ('relu0', nn.ReLU()),
            ('pool0', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
        ]))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers=num_layers, num_input_features=num_features,
                                bn_size=bn_size, growth_rate=growth_rate, drop_rate=drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = num_features // 2

        # Final batch norm
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))
        self.conv_last = nn.Conv2d(num_features, 256, kernel_size=3)

        # Linear layer
        # self.classifier = nn.Linear(num_features, num_classes)

        # Official init from torch repo.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight.data)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                m.bias.data.zero_()

    def forward(self, x):
        features = self.features(x)
        features = self.conv_last(features)
        return features

How do I modify this script to add SN to it? Is PyTorch smart enough to load the weights into the layers if I run model = densenet121(pretrained=True)?
st81240
Solved by Naruto-Sasuke in post #2 Have solved by myself.
st81241
Hey Naruto-Sasuke, mind sharing your solution? I am having kind of the same issue…
st81242
Here is the commit that adds SN to DenseNet: github.com/Zhaoyi-Yan/Shift-Net_pytorch, “add SN to densenet” (#60) by Zhaoyi-Yan, 22 Feb 2019 (changed 3 files with 21 additions and 21 deletions).
st81243
Hi, I tried to use torch.utils.tensorboard (to replace tensorboardX) and found that:

import os
import torch
import torch.nn.functional as F
from tqdm import tqdm
from torch.utils.data import DataLoader

os.environ['CUDA_VISIBLE_DEVICES'] = '4'
from torch.utils.tensorboard import SummaryWriter

(main code)

works normally, while:

import os
import torch
import torch.nn.functional as F
from tqdm import tqdm
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

os.environ['CUDA_VISIBLE_DEVICES'] = '4'

(main code)

will still use GPU 0. Am I using it wrong? Thanks!
st81244
Solved by ptrblck in post #4.
st81245
I would recommend setting the environment variables in the terminal before running the script, as setting them via os.environ might have side effects. E.g. if you are loading a library which calls into CUDA and grabs all available GPUs, setting 'CUDA_VISIBLE_DEVICES' won’t have any effect anymore, as is apparently the case in your code. If you really want to set it in your code, make sure os.environ is set first in your script.
st81246
Thank you! That makes sense. However, I am wondering why replacing torch.utils.tensorboard with tensorboardX makes the code work? I usually set os.environ after the args are parsed, since the GPU is specified in the args, which is typically after packages are imported.
st81247
I’m not sure why this happens, but I would guess that some imports in the __init__ method checks for GPUs and thus grabs them (which is apparently not the case in tensorboardX). However, I’m also not sure it’s worth investigating.
st81248
The model is trained with fp32. I tried to use .half() to change the layers and inputs to fp16. It does indeed accelerate inference, but the acceleration is far from twice the fp32 speed. My platform is an NVIDIA TX2; its compute capability is 6.2, and it supports fp16 very well. So what I want to ask is whether fp16 cannot be twice as fast as fp32 in PyTorch. Looking forward to your reply. Thank you.
st81249
The performance gains depend on the operations, shapes, and other potential bottlenecks. E.g. convolutions in cudnn 7.2 and earlier needed input channels, output channels, and batch size to be multiples of 8 to be able to use TensorCores. This restriction was lifted in cudnn 7.3 and later. GEMMs, however, should still use matrices with shapes in multiples of 8 to use TensorCores in cublas and cudnn. Are you preloading the data or are you using e.g. a DataLoader? Note that you might encounter new bottlenecks in your code if the model was the previous bottleneck and now got accelerated.
st81250
This is a follow-up question to this thread. Basically, I shared weights between two layers, but the problem is that when I visualize the weights, they are not exactly the same; one of them is a bit off compared to the other (looks washed out, if you will). Here are the samples I’m talking about: (images: trained encoder weights and trained, transposed decoder weights) As you can see, they are nearly identical, except for the fact that the decoder’s weights seem washed out (indicating more values are either 0 or very close to 0, compared to the encoder’s weights). However, knowing that both encoder and decoder share the same weight, why am I seeing this? I trained this on a sparse autoencoder, by the way, and the weights are shared like this:

weights = nn.Parameter(torch.randn_like(self.encoder[0].weight))
self.encoder[0].weight.data = weights.clone()
self.decoder[0].weight.data = self.encoder[0].weight.data.t()

What is the reason behind this behavior? I’d be very grateful to know.
st81251
I’m a pytorch noob, so take my answer with a grain of salt, but I am working on a similar problem and have just got it working. From your code segment I cannot tell if you have implemented the weight tie in the forward function or not, but that was the problem for me. I originally tied the weights in __init__, but this did not force the weights to update together, so during training they drifted apart. Instead, I now use functional methods in the decoder section and directly connect the weights in the forward function; so far this seems to work.
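A minimal sketch of that idea (dimensions are illustrative): one Parameter used in both directions, so the encoder and decoder can never drift apart.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_hidden, n_in) * 0.01)

    def forward(self, x):
        h = F.relu(F.linear(x, self.weight))   # encoder
        return F.linear(h, self.weight.t())    # decoder, same weights

model = TiedAutoencoder(784, 128)
out = model(torch.randn(4, 784))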
st81252
Thanks a lot, good to know. I’ll give that a try and see how it goes.
st81253
I am having a problem getting .to(device) to work asynchronously. The training loop in the first code snippet below takes 3x longer than the second snippet. The first snippet sets pin_memory=True, non_blocking=True and num_workers=12. The second snippet moves tensors to the GPU in __getitem__ and uses num_workers=0. The images being loaded have shape [1, 512, 512]; the target is just a single float32. Is there something I need to set in the CUDA drivers?

GPU: V100
PyTorch: 1.1.0
Python: 3.7.4
Cuda compilation tools, release 9.2, V9.2.148
conda version: 4.6.14
Ubuntu 16.04.5

# This is very slow.
device = "cuda"

class MyDataset(Dataset):
    def __getitem__(self, idx):
        image = self.get_image_tensor(idx)
        target = self.get_target(idx)
        return {"images": image, "targets": target}

train_dataset = MyDataset()
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True,
                          num_workers=12, pin_memory=True)

def train():
    for batch in train_loader:
        images = batch["images"].to(device, non_blocking=True)
        targets = batch["targets"].to(device, non_blocking=True)

# This is faster, but still slower than it should be.
device = "cuda"

class MyDataset(Dataset):
    def __getitem__(self, idx):
        image = self.get_image_tensor(idx).to(device)
        target = self.get_target(idx).to(device)
        return {"images": image, "targets": target}

train_dataset = MyDataset()
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True,
                          num_workers=0, pin_memory=False)

def train():
    for batch in train_loader:
        images = batch["images"]
        targets = batch["targets"]
st81254
Hi, I think it’s expected that there is an overhead to using multiple workers if your tensors are very cheap to load. The point is to be able to load from disk ahead of time to reduce the impact of slow hard drives. Why is the second one “slower than it should be”?
st81255
I meant to say that it would be much faster if the multi-worker version worked properly. In the first snippet, GPU utilization is very low. The call to .to(device) starts only after the current batch being processed by the GPU is done, which causes a lot of GPU idle time. The same implementation runs much faster in TensorFlow.
st81256
We had some issues using pinned memory recently (@rwightman reported it here), which were fixed recently, so you could try out the nightly build or build from source and check if the profiling changes. For general data-loading bottlenecks, have a look at this post.
st81257
Hi everyone, I'm trying to convert my linear model to a CNN. My linear NN class is:

class Module(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super().__init__()
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear3 = nn.Linear(H2, D_out)
        self.dropout1 = nn.Dropout(0.2)
        self.dropout2 = nn.Dropout(0.2)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = self.dropout1(x)
        x = F.relu(self.linear2(x))
        x = self.dropout2(x)
        x = self.linear3(x)
        return x

My training-set input shape is [2048, 15, 1], and my training loop is:

for e in range(epochs):
    running_loss = 0.0
    running_corrects = 0.0
    val_running_loss = 0.0
    val_running_corrects = 0.0

    for inputs, out in train_generator:
        output = model(inputs)
        print(inputs.size())
        print(output.size())
        loss = criterion(output, out)
        preds, _ = torch.max(output, 1)
        # outputss.append(preds.max().detach().numpy())
        losses.append(loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    else:
        with torch.no_grad():
            for val_inputs, val_labels in valid_generator:
                val_inputs = val_inputs.to(device)
                val_labels = val_labels.to(device)
                val_outputs = model(val_inputs)
                val_loss = criterion(val_outputs, val_labels)
                val_preds, _ = torch.max(val_outputs, 1)
                val_running_loss_history.append(val_loss)
                val_running_corrects_history.append(val_preds.max().detach().numpy())

and my CNN class:

class Module(nn.Module):
    # def __init__(self, insize, hid, out):
    def __init__(self, insize=15, hid=16, out=1):
        super().__init__()
        self.con1 = nn.Conv1d(insize, hid, 1)
        self.con2 = nn.Conv1d(hid, out, 1)
        self.fc1 = nn.Linear(1 * 2048, 1)
        self.fc2 = nn.Linear(1901, 1)
        self.drop = nn.Dropout(0.2)

    def forward(self, x):
        x = F.elu(self.con1(x))
        x = F.max_pool1d(x, 1)
        x = F.elu(self.con2(x))
        x = self.drop(x)
        if x.size(0) == 1901:
            x = x.view(-1, 1901)
            x = self.fc2(-1, 1901)
        else:
            x = x.view(-1, 2048 * 1)
            x = self.fc1(x)
        return x

When I execute my code I get this error:

TypeError: forward() takes 2 positional arguments but 3 were given

How can I solve this? Thank you
st81258
Solved by ptrblck in post #2.
st81259
I would guess this error is raised in self.fc2, as you are feeding the shape: x=self.fc2(-1,1901) instead of the activation x.
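In other words, a sketch of the corrected branch from the forward above:

x = x.view(-1, 1901)
x = self.fc2(x)  # pass the tensor, not the shape (-1, 1901)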
st81260
Hi, I have this simple method in my model:

def get_normal(self, std):
    if <here I need to know which device is used>:
        eps = torch.cuda.FloatTensor(std.size()).normal_()
    else:
        eps = torch.FloatTensor(std.size()).normal_()
    return Variable(eps).mul(std)

To work efficiently, it needs to know which device is currently used (CPU or GPU). I was looking for something like model.is_cuda, but different tensors can be placed on different devices, so probably there is something like std.device_context; I haven’t found such a method either. What is recommended for handling such situations?
st81261
Solved by ptrblck in post #13: You could try

a = torch.randn(10).to('cuda:0')
b = torch.randn(10).to(a.device)
st81262
So far I am using std.is_cuda (which seems an ok solution if there is one GPU device), but better options are welcome.
st81263
The common way is to start your code with:

use_cuda = torch.cuda.is_available()

Then, each time you create a new instance of any tensor/variable/module, just do:

if use_cuda:
    my_object.cuda()

That way you make sure that everything is stored or not on the GPU (by default, without calling .cuda(), it will be on the CPU).
st81264
"The common way is to start your code with:" Yup, I’ve noted this (this code is based on the PyTorch examples). However, there may be different situations to handle, and I want the model code to be quite isolated from the environment (e.g. placed in a separate Python module), so this doesn’t look like a general solution.
st81265
Just came up with a better idea: tensor.new(sizes).normal_(0, 1) seems to be the right way to get Gaussian noise on the right device.
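A small sketch of that idiom (std here is a stand-in for the method’s argument):

import torch

std = torch.rand(3, 4)               # may live on CPU or GPU
eps = std.new(std.size()).normal_()  # same dtype and device as std
sample = eps.mul(std)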
st81266
This is a feature people have wanted for quite a long time. I don’t see any reason why PyTorch hasn’t provided an API as simple as .device() to return the device a model/variable/tensor resides on.
st81267
I guess you could use this:

cuda_check = my_tensor.is_cuda
if cuda_check:
    get_cuda_device = my_tensor.get_device()