st82768
Thank you for your answer. _, inds = torch.sort(b) will give inds = [2, 0, 1]. If I want inds = [0, 2, 1], which means the ordering between the two 1s stays the same, how should I do that?
st82769
Look, if I run: _, inds = torch.sort(b) print(inds) I get: [0, 2, 1] Are you sure you did not modify b?
st82770
What if you sort this tensor: [1, 2, 3, 4, 4, 1, 2, 3, 2, 4, 3, 1, 3, 1, 2, 4] What will you get? I am getting: [11, 0, 13, 5, 14, 6, 1, 8, 12, 7, 10, 2, 9, 4, 3, 15]
st82771
Hi, I have managed to get the expected sort using numpy.argsort(b), which performs a stable sort. Thank you for your reply again.
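For anyone landing here later, a minimal sketch of that workaround (the tensor b below is made up for illustration, not the one from this thread):

import numpy as np
import torch

b = torch.tensor([2., 1., 1., 0.])  # example with repeated values

# NumPy's stable argsort keeps the original order of equal elements
inds = torch.from_numpy(np.argsort(b.numpy(), kind="stable"))
print(inds)  # tensor([3, 1, 2, 0]); the two 1s keep their relative order

# Newer PyTorch releases also expose this directly:
# _, inds = torch.sort(b, stable=True)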
st82772
According to my understanding, the following nloss.sum() and sloss should be equal, but the actual is different. Why is this? import torch import torch.nn as nn mean_loss = nn.BCELoss(reduction='mean') sum_loss = nn.BCELoss(reduction='sum') none_loss = nn.BCELoss(reduction='none') prediction = torch.rand((4, 1, 256, 256)) target = torch.randint(0, 2, (4, 1, 256, 256), dtype=torch.float32) mloss = mean_loss(prediction, target) sloss = sum_loss(prediction, target) nloss = none_loss(prediction, target) print(f"none_loss: {nloss}") print(f"none_loss size: {nloss.size()}") print(f"none_loss sum: {nloss.sum()}") print(f"sum_loss: {sloss}") print(f"none_loss mean: {nloss.mean()}") print(f"sum_loss/(4*1*256*256): {sloss / (4 * 1 * 256 * 256)}") print(f"mean_loss: {mloss}") output: none_loss: tensor([[[[1.3495, 0.2165, 0.1645, ..., 0.9105, 1.3246, 2.3588], [0.2977, 1.2160, 0.2346, ..., 0.2156, 0.0112, 0.0647], [2.3740, 0.0570, 1.7437, ..., 1.9400, 0.1285, 2.8168], ..., [0.6525, 0.3042, 0.9111, ..., 1.0126, 1.0627, 6.1224], [4.6391, 0.6456, 0.9346, ..., 0.9919, 0.0441, 0.0186], [1.5025, 0.8117, 0.3026, ..., 1.2144, 0.8634, 0.0161]]], [[[1.8923, 1.4190, 1.1883, ..., 0.4101, 1.8231, 0.7425], [0.0231, 0.7847, 2.2190, ..., 0.5121, 0.1161, 1.3471], [0.9289, 3.2961, 0.2482, ..., 2.1655, 0.3986, 0.3489], ..., [0.0197, 2.0553, 0.0988, ..., 0.0907, 0.8908, 0.6585], [0.7845, 4.9424, 1.9141, ..., 2.5432, 2.2620, 1.1243], [0.6280, 0.2520, 1.0300, ..., 0.4113, 1.2322, 0.4129]]], [[[3.9760, 1.3871, 0.5002, ..., 0.0077, 0.3474, 0.0095], [3.5548, 2.1170, 1.0698, ..., 0.3680, 2.2655, 0.0573], [1.6277, 2.4666, 0.0724, ..., 0.6252, 0.1662, 0.3559], ..., [1.9241, 0.0977, 1.0205, ..., 1.4400, 0.4261, 1.4168], [0.3839, 1.5772, 0.2207, ..., 0.4982, 1.3771, 0.0717], [0.6420, 0.4036, 0.5927, ..., 1.3667, 2.6873, 0.0586]]], [[[1.5006, 0.1055, 0.5871, ..., 0.2185, 0.3801, 0.0206], [0.0636, 0.2080, 0.5735, ..., 0.3822, 0.7734, 0.7849], [0.8799, 1.9971, 1.4169, ..., 0.0417, 0.4123, 1.9111], ..., [0.9491, 0.8655, 0.8146, ..., 0.0889, 1.1045, 0.0404], [2.3005, 0.9741, 1.6709, ..., 0.3271, 0.4325, 1.3791], [1.7653, 6.5351, 1.3232, ..., 0.4984, 0.6124, 1.4612]]]]) none_loss size: torch.Size([4, 1, 256, 256]) none_loss sum: 261970.9375 sum_loss: 261969.8125 none_loss mean: 0.9993398189544678 sum_loss/(4*1*256*256): 0.999335527420044 mean_loss: 0.999335527420044
st82773
Solved by liyz15 in post #2 They are equal, it results from precision of float. Change dtype of prediction and target to torch.float64 will give you closer answer, but float is float, it’s never exactly the same.
st82774
They are effectively equal; the difference results from floating-point precision. Changing the dtype of prediction and target to torch.float64 will give you a closer answer, but floating point is floating point; it is never exactly the same.
st82775
It does seem that way. import torch import torch.nn as nn mean_loss = nn.BCELoss(reduction='mean') sum_loss = nn.BCELoss(reduction='sum') none_loss = nn.BCELoss(reduction='none') torch.set_default_tensor_type(torch.DoubleTensor) prediction = torch.rand((4, 1, 256, 256)) target = torch.randint(0, 2, (4, 1, 256, 256)).to(prediction.dtype) mloss = mean_loss(prediction, target) sloss = sum_loss(prediction, target) nloss = none_loss(prediction, target) print(f"none_loss: {nloss}") print(f"none_loss size: {nloss.size()}") print(f"none_loss sum - sum_loss: {nloss.sum() - sloss}") print(f"sum_loss: {sloss}") print(f"none_loss mean - mean_loss: {nloss.mean() - mloss}") print(f"none_loss mean - sum_loss/(4*1*256*256): {nloss.mean() - sloss / (4 * 1 * 256 * 256)}") print(f"mean_loss - sum_loss/(4*1*256*256): {mloss - sloss / (4 * 1 * 256 * 256)}") output: none_loss: tensor([[[[1.6909e-01, 2.2602e-01, 2.9009e-01, ..., 6.7879e-01, 2.9715e-01, 2.2295e-01], [1.6360e-01, 6.4142e-01, 4.2995e+00, ..., 5.5690e+00, 3.1017e+00, 1.5648e+00], [2.1270e-02, 1.3671e+00, 1.0986e+00, ..., 6.4392e-01, 4.7506e-01, 3.2062e+00], ..., [3.1378e-01, 5.4515e-01, 1.8998e+00, ..., 2.4271e+00, 4.0148e-02, 8.0373e-01], [2.1827e+00, 2.1392e+00, 4.0147e-02, ..., 1.5841e+00, 3.2365e-01, 4.0541e-01], [4.8378e-03, 2.2808e+00, 1.0209e+00, ..., 1.0025e-01, 8.9763e-01, 6.0305e-01]]], [[[8.6865e-01, 2.3439e-01, 9.7892e-01, ..., 2.8537e-01, 8.0262e-01, 1.4477e+00], [3.6908e-01, 1.3302e-01, 3.6755e-01, ..., 4.4746e-01, 1.1312e+00, 1.5401e+00], [5.9827e-01, 1.2599e-01, 4.1681e-01, ..., 1.4241e+00, 1.2873e-01, 3.6063e-02], ..., [2.1580e-02, 1.2570e+00, 3.5218e-01, ..., 5.6630e-04, 1.1745e+00, 8.3446e-01], [6.5295e-01, 2.8552e-01, 6.6924e-01, ..., 3.3659e-01, 5.2286e-01, 4.0466e+00], [3.0521e-02, 4.1409e-01, 6.3107e-02, ..., 5.7095e-01, 9.7246e-01, 1.6637e-02]]], [[[1.2195e+00, 1.6163e+00, 6.2884e-01, ..., 9.3902e-01, 8.8280e-01, 9.0248e-01], [6.1688e-01, 1.1943e+00, 6.5388e-01, ..., 2.2729e-02, 1.8613e+00, 4.6382e-01], [2.0489e+00, 4.9627e-01, 1.0688e+00, ..., 1.2870e+00, 4.2555e+00, 6.5962e-01], ..., [4.7444e-02, 1.7521e-01, 9.8799e-01, ..., 1.7306e-01, 4.7791e-01, 9.8417e-01], [5.8264e+00, 2.9936e-01, 6.4415e-01, ..., 1.5320e+00, 8.1047e-02, 1.1640e+00], [2.4262e-02, 3.5514e-01, 3.5209e-01, ..., 8.9354e-01, 2.3711e+00, 1.9904e-01]]], [[[8.9773e-01, 1.8652e-01, 2.3964e+00, ..., 1.5578e-01, 4.3828e-01, 1.4039e-01], [9.0407e-01, 3.1064e-01, 5.5755e-01, ..., 2.1385e+00, 3.2113e-01, 1.1566e+00], [4.6947e-01, 8.2999e-01, 3.8598e-01, ..., 6.5852e-01, 3.8495e-01, 2.8922e+00], ..., [1.0772e-01, 4.0390e-01, 2.4569e+00, ..., 1.2215e+00, 1.4233e-01, 1.3202e+00], [1.8880e-02, 3.5397e-01, 4.5789e-01, ..., 3.6323e-01, 1.5102e+00, 1.8710e-01], [3.0939e-01, 1.5822e-01, 8.6607e-01, ..., 8.2385e-01, 1.7487e+00, 1.0783e+00]]]]) none_loss size: torch.Size([4, 1, 256, 256]) none_loss sum - sum_loss: 2.153683453798294e-09 sum_loss: 261975.25710102005 none_loss mean - mean_loss: 8.215650382226158e-15 none_loss mean - sum_loss/(4*1*256*256): 8.215650382226158e-15 mean_loss - sum_loss/(4*1*256*256): 0.0 When I commented out torch.set_default_tensor_type(torch.DoubleTensor), it outputs: none_loss: tensor([[[[3.5456e+00, 3.7387e+00, 8.4511e-01, ..., 8.3419e-01, 3.1007e-01, 6.0784e-01], [3.2497e-01, 7.1582e-02, 1.0087e+00, ..., 1.0328e+00, 1.1397e+00, 4.6922e-01], [9.5055e-01, 1.1585e+00, 5.0625e-01, ..., 9.2214e-02, 1.2421e+00, 1.0988e+00], ..., [8.3196e-01, 1.9343e+00, 3.8697e+00, ..., 5.8421e-01, 3.5900e-01, 1.4599e+00], [1.7304e+00, 3.9824e+00, 6.6815e-01, ..., 
8.6631e-01, 7.6646e-02, 7.7781e-01], [3.0506e+00, 1.0827e+00, 3.8251e+00, ..., 1.8362e+00, 4.5225e-01, 2.1761e-01]]], [[[6.5931e-01, 2.4224e+00, 2.9061e-01, ..., 1.5686e+00, 4.9234e+00, 7.5245e-01], [4.5296e-01, 7.7876e-01, 3.3517e-01, ..., 4.0715e-01, 1.8543e-01, 5.9114e-02], [1.2131e+00, 1.4136e+00, 5.6655e-01, ..., 1.6677e+00, 1.1074e-01, 3.2634e-01], ..., [2.0635e+00, 6.5898e-02, 5.1794e-01, ..., 2.5420e-01, 1.9394e-01, 1.3224e-01], [7.3909e-01, 8.0544e-01, 1.9719e+00, ..., 1.1651e+00, 1.4917e+00, 1.2840e-01], [1.1194e+00, 3.3584e-01, 6.4722e-01, ..., 1.1440e+00, 2.4911e-01, 4.8057e-01]]], [[[3.3183e-01, 2.2492e-01, 1.5291e+00, ..., 9.8453e-01, 4.4725e-01, 1.3618e-01], [7.1685e-01, 1.9812e+00, 3.0295e-01, ..., 4.9616e-01, 4.2325e-01, 1.2133e+00], [1.1641e+00, 5.6044e-01, 3.5851e-01, ..., 8.3811e-01, 2.6152e-01, 1.2948e+00], ..., [4.4844e-01, 1.6640e+00, 1.4553e+00, ..., 3.4472e+00, 2.0390e-03, 5.9021e-01], [2.0639e-03, 7.7447e-01, 7.5002e-01, ..., 5.5059e-01, 1.1729e+00, 4.1135e-01], [3.1777e-01, 1.4453e-01, 1.4527e+00, ..., 1.9885e-01, 1.9115e+00, 2.5113e+00]]], [[[5.8665e-01, 7.0809e-01, 2.9890e-01, ..., 2.6212e-01, 2.9796e-01, 2.8047e-01], [1.4107e+00, 9.7047e-03, 2.5597e-01, ..., 4.2398e-01, 1.3963e+00, 1.2573e-01], [3.5281e-01, 8.8076e-01, 1.1918e-01, ..., 1.3587e+00, 2.2086e+00, 9.7953e-01], ..., [1.7415e-01, 2.1369e-01, 1.1759e+00, ..., 3.1513e-01, 2.6444e+00, 6.0460e-01], [4.3002e-01, 8.0646e-02, 5.3878e-01, ..., 3.2743e-01, 3.4625e+00, 8.5545e-01], [1.4224e-01, 1.2358e+00, 1.2238e+00, ..., 7.6897e-02, 1.5435e+00, 2.6061e-01]]]]) none_loss size: torch.Size([4, 1, 256, 256]) none_loss sum - sum_loss: 2.1875 sum_loss: 262595.4375 none_loss mean - mean_loss: 8.344650268554688e-06 none_loss mean - sum_loss/(4*1*256*256): 8.344650268554688e-06 mean_loss - sum_loss/(4*1*256*256): 0.0 The order of magnitude of the difference has changed from -15 to -6.
st82776
I was wondering if it is possible to get the input and output activations of a layer given its parameter names. For example, assume a weight tensor is called module.fc3.weights. Can I access the inputs and outputs of the layer which contains the said weight tensor? I only need to do this once for a pretrained neural network, and therefore good performance is not a concern.
st82777
You could use forward hooks and use the parameter name to register them. Let me know if that would work for you.
st82778
Thank you for the answer. This should solve the problem. I just need to find a method to iterate over all layers within the neural network and add this hook automatically. I have two follow-up questions: If the activation function is defined in a container like nn.Sequential, e.g. nn.ReLU(), does the output take that activation function into account or not? In other words, is the output captured before or after the activation function is applied? Would I be able to combine multiple layers into one? For example, if a layer is followed by batch normalization, can I get the output after batch normalization is applied?
st82779
This approach might work to register hooks for all modules. If you are using out-of-place activations, the non-linearity will be applied to the input and will return the output, which you could clone into your dict. However, if you are using inplace=True, note that the input will also be manipulated in-place. Yes, you can pass any module (containing other submodules) and register the hook on this outer module to capture its output.
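A minimal sketch of registering forward hooks on every leaf module to record inputs and outputs (the toy model and variable names here are just for illustration):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),           # out-of-place, so the stored input stays untouched
    nn.Linear(20, 5),
)

activations = {}

def make_hook(name):
    def hook(module, inp, out):
        # inp is a tuple of the module's inputs; out is its output
        activations[name] = (inp[0].detach().clone(), out.detach().clone())
    return hook

for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # leaf modules only
        module.register_forward_hook(make_hook(name))

x = torch.randn(4, 10)
_ = model(x)
print(list(activations.keys()))  # ['0', '1', '2']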
st82780
I am not sure how models and optimizers work together in PyTorch. Here is the thing: I save my model (the full model, not the state_dict of the model) and I save the optimizer's state_dict: torch.save({ 'model': model, # it saves the whole model 'optimizer_state_dict': optimizer.state_dict(), 'lr_scheduler_state_dict': lr_scheduler.state_dict(), }, save_path) Then I load my model, freeze some layers, define the optimizer again and then load the optimizer's state_dict: ckpt = torch.load(save_path) model = ckpt['model'] for name, param in model.named_parameters(): if ('layer4' in name) or ('fc' in name): param.requires_grad = True else: param.requires_grad = False optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr = lr) optimizer.load_state_dict(ckpt['optimizer_state_dict']) exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1) exp_lr_scheduler.load_state_dict(ckpt['lr_scheduler']) It throws an ugly error: ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group Can you please help me get it corrected? And why do we even need to save the state_dict of the optimizer and the scheduler?
st82781
It could be that you filter out some parameters in the new optimizer, resulting in a mismatch. Removing the filter should resolve the issue. Saving the state_dict of the optimizer and scheduler lets you resume training.
st82782
@liyz15, thanks for replying. I get it now that the error is because of the parameter mismatch, but I cannot remove the filter, since I am freezing some layers of the model. What if, after freezing the layers, I define a new optimizer? Would it be any different from loading the state_dict of the previously saved optimizer?
st82783
I believe setting requires_grad=False should be enough to freeze. See https://stackoverflow.com/questions/53159427/pytorch-freeze-weights-and-update-param-groups As long as you set requires_grad=False before the forward pass and use optimizer.zero_grad(), the parameter won't be updated. The Adam optimizer adapts the learning rate per parameter, so it has internal state that changes with training; a freshly created optimizer is different from a trained one. Generally, if you want to continue training, load from the state_dict.
st82784
Hi @liyz15, thanks for the explanation once again! But what do you suggest I do in my case, where I have to freeze layers after some epochs of training? Do you suggest defining a new optimizer after freezing the layers of the model?
st82785
What's the purpose of freezing layers? If you are trying to finetune on a different dataset, a new optimizer is preferred. If it's a training technique to freeze some layers during training, then continue with the same one. BTW, directly saving the entire model may not be the best practice; saving model.state_dict() is recommended, see https://github.com/pytorch/pytorch/blob/761d6799beb3afa03657a71776412a2171ee7533/docs/source/notes/serialization.rst
st82786
Yes, freezing layers is a technique for training a model. So I train the model for, let's say, 5 epochs on the last 3 layers, then train further for 3 epochs on only the last 2 layers (freezing the third-last layer), and so on. And saving model.state_dict() does not save the requires_grad attribute of the model's parameters, whereas saving the entire model does save it. Saving the entire model works as long as you are not changing the architecture of the model itself. So, given that I have to freeze layers after a few epochs, do I have any option other than defining a new optimizer?
st82787
Just use the old one, no need for filtering, requires_grad=False will do freezing.
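For reference, a minimal sketch of that suggestion, reusing the layer names from the snippets above; note that zero_grad(set_to_none=True) is only available in newer PyTorch releases:

# Assume `model` and `optimizer` (e.g. Adam over model.parameters())
# already exist from the earlier epochs.
for name, param in model.named_parameters():
    # freeze everything except the last block and the classifier
    param.requires_grad = ('layer4' in name) or ('fc' in name)

# Continue training with the same optimizer. Frozen parameters receive no
# new gradients; clearing gradients with optimizer.zero_grad(set_to_none=True)
# makes the optimizer skip parameters whose .grad is None.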
st82788
I want to build a model that evaluates two loss functions with the same l1 metric. which way is a more pythonic method(or Pytorch method?) # status 1 criterion1 = nn.L1Loss() criterion2 = nn.L1Loss() criterion1(input_1,output_1) criterion2(input_2,output_2) loss.backward() # status 2 criterion1 = nn.L1Loss() criterion1(input_1,output_1) criterion1(input_2,output_2) loss.backward()
st82789
Solved by gioperin in post #2 Both solutions are correct and wrong at the same time, because you are not defining the variable loss. If the two criteria are exactly the same one, you can create just one object and do: criterion = nn.L1Loss() loss = criterion(input_1,output_1) loss += criterion(input_2,output_2) loss.backward(…
st82790
Both solutions are correct and wrong at the same time, because you are not defining the variable loss. If the two criteria are exactly the same one, you can create just one object and do: criterion = nn.L1Loss() loss = criterion(input_1,output_1) loss += criterion(input_2,output_2) loss.backward()
st82791
loss = nn.MSELoss() input = torch.randn(3, 5, requires_grad=True) target = torch.randn(3, 5) output = loss(input, target) output.backward() If I do this? output = loss(target, input) Swap target and input. Will this be a mistake?
st82792
I would say that it depends on the loss: for the MSE loss it is the same because mse(a,b) = mse(b,a), as you are squaring the error. The same holds for the MAE criterion, because you are taking the absolute value. But be careful: this does not hold for measures such as the Kullback-Leibler divergence, for which KL(a,b) != KL(b,a).
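A quick numerical check of that point (a rough sketch; note that F.kl_div expects log-probabilities as input and probabilities as target):

import torch
import torch.nn.functional as F

a = torch.rand(3, 5)
b = torch.rand(3, 5)

print(torch.allclose(F.mse_loss(a, b), F.mse_loss(b, a)))  # True
print(torch.allclose(F.l1_loss(a, b), F.l1_loss(b, a)))    # True

# KL divergence is not symmetric
p = torch.softmax(torch.rand(3, 5), dim=1)
q = torch.softmax(torch.rand(3, 5), dim=1)
kl_pq = F.kl_div(q.log(), p, reduction='batchmean')  # KL(p || q)
kl_qp = F.kl_div(p.log(), q, reduction='batchmean')  # KL(q || p)
print(torch.allclose(kl_pq, kl_qp))  # almost surely False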
st82793
I want to learn the hidden state of the RNN but have no idea how to do it. The shape of the hidden state depends upon the batch_size and due to that I am unable to make any Parameter for it. How can I do that?
st82794
I have a resnet that uses convolutions and nn.ReflectionPad1D. The data that I have is 1D (1045 long) with 2 channels (real and imaginary). All of the padding layers are defined using an integer, so the built-in functions convert that into a paired tuple, i.e. equal padding on both sides of the 1D vector. When I check the dimensions of the input using “input.dim()”, it of course equals 2: [1045, 2]. Here is the relevant portion of the error message: File “/home/john/Documents/Research/GAN/modules/loss_networks.py”, line 112, in setup input = autograd.Variable(gen(input).data) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 477, in call result = self.forward(*input, **kwargs) File “/home/john/Documents/Research/GAN/modules/generator_networks.py”, line 63, in forward return self.model(input) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 477, in call result = self.forward(*input, **kwargs) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/modules/container.py”, line 91, in forward input = module(input) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 477, in call result = self.forward(*input, **kwargs) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/modules/padding.py”, line 163, in forward return F.pad(input, self.padding, ‘reflect’) File “/home/john/.local/lib/python3.6/site-packages/torch/nn/functional.py”, line 2181, in pad raise NotImplementedError(“Only 3D, 4D, 5D padding with non-constant padding are supported for now”) NotImplementedError: Only 3D, 4D, 5D padding with non-constant padding are supported for now Process finished with exit code 1 I looked at the source code and it says that 1D padding using reflection/replicaiton requires a 3D input tensor. I thought the point of nn.Reflection1DPad was to pad a 1D tensor, so I don’t understand why it requires a 3D input. Can someone please explain what I am doing wrong? If you need anymore information (code, etc.), I’ll be happy to provide whatever you need. Thank you in advance.
st82795
Solved by liyz15 in post #2 Check the doc, https://pytorch.org/docs/stable/nn.html#torch.nn.ConstantPad1d, input should be like (N, C, W_{int}). The first dimension should be batch size, in you case N=1 so just reshape the input.
st82796
Check the doc, https://pytorch.org/docs/stable/nn.html#torch.nn.ConstantPad1d: the input should be of shape (N, C, W_in). The first dimension is the batch size; in your case N=1, so just reshape the input.
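For illustration, a small sketch of adding the missing batch dimension to a (length, channels) signal before a 1D padding/conv stack; the sizes match the ones mentioned above:

import torch
import torch.nn as nn

signal = torch.randn(1045, 2)   # (length, channels) as loaded
x = signal.t().unsqueeze(0)     # -> (N=1, C=2, W=1045)

pad = nn.ReflectionPad1d(3)
out = pad(x)
print(out.shape)                # torch.Size([1, 2, 1051])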
st82797
according to broadcasting rules If two tensors x , y are “broadcastable”, the resulting tensor size is calculated as follows: If the number of dimensions of x and y are not equal, prepend 1 to the dimensions of the tensor with fewer dimensions to make them equal length. Then, for each dimension size, the resulting dimension size is the max of the sizes of x and y along that dimension. x = torch.randn(1, 1, 0) y = torch.randn(4, 1, 1) (x + y).shape torch.Size([4, 1, 0]) according to the above rules, shouldn’t it have been 4, 1, 1, just y, or give an error?
st82798
This one gives [4, 1, 2]: x = torch.randn(1, 1, 1) y = torch.randn(4, 1, 2) (x+y).shape and this one gives an error: x = torch.randn(1, 1, 0) y = torch.randn(4, 1, 2) (x+y).shape
st82799
vainaijr: If the number of dimensions of x and y are not equal, prepend 1 to the dimensions of the tensor with fewer dimensions to make them equal length. We can also take a look at the numpy broadcast rules, which state: they are equal, or one of them is 1. Both conditions are satisfied in your example: >>> x_np = np.random.rand(4,1,0) >>> y_np = np.random.rand(4,1,1) >>> x_np + y_np array([], shape=(4, 1, 0), dtype=float64) A dim of 0 is a special case that makes your tensor's volume 0. Apparently, 0 is considered to be "bigger" than 1 in broadcasting, so the resulting dim is 0*1=0.
st82800
But adding an empty tensor to some tensor should return some tensor. I mean, adding something to nothing should return something; why does it return nothing? Or it should give an error saying that the addition of these two tensors is not possible.
st82801
It is not actually like adding something to nothing; x = torch.randn(1, 1, 0) is indeed empty, but it has a shape. "Then, for each dimension size, the resulting dimension size is the max of the sizes of x and y along that dimension." I don't know where you read this, but it's incorrect. To broadcast, two tensors must meet the rules, check https://pytorch.org/docs/stable/notes/broadcasting.html. For unequal dimensions, the result should be the dimension which is not 1 (generally the larger one, except for 0 I guess). Take the broadcasting rules this way: if two dimensions are not equal and one of them is 1, repeat the one with dimension 1 to make them match. So when performing x + y with shapes [1, 1, 0] and [4, 1, 1], imagine that: Along the first dimension, their shapes are not equal and x has dimension 1, so x is repeated 4 times, making it [4, 1, 0] (which is still empty). Along the second dimension, their shapes are equal, so skip. Along the third dimension, their shapes are not equal and y has dimension 1, so y is repeated 0 times, making it [4, 1, 0] (and it becomes empty). Then we get an empty tensor with size [4, 1, 0].
st82802
Let’s say I have the following example (modified from [Data parallel tutorial])(https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html#simple-model 2 ) class Model(nn.Module): # Our model def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, output_size) self.f = torch.ones(1) def forward(self, input): output = self.fc(input) * self.f print(self.f) print("\tIn Model: input size", input.size(), "output size", output.size()) return output wrapped in a nn.DataParallel. This results in this error when doing a forward pass, because of RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #2 'other' If I try to call .cuda() on the f field, it goes on the first cuda device, and then the forward pass does not work because they are on different devices: class Model(nn.Module): # Our model def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, output_size) self.f = torch.ones(1).cuda() def forward(self, input): output = self.fc(input) * self.f print(self.f) print("\tIn Model: input size", input.size(), "output size", output.size()) return output Log: RuntimeError: arguments are located on different GPUs at So how do I use data parallel with the functional API?
st82803
Solved by ptrblck in post #2 Most likely self.f is not pushed to the right device, since it is neither registered as an nn.Parameter nor as a buffer (using self.register_buffer). Use the former case, if self.f should require gradients and the latter if not. The manual cuda() call inside your __init__ method won’t work, as the…
st82804
Most likely self.f is not pushed to the right device, since it is neither registered as an nn.Parameter nor as a buffer (using self.register_buffer). Use the former case, if self.f should require gradients and the latter if not. The manual cuda() call inside your __init__ method won’t work, as the model will be sent to each GPU which was passed as device_ids. If you want to manually push the tensor (not necessary in your use case), you could use: output = self.fc(input) * self.f.to(input.device) in your forward.
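A minimal sketch of the buffer version of the model from this thread, so DataParallel replicates self.f onto each device automatically (use nn.Parameter instead if it should be trainable):

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)
        # registered as a buffer: moved by .to()/.cuda() and replicated
        # by nn.DataParallel, but not updated by the optimizer
        self.register_buffer('f', torch.ones(1))
        # if it should require gradients instead:
        # self.f = nn.Parameter(torch.ones(1))

    def forward(self, input):
        return self.fc(input) * self.f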
st82805
Thanks! That is exactly what the problem was. I am new to PyTorch and I had no idea I had to register it, but it definitely makes sense though, how would PyTorch know otherwise.
st82806
Greetings pytorchers. I am trying to run pytorch on a machine with eight Testla K10G1.8GB GPUs (confirmed by nvidia-smi). CUDA is installed correctly and the samples from nvidia run (as do my own .cu tests). A simple example gives me a runtime error, that I am finding difficult to comprehend. Pointers on how to debug this problem (details are below) would be awesome. Thank you. Here goes… uname-ar: Linux 4398392dfc97 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux torch.version: 0.5.0a0+ef477b2 (compiled from source following instructions on github) Command (I see the -DWITH_CUDA compiler flag flash by in gcc): CUDA_HOME=/usr/local/cuda python setup.py install Additional (in the pytorch directory): grep -R “-DWITH_CUDA” ./*: ./setup.py: extra_compile_args += [’-DWITH_CUDA’] ./tools/cpp_build/libtorch/CMakeLists.txt: add_definitions(-DWITH_CUDA) ./torch/lib/THD/CMakeLists.txt: ADD_DEFINITIONS(-DWITH_CUDA=1) ./torch/lib/build/THD/CMakeFiles/THD.dir/flags.make:CXX_DEFINES = -DWITH_CUDA=1 -DWITH_GLOO=1 -D_THD_CORE=1 nvcc --version: NVIDIA ® Cuda compiler driver Copyright © 2005-2016 NVIDIA Corporation Built on Tue_Jan_10_13:22:03_CST_2017 Cuda compilation tools, release 8.0, V8.0.61 gcc -v: Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper Target: x86_64-linux-gnu Code (test.py): import torch n_devices = torch.cuda.device_count () a = torch.cuda.FloatTensor([1.]) Output: THCudaCheck FAIL file=/home/tyoung/usr/src/pytorch/aten/src/THC/THCGeneral.cpp line=71 error=38 : no CUDA-capable device is detected Traceback (most recent call last): File “cuda.py”, line 4, in a = torch.cuda.FloatTensor([1.]) File “/home/tyoung/usr/bin/anaconda3/lib/python3.6/site-packages/torch/cuda/init.py”, line 161, in _lazy_init torch._C._cuda_init() RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /home/tyoung/usr/src/pytorch/aten/src/THC/THCGeneral.cpp:71 Note, if I do the same thing on a different machine with one GeForce GTX 680 and run CUDA_VISIBLE_DEVICES=1 python test.py ie., point it to a nonexistant device, I get the same output as above. Naturally, there, CUDA_VISIBLE_DEVICES=0 python test.py runs code correctly. How (or why?) are my Testla K10s hiding from (py)torch? Best, Toby
st82807
While building from source, did you get any information regarding the found CUDA version and its location etc.?
st82808
Thank you for the response. I will have another try tomorrow and post any configure/compile indicators I find. Best, T
st82809
Good pointer @ptrblck ! Thanks, that helped me to solve the issue. After searching through the configuration scripts this morning, I found some dodgy looking links to cuda library. A careful, simple, fresh install from scratch and pytorch works just great and as expected. Best, T
st82810
I have implemented a PyTorch NN code for classification and regression. Classification: a) Use stratifiedKfolds for cross-validation (K=10- means 10 fold-cross validation) I divided the data: as follows: Suppose I have 100 data: 10 for testing, 18 for validation, 72 for training. b) Loss function = CrossEntropy c) Optimization = SGD d) Early Stopping where waittime = 100 epochs. Problem is: Baseline Accuracy = 51% Accuracy on Training set = 100% Accuracy on validation set = 90% Accuracy on testing set = 72% I don’t understand what are the reasons behind the huge performance difference in Testing data/ Validation data? How can I solve this problem? Regression: a) use the same network structure b) loss function = MSELoss c) Optimization = SGD d) Early Stopping where waittime = 100 epochs. e) Use K-fold for cross-validation. I divided the data: as follows: Suppose I have 100 data: 10 for testing, 18 for validation, 72 for training. > Problem is: Baseline MSE= 14.0 Accuracy on Training set = 0.0012 Accuracy on validation set = 6.45 Accuracy on testing set = 17.12 I don’t understand what are the reasons behind the huge performance difference in Testing data/ Validation data? How can I solve these problems? or Is this an obvious thing for NN/ depend on particular dataset?
st82811
You might see some generalization error in your first example. Chapter 40 of Ng's Machine Learning Yearning might give you some more information, as might this Wikipedia article.
st82812
I notice that there are torch.distributions.sample(), torch.distributions.log_prob() and the like in the PyTorch documentation. They can be used for sampling data and calculating the log probability in reinforcement learning, right? Why do I seldom see anyone use them? Is it because there are some mistakes in the torch.distributions gradient?
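For reference, a small sketch of how sample() and log_prob() are typically used for a policy-gradient style loss; the logits tensor and the scalar reward below are just placeholders for a real policy network and return:

import torch
from torch.distributions import Categorical

logits = torch.randn(1, 4, requires_grad=True)  # stand-in for a policy net output
dist = Categorical(logits=logits)

action = dist.sample()            # sampling itself is not differentiated
log_prob = dist.log_prob(action)  # differentiable w.r.t. logits

reward = 1.0                      # placeholder return
loss = -(log_prob * reward).mean()
loss.backward()
print(logits.grad.shape)          # torch.Size([1, 4])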
st82813
Are there any guidelines/articles as how to choose the cutoffs for adaptive softmax? The class is here: https://pytorch.org/docs/stable/_modules/torch/nn/modules/adaptive.html 15. Thanks in advance.
st82814
I have a set of strings of sequential models e.g. 'Sequential( (conv1): Conv2d(3, 4, kernel_size=(3, 3), stride=(1, 1)) (relu1): ReLU() (conv2): Conv2d(4, 2, kernel_size=(3, 3), stride=(1, 1)) (relu2): ReLU() (Flatten): Flatten() (fc): Linear(in_features=1568, out_features=10, bias=True) )' I want to be able to parse it to extract the layers, the hyper parameters and sometimes even make a new model with that setting. Is there an easy way to do this in PyTorch?
st82815
There’s always regex! It looks as though you could split the string on whitespace + open bracket to have separate strings for each layer. Then write functions to read each type of layer. Not a particularly quick or clean solution though… I don’t work with sequential models much, but is this basically the syntax you’d use to define one? If so, to make a new model with this setting you could just run mymodel = eval(mystring)
st82816
Hi, I am getting getting the following error when I try to evaluate my model after training: In SegModel: False ------Epoch 51 | Step 71 | Train Loss 0.1662866771221161 | Validation Loss 0.48616326600313187 ----------- model loaded Traceback (most recent call last): File “tester.py”, line 61, in model.eval() File “/data/sbanerjee/anaconda_env/SV/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 1009, in eval return self.train(False) File “/data/sbanerjee/anaconda_env/SV/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 998, in train module.train(mode) TypeError: ‘bool’ object is not callable Here’s my code snippet: model = SegModel(train=False) if torch.cuda.device_count() > 1: model = nn.DataParallel(model) model.to(device) load_path = ‘path/to/saved/model/model_best_0816.pth.tar’ checkpoint = torch.load(load_path) epoch = checkpoint[‘epoch’] train_step = checkpoint[‘train step’] train_loss = checkpoint[‘train_loss’] val_loss = checkpoint[‘val_loss’] print ("------Epoch {} | Step {} | Train Loss {} | Validation Loss {} -----------".format(epoch, train_step, train_loss, val_loss)) model.load_state_dict(checkpoint[‘state_dict’]) print (“model loaded”) model.eval() Any help would be greatly appreciated
st82817
It looks like you have replaced the internal self.train() method inside your model with a bool value: class MyModel(nn.Module): def __init__(self, train): super(MyModel, self).__init__() self.bn = nn.BatchNorm2d(3) self.train = train def forward(self, x): x = self.bn(x) return x Note that self.train() is defined as a method in all nn.Modules, so you should use another name.
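A sketch of one possible fix, simply renaming the attribute so it no longer shadows nn.Module.train(); the BatchNorm layer stands in for the real model internals:

import torch.nn as nn

class SegModel(nn.Module):          # name taken from the snippet above
    def __init__(self, is_train=True):
        super(SegModel, self).__init__()
        self.is_train = is_train    # does not shadow nn.Module.train()
        self.bn = nn.BatchNorm2d(3)

    def forward(self, x):
        return self.bn(x)

model = SegModel(is_train=False)
model.eval()                        # works again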
st82818
I have a tensor size of 1x2x32x32x32. I want to feed it into spatial transformation network using the tutorial in pytorch. I have change the size of fc based on my input size. The final size before send to the grid = F.affine_grid(theta, x.size()) is theta: (1,6) x (1,2,32,32,32) However, I got the error. How should I fix it? You can run my code in the colab at https://colab.research.google.com/drive/1MPzto7IJ3Z9yj5wo1trl3Sk9JxlI7EQr#scrollTo=mYdLsM7Dkf1j 6 ret = torch.affine_grid_generator(theta, size) RuntimeError: invalid argument 6: wrong matrix size at /opt/conda/conda-bld/pytorch-nightly_1555305720252/work/aten/src/THC/generic/THCTensorMathBlas.cu:494 This is my code from __future__ import print_function import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision from torchvision import datasets, transforms import matplotlib.pyplot as plt import numpy as np plt.ion() # interactive mode class Net(nn.Module): def __init__(self): super(Net, self).__init__() # Spatial transformer localization-network self.localization = nn.Sequential( nn.Conv3d(2, 8, kernel_size=7), nn.MaxPool3d(2, stride=2), nn.ReLU(True), nn.Conv3d(8, 10, kernel_size=5), nn.MaxPool3d(2, stride=2), nn.ReLU(True) ) # Regressor for the 3 * 2 affine matrix self.fc_loc = nn.Sequential( nn.Linear(10 * 4 * 4 * 4, 32), nn.ReLU(True), nn.Linear(32, 3 * 2) ) # Initialize the weights/bias with identity transformation self.fc_loc[2].weight.data.zero_() self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)) # Spatial transformer network forward function def stn(self, x): xs = self.localization(x) print (xs.size()) xs = xs.view(-1, 10 * 4 * 4 * 4) theta = self.fc_loc(xs) theta = theta.view(-1, 2, 3) grid = F.affine_grid(theta, x.size()) x = F.grid_sample(x, grid) return x def forward(self, x): # transform the input x = self.stn(x) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = Net().to(device) x = torch.rand(1,2,32,32,32).to(device) print('Input shape', x.shape) x_stn = model(x)
st82819
Hello, I need a function that computes the full log poisson loss, but can’t find anything useful for PyTorch. The TF equivalent would be tf.nn.log_poisson_loss(targets, log_input, compute_full_loss=True). Is there anything already implemented for PyTorch? Thanks!
st82820
I implemented it like in TF as such: def log_poisson_loss(targets, log_input, compute_full_loss): if targets.size() != log_input.size(): raise ValueError( "log_input and targets must have the same shape (%s vs %s)" % (log_input.size(), targets.size())) result = torch.exp(log_input) - log_input * targets if compute_full_loss: point_five = 0.5 two_pi = 2 * math.pi stirling_approx = (targets * torch.log(targets)) - targets + (point_five * torch.log(two_pi * targets)) zeros = torch.zeros_like(targets, dtype=targets.dtype) ones = torch.ones_like(targets, dtype=targets.dtype) cond = (targets >= zeros) & (targets <= ones) result += torch.where(cond, zeros, stirling_approx) return result
st82821
Greetings, I have to compile pytorch from source (with cuda 8.0 and cuDnn 7) because our cluster ships with an ancient ScientificLinux (glibc 2.12) and I was wondering what the most up to date way to compile from source is? Is running setup.py still appropriate or is build_anaconda.sh recommended? In addition, there are hard-coded paths for gcc/g++ in nccl and possibly other third party dependencies that interfere with the location of the compilers on the system (/usr/bin/cc is too old for nvcc and the actual default cc returned by which is at some other location). Is there a nice way to fix this? Unfortunately, it isn’t easily possible to create a clean build environment just for building pytorch.
st82822
python setup.py install is still recommended. I’m not sure what build_anaconda.sh is. You can specify cc and gcc paths like the following: CC=clang CXX=clang++ python setup.py install (replace clang and clang++ with paths to your compilers)
st82823
build_anaconda.sh seems to be the original build script of caffe2 but supposedly builds a combined caffe2+pytorch wheel with the -integrated flag. I thought setting CC/CXX on the top level did nothing but apparently that was just caused by some non-purged previous Makefiles.
st82824
Hello, Is there a way to build from source via setup.py for only my userspace on Linux? (i.e. to avoid any permission denied issues while building the source).
st82825
I have a basic doubt regarding the syntax of the code while passing the input to the Linear model. Following is the code that I got from documentation. m = nn.Linear(20, 30) //Line1 input = torch.randn(128, 20)//Line2 output = m(input) //Line3 Now my doubt is: in Line1, m is the object that we created by sending the parameters (20, 30) to the init of the nn.Linear class. Now, in Line3 while passing the inputs how are we able to pass the inputs by calling m(inputs), as m is an object? Instead, what I think should have been the correct procedure for passing the inputs is: output = m.forward(input) Please clarify their approach as its a bit unclear to me and this is something that they have used throughout the documentation.
st82826
Solved by ptrblck in post #2 If you call the model directly, the internal __call__ method will be called, which will register hooks etc. and call forward itself. You should therefore not call forward manually, but let the module do it.
st82827
If you call the model directly, the internal __call__ method will be called, which will run any registered hooks etc. and call forward itself. You should therefore not call forward manually, but let the module do it.
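A tiny sketch of the difference, reusing the m = nn.Linear(20, 30) example from above: both calls return the same output here, but only the direct call goes through __call__ and runs the hook:

import torch
import torch.nn as nn

m = nn.Linear(20, 30)
m.register_forward_hook(lambda mod, inp, out: print("hook fired"))

x = torch.randn(128, 20)
out1 = m(x)           # goes through __call__: prints "hook fired"
out2 = m.forward(x)   # bypasses __call__: no hook, no print
print(torch.allclose(out1, out2))  # True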
st82828
Hi all, I have a question for you. Do you know if it is possible to add dropout layers to an already trained neural network? I have a model trained without dropout, but I want to use dropout at inference time to estimate uncertainty. I managed to copy the weights from my trained model into a model with dropout layers, but predictions are messed up. This is probably because in PyTorch when dropout is set in train mode it will divide the weights by 1-p. Do you know if there is any way to deal with this issue? Or it is simply wrong to add dropout to an already trained model without retraining it? Thanks in advance for your help
st82829
Indeed, training with dropout needs to account for scaling, so the strategy is (with p the keep probability) to divide the weights by 1/p after training or to multiply the weights by 1/p during training (I don't know which one PyTorch uses). If you need to apply dropout during inference, you therefore need to compensate for the missing nodes in the network by multiplying the weights by 1/p on the affected layers. It will obviously produce wrong results, but that is what you want to measure. E.g., if p = 1/5, you multiply the weights by 5, and this simulates having all nodes active in the layer. See this answer for more details.
st82830
Exactly, what PyTorch does is to account for scaling at training time (multiplies the weights at training time by 1/(1-p_drop) and leaves weights untouched at test time). It means that if I add dropout layers in between in my trained model and at test time I apply dropout as in train mode, PyTorch directly scales correctly the weights to account for missing nodes. Now, my intuition is that the results are worse than what I expected because network weights were not trained to be “robust”, because dropout layers were not present during training. Hence, since by adding dropout at test time I get bad predictions, this means that my model is extremely uncertain about its predictions (I am using Monte Carlo dropout to estimate uncertainty). I don’t know if you are familiar with this technique, in case do you think that my deduction is correct? Thanks for your help
st82831
I haven’t used this approach in practice, but I can understand the procedure. I doubt you can measure confidence using dropout however, but you can evaluate the resilience to co-adaptation between neurons, and therefore the regularization of the network… As to the actual procedure, I think you really need to scale by hand, because as you said, PyTorch doesn’t do anything with dropout at test time. Therefore, you will have fewer nodes in the layer, which won’t contribute as much to the next layer’s input as in the default network. This will produce very different results than the untouched network. Note that it’s easier to use nn.functional.dropout if you want it active during evaluation, as model.eval() will deactivate the nn.Dropout layers…
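For completeness, a rough sketch of one common Monte Carlo dropout variant that keeps dropout active at evaluation via the functional API (the network, p, and the number of samples are placeholders); F.dropout with training=True applies the usual 1/(1-p) rescaling itself:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):
    def __init__(self, p=0.5):
        super(MCDropoutNet, self).__init__()
        self.fc1 = nn.Linear(10, 50)
        self.fc2 = nn.Linear(50, 1)
        self.p = p

    def forward(self, x):
        x = F.relu(self.fc1(x))
        # training=True keeps dropout active even after model.eval()
        x = F.dropout(x, p=self.p, training=True)
        return self.fc2(x)

model = MCDropoutNet()
model.eval()
x = torch.randn(8, 10)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(20)])
print(samples.mean(0).shape, samples.std(0).shape)  # predictive mean / spread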
st82832
Suppose I have a memory which I want to update in the following manner: memory = torch.zeros(4) indices = torch.LongTensor([1, 2, 3, 1]) values = torch.FloatTensor([1, 10, 100, 1000]) memory[indices] += values print (memory) > tensor([ 0., 1000., 10., 100.]) Here, since there is a repetition in the indices the corresponding values do not get added and only the last one gets added. I can see why that can happen due to simultaneous updates. Is there an easy way to add all the values corresponding to repeated indices Basically, I want the answer to be tensor([ 0., 1001., 10., 100.])
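For what it's worth, one way that appears to give the accumulated result is index_add_ (scatter_add_ works similarly), which sums the contributions for repeated indices:

import torch

memory = torch.zeros(4)
indices = torch.LongTensor([1, 2, 3, 1])
values = torch.FloatTensor([1, 10, 100, 1000])

memory.index_add_(0, indices, values)
print(memory)  # tensor([   0., 1001.,   10.,  100.])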
st82833
Hey I made a model for time series forecasting. The model works very well. I tried different batch sizes 32, 50, 64, 100, 128 and 256. I got the best result for a batch size of 50. With a batch size of 32 the model still works, but it takes too long until the error converges. I tried multiple experiments with a batch size of 64. With that batch size the model is not able to learn. The MAE is very high, the training error decreases, but the validation error is very high. It seems, that the model is overfitting for a batch size of 64. So I made further experiments with a batch size of 100, 128 and 256. 100 works of, but not as good as 50. 128 shows the same results as 64. 256 shows good results in training and validation error, but not in the MAE. Do you have any ideas why 64 and 128 do not work? I know that models tend to overfit, when the batch size is too large, but this reason makes no sense for me in this case. Thanks for your help!
st82834
Is it okay if they are not permutation (I can still recover the permutation matix via torch.lu_unpack)? @vishwakftw, do you know if it’s fine? The final result is still correct. import torch a = torch.Tensor([ [-1.0357, 1.7709, -0.3924, 0.0632], [-0.0573, -1.9293, -0.7089, -0.8931], [-2.0286, 0.4177, -1.6017, 0.2191], [ 0.9666, -0.7376, 0.6699, 1.1101]] ) q = a.qr().Q LU, P = a.lu() print(P) # tensor([3, 2, 3, 4], dtype=torch.int32)
st82835
I would like to retrain models from torchvision.models, but they have inplace operations included. How can I change those to inplace=False?
st82836
OK, I simply copied the code from torchvision.models into a new class and changed it there.
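An alternative sketch that avoids copying the model code: walk the modules and flip the flag on the activation layers. This assumes the in-place ops are nn.ReLU modules; functional in-place calls inside forward() would still need the copy-and-edit approach:

import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False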
st82837
Hi all, I use torch.eq operation in my network recently. And I find there is no grad_fn for this function, which means the loss cannot backward. Therefore, I want to define a differentiable equal function by myself. Here is my code. class Eq(torch.autograd.Function): @staticmethod def forward(ctx, input, value): mask = torch.eq(input, value) idx = mask.nonzero() ctx._input_shape = input.shape ctx._input_dtype = input.dtype ctx._input_device = input.device ctx.save_for_backward(idx) return mask.float() @staticmethod def backward(ctx, grad_output): idx, = ctx.saved_tensors grad_input = torch.zeros(ctx._input_shape, device=ctx._input_device, dtype=ctx._input_dtype) grad_input[idx[:,0], idx[:,1], idx[:,2]] = grad_output[idx[:,0], idx[:,1], idx[:,2]] return grad_input, None However, I am not sure where it is right or wrong. And the funtion gradcheck seems not applicable to this situation. Can anyone help me solve this problem? Thanks,
st82838
Well, after careful consideration, the equal operation cannot backpropagate a loss. Trying to define such a function equals banging my head against a brick wall.
st82839
Hello! I am a bit confused about using torch.optim.lr_scheduler.CyclicLR. I see that by default it has 2000 steps up and 2000 down. This means (if I understand it right) that for this built-in case I need 4000 iterations in my code. So if I have more or fewer than that, I have to adjust the numbers by hand, such that the sum of steps up + steps down = number of iterations, right? Also, what happens if I keep it like this but I have more than 4000 iterations: do I get an error, or is the rest of the code run with the final value of the LR, i.e. the smallest one? Then, I know that in the paper that introduced the cyclical LR, after the down step the LR goes even further down by a few orders of magnitude, for something like 10% of the total number of iterations. How can I set this? I see no parameter for this percentage. Thank you!
st82840
You will not get any error if you don't keep steps up + steps down = number of iterations. The paper suggests doing this (step_size = 2*iterations) to get good results. The answer to the second question is: if you use mode = 'triangular2', your amplitude decreases by half every cycle. You can also use exp_range. You can read more about this here.
st82841
Thank you! For the second question, I was actually wondering if I can do something like this: [screenshot of the desired LR schedule] So after the "normal" cycle is over, the LR gets lower by a few orders of magnitude for a few more iterations.
st82842
smu226: So after the "normal" cycle is over, the LR gets lower by a few orders of magnitude for a few more iterations. What happens after it gets lowered? Will it start a new cycle?
st82843
No, it stops. It’s the one cycle policy from Leslie Smith’s paper. The plot shows the evolution of the LR over the whole training
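For reference, newer PyTorch versions ship torch.optim.lr_scheduler.OneCycleLR, which produces exactly this shape (ramp up, come back down, then anneal far below the initial LR at the tail end); the model, max_lr, and step counts below are placeholders:

import torch
from torch import nn, optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

scheduler = optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, total_steps=1000,
    pct_start=0.45, final_div_factor=1e4)

for step in range(1000):
    # forward/backward would go here
    optimizer.step()
    scheduler.step()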
st82844
So I have a text file of about 3M training samples, with each line representing a sample. First I tried a single process to read samples line by line in a mini-batch style, then feed them to my model. Then I found that the GPU usage is just around 20%, so I guess the reading process is too slow. After I implemented a parallel version using DataLoader and expect it to have a prominent speedup, it turned out to be more than 2x slower than the single process version. These are my code: # an iterable object of a training data file class SampleFile(object): def __init__(self, filePath): self.filePath = filePath def __iter__(self): with open(self.filePath) as file: while True: line = file.readline().rstrip("\n") if not line: break sample = self.parseSample(line) yield sample # DataSet object with (maybe) multiple data files class TrainingSet(IterableDataset): def __init__(self, dataFilePath, workerNum): super(TrainingSet).__init__() self.dataFilePath = dataFilePath self.workerNum = workerNum def __iter__(self): workerInfo = torch.utils.data.get_worker_info() if workerInfo is None: # single process, just read the whole data return iter(SampleFile(self.dataFilePath)) else: # read splitted data file generated by Linux `split` command workerNumLen = len(str(self.workerNum-1)) suffix = (workerNumLen-len(str(workerInfo.id))) * '0' + str(workerInfo.id) partFilePath = self.dataFilePath + suffix return iter(SampleFile(partFilePath)) Any one could give me a hint? Thanks
st82845
Hello, I'm studying PyTorch hard, but I have a problem that I can't solve myself. ###################################################################### sequence_Input = keras.layers.Input(shape=(None, num_x_signals,), dtype='float32', name='sequence_Input') RNN_Flow = Bidirectional(GRU(250, return_sequences=True, activation='relu'))(sequence_Input) RNN_Flow = Bidirectional(GRU(250, return_sequences=True, activation='tanh'))(RNN_Flow) additional_Input = keras.layers.Input(shape=(None, num_add_signals,), dtype='float32', name='addtional_input') NEW_Flow = keras.layers.Concatenate(axis=2)([RNN_Flow, additional_Input]) output_Main = Dense(100, activation='relu')(NEW_Flow) output_Main = Dense(100, activation='tanh')(output_Main) output_Main = Dense(num_y_signals, activation='sigmoid', name='output_Main')(output_Main) model = keras.models.Model(inputs=[sequence_Input, additional_Input], outputs=output_Main) ####################################################################### I want to convert this Keras model to PyTorch. Please help me; I really need your help.
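Not a full answer, but a rough PyTorch sketch of the same structure. One mismatch to note: PyTorch's nn.GRU has no per-layer activation argument (it uses tanh internally), so the 'relu' activation of the first Keras GRU cannot be reproduced directly here; num_x_signals, num_add_signals and num_y_signals are placeholders:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_x_signals, num_add_signals, num_y_signals):
        super(Net, self).__init__()
        # two stacked bidirectional GRU layers, 250 units per direction
        self.gru = nn.GRU(num_x_signals, 250, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(2 * 250 + num_add_signals, 100)
        self.fc2 = nn.Linear(100, 100)
        self.out = nn.Linear(100, num_y_signals)

    def forward(self, seq_input, additional_input):
        rnn_out, _ = self.gru(seq_input)                  # (B, T, 500)
        x = torch.cat([rnn_out, additional_input], dim=2)
        x = torch.relu(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        return torch.sigmoid(self.out(x))

net = Net(num_x_signals=8, num_add_signals=3, num_y_signals=1)
y = net(torch.randn(4, 20, 8), torch.randn(4, 20, 3))
print(y.shape)  # torch.Size([4, 20, 1])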
st82846
Dear friends, what does "shared embedding" mean, and what is the difference between a shared embedding and a regular embedding in a sequence-to-sequence model? I'm developing a Grammatical Error Correction (GEC) model using PyTorch. The model is a convolutional neural network, similar to statistical machine translation (SMT) approaches. I have read that a shared embedding will improve GEC model results, but I didn't find a good article explaining what the shared embedding technique is and how to implement it.
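Not an authoritative answer, but "shared embedding" usually refers to weight tying: the encoder embedding, decoder embedding and/or the decoder's output projection reuse one weight matrix instead of learning separate ones (this assumes a shared source/target vocabulary). A minimal sketch, with placeholder sizes:

import torch.nn as nn

vocab_size, emb_dim = 10000, 512

src_embedding = nn.Embedding(vocab_size, emb_dim)
tgt_embedding = nn.Embedding(vocab_size, emb_dim)
output_proj = nn.Linear(emb_dim, vocab_size, bias=False)

# share one weight matrix across all three
tgt_embedding.weight = src_embedding.weight
output_proj.weight = src_embedding.weight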
st82847
I have been trying to compile PyTorch from source on Windows 10. I am following the steps as suggested on https://github.com/pytorch/pytorch#from-source 4. I have followed these steps to compile it with Ninja after having problems trying to compile it with the default steps mentioned. These are the steps I have followed after cloning pytorch. 1.set CMAKE_GENERATOR=Ninja 2.set USE_NINJA=ON 3.set CMAKE_GENERATOR_TOOLSET_VERSION=14.11 4.set DISTUTILS_USE_SDK=1 5.for /f “usebackq tokens=*” %i in ("%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,16^) -products * -latest -property installationPath) do call “D:\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat” x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION% 6.set CUDAHOSTCXX=D:\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.11.25503\bin\HostX64\x64\cl.exe 7.python setup.py install and after taking hours to compile I received this traceback from the anaconda console. error.PNG3840×2065 353 KB I have uploaded the Image from the traceback and I would really get some help on how to solve this as I am trying to go through the fastai course using my own gpu but it says that old gpus are not supported. I have a GTX 960M GPU and was asked to compile it from source to make it work. Thanks.
st82848
So I tried fixing the problem and found out that I had not downloaded and installed cuDNN. I installed it, reran python setup.py install, and it worked.
st82849
I’m implementing a neural network using pytorch. My loss function is a custom negative log likelihood which is easy to implement using pyro plate. How can I incorporate this loss (in pyro) into pytorch?
st82850
Could someone give an example of using autograd for computing a second-order derivative, along with what one would have to do if autograd were not used?
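A small sketch, using create_graph=True so the first derivative itself stays differentiable. Without autograd, one would have to differentiate analytically by hand (here y = x**3, so 3*x**2 and 6*x) or approximate numerically with finite differences:

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

# first derivative, kept in the graph so it can be differentiated again
dy_dx, = torch.autograd.grad(y, x, create_graph=True)
d2y_dx2, = torch.autograd.grad(dy_dx, x)

print(dy_dx.item(), d2y_dx2.item())   # 12.0 (3*x**2 at x=2), 12.0 (6*x at x=2)

# without autograd: analytic derivation, or e.g.
# (f(x+h) - 2*f(x) + f(x-h)) / h**2 as a numerical approximation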
st82851
I use a Docker environment to do some PyTorch training work, but frequently get stuck at .to('cuda') or .cuda() calls. I don't know why. Some system behaviors I notice: The training process has a CPU usage of 100%, and almost 97% of it is sys time as shown by top. I used strace to debug the training process, and also used the /proc file system, and found the process keeps getting stuck on a poll syscall on a pipe, but I don't know where the pipe is used in PyTorch.
st82852
Hi. I have a problem with a PyTorch hook function: as I mentioned in the title, my hook needs the current epoch in order to record some statistics that change every epoch. Is there any tricky method for solving this problem? For example, my_model = some_net() my_model.specific_layer.register_forward_hook(tricky_hook_for_recording_epoch) for epoch in range(epoch_size): for i, (data, label) in enumerate(dataloader): output = my_model(data) I'm using TensorBoard for PyTorch and this is why I need to record the epoch. Thanks for any answer!
st82853
Hi @FruitVinegar, pytorch/ignite might help you to achieve that easily. Please look here for a description of how it works.
st82854
But if you don't want to bother with Ignite, I guess you could also do this (note that the global declaration for prev_epoch has to go inside the hook, since the hook assigns to it): def tricky_hook_for_recording_epoch(module, input, output): global prev_epoch if epoch > prev_epoch: # TensorBoard stats, logging, etc prev_epoch = epoch my_model = some_net() my_model.specific_layer.register_forward_hook(tricky_hook_for_recording_epoch) prev_epoch = 0 for epoch in range(epoch_size): for i, (data, label) in enumerate(dataloader): output = my_model(data)
st82855
I was refactoring my spaghetti code with many parameters and libraries, so that Ignite looks helpful for me. Thank you for introducing that!
st82856
What's the recommended way in PyTorch to perform cross-validation? So far I've seen two ways: one is to use SubsetRandomSampler to make indices and pass those along to the DataLoader, and the other is to create your own method which makes indices for each fold, build a custom Dataset which returns a train/valid loader, and pass that to DataLoader. I guess both are valid approaches, but what I'm asking is: what's the native PyTorch recommended way, to avoid errors and be consistent with best practices? (Side note: I've read pretty much all posts I could find related to the subject but couldn't find a clarification on this topic.)
st82857
I'll try to answer my own question in case someone else finds it useful. One way to achieve this (although I'm not 100% sure this is the correct PyTorch way) is to use the Subset class to retrieve the corresponding data and targets according to a set of indices. For instance: for tr_idx, te_idx in kfold(...): train_subset = torch.utils.data.Subset(train_dataset, tr_idx) train_loader = torch.utils.data.DataLoader(train_subset, batch_size=64) for x, y in train_loader: model.train() y_hat = model(x)
st82858
Hi all, I'm looking to chop an existing network up into parts so I could later chain different parts together, e.g. model = densenet121() first_half = model[:6] second_half = model[6:] However, this has not been so easy. I've tried splitting the model using model.parameters, model.modules and model.features, but I've not been able to retain functionality when tied back together. What is the proper way to copy or disregard one half of a model?
st82859
I would recommend to derive a custom class using the Densenet121 implementation as the parent class. Inside the __init__, you could try to split the self.features sequential module and create your own forward method using these splits. Your code snippet would work for very simple models, which can be defined in an nn.Sequential module alone.
st82860
Hi, I’m trying to to run a big model (resnext-wsl 32x48d) on 8 GPUs with 12GB each, but I’m getting a memory error:“RuntimeError: CUDA out of memory. Tried to allocate 42.00 MiB (GPU 0; 11.75 GiB total capacity; 10.28 GiB already allocated; 2.94 MiB free; 334.31 MiB cached)” It seems that the data-parallel is splitting the memory in un even way, as can be shown in the memory layout printed a moment before crashing (before optimizer.step): As can be seen, GPU0 is overloaded while the rest still have some space. Is there anyway to solve this problem? is it possible that this model cannot be trained on gpus with “only” 12GB? Thanks.
st82861
The imbalanced memory usage and possible workarounds are described in this blog post by @Thomas_Wolf.
st82862
I need to use an equivalent of tf.dynamic_partition in PyTorch. Is there anything with similar functionality in PyTorch or other library or is there a simple and clever way to code it for PyTorch and work fast? The same for tf.dynamic_stitch. Thanks!
st82863
The following implementation works as equivalent of Vector Partitions in the tf documentation 3. import torch def dynamic_partition(data, partitions, num_partitions): res = [] for i in range(num_partitions): res += [data[(partitions == i).nonzero().squeeze(1)]] return res data = torch.Tensor([10, 20, 30, 40, 50]) partitions = torch.Tensor([0, 0, 1, 1, 0]) dynamic_partition(data, partitions, 2) There are ways of implementing this more efficiently, specially for big CUDA tensors. But maybe this is good enough for your application. Regarding the tf.dynamic_stitch, the following snippet works as well (it matches the input->output from the tf documentation 4). indices = [None] * 3 indices[0] = 6 indices[1] = [4, 1] indices[2] = [[5, 2], [0, 3]] data = [None] * 3 data[0] = [61, 62] data[1] = [[41, 42], [11, 12]] data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] data = [torch.tensor(d) for d in data] indices = [torch.tensor(idx) for idx in indices] def dynamic_stitch(indices, data): n = sum(idx.numel() for idx in indices) res = [None] * n for i, data_ in enumerate(data): idx = indices[i].view(-1) d = data_.view(idx.numel(), -1) k = 0 for idx_ in idx: res[idx_] = d[k]; k += 1 return res dynamic_stitch(indices, data) If these implementations don’t perform well enough for you application, consider the possibility of implementing an extension.
st82864
Hello, I want to implement a sparse neural network, is it fully supported by PyTorch ? Thanks
st82865
Hi, the below code increases the memory usage linearly, and at certain point I am not able to train the model. Surprisingly it is the first time I am facing problem with the following code? doubts: Vector images, Vector image is the only new data that is involved in the following code, commenting line which loads vector images makes the code run normally. I really have no idea, any hint or suggestion would be highly appreciated. Thank you, Nilesh Pandey class ImgAugTransform: def __init__(self): sometimes = lambda aug: iaa.Sometimes(0.5, aug) self.aug = iaa.Sequential([ iaa.Affine( translate_percent={"x":0.2, "y": 0.1}, rotate=40, mode='symmetric' ) ]) def __call__(self, img, img1, img2,img3): img = np.array(img) img1 = np.array(img1) img2 = np.array(img2) img3 = np.array(img3) return self.aug.augment_image(img), self.aug.augment_image(img1), self.aug.augment_image(img2),self.aug.augment_image(img3) class Dataset(data.Dataset): """Dataset for XXX. """ def __init__(self,height): super(Dataset, self).__init__() # base setting self.files = [] self.vector = [] with open("/home/XXX/train.txt", 'r') as f: for line in f.readlines(): im_name, v_name = line.strip().split() self.files.append(im_name) self.vector.append(v_name) self.masks = os.listdir("/home/XXX/") self.range = np.arange(len(self.masks)) self.rotate = ImgAugTransform() def name(self): return "XXX" def transformData(self, src, mask, target,ref_lr): if random.random() > 0.5: src, mask, target,ref_lr = self.rotate(src,mask, target,ref_lr) # Transform to tensor src = TF.to_tensor(src) mask = TF.to_tensor(mask) target = TF.to_tensor(target) ref_lr= TF.to_tensor(ref_lr) src = TF.normalize(src,(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) mask = TF.normalize(mask, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) target = TF.normalize(target,(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ref_lr = TF.normalize(ref_lr,(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) return src, mask, target,ref_lr def __getitem__(self, index): file = self.files[index] mask = self.masks[random.choice(self.range)] vector = self.vector[index] # person image targ = rescale_intensity(plt.imread(osp.join('/homeXXX/', file))/255)#targ = rescale_intensity(plt.imread(osp.join('/homeXXX', file))/255) vec = rescale_intensity(plt.imread(osp.join('/home/XXX', vector))/255) mask = rescale_intensity(plt.imread(osp.join('/home/XXX', 'maskA', mask))/255) targ = resize(targ,(256,256)) vec = resize(vec,(256,256)) mask = resize(mask,(256,256)) ms2 = mask*1 ms2 = np.expand_dims(ms2,axis=2) ms2 = np.repeat(ms2,repeats=3,axis=2) src =targ*(1-ms2)+ms2 src = Image.fromarray(np.uint8(src*255)) mask = Image.fromarray(np.uint8(ms2*255)) target = Image.fromarray(np.uint8(targ*255)) vec = Image.fromarray(np.uint8(vec*255)) source,mask,target,ref = self.transformData(src, mask, target, vec) return source,mask,target,ref
st82866
Solved by Nilesh_Pandey1 in post #10 Yes, CPU memory. This with the @ptrblck response made me understand what are the reasons for potential memory leak in pytorch. I will suggest read thru the links I have posted.
st82867
I am trying to run all my previous code and projects, and basically all of them cause RAM consumption to increase linearly. The change in my hardware setup is low storage space; currently I have less than 40 GB of storage. As far as I know that shouldn't matter, but does anyone think it could be related?