st84068
import torch

a = (torch.rand(3, 4) * 10).long()
print(a)
print(a * 0.9)

Output:

tensor([[8, 5, 3, 4],
        [1, 3, 6, 7],
        [8, 5, 8, 8]])
tensor([[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]])
st84069
Hi, I don’t think that’s a bug: the result keeps the tensor’s long dtype, so the scalar 0.9 is truncated to 0 before the multiplication, which is why you get all zeros.
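If fractional results are wanted, a minimal sketch is to cast to a floating-point dtype first (note that the exact type-promotion behaviour can differ between PyTorch versions):

import torch

a = (torch.rand(3, 4) * 10).long()
print(a.float() * 0.9)            # float result with fractional values
print((a.float() * 0.9).long())   # truncate back to long if integers are needed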
st84070
Which one is more PyTorch-like (better) code for nn.Sequential?

def __init__(self):
    self.layers = nn.Sequential(layer1, layer2, ...)

def forward(self, x):
    self.layers(x)

or

def __init__(self):
    self.layer1, self.layer2, ...

def forward(self, x):
    ss = nn.Sequential(layer1, layer2, ...)

Would both be okay?
st84071
You wouldn’t put it in such a structure. You could just write it as

ss = nn.Sequential(layer1, activation, layer2, ...)

nn.Sequential is a sequential container that allows you to build a neural network by specifying the sequential building blocks of the model.
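For completeness, here is a minimal sketch of the first pattern from the question (the layer sizes are made up). Building the nn.Sequential once in __init__ registers its parameters with the module; constructing it inside forward would create fresh, unregistered layers on every call.

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(10, 20),
            nn.ReLU(),
            nn.Linear(20, 2),
        )

    def forward(self, x):
        return self.layers(x)

net = Net()
print(net(torch.randn(3, 10)).shape)   # torch.Size([3, 2])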
st84072
Hi, I have a tensor like this:

tensor([[[[ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.]]],
        [[[ 9., 10., 11., 12., 13., 14., 15., 16.]]]])

and I want to do a summation without any loop, like this:

1+2 + 5+6
2+3 + 6+7
3+4 + 7+8

As the input tensor is in batch mode, the operation should be done in batch mode as well. I think it is possible by changing the shape of the tensor, but I do not know how to organize it. Do you have any idea about it? Thanks, best regards
st84073
Solved by zahra in post #2.
st84074
I think I could solve it. For anyone who has the same problem, here is the code:

a = torch.tensor([[[[ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.]]],
                  [[[ 9., 10., 11., 12., 13., 14., 15., 16.]]]])
print(a)
b = a.view(2, 1, 2, 4)
print(b)
c = b.unfold(2, 2, 2).unfold(3, 2, 2).contiguous().view(2, 2, 2, 2)
print(c)
d = c.sum(2, keepdim=True).sum(3, keepdim=True)
print(d)

Regards
st84075
I am using Gloo as the backend for distributed machine learning. I am curious about the implementation of torch.distributed.all_reduce in detail. Currently the official documentation does not talk about it. I wonder whether it is a ring-based all-reduce or tree-based all-reduce?
st84076
What is the most efficient way of selecting the square sub-matrix of a matrix x that is formed of the row and column intersections at some indices? Is there a faster approach than x[indices][:, indices]? I’m asking because I remember seeing this (seemingly unsolved) issue: https://github.com/pytorch/pytorch/issues/15245. But I’m not sure if it affects this case.
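One alternative worth trying (a sketch, not a benchmark) is to do the selection in a single advanced-indexing call instead of two chained ones; whether it is actually faster will depend on tensor sizes and the backend:

import torch

x = torch.arange(25.).reshape(5, 5)
idx = torch.tensor([0, 2, 4])

sub = x[idx[:, None], idx]            # rows and columns selected in one indexing op
assert torch.equal(sub, x[idx][:, idx])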
st84077
I want to calculate the mean along the y-axis, so the output should be 1x3. How should I do this? Thank you.
st84078
Solved by LeviViana in post #2

import torch
x = torch.rand(3, 3)
print(x.mean(1))
st84079
Here’s a DCGAN tutorial: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html When evaluating the performance, .eval() and .train() are completely ignored; there is no mention of them in the entire code. Why is that? Thanks. EDIT: here I can point to the exact line where I think this is missing, github.com/pytorch/tutorials/blob/master/beginner_source/dcgan_faces_tutorial.py#L646

# Output training stats
if i % 50 == 0:
    print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
          % (epoch, num_epochs, i, len(dataloader),
             errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())

# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
    with torch.no_grad():
        fake = netG(fixed_noise).detach().cpu()
    img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

iters += 1

######################################################################
# Results
st84080
I have also noticed this. Even in my own code, if I leave out eval() and train(), it doesn’t seem to have any effect on the result. But how we go about it does matter. We use eval() because we are not interested in updating the weights of the network, while in train() mode weight updates happen. If we can turn off the gradients using torch.no_grad() like in your example, so be it; it’s another way of approaching it, I think.
st84081
Balamurali_M: We use eval because we won’t be interested in updating the weight of the network.

My understanding is that .eval() tells the network to disable dropout and batchnorm layers, whereas the torch.no_grad() context disables gradient calculations. They are different concepts which happen to be used together during inference. One possible reason .eval() is missing is that there are no dropout or batchnorm layers?
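For reference, a self-contained toy sketch of how the two are typically combined at inference time (the tiny model here is made up, not from the tutorial):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.5), nn.BatchNorm1d(10))  # toy model

model.eval()                 # dropout is disabled, batchnorm uses its running stats
with torch.no_grad():        # no gradient bookkeeping for the forward pass
    out = model(torch.randn(4, 10))
model.train()                # switch back before resuming training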
st84082
Dropout and batchnorm also seem like a proper explanation. Can you let us know more about that?
st84083
It does have an effect. If I take 1000 examples from the generator, the images get whiter and whiter. To fix the performance I go to eval() mode.
st84084
When I import torch.optim.lr_scheduler in PyCharm, it shows AttributeError: module ‘torch.optim’ has no attribute ‘lr_scheduler’. But in the PyTorch documentation there is torch.optim.lr_scheduler. So why can’t torch.optim.lr_scheduler be imported?
st84085
Hi, which version of PyTorch do you use? I think you are looking at the docs for the master branch but are using 0.12.
st84086
Thank you! You are right. If I want to use torch.optim.lr_scheduler, how do I set up the corresponding version of PyTorch?
st84087
Currently the latest release is 0.12, which is what you are using. So if you’d like to use the latest PyTorch, I think installing from source is the only way.
st84088
Check your local package; if necessary, add this line to initialize lr_scheduler. I find my pip package doesn’t have this line.
st84089
I found my pip package also doesn’t have this line. Can I just add this line to my __init__.py? How do I solve this problem? Thanks.
st84090
I am using pytorch version ‘0.1.12’ but getting the same error. One more thing: I am working in a virtual environment. Is this a problem related to the virtual environment? Thank you in advance.
st84091
You are using a very old PyTorch version. Have a look at the website for the install instructions for the latest version.
st84092
Hi everyone, in my case I have to apply many tensors (of the same size) to many Linear modules (respectively). These computations are independent and the order doesn’t matter. To use the GPU as efficiently as possible, I wanted to apply these computations using the fewest calls to the GPU. I decided to design a channel-wise Linear module based on PyTorch’s Linear module:

class multiChannelsLinear(nn.Module):
    __constants__ = ['bias']

    def __init__(self, channels, in_features, out_features, bias=True):
        super(multiChannelsLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.channels = channels
        self.weight = nn.Parameter(torch.Tensor(channels, out_features, in_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(channels, out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in)
            nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, input):
        input = input.transpose(0, 2).transpose(0, 1)
        output = self.weight.matmul(input)
        output = output.transpose(0, 1).transpose(0, 2)
        if self.bias is not None:
            output += self.bias
        ret = output
        return ret

    def extra_repr(self):
        return 'channels={}, in_features={}, out_features={}, bias={}'.format(
            self.channels, self.in_features, self.out_features, self.bias is not None
        )

It seems that the code doesn’t produce the same thing as using many Linear modules, and I cannot find where I’m wrong. By the way, why doesn’t this kind of module exist in PyTorch? I think it would be easier to reach better performance when we want to apply the same kind of transformation on different channels.
st84093
Why does it sometimes drop more and sometimes fewer than 3 elements in a 2x5 matrix, when I have given a probability of 0.3?

y = torch.ones(2, 5)
x = nn.Dropout(0.3)
x(y)

I got this output:

tensor([[1.4286, 1.4286, 1.4286, 0.0000, 0.0000],
        [1.4286, 1.4286, 1.4286, 1.4286, 1.4286]])

and on a later execution, this output:

tensor([[0.0000, 0.0000, 1.4286, 1.4286, 0.0000],
        [1.4286, 0.0000, 0.0000, 1.4286, 1.4286]])
st84094
Hi, each unit is dropped with a probability of 0.3. In your case, you have 10 values. You may expect exactly 3 nodes to be dropped every time, but it does not need to be exactly 3: sometimes it might be 2, sometimes 4 or even 5. Let me give one intuitive example. Suppose you have an unbiased coin, P(H)=P(T)=0.5 (here H = head, T = tail). If you toss the coin 10 times, on average you get 5 heads and 5 tails, but it can also be 4 heads and 6 tails. The probability only enforces that the expected number of heads in 10 tosses is 5. Thanks, regards, Pranavan
st84095
So what do I need to do to drop exactly 3 of 10 elements? I.e. I want to drop exactly 3 elements every time, not more, not less.
st84096
In a neural network, the randomness we have is based on probability. As far as I know, we do not have any modules which can enforce an exact behaviour. If you want the exact behaviour, you can select 3 indices at random and set them to zero. In that case, you have to write a custom layer or module in PyTorch which extends the class nn.Module. Then you can plug it in as a layer in your code. Thanks
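A minimal sketch of such a custom module (the name ExactDropout and the rescaling choice are mine, not an official API): it zeros exactly k randomly chosen elements during training and rescales the rest, similar to what nn.Dropout does in expectation.

import torch
import torch.nn as nn

class ExactDropout(nn.Module):
    """Zero exactly k randomly chosen elements per forward pass (sketch)."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        if not self.training or self.k == 0:
            return x
        flat = x.flatten()
        idx = torch.randperm(flat.numel(), device=x.device)[:self.k]
        mask = torch.ones_like(flat)
        mask[idx] = 0.0
        # rescale so the expected sum matches the input, like nn.Dropout
        scale = flat.numel() / (flat.numel() - self.k)
        return (flat * mask * scale).view_as(x)

layer = ExactDropout(k=3)
print(layer(torch.ones(2, 5)))   # exactly three zeros every call while training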
st84097
In the source code there is float(scale_factor), but the documentation says a tuple is supported.
st84098
Solved by ptrblck in post #2.
st84099
The current master branch supports a tuple for the scale_factor (line of code). Which PyTorch version are you using? It looks like @vishwakftw added this functionality in this PR.
st84100
With each epoch my train loss increases and I don’t know where the error in the training code is; does anyone have any ideas?

for e in range(epochs):
    # keep track of training and validation loss
    train_loss = 0.0
    valid_loss = 0.0
    running_loss = 0.0
    running_corrects = 0.0

    ###################
    # TRAIN THE MODEL #
    ###################
    model.train()
    cont = 0
    for inputs, label in (dataloaders['train']):
        # if GPU is available
        if train_on_gpu:
            inputs, label = inputs.cuda(), label.cuda()
        # inputs, labels = i, data
        optimizer.zero_grad()
        with torch.set_grad_enabled(True):
            logps = model(inputs)
            _, preds = torch.max(logps, 1)  # new validation technique
            loss = criterion(logps, label)
            loss.backward()
            optimizer.step()  # *inputs.size(0)
        running_loss += loss.item()
        print("running_loss = %f , interaction = %i " % (running_loss, cont))
        running_corrects += torch.sum(preds == label.data)
        RL_vector.append(running_loss)
        cont += 1

    ###################
    # VALID THE MODEL #
    ###################
    model.eval()
    for inputs, label in (dataloaders['valid']):
        # if GPU is available
        if train_on_gpu:
            inputs, label = inputs.cuda(), label.cuda()
        with torch.no_grad():
            logps = model(inputs)
            _, preds = torch.max(logps, 1)  # new validation technique
            loss = criterion(logps, label)
        # update average validation loss
        valid_loss += loss.item()
        VL_vector.append(valid_loss)

    # calculate average losses
    epoch_loss_train = running_loss / dataset_sizes['train']
    epoch_acc_train = running_corrects.double() / dataset_sizes['train']
    epoch_loss_valid = valid_loss / dataset_sizes['valid']

    print('{} Loss: {:.4f} \tAcc: {:.4f}'.format('train', epoch_loss_train, epoch_acc_train))
    print('{} \tLoss: {:.4f} '.format('valid', epoch_loss_valid))
st84101
Both losses are decreasing, which is generally fine. Which criterion are you using, and what kind of use case are you currently working on? The loss range looks a bit different.
st84102
I fixed the problem. I had put model.fc = classifier in my models.densenet121, and I think this was the source of the error. But I do not know why; what is the difference between model.fc and model.classifier?
st84103
model.fc and model.classifier are just the internal names for some submodules. If you print the model, you’ll find the name of the last layer, which you could replace with a custom one for your use case:

model = models.densenet121()
print(model)

In this example, the densenet121 uses the attribute name classifier for the last nn.Linear layer, so you should use this attribute name. If you just assign a custom linear layer to model.fc it won’t be used and trained without changing the forward method.
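As a concrete sketch (num_classes is a placeholder for your own number of classes), replacing the existing classifier attribute keeps the new layer in the forward path:

import torch.nn as nn
from torchvision import models

model = models.densenet121()
num_classes = 2  # placeholder for your use case
model.classifier = nn.Linear(model.classifier.in_features, num_classes)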
st84104
Hi, I’m using torch.einsum() in a loop and noticing some very strange behaviour, perhaps not related to einsum itself. When I run it for 10, 100, and 1000 iterations, it takes ~0.003s, ~0.01s, and ~7.3s respectively:

10 trials: 0.0027196407318115234 s
100 trials: 0.010590791702270508 s
1000 trials: 7.267224550247192 s

It seems like the time per call increases drastically after about 250 iterations. To reproduce:

import time
import numpy as np
import torch

np.random.seed(123)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

d = 4
p = 500
N = 8000

Z = torch.tensor(np.random.randn(N, p), device=device).float()
V = torch.tensor(np.random.randn(d, p, p), device=device).float()

for trials in [10, 100, 1000]:
    start = time.time()
    for i in range(trials):
        var = torch.einsum("dpq,np,nq->nd", V, Z, Z)
    print(f'{trials} trials: {time.time() - start} s')

Any ideas? I already tried using torch.no_grad() etc. to no effect. Thanks, John
st84105
Solved by JoelSjogren in post #2.
st84106
Duplicate of https://github.com/pytorch/pytorch/issues/15793. You are not measuring timing accurately; CUDA launches are asynchronous. Use torch.cuda.synchronize().
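For reference, a sketch of the poster’s timing loop with explicit synchronization added (V, Z and trials are the variables defined in the snippet above; this assumes a CUDA device, as in the post):

torch.cuda.synchronize()              # make sure previously queued kernels are done
start = time.time()
for i in range(trials):
    var = torch.einsum("dpq,np,nq->nd", V, Z, Z)
torch.cuda.synchronize()              # wait for the queued einsum kernels to finish
print(f'{trials} trials: {time.time() - start} s')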
st84107
I’m using a resnet18 network from torchvision.models. During test time, I observe that if I alter the batch size for the data loader, the accuracy of the model on the test data changes. I cannot comprehend why this should happen. Isn’t it the case that the weights of the network are fixed during testing (I haven’t called optimizer.step())? I also went through resnet’s architecture; there is no randomized output at any layer. Any pointers on where I could be wrong in my code or my understanding?
st84108
Solved by dirtbag in post #2.
st84109
I think it’s because of the BatchNorm layer. The effect was more visible with smaller batch sizes. Accuracy dropped from 99.7% for a batch size of 25 to 14.85 % for a batch size of 1.
st84110
Did you set the model to evaluation mode with model.eval()? The batch size should not change the predictions!
st84111
Thanks! After model.eval() is called, the model starts using running mean and variance for normalization. I never went through this part of the documentation.
st84112
Hey! I got the same issue in my code. The results are very bad with a batch size of 1, which is not very practical for the evaluation of a single image. I put model.eval() as well as

for child in model.children():
    if type(child) == nn.BatchNorm2d:
        child.track_running_stats = False

EDIT: does eval() make the model use the running stats but not update the variance and mean? Are the variance and mean supposed to scale with the batch size? Any ideas? The network is a stock ResNet adapted for regression instead of classification.
st84113
My input has variable size. I haven’t found a way to use a DataLoader without padding the inputs to the maximum size in the batch. Is there any way around it? Is it possible without using the DataLoader class?
st84114
Solved by justusschock in post #4.
st84115
A batch must always consist of elements of the same size. However, if your input is large enough and you can handle the corresponding output sizes, you can feed batches of different sizes to the model.
st84116
This does not work with the default dataloader, but in general you could handle the loading by yourself and simply add a batch dimension to your data and use torch.cat to stack them to a batch:

batch_elements = []
for i in range(curr_batch_size):
    # generate some sample data
    tmp = torch.rand((1, 50, 50))
    # add batch dimension
    tmp = tmp.unsqueeze(0)
    batch_elements.append(tmp)
batch = torch.cat(batch_elements, 0)

Replace tmp = torch.rand((1, 50, 50)) by your own data samples. In this case I used 50x50 pixel images with one channel as sample data. To show the workflow with general data I did not integrate the batch dimension into the shape of the random tensor but added it afterwards.

EDIT: Alternatively you could use something like this. But note that this will pad your input and (depending on the maximal difference of your input sizes) you could end up padding an enormous amount of zeros (or constants or whatever).
st84117
justusschock: However if your input is large enough and you can handle the corresponding output sizes you can feed batches of different sizes to the model.

How exactly are you doing this? Your answer seems to assume everything is of the same size already. The original question is whether padding is required for variable-size input. Let’s answer that directly: is it always necessary or not? When is padding necessary and when is it not? To my understanding it’s always required because batches have to be of the same size (unless I’m not understanding something or don’t know something).
st84118
Items in the same batch have to be the same size, yes, but with a fully convolutional network you can pass batches of different sizes, so no, padding is not always required. In the extreme case you could even use a batch size of 1 and your input size could be completely random (assuming you adjusted strides, kernel size, dilation etc. in a proper way). This is why it is hard to answer this question in general.
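To illustrate the fully convolutional point, a tiny sketch (the layer sizes are made up): with no Linear layer, the same network accepts inputs of different spatial sizes, as long as each individual batch is internally consistent.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

print(net(torch.rand(4, 1, 50, 50)).shape)   # a batch of 50x50 images
print(net(torch.rand(1, 1, 37, 64)).shape)   # a single, differently sized image also works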
st84119
My take on how to solve this issue:

def collate_fn_padd(batch):
    '''
    Padds batch of variable length

    note: it converts things ToTensor manually here since the ToTensor transform
    assumes it takes in images rather than arbitrary tensors.
    '''
    ## get sequence lengths
    lengths = torch.tensor([t.shape[0] for t in batch]).to(device)
    ## padd
    batch = [torch.Tensor(t).to(device) for t in batch]
    batch = torch.nn.utils.rnn.pad_sequence(batch)
    ## compute mask
    mask = (batch != 0).to(device)
    return batch, lengths, mask

There seems to be a large collection of posts all over the PyTorch forums that makes it difficult to solve this issue. I have collected a list of all of them, hopefully making things easier for all of us. Here:

How to create batches of a list of varying dimension tensors?
How to create a dataloader with variable-size input
Using variable sized input - Is padding required?
DataLoader for various length of data
How to do padding based on lengths?
bucketing: Tensorflow-esque bucket by sequence length

Also, Stack Overflow has a version of this question too: “How does Pytorch Dataloader handle variable size data?” (asked by Trung Le, 07 Mar 2019).

Crossposted: https://www.quora.com/unanswered/How-does-Pytorch-Dataloader-handle-variable-size-data
st84120
Hi, I want to initialize the weights of all layers of the model. How do I traverse all layers of the model if my model has nested nn.Sequential modules?

Sequential(
  (0): Bottleneck(
    (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace)
    (downsample): Sequential(
      (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): Bottleneck(
    (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace)
  )

thanks
st84121
Solved by Nikronic in post #2.
st84122
Hi, first you need to define your weight initializer function for a single layer. Here is an example:

def init_weights(m):
    """
    Initialize weights of layers using Kaiming Normal (He et al.) as argument of the
    "apply" function of "nn.Module"
    :param m: Layer to initialize
    :return: None
    """
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
        torch.nn.init.kaiming_normal_(m.weight, mode='fan_out')
        if m.bias is not None:  # conv layers created with bias=False have no bias to initialize
            nn.init.constant_(m.bias, 0)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.constant_(m.weight, 1)
        nn.init.constant_(m.bias, 0)

You just need to include the different types of layers using if/else code. Then after initializing your model, you call .apply and it will recursively initialize all of your model’s nested layers. Here is an example:

model = ModelNet()
model.apply(init_weights)
st84123
You’re welcome, mate. I just noticed that using .data to initialize weights is not good practice, even though it works. It is better to remove .data from the code above. I have edited the post.
st84124
Using the pytorch-nightly version provided in the anaconda repository throws an illegal instruction on my laptop. Is AVX still the highest instruction set required to run pytorch-nightly?
st84125
Is it safe to have:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

There is no unexpected error that can happen, right? If I import stuff, those files will use the right device because they are all run on the same machine, right? (I don’t really like using globals, so I was trying to avoid that… but I can’t figure out what the standard in PyTorch is, what is safe, or how to distinguish between the options…)
st84126
If CUDA is available, this code will automatically select the CUDA device. If you want to have options, you can specify the device manually, for example via a flag.
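A runnable sketch of that pattern (the toy nn.Linear is just a stand-in for your own model): pick the device once and move both the module and its inputs to it.

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)      # move the module's parameters to the chosen device
x = torch.randn(8, 4, device=device)    # create the inputs on the same device
out = model(x)
print(out.device)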
st84127
Hello, all. This is my first time within the PyTorch ecosystem (having switched over because of GPyTorch’s seemingly excellent GP modelling capabilities), and I would appreciate help getting through my first model. Following the first example in the docs, I want to run a very simple model, but I encounter an error at the loss = -mll(output, y_torch) step:

## Generating data for regression
# First, regular sine wave + normal noise
x = np.linspace(0, 40, num=300)
noise1 = np.random.normal(0, 0.3, 300)
y = np.sin(x) + noise1

# Second, an upward trend starting halfway, with its own normal noise
temp = x[150:]
noise2 = 0.004*temp**2 + np.random.normal(0, 0.1, 150)
y[150:] = y[150:] + noise2

x_torch = torch.from_numpy(x)
y_torch = torch.from_numpy(y)

# initialize likelihood and model
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(x_torch, y_torch, likelihood)

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam([
    {'params': model.parameters()},  # Includes GaussianLikelihood parameters
], lr=0.1)

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

training_iter = 50
for i in range(training_iter):
    # Zero gradients from previous iteration
    optimizer.zero_grad()
    # Output from model
    output = model(x_torch)
    # Calc loss and backprop gradients
    loss = -mll(output, y_torch)
    loss.backward()
    print('Iter %d/%d - Loss: %.3f   lengthscale: %.3f   noise: %.3f' % (
        i + 1, training_iter, loss.item(),
        model.covar_module.base_kernel.lengthscale.item(),
        model.likelihood.noise.item()
    ))
    optimizer.step()

At that line, I get a RuntimeError: expected backend CPU and dtype Double but got backend CPU and dtype Float error. Having looked around, this seems like a common problem, but it always has to do with CPU/CUDA errors, and the answers are vague; I just want to run my model on my CPU first. I would appreciate any help on the matter.
st84128
Maybe you can use model.cpu():

model.cpu()
likelihood.cpu()

# Find optimal model hyperparameters
model.train()
likelihood.train()
st84129
I am not sure why this happens, but the solution for me was to call .double() on the PyTorch tensors:

x_torch = torch.from_numpy(x).double()
y_torch = torch.from_numpy(y).double()

and also on the model, before calling train():

model.double()
model.train()
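For what it’s worth, the mismatch comes from NumPy arrays being float64 (double) by default while PyTorch modules default to float32, so an alternative sketch is to cast the data down instead of casting the model up:

import numpy as np
import torch

x = np.linspace(0, 40, num=300)          # float64 by default in NumPy
x_torch = torch.from_numpy(x).float()    # cast down to float32, PyTorch's default dtype
print(x_torch.dtype)                     # torch.float32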
st84130
I wanted to train an AlexNet model on CIFAR with the architecture from “Understanding deep learning requires rethinking generalization”. Is the following the recommended way to do it?

github.com/bearpaw/pytorch-classification/blob/master/models/cifar/alexnet.py

'''AlexNet for CIFAR10. FC layers are removed. Paddings are adjusted.
Without BN, the start learning rate should be 0.01
(c) YANG, Wei
'''
import torch.nn as nn

__all__ = ['alexnet']

class AlexNet(nn.Module):

    def __init__(self, num_classes=10):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),

(This file has been truncated in the original post.)

Or is there a standard way to do this in PyTorch for CIFAR?
st84131
That looks like a reasonable way to initialize it, though your model is not exactly the same as AlexNet both in the layer definitions and the fully connected layers. There is also torchvision which has a pre-setup version of the model - https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py
st84132
Is the AlexNet in that paper the AlexNet for CIFAR-10 or ImageNet? (I want one for CIFAR-10; I know the original one was for ImageNet.)
st84133
Crossposted: https://qr.ae/TWnjL4 and on Stack Overflow: “How to build AlexNet for Cifar10 from ‘Understanding deep learning requires rethinking generalization’ for Pytorch?” (asked by Charlie Parker, 24 Jul 2019).
st84134
Does messy code, for example when you are defining your model, cause your code to run slower? Thank you
st84135
Solved by thorsten in post #4.
st84136
It depends on what “messy code” means. If you are computing unnecessary stuff, sure, your code will run slower. Also, using global variables is usually slower than local ones, but that probably shouldn’t make much of a difference (of course it again depends on what you are doing). Could you explain your code a bit and what makes it messy?
st84137
For example, when defining a model, especially a huge model like ResNet or something similar.

Method A:

import torch.nn as nn
import torch.nn.functional as F

class MethodA(nn.Module):
    def __init__(self):
        super(MethodA, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.conv2 = nn.Conv2d(...)
        ...

    def forward(self, x):
        x = F.relu(F.batcnorm2d(self.conv1(x)))
        x = F.relu(F.batchnorm(self.conv2(x)))
        ...
        return x

Method B:

import torch.nn as nn
import torch.nn.functional as F

def convbn(in_feat, out_feat, kernel_size, stride, pad, dilation):
    return nn.Sequential(
        nn.Conv2d(in_feat, out_feat, kernel_size=kernel_size, stride=stride,
                  padding=pad, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_feat))

class MethodB(nn.Module):
    def __init__(self):
        super(MethodB, self).__init__()
        self.conv1 = convbn(...)
        self.conv2 = convbn(...)
        self.conv3 = convbn(...)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        return x

The example I provide here may not be clear. Another example would be when defining the ResNet model: it can be tedious to type it out line by line, and I saw someone use a _make_layer function. By using this function the ResNet model can be defined easily.

def _make_layer(self, block, planes, blocks, stride=1, dilation=1, multi_grid=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
        downsample = nn.Sequential(
            nn.Conv2d(self.inplanes, planes * block.expansion,
                      kernel_size=1, stride=stride, bias=False),
            SynchronizedBatchNorm2d(planes * block.expansion))
    layers = []
    generate_multi_grid = lambda index, grids: grids[index % len(grids)] if isinstance(grids, tuple) else 1
    layers.append(block(self.inplanes, planes, stride, dilation=dilation,
                        downsample=downsample, multi_grid=generate_multi_grid(0, multi_grid)))
    self.inplanes = planes * block.expansion
    for i in range(1, blocks):
        layers.append(block(self.inplanes, planes, dilation=dilation,
                            multi_grid=generate_multi_grid(i, multi_grid)))
    return nn.Sequential(*layers)

So does it affect the speed of my model if I use a function like _make_layer or define the model layer by layer?
st84138
Given that you already coded the model, you could for example follow https://stackoverflow.com/questions/7370801/measure-time-elapsed-in-python to measure the training time (e.g. for one epoch) on both implementations.
st84139
Is there any way to get the id of a module when I need it? I can currently get a name, but names may be duplicates:

for c in model.children():
    name = c._get_name()
st84140
Solved by ptrblck in post #2.
st84141
for name, module in model.named_modules():  # or model.named_children()
    print(name)

should return the module names, which should be unique. Would this work for you?
st84142
Hello guys, My cost function is very specific and needs a lot of external calculations, including using external libraries, to get the cost value. Because of this, it is impossible to keep such calculations under tensors. The problem with getting out of the tensor context is that I lose track of the autograd history. How could I take the scalar value produced by my cost function, encapsulate it in a tensor, and reconnect it to the autograd graph?
st84143
You can do that by defining a custom autograd.Function. This allows you to do anything you want in the forward. The caveat is that you have to give the derivative yourself in the backward (as PyTorch cannot be aware of it when you are not using tensors). Best regards Thomas
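A minimal sketch of such a custom Function (the class name is mine, and the "external" computation is just a toy NumPy stand-in for whatever library call you actually make; the backward must implement your real derivative):

import torch

class ExternalCost(torch.autograd.Function):
    """Wrap an external (non-tensor) scalar cost so it reconnects to autograd (sketch)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # stand-in for an external library call that returns a plain Python float
        cost = float((x.detach().numpy() ** 2).sum())
        return x.new_tensor(cost)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # the derivative of the external computation, supplied by hand (here d/dx sum(x^2) = 2x)
        return grad_output * 2 * x

x = torch.randn(5, requires_grad=True)
loss = ExternalCost.apply(x)
loss.backward()
print(x.grad)   # 2 * x, flowing back through the wrapped scalar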
st84144
Hello Thomas, thank you very much for the reply. I will test it and share the result as soon as possible.
st84145
Hello, guys. Is there a PyTorch equivalent of “tf.nn.rnn_cell.MultiRNNCell” that stacks multiple cells?
st84146
Hi guys, I am testing my model on the same image and the prediction output is different every time I run the code.

model = loader('balanca.pth')
classe = ('Balanca', 'False')
model.eval()
img = Image.open("ImgTest/All/0n.png")
x = transform(img)
x = x.unsqueeze(0)
output = model(x)
pred = torch.argmax(output, 1)
print('Image predicted as:', classe[pred])
st84147
Hi, did you set the random seed manually?

seed = 0
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
import numpy as np
np.random.seed(seed)
st84148
Hi all, I have been trying to obtain the following behavior when defining my own model. In essence, what I would want is to have a Parameter whose name is weights, so that in model.state_dict() I would see weights. But, for simplicity in coding, I would want to refer to it in the code as model.W instead of model.weights. I have tried the following:

from torch.nn import Module, Parameter
from torch import randn

class mymodel(Module):
    def __init__(self):
        super(mymodel, self).__init__()
        W = Parameter(randn(5, 4))
        self.register_parameter("weights", W)
        self.W = W

model = mymodel()

With this code, I can call model.W, but also model.weights. Plus, when I call model.state_dict(), I obtain a list of two parameters, “W” and “weights”. I wanted to ask, firstly, if the kind of behavior I would desire (being able to call only model.W while having only “weights” in model.state_dict(), both referring to the same Parameter) is common/good practice, and if so, how to implement it properly. Thanks in advance!
st84149
Solved by tom in post #2.
st84150
apozas: I wanted to ask, firstly, if the kind of behavior I would desire (being able to call only to model.W and having only “weights” in model.state_dict() , both referring to the same Parameter ) is common/good practice No! This is decidedly not good style. You are trading conceptual simplicity for saving a few characters (6 instead of 12). Now, what you can do if your forward uses self.weights so often that you don’t want to spell it? I think a perfectly good solution is to do a local assignment W = self.weights at the beginning of the forward (or whatever method uses W). If you use it often enough, it’ll even be less characters. Best regards Thomas
st84151
I see. I was kind of expecting this would be the situation. I was looking for a simple way of being a bit more explicit about what W means in a big code base that I had already developed, but probably the simplest thing to do, in terms of both explainability of the code and simplicity of the change, is just renaming W to weights everywhere. Thank you very much for the clarification.
st84152
In TensorFlow, there is training quantization (tensorflow_quantize). So I wonder if anyone has ideas on how to support training quantization in PyTorch, or has samples? Right now it is quite hard for me to add ops (such as quantizing weights) or modify the model structure (such as folding conv and norm ops). Can anyone help me?
st84153
Quantization is coming to PyTorch. It won’t be long before it is fully operational, but you can already try it out on master or a nightly release.
st84154
Hello, I have training input of size (5000, 142, 30), which is (batch size, sequence length, number of inputs) respectively. So my output should be (5000, 3). My model should take 30 input numbers 142 times (a time series) and then give me one class output, and this happens for each of the 5000 samples in the batch. I am just interested in whether my model is correct for this task. After some tests I assume it works correctly, but I am still in doubt; the line marked with a star seems the most questionable to me.

class Model(nn.Module):
    def __init__(self, input_number, ouput_number, hidden_number, layers_number):
        super().__init__()
        self.hidden_number = hidden_number
        self.rnn = nn.LSTM(input_number, hidden_number, layers_number, batch_first=True)
        self.fc1 = nn.Linear(hidden_number, ouput_number)

    def forward(self, data_in):
        hidden = None
        out, hidden = self.rnn(data_in, hidden)
        out = out[:, -1, :]                                # *
        out = out.contiguous().view(-1, self.hidden_number)
        out = torch.sigmoid(self.fc1(out))
        return out, hidden
st84155
By the way, my input has changed to (5000 (or more), 132, 30), but this does not change the question.
st84156
To use script annotation mode to convert a PyTorch model to C++, we replace class MyModule(torch.nn.Module): with class MyModule(torch.jit.ScriptModule):. But how should we deal with the dropout layer below?

class RNNDropout(nn.Dropout):
    """
    Dropout layer for the inputs of RNNs.

    Apply the same dropout mask to all the elements of the same sequence in
    a batch of sequences of size (batch, sequences_length, embedding_dim).
    """

    def forward(self, sequences_batch):
        """
        Apply dropout to the input batch of sequences.

        Args:
            sequences_batch: A batch of sequences of vectors that will serve
                as input to an RNN. Tensor of size (batch, sequences_length, embedding_dim).

        Returns:
            A new tensor on which dropout has been applied.
        """
        ones = sequences_batch.data.new_ones(sequences_batch.shape[0],
                                             sequences_batch.shape[-1])
        dropout_mask = nn.functional.dropout(ones, self.p, self.training,
                                             inplace=False)
        return dropout_mask.unsqueeze(1) * sequences_batch
st84157
I would define an RNNDropout class based on ScriptModule as well. p should be dealt with as a python-defined constant. Best regards Thomas
st84158
Since we are not going to use the RNNDropout layer in the test/prediction phase, does this mean we do not need to convert RNNDropout into a ScriptModule when converting the PyTorch model to C++ for the test/prediction phase? That is, can we use the original class RNNDropout(nn.Dropout): in both the training and testing phases?
st84159
When tracing, this should work if you just don’t call it. For scripting, you might need a backup plan.
st84160
Hi, In the DataParallel documentation it is mentioned that: The batch size should be larger than the number of GPUs used. When using DataLoader, during training one can specify drop_last=True so that we can make sure that no batch has size smaller than the number of GPUs. But when evaluating the model, you cannot simply drop the last batch. You should evaluate that batch as well (Although it might not make a lot of difference). Depending on the size of the Dataset, it is possible that the last batch will have size smaller than the number of GPUs. How should one handle this situation in an elegant way? In summary: I want to use DataParallel at evaluation time. I want to evaluate all batches, including the last batch. The last batch can have size smaller than number of GPUs. What should I do?
st84161
Solved by yassersouri in post #3.
st84162
You can feed the last batch through a non-parallelized version of your model. Or you can extend the final batch with zeroes and ignore those outputs.
st84163
What I found is that, although it is mentioned in the docs that the batch size should be larger than the number of GPUs, at least during evaluation there is no error if the batch size is smaller than the number of GPUs. So I guess the answer is to do nothing: do not drop the last batch when evaluating, and it won’t cause an error.
st84164
Hello, I’m trying to get PyTorch to work with ML-Agents (0.81). After installing the latest version of PyTorch (Python 3.6, CUDA 9.0) it fails to import: numpy.core.multiarray failed to import. ML-Agents requires the NumPy version to be <= 1.14.5. I tried PyTorch with the newest version of NumPy and it worked. How can I get it to work with 1.14.5?
st84165
Hello all, I am using dice loss for a multi-class problem (4 classes). I want to use a weight for each class at each pixel, so my weight will have size BxCxHxW (C=4 in my case). How can I apply the weight in the dice loss? This is my current solution, which multiplies the weight with the input (the network prediction) after the softmax:

class SoftDiceLoss(nn.Module):
    def __init__(self, n_classes):
        super(SoftDiceLoss, self).__init__()
        self.one_hot_encoder = One_Hot(n_classes).forward
        self.n_classes = n_classes

    def forward(self, input, target, weight):
        smooth = 0.01
        batch_size = input.size(0)
        input = F.softmax(input, dim=1)
        input = input * weight
        input = input.view(batch_size, self.n_classes, -1)
        target = self.one_hot_encoder(target).contiguous().view(batch_size, self.n_classes, -1)
        inter = torch.sum(input * target, 2) + smooth
        union = torch.sum(input, 2) + torch.sum(target, 2) + smooth
        score = torch.sum(2.0 * inter / union)
        score = 1.0 - score / (float(batch_size) * float(self.n_classes))
        return score

And the second solution multiplies the weight inside the inter and union terms:

class SoftDiceLoss(nn.Module):
    def __init__(self, n_classes):
        super(SoftDiceLoss, self).__init__()
        self.one_hot_encoder = One_Hot(n_classes).forward
        self.n_classes = n_classes

    def forward(self, input, target, weight):
        smooth = 0.01
        batch_size = input.size(0)
        input = F.softmax(input, dim=1).view(batch_size, self.n_classes, -1)
        target = self.one_hot_encoder(target).contiguous().view(batch_size, self.n_classes, -1)
        weight = weight.view(batch_size, self.n_classes, -1)
        inter = torch.sum(input * target * weight, 2) + smooth
        union = torch.sum(input * weight, 2) + torch.sum(target * weight, 2) + smooth
        score = torch.sum(2.0 * inter / union)
        score = 1.0 - score / (float(batch_size) * float(self.n_classes))
        return score

Which one is correct?
st84166
I was reading the documentation and I came across the following sentence: “* is any number of trailing dimensions, including none.” What is the meaning of “trailing dimensions”? May I also get an example to clarify the point? For context:

Packs a Tensor containing padded sequences of variable length. input can be of size T x B x * where T is the length of the longest sequence (equal to lengths[0]), B is the batch size, and * is any number of dimensions (including 0). If batch_first is True, B x T x * input is expected. For unsorted sequences, use enforce_sorted = False. If enforce_sorted is True, the sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted = True is only necessary for ONNX export.

I actually meant to ask about pad_sequence: https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence
st84167
If you have an input of shape (T, B), you will get a packed sequence with shape (sum of lengths,) as data. If you have an input of shape (T, B, F_1), you will get a packed sequence with shape (sum of lengths, F_1) as data. If you have an input of shape (T, B, F_1, F_2), you will get a packed sequence with shape (sum of lengths, F_1, F_2) as data. … So * is a wildcard (similar to its use in file glob masks) representing any possible value of input.shape[2:]. Best regards Thomas
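As a small sketch of those shapes (the sizes here are arbitrary), pack_padded_sequence collapses the time and batch dimensions into a single "sum of lengths" dimension and keeps any trailing dimensions as they are:

import torch
from torch.nn.utils.rnn import pack_padded_sequence

T, B, F1 = 5, 3, 7
x = torch.randn(T, B, F1)                 # (T, B, *) with * = (F1,)
lengths = torch.tensor([5, 4, 2])         # sorted in decreasing order

packed = pack_padded_sequence(x, lengths)
print(packed.data.shape)                  # torch.Size([11, 7]) -> (sum of lengths, F1)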