st84468
Good evening, When using torch.bmm() to multiply many (>10k) small 3x3 matrices, we hit a performance bottleneck, apparently due to the cuBLAS heuristics for choosing which kernel to call. For example, the Colab notebook below shows that for 2^15 matrices the call takes 2s, but only 0.5s for 2^16 matrices. What's the easiest way to fix this, keeping in mind that we'd like to keep differentiability via autograd? Minimal example (Colaboratory): https://colab.research.google.com/drive/1BsqH2xYe61gzC8C4YuGCGmnCUzhSwBcS StackOverflow discussion on batched GEMMs: https://stackoverflow.com/questions/48519861/strange-cublas-gemm-batched-performance Thanks, Séb
st84469
Quite likely, some sort of rolling your own is required. If neither bmm nor spelling out the contraction works for you (for 10k 3x3 matrices that might be an option, probably not for very large ones), you might allocate the result array and feed batches through bmm_out. Another alternative could be to see if CUTLASS works for you. Matrix multiplication is simple to implement the backward for (it is just a couple of matrix multiplications itself). Best regards Thomas
st84470
Thanks for your answer! By spelling out the contraction, do you mean writing the operation as an einsum? Using another library and writing the backward seems like a fairly pain-free solution too, but I was wondering if there would be a way to indicate to cuBLAS which kernel to call. In the end, I suspect we might have to go that route. Cheers, Seb
st84471
seba-1511: By spelling out the contraction, do you mean writing the operation as an einsum? No, einsum will itself use bmm. I thought of materializing the elementwise product and calling .sum. (I do have a branch somewhere that uses TensorIterators for einsum instead. It's so terrible on CPU (no AVX) that I didn't look at GPU, but if you want to benchmark it on GPU, I can push it.) Best regards Thomas
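Edit: for concreteness, a minimal sketch of what I mean by materializing the elementwise product (it stays differentiable, since it only uses broadcasting and a reduction):

import torch

A = torch.randn(2**15, 3, 3, requires_grad=True)
B = torch.randn(2**15, 3, 3, requires_grad=True)

# out[b, i, j] = sum_k A[b, i, k] * B[b, k, j]
C = (A.unsqueeze(-1) * B.unsqueeze(-3)).sum(dim=-2)

print(torch.allclose(C, torch.bmm(A, B), atol=1e-5))  # True

Whether this beats bmm for your sizes is something you would have to benchmark.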
st84472
For an example of using CUTLASS for batched matrix multiply, look here: https://github.com/mlperf/training_results_v0.6/blob/master/NVIDIA/benchmarks/transformer/implementations/pytorch/fairseq/modules/strided_batched_gemm/strided_batched_gemm_cuda.cu. It's hardcoded to the half data type, so some changes are required. cuBLAS allows specifying algorithms, but PyTorch uses "default" as the algo, in the hope that it would be optimal. Apparently it's not, in your case. There's also a new "matmul" interface in cuBLAS (not yet integrated into PyTorch) that gives somewhat finer control over the selected algorithms: https://docs.nvidia.com/cuda/cublas/index.html#using-the-cublasLt-api. It might also be useful, but integrating it into PyTorch is some work.
st84473
So, I have a lot of things going on under the hood, but everything is pretty normal, no fancy stuff going on. It should be a pretty normal looking codebase. With that in mind, I have this smaaaaall model:

In [178]: net
Out[178]:
Model(
  (conv): Sequential(
    (0): Conv2d(3, 10, kernel_size=(1, 1), stride=(1, 1))
    (1): Conv2d(10, 2, kernel_size=(1, 1), stride=(1, 1))
  )
)

And this ONE image (Figure_1.png, 800×800). I have trained for over 2000 epochs and it isn't converging. I want to segment the sky. Actually, I am debugging, and this obviously means there is some bug somewhere. What are your thoughts?
st84474
Note that you are not using any non-linearity between the conv layers. Given that and a kernel of a single pixel, it looks like your model captures the blue-ish color successfully.
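For illustration, a minimal sketch of the same model with a non-linearity in between (nothing else changed):

import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 10, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(10, 2, kernel_size=1),
)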
st84475
Yep! You are right! I added ReLU and got better convergence, but the loss still didn't reach 0. I increased the kernel_size up to 5x5 for both conv layers. I got very close, but it still didn't converge. Shouldn't the loss go to 0?
st84476
If the model has enough “capacity”, then the loss should converge towards zero. Just for the sake of debugging this example, you could increase the number of kernels, their spatial size, or try a huge linear layer instead.
st84477
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
import torchvision
import torchvision.transforms as transforms

data2 = torchvision.datasets.VOCDetection("./", download=True, transform=transforms.ToTensor(), target_transform=transforms.ToTensor())
img, tar = data2[0]

I get the following error:

----> 1 img, tar = data2[0]
/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in to_tensor(pic)
     48     """
     49     if not(_is_pil_image(pic) or _is_numpy_image(pic)):
---> 50         raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
     51
     52     if isinstance(pic, np.ndarray):

TypeError: pic should be PIL Image or ndarray. Got <class 'dict'>
st84478
Solved by ptrblck in post #4 You would have to remove the target_transform and use the XML tree for your detection task. Have a look at this tutorial. While another dataset is used, it might be a good starter for your VOCDetection.
st84479
The VOCDetection target is a dictionary of the XML tree, as stated in the docs. Since ToTensor works on PIL.Images, you cannot use it as a target_transform.
st84480
You would have to remove the target_transform and use the XML tree for your detection task. Have a look at this tutorial. While another dataset is used, it might be a good starter for your VOCDetection.
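For reference, a minimal sketch of what that looks like (only the image transform is kept; the target then stays a dict parsed from the XML):

import torchvision
import torchvision.transforms as transforms

data2 = torchvision.datasets.VOCDetection(
    "./", download=True, transform=transforms.ToTensor())
img, target = data2[0]
print(type(target))                  # <class 'dict'>
print(target['annotation'].keys())   # image metadata plus the 'object' annotations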
st84481
Hello! I'm wondering if anyone has any good recommendations for interpreting model weights for a general audience. I know this isn't a code-specific question, but I'm having trouble figuring out how I can use my weights in a meaningful way to describe the significance of my various input features in predicting the output. I guess something like a p-value or a similar metric would be ideal. Does anyone have any thoughts on this? Thanks.
st84482
I want to correct the gradient for a custom layer using the Taylor expansion f'(x + delta) = f'(x) + 1/2 * f''(x) * delta, where x is the original output of the layer and f'(x) the default, unmodified gradient; for simplicity this example is the univariate case, and delta is something I calculate (it's the secret sauce for the research idea, so I can't really go into too much detail about what it's doing). So I need to access the Hessian (not the inverse Hessian) for the custom gradient in my nn.Module. I have never worked with Hessians and wonder what my options with PyTorch are. Can the autograd framework just compute it for me?
st84483
Solved by LeanderK in post #2 Ah, I've found a related issue that solves a similar problem: Calculating Hessian Vector Product
st84484
Ah, I've found a related issue that solves a similar problem: Calculating Hessian Vector Product
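For future readers, a minimal sketch of the double-backward trick from that thread (toy scalar function, so the result is easy to check):

import torch

x = torch.randn(5, requires_grad=True)
v = torch.randn(5)

f = (x ** 3).sum()                                    # grad = 3x^2, Hessian = diag(6x)
grad, = torch.autograd.grad(f, x, create_graph=True)  # keep the graph for a second backward
hvp, = torch.autograd.grad(grad, x, grad_outputs=v)   # d(grad . v)/dx = H @ v

print(torch.allclose(hvp, 6 * x.detach() * v))        # True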
st84485
Hello PyTorch Community, I am seeking some help/ideas on some training I want to pursue. The idea is to train a VNET for a segmentation task and use its output for two tasks: a) calculating the main loss (which could be DICE or whatever); b) using it as the input to a pretrained AE model and calculating an additional loss: loss = mainLoss + additionalLoss. Pretrained model: an AutoEncoder that was trained to learn the most prominent properties of the structure to segment. Model to train: VNET. The idea is to use a pretrained autoencoder alongside the segmentation to preserve information about the shape of the structure to segment. Do you have any tips on what to be aware of when doing that? Let me know if you have any questions. Thank you, Christian
st84486
How did you pretrain the Autoencoder? Based on your explanation, it seems you've used the real images: che85: AutoEncoder that was trained to learn the most prominent properties of the structure to segment. Would this approach work if you now feed the segmentation output of your VNET to this AE? The image statistics, ranges etc. should be quite different, so I'm not sure if it'll work out of the box or if you would need to train both models end-to-end. It's an interesting approach and reminds me of Adversarial Learning for Semi-Supervised Semantic Segmentation, but with an AE instead of a Discriminator.
st84487
Thanks for replying so quickly. What I am trying to do is described in this paper: Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. The AutoEncoder is pretrained with the ACNN input segmentations. The idea of the paper is to use the predicted segmentation from the VNET (or whatever segmentation network you are using). The predicted segmentation and the ground-truth segmentation are the inputs to 2 separate AE instances, whose latent representations are compared via a Euclidean distance measure. Does this make more sense now? Thanks for your ideas/comments. Christian
st84488
For high-dimensional tensors, matrix multiplication can only operate on the last two dimensions, which requires the previous dimensions to be equal. But in my research, matrix multiplication over the leading dimensions also makes sense. For example, we can do this:

# Mat_A's size is (10, 20, 2, 32)
# Mat_B's size is (10, 20, 32, 3)
# torch.matmul(Mat_A, Mat_B)'s size is (10, 20, 2, 3)
torch.matmul(Mat_A, Mat_B)

But how can we implement this?

# Mat_a's size is (10, 20, 30)
# Mat_b's size is (5, 10)
# torch.some_operate(Mat_b, Mat_a)'s size is (5, 20, 30)
torch.some_operate(Mat_b, Mat_a)

I can use loops to do the calculations, but it's not very elegant. This is a simple example: Mat_a is [[a, b], [c, d]], Mat_b is [[A, B], [C, D]], then the result is [[aA+bC, aB+bD], [cA+dC, cB+dD]], where a, b, c, d are scalars and A, B, C, D are vectors.
st84489
torch.einsum('ijk,li->ljk', Mat_a, Mat_b) It'll do the rearranging of dimensions for you. Best regards Thomas Update: See Tim Rocktäschel's post on einsum for lots of applications.
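A quick sanity check of that einsum against a plain matmul over flattened trailing dimensions:

import torch

Mat_a = torch.randn(10, 20, 30)
Mat_b = torch.randn(5, 10)

out = torch.einsum('ijk,li->ljk', Mat_a, Mat_b)           # -> (5, 20, 30)
ref = (Mat_b @ Mat_a.reshape(10, -1)).reshape(5, 20, 30)  # same contraction, spelled out
print(torch.allclose(out, ref, atol=1e-5))                # True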
st84490
Wow, I have learned einsum but never thought about this idea. That’s awesome! TX!
st84491
I was curious, let’s say I have a tensor x=torch.ones(10). Then I do x=x[:5]. Now x.storage() is still a storage of size 10 even though I have a view of size 5. If I have no other reference pointing to that part of the storage, can another tensor use it automatically if it is needed? Or is it considered occupied?
st84492
Solved by pietern in post #2 It is considered occupied. If you want to release it, you’ll have to make a copy of the view, which will allocate a new tensor of the right size AFAIK.
st84493
It is considered occupied. If you want to release it, you’ll have to make a copy of the view, which will allocate a new tensor of the right size AFAIK.
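For illustration, a minimal sketch of that copy:

import torch

x = torch.ones(10)
x = x[:5]
print(x.storage().size())  # 10 -- the view still pins the full buffer

x = x.clone()              # copies just the 5 elements into a fresh allocation
print(x.storage().size())  # 5 -- the old 10-element buffer can now be freed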
st84494
Hello everyone, I just imported an array from numpy into PyTorch:

import numpy as np
import torch

a = np.array([1.07790701e-01, 5.27183563e-02, 1.03966855e-01])
print(a)  # [0.1077907  0.05271836 0.10396685]
b = torch.from_numpy(a)
print(b)  # tensor([0.1078, 0.0527, 0.1040], dtype=torch.float64)

Even though the tensor is float64 type, every value in tensor b is printed rounded to 4 decimal places compared with the original array a. How can I keep the values of b unchanged from a? Many thanks,
st84495
It's not a rounding error, it's about printing. PyTorch uses less precision when printing.

a - b.numpy()
Out[3]: array([0., 0., 0.])
np.linalg.norm(a - b.numpy())
Out[4]: 0.0
print(b[0])
tensor(0.1078, dtype=torch.float64)
print(b[0].item())
0.107790701
st84496
To increase the precision for print outputs, just set: torch.set_printoptions(precision=10)
st84497
I recently moved my regression model from Keras to PyTorch and I have been getting much worse results in PyTorch, to say the least. At first the model wasn't even converging; it was just getting worse with each epoch (the std of the error kept growing on both the training set and the validation set). This was happening even though I had taken the utmost care to have the weight/bias initializations, optimizer and loss function parameters, learning rate, and batch size exactly the same as I had them in Keras. But then, as I was trying out different architectures to see what the problem was, I accidentally forgot to add the ReLU activation to one of them, and my mistake somehow made the model converge! After realizing this I tried removing the ReLU activations from the other architectures as well, and they all started converging. Has anyone else been experiencing the same problem? Is there something wrong with ReLU in PyTorch? Also, as a side note, I haven't been able to find the source code for torch.relu(); if anyone knows where to find it and could share a link, it would be very helpful. Thank you!
st84498
Solved by colesbury in post #2 There’s nothing wrong with the ReLU implementation. It’s widely used and pretty simple: (ReLU calls threshold) CPU: https://github.com/pytorch/pytorch/blob/3b1c3996e1c82ca8f43af9efa196b33e36efee37/aten/src/ATen/native/cpu/Activation.cpp#L33 CUDA: https://github.com/pytorch/pytorch/blob/3b1c3996e1…
st84499
There’s nothing wrong with the ReLU implementation. It’s widely used and pretty simple: (ReLU calls threshold) CPU: https://github.com/pytorch/pytorch/blob/3b1c3996e1c82ca8f43af9efa196b33e36efee37/aten/src/ATen/native/cpu/Activation.cpp#L33 CUDA: https://github.com/pytorch/pytorch/blob/3b1c3996e1c82ca8f43af9efa196b33e36efee37/aten/src/ATen/native/cuda/Activation.cu#L290 The choice of derivative for torch.relu at 0 may vary between frameworks. The subgradient includes the interval [0,1]. PyTorch uses 0 for the derivative. I think TensorFlow also uses 0, but other frameworks might use 1. (Values between 0 and 1 are also subgradients, but would make for an awkward choice.) (Image from https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7)
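A quick way to see which choice PyTorch makes:

import torch

x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)  # tensor([0., 0., 1.]) -- the derivative at exactly 0 is taken as 0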
st84500
Also, removing non-linearities (like ReLU) typically makes models easier to optimize, but less powerful. If you only have linear (or more precisely affine) functions like nn.Linear, nn.Conv2d, and +, then you can only learn a linear (affine) function of your inputs. My guess is that there's some difference in the interpretation of optimizer parameters (like momentum) plus maybe some other small differences. I remember that there's a different interpretation of momentum in PyTorch vs. Caffe2, but I don't remember how it differs. For SGD, I think dampening=True more closely matches the Caffe2 behavior: https://pytorch.org/docs/stable/optim.html#torch.optim.SGD
st84501
@colesbury Thank you for such a quick response. Although the model is still yielding problematic results in PyTorch using ReLU, I was at least able to get it to converge using SELU instead. I guess hoping the model would behave the same in different frameworks was an unrealistic expectation in the first place.
st84502
Hi all, I have the following setting: I want to learn a gaussian distribution via two NNs, i.e., one NN maps onto the mean vector and the other NN maps onto the covariance matrix. My issue is that I am feeding batches of shape (batch_dim, observation_dim) into the NN and hence obtain for the mean vector also a vector of size (batch_dim, observation_dim) and for the covariance matrix a tensor of shape (batch_dim, observation_dim * observation_dim). However, I do not want to have an individual mean vector and covariance matrix for each sample in the batch, but instead one mean vector and one covariance matrix for the whole batch. I.e., I want to get rid of the batch_dim in the output: The mean vector should have shape (observation_dim) and the covariance matrix should have shape (observation_dim, observation_dim). How can this be achieved? Thank you in advance!
st84503
Hi, I am trying to run a Faster R-CNN model based on the torchvision example for a custom dataset. However, I have noticed that when training, if xmax is smaller than xmin, the rpn_box_reg loss goes to nan. xmax and ymax represent the top left corner and xmin and ymin represent the bottom right corner. This is a snippet of the error that I get, with the bounding boxes printed:

tensor([[ 44., 108.,  49., 224.],
        [ 29.,  73., 210., 230.],
        [ 31.,  58., 139., 228.],
        [ 22.,  43., 339., 222.]], device='cuda:0')
Epoch: [0] [ 0/1173] eta: 0:09:46 lr: 0.000000 loss: 9.3683 (9.3683) loss_classifier: 1.7522 (1.7522) loss_box_reg: 0.0755 (0.0755) loss_objectness: 6.1522 (6.1522) loss_rpn_box_reg: 1.3884 (1.3884) time: 0.4997 data: 0.1162 max mem: 5696
tensor([[  0.,   0., 640., 512.]], device='cuda:0')
tensor([[ 28.,  57., 197., 220.]], device='cuda:0')
tensor([[ 23.,  46., 281., 222.]], device='cuda:0')
tensor([[ 20.,  28., 328., 210.]], device='cuda:0')
tensor([[ 37.,  45.,  47., 161.],
        [ 31.,  39., 111., 154.]], device='cuda:0')
tensor([[  0.,   0., 640., 512.]], device='cuda:0')
tensor([[ 33.,  85., 546., 222.],
        [ 31.,  85., 527., 213.]], device='cuda:0')
tensor([[ 40.,  76.,  29., 211.],
        [ 64.,  51.,  26., 206.],
        [ 40.,  77.,   1., 221.]], device='cuda:0')
Loss is nan, stopping training
{'loss_classifier': tensor(1.78, device='cuda:0', grad_fn=<NllLossBackward>), 'loss_box_reg': tensor(0., device='cuda:0', grad_fn=<DivBackward0>), 'loss_objectness': tensor(16.28, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>), 'loss_rpn_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>)}
An exception has occurred, use %tb to see the full traceback.

As you can see, each box is set as [xmin, ymin, xmax, ymax]. Thank you in advance.
st84504
Solved by ahmed in post #18 This should be the solution to our issue: https://github.com/pytorch/vision/issues/1128 However, we will have to skip those images without annotations. So I have followed the solution given here: Ignore images without annotations Good luck!
st84505
Hello, sometimes if your learning rate is too high the proposals will go outside the image and the rpn_box_regression loss will be too high, resulting in nan eventually. Try printing the rpn_box_regression loss and see if this is the case, if so, try lowering the learning rate. Remember to scale your learning rate linearly according to your batch size. Hope this helps
st84506
Thank you for your quick reply. I have tried reducing the learning rate all the way down to 0.00001, but I continue to get the same issue. These are the settings:

params = [p for p in model_ft.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.00001, momentum=0.9, weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

I have noticed that this only seems to emerge when the xmax value is lower than the xmin value. Would you have any idea why this may be the case? I will have a look at the rpn_box_regression as you suggested to see if the values are high.
st84507
You can use the anomaly detection tool to check where the nans are produced. In my experience nans appear when you have high values in your gradients or if you are doing a mathematically undefined operation (e.g. log(0)). In this case the latter is not likely, so it must be the former. Try to double check that your dataloader is working correctly and that it is providing you with the correct annotations.
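In case an example helps, a minimal sketch of using the tool (with a toy op that deliberately produces a NaN in the backward pass):

import torch

a = torch.tensor([1.0, -1.0], requires_grad=True)
with torch.autograd.detect_anomaly():
    out = a.sqrt().sum()   # sqrt(-1) -> nan
    out.backward()         # raises a RuntimeError pointing at the sqrt() above

In your training loop you would wrap the forward pass and loss.backward() in the same way.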
st84508
Hi, a simple alternative would be to preprocess your ground truth before training: you can discard the invalid samples in your ground truth. A couple of things to check: 1. xmax, ymax > xmin, ymin; 2. xmax, ymax and xmin, ymin are always inside your image. A sketch of that preprocessing is below.
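Something along these lines (a sketch; filter_boxes is just a made-up helper name):

import torch

def filter_boxes(boxes, img_w, img_h):
    # boxes are [xmin, ymin, xmax, ymax]
    keep = (boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])
    keep &= (boxes[:, 0] >= 0) & (boxes[:, 1] >= 0)
    keep &= (boxes[:, 2] <= img_w) & (boxes[:, 3] <= img_h)
    return boxes[keep]

boxes = torch.tensor([[53., 89.,   7., 226.],    # invalid: xmax < xmin
                      [29., 73., 210., 230.]])   # valid
print(filter_boxes(boxes, 640, 512))             # only the second box survives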
st84509
Thank you for your advice. I have checked the dataloader and it seems to be fine. I even tested the annotations that were giving me issues by plotting the images and placing the bounding boxes on them, and it looks to be working as it should. I haven't managed to get the anomaly detection tool working yet; I will continue to work on it tomorrow. Would you have an example of how it would be used?
st84510
Hi, all the samples that I am using are valid. The issue that I am facing is that when the x1 (xmax) value is smaller than x2 (xmin), the rpn_box_reg loss becomes NaN. For example, for the image below, the bounding boxes are tensor([[ 53., 89., 7., 226.]]), i.e. [x2, y2, x1, y1]. When the x1 value is smaller than x2, the loss goes to zero; however, it works fine when x1 > x2. In fact, it trains quite well. As you can see, the values are correct, as the cyclist has the correct bounding box based on the values above. I hope this makes it a bit more clear as to the issue I am facing. (Image: test.jpg, 720×720)
st84511
Well, I think I get your point, but then why don't you remove this annotation from the beginning?
st84512
Thank you for your reply. I would prefer to keep these annotations if possible, as annotations for cyclists are limited. Also, would it not mean that during evaluation the model outputs would always be coordinates where x1 > x2, and therefore it would never pick up objects like the cyclist in the lower left-hand corner of the image? I hope this is making sense; I am new to machine learning and PyTorch, so I may not fully understand some of the concepts.
st84513
I discussed this issue here as well. I tried with two different datasets. In both cases, rpn_box_reg becomes nan in the first epoch itself.
st84514
Is this torchvision-specific, or does my code have some issue? I've filed an issue. I checked my data thoroughly and everything seems to be working fine except the value of this specific loss.
st84515
I'm not sure, but I think it may have something to do with how the RPN box loss is calculated. Again, this is only a guess. Perhaps as x1 is smaller than x2, a negative loss is calculated and that is what is causing the NaN value. I am still looking into how the RPN box loss is being calculated based on the rpn.py file.
st84516
I have decreased the learning rate, which is already quite small. Do you think I should decrease it further? ahmed: optimizer = torch.optim.SGD(params, lr=0.00001, momentum=0.9, weight_decay=0.0005)
st84517
No, decreasing LR won’t help. In my case, fmassa says that I’ve an issue with bbox notation. I’ll work on the fix
st84518
This should be the solution to our issue: https://github.com/pytorch/vision/issues/1128 However, we will have to skip those images without annotations. So I have followed the solution given here: Ignore images without annotations Good luck!
st84519
Error message:

./torch/include/ATen/core/interned_strings.h:325:1: error: expected unqualified-id before 'do'
 FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
 ^
./torch/include/ATen/core/interned_strings.h:325:1: error: expected unqualified-id before 'while'
st84520
I have an embedding layer with each sentence having a length of 20 and the dimension 16. I pass this through a 1D convolutional layer -> ReLU -> AvgPool, and the output dimension of the AvgPool is [128, 10, 10], where 128 is the batch size. Now I want to know how to concatenate this AvgPool output with the embedding of dimension [128, 20, 16] so that I can pass the result to the next CNN layer. I have been stuck with the error: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 20 and 10 in dimension 1 at /pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1307.

self.c_1 = nn.Conv1d(in_channels=self.embedding_dim, out_channels=10, kernel_size=2, padding=1)
self.relu_1 = nn.ReLU()
self.avg_pool_1 = nn.AvgPool1d(2)
st84521
As the error message states, you cannot concatenate these tensors, as two dimensions have different sizes. As far as I understand, you would like to create something like a residual connection, where you pass the output of the embedding layer to this conv -> avg_pool block and try to concatenate it with the input. If that's the case and your embedding output has a shape of [128, 20, 16], the output shape of the conv block will be torch.Size([128, 10, 8]):

x = torch.randn(128, 20, 16)
c_1 = nn.Conv1d(in_channels=20, out_channels=10, kernel_size=2, padding=1)
avg_pool_1 = nn.AvgPool1d(2)
output = c_1(x)
output = avg_pool_1(output)

Could you explain your use case a bit, as I'm not sure how you would like to concatenate these tensors now?
st84522
My goal is to concatenate my sentence embedding with shape [128, 20, 16] with the output of the average pool with shape [128, 10, 10] at position 2, i.e. 10 + 16. The output of this concatenation is given as input to the next Conv1d layer.

torch.cat((embed.view(-1, sent.shape[1], embed.shape[2]), avg_pool_level_1), 2)

where the shape of embed is torch.Size([128, 20, 16]) and the shape of avg_pool_level_1 is torch.Size([128, 10, 10])
st84523
Concatenating these tensors won’t work unfortunately, as the sizes of two dimensions are different. You could pad one dimension (e.g. dim1) and concatenate it in dim2.
st84524
Thanks @ptrblck. I wanted to know if there is a way I can concatenate the input word embedding with the output of CNN-POOL layer considering that the dimensions keep reducing depending on the size of the POOL layer. Any technique that can be followed for this?
st84525
For a 1-dimensional signal, you could try to match the number of channels and concatenate in dim2. To do this just set out_channels=20 in your conv layer.
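A sketch of what that looks like with your shapes:

import torch
import torch.nn as nn

x = torch.randn(128, 20, 16)  # embedding output
c_1 = nn.Conv1d(in_channels=20, out_channels=20, kernel_size=2, padding=1)
avg_pool_1 = nn.AvgPool1d(2)

out = avg_pool_1(c_1(x))           # -> [128, 20, 8]
cat = torch.cat((x, out), dim=2)   # -> [128, 20, 24]
print(cat.shape)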
st84526
Cool, thanks! Also, in PyTorch how can we set the dimensions of an FC layer that comes after an avgpool if I don't know what the output dimension of the avgpool will be? For example, if the dimensions of the avgpool output are [128, 10, 57] and the FC comes after it, then initially I won't be able to know the value 57 unless I execute the code once. Here 10 is the out_channels and 114 was the sequence length of each sentence, which halved due to the avgpool.
st84527
You could calculate the output shapes using the formulas in the docs. If you don't want to do that, you could just run a single iteration, add some print statements to show the output shapes, and change the number of input features accordingly. However, if you are dealing with variable-sized inputs, I would recommend using an adaptive pooling layer, which will output a defined shape.
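For example (a sketch), an adaptive pooling layer pins the output length regardless of the input length, so the in_features of the following linear layer is known up front:

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool1d(57)
print(pool(torch.randn(128, 10, 114)).shape)  # torch.Size([128, 10, 57])
print(pool(torch.randn(128, 10, 200)).shape)  # torch.Size([128, 10, 57])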
st84528
Yes, so far I had used the single-iteration approach, printing the shapes and modifying accordingly.
st84529
The following is from the official tutorial. My question is: what's the point of no_grad here? My understanding is that it is only useful if you want to save costs while running the forward pass.

with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad
st84530
Solved by ptrblck in post #2 The no_grad guard disables gradient calculations, so Autograd won’t track any operations applied in this block. While it’s useful for inference, it can be also applied when you would like to manipulate some internal parameters without using the .data attribute.
st84531
The no_grad guard disables gradient calculations, so Autograd won’t track any operations applied in this block. While it’s useful for inference, it can be also applied when you would like to manipulate some internal parameters without using the .data attribute.
st84532
As above. I tried passing numpy arrays to the DataLoader directly, and it worked. How does this contrast with converting the numpy arrays to tensors, passing them to a TensorDataset, and then passing the dataset to the DataLoader?
st84533
How did you pass the numpy arrays to the DataLoader? Did you get the expected batches for both the data and the target? I would personally see it as a bit of a hack, if that's working. This simple example won't throw an error, but doesn't yield the expected results:

data, target = np.zeros((100, 1)), np.ones((100, 1))
loader_np = DataLoader((data, target), batch_size=1)

for x in loader_np:
    print(x.shape)
    print(type(x))
    print(x)
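For comparison, the TensorDataset route yields properly paired batches:

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

data, target = np.zeros((100, 1)), np.ones((100, 1))
dataset = TensorDataset(torch.from_numpy(data), torch.from_numpy(target))
loader = DataLoader(dataset, batch_size=10)

for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([10, 1]) torch.Size([10, 1])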
st84534
Hi, the question is very basic. PyTorch uses the default weight initialization method as discussed here, but it also provides a way to initialize weights using the Xavier equation. In many places (1, 2) the default method is also referred to as Xavier's. Can anyone explain where I am going wrong? Any help is much appreciated.
st84535
Solved by ptrblck in post #2 The post is from January 2018 and outdated by now. You can find the current weight init here, which is init.kaiming_uniform_(self.weight, a=math.sqrt(5)) for the weights.
st84536
The post is from January 2018 and outdated by now. You can find the current weight init here, which is init.kaiming_uniform_(self.weight, a=math.sqrt(5)) for the weights.
st84537
Great! Thanks:) One small clarification: is the method mentioned here actually Xavier initialization?
st84538
This doesn't seem to be a Xavier init, as only fan_in is used, while xavier_uniform_ uses a different scaling factor and the sum of the number of input and output features.
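If you do want Xavier init, you can apply it explicitly after creating the model, e.g. (a sketch):

import torch.nn as nn

def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
model.apply(init_weights)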
st84539
Hi, I have a high-dimensional tensor whose values I'm trying to update along specific indices using another tensor, like so:

N = 5
C = 2
K = 9
J = 4

x = torch.randn(N, C, K, J)       # Shape: (5, 2, 9, 4)
new_values = torch.randn((N, C))  # Shape: (5, 2)

index1 = torch.arange(N)
index3 = torch.randint(K, (N,))
index4 = torch.randint(J, (N,))

x[index1, :, index3, index4] = new_values

This works, but I'm looking for a similar operation that would take the indices as a tuple (the indices are generated dynamically, so I can't hardcode the indexing of x like above) and would work for an arbitrary number of dimensions, as in the example below:

mystery_update_operation(x, (index1, index3, index4), new_values)

I have looked into scatter_ and index_put_ but can't figure out how to make them work in this case. Cheers.
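Update (a sketch, continuing the snippet above): one thing that does seem to work is plain advanced indexing with a dynamically built tuple, where slice(None) stands in for ::

idx = (index1, slice(None), index3, index4)
x[idx] = new_values  # same as x[index1, :, index3, index4] = new_values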
st84540
I recently downgraded from PyTorch 1.1 to 1.0.1 to get an old project running. Specs/Environment: Ubuntu 18.04; GPU: Nvidia 2080; CUDA: V10.0.130 (from nvcc --version). After downloading/installing PyTorch 1.0.1 from https://pytorch.org/get-started/previous-versions/ via pip install torch==1.0.1 -f https://download.pytorch.org/whl/cu100/stable and running torch.version.cuda, it returns 9.0.176. I believe this is the reason why I'm getting CUDA errors in my project. How can I get torch.version.cuda to return the correct CUDA version (10.0)? Thank you in advance.
st84541
Solved by PyDennis in post #2 I have solved the problem by downloading/installing a different .whl file via pip install https://download.pytorch.org/whl/cu100/torch-1.0.1-cp36-cp36m-linux_x86_64.whl. This comment from the official PyTorch Github has led me to the solution.
st84542
I have solved the problem by downloading/installing a different .whl file via pip install https://download.pytorch.org/whl/cu100/torch-1.0.1-cp36-cp36m-linux_x86_64.whl. This comment from the official PyTorch GitHub has led me to the solution.
st84543
I have a PyTorch model consisting of a Conv2d followed by a BatchNorm2d, and I am printing the output of each layer in the forward pass. I cannot seem to understand the result of the output of BatchNorm based on the values of the weight and bias it holds. The following are the outputs as printed in PyTorch (the conv output, which is also the input to BatchNorm, and the BatchNorm output):

tensor([[[[-0.0403,  0.0103,  0.0185],
          [ 0.0240,  0.0535,  0.0137],
          [ 0.0233,  0.0239, -0.0202]],
         [[-0.1044, -0.1664, -0.2347],
          [-0.1708, -0.2092, -0.2356],
          [-0.2202, -0.2412, -0.2733]]]], grad_fn=<MkldnnConvolutionBackward>)
tensor([[[[-1.6799, -0.0496,  0.2127],
          [ 0.3922,  1.3428,  0.0598],
          [ 0.3674,  0.3883, -1.0339]],
         [[ 0.4344,  0.1697, -0.1216],
          [ 0.1510, -0.0127, -0.1253],
          [-0.0596, -0.1495, -0.2863]]]], grad_fn=<NativeBatchNormBackward>)

The outputs were printed from the forward function as:

x1 = self.conv1(x)
print(x1)
x2 = self.bn(x1)
print(x2)

Now when I print the weight and bias of the BatchNorm layer, it shows this:

Parameter containing:
tensor([0.8352, 0.2056], requires_grad=True)
Parameter containing:
tensor([0., 0.], requires_grad=True)

If BatchNorm is (weight * previous_tensor + bias), then the first output value should have been (0.8352 * -0.0403) + 0 = -0.0336, but it shows -1.6799. Could someone please explain? I ask this as one of my colleagues pointed it out. In our internal code, our output is indeed -0.033 for the first index, so we wanted to understand the reasoning behind PyTorch's value, or if there are other factors involved.
st84544
I think I figured this out; someone can confirm: it basically normalizes the output from the conv per channel, so that we have C means and variances. It then adjusts the conv output by subtracting the mean (for that channel) and dividing by the standard deviation (for that channel), and then multiplies the result by the BatchNorm weight for that channel to get the value.
st84545
Yes, that’s the applied method in the train() mode. Additionally the bias is also added to the result. If you call model.eval(), the running estimates will be used to normalize the input instead of the current batch statistic.
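A small sketch verifying the train()-mode math per channel:

import torch
import torch.nn as nn

x = torch.randn(1, 2, 3, 3)
bn = nn.BatchNorm2d(2)
out = bn(x)  # training mode by default

xc = x.transpose(0, 1).reshape(2, -1)               # per-channel view: (C, N*H*W)
mean = xc.mean(dim=1).view(1, -1, 1, 1)
var = xc.var(dim=1, unbiased=False).view(1, -1, 1, 1)
manual = (x - mean) / torch.sqrt(var + bn.eps)
manual = manual * bn.weight.view(1, -1, 1, 1) + bn.bias.view(1, -1, 1, 1)

print(torch.allclose(out, manual, atol=1e-6))       # True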
st84546
Thanks! I am trying to ensure that BatchNorm is used in training mode but is frozen, because there are other layers that will be updated. Do I need to do this to the module after the net object is created?

net.bn.weight.requires_grad = False
net.bn.bias.requires_grad = False
net.bn.train()
st84547
If you don’t want to train the affine parameters at all (weight and bias), you could just initialize the batch norm layer with affine=False. Otherwise to disable their updates temporarily, you could set the .requires_grad attribute to False as shown in your example.
st84548
Ok, I will try that. But is net.bn.train() absolutely required so that the layer behaves as though it is not in eval mode? If I am not wrong, all modules are in train() mode by default, so maybe this is not needed. Conversely, if I needed to do inference, I would necessarily require net.bn.eval()?
st84549
Yes, that’s right. All modules are in training mode by default after initialization. Sorry, I’ve overlooked the last line of code. For inference, I would rather call net.eval(), which will set all modules recursively to evaluation mode.
st84550
I'm dealing with MRI data, which I converted into numpy files and saved. My data consists of input: [3600, 512, 512] (N, H, W), 1.8G, and mask: [3600, 8, 512, 512] (N, classes, H, W), 28.125G. I'm using a U-Net for segmentation, and the GPU is an NVIDIA GeForce RTX 2080 Ti with 11G of RAM. The data is loaded as follows:

class trainDataset(torch.utils.data.Dataset):
    def __init__(self, data, target, transform=None):
        self.data = data.astype(np.float32)
        self.data = normalize(data)
        self.target = target.astype(np.float32)
        self.transform = transform

    def __getitem__(self, index):
        x = self.data[index]
        y = self.target[index]
        if self.transform:
            x = self.transform(x)
        return x, y

    def __len__(self):
        return len(self.data)

numpy_data = np.load(image_path + 'MRtrain.npy')
numpy_target = np.load(mask_path + 'RStrain.npy')
traindataset = trainDataset(numpy_data, numpy_target, transform=transform)
trainloader = torch.utils.data.DataLoader(traindataset, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=False)

and the train loop as follows:

def fit(epoch, model, data_loader, phase='train', volatile=False):
    if phase == 'train':
        exp_lr_scheduler.step()
        model.train()
    if phase == 'valid':
        model.eval()
    running_loss = 0.0
    for batch_idx, (data, target) in enumerate(data_loader):
        inputs, target = data.cpu(), target.cpu()
        if is_cuda:
            inputs, target = data.cuda(), target.cuda()
        inputs, target = Variable(inputs), Variable(target)
        if phase == 'train':
            optimizer.zero_grad()
        output = model(inputs)
        pred = torch.sigmoid(output)
        loss = dice(pred, target)
        running_loss += loss.data.item()
        if phase == 'train':
            loss.backward()
            optimizer.step()
    loss = running_loss / len(data_loader.dataset)
    print('{} Dice_Loss: {:.4f}'.format(phase, loss))
    return loss

Training at 512x512 causes a CUDA out-of-memory error. I think it's inefficient to deal with numpy files, but is there a good way?
st84551
I assume you are running out of memory, or are you really seeing a MemoryError? In the former case, could you try to lower the number of kernels in your UNet and check the memory usage? The OOM shouldn't be related to loading the data as numpy arrays. PS: based on the target shape, I assume you are dealing with a multi-label classification, i.e. each pixel might correspond to multiple classes?
st84552
I got the following error message; memory usage is about 97% (14.8G) and GPU memory usage is about 6.8G:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-10-f1bcb8a65b63> in <module>
     18     print('-' * 10)
     19     epoch_loss = fit(epoch,model,trainloader,phase='train')
---> 20     val_epoch_loss = fit(epoch,model,validloader,phase='valid')
     21     train_losses.append(epoch_loss)
     22     val_losses.append(val_epoch_loss)

<ipython-input-8-cb42f6cac88b> in fit(epoch, model, data_loader, phase, volatile)
     14             optimizer.zero_grad()
     15
---> 16         output = model(inputs)
     17         pred = torch.sigmoid(output)
     18         loss = dice(pred,target)

C:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

<ipython-input-5-ef5c656e99e9> in forward(self, x)
     45
     46         x = self.dconv_up3(x)
---> 47         x = self.upsample(x)
     48         x = torch.cat([x, conv2], dim=1)
     49

C:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

C:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\upsampling.py in forward(self, input)
    129     @weak_script_method
    130     def forward(self, input):
--> 131         return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
    132
    133     def extra_repr(self):

C:\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\functional.py in interpolate(input, size, scale_factor, mode, align_corners)
   2561         raise NotImplementedError("Got 4D input, but linear mode needs 3D input")
   2562     elif input.dim() == 4 and mode == 'bilinear':
-> 2563         return torch._C._nn.upsample_bilinear2d(input, _output_size(2), align_corners)
   2564     elif input.dim() == 4 and mode == 'trilinear':
   2565         raise NotImplementedError("Got 4D input, but trilinear mode needs 5D input")

RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.00 GiB total capacity; 8.19 GiB already allocated; 174.59 MiB free; 96.81 MiB cached)

As you advised, lowering the number of kernels in my U-Net worked, thanks! I tried to do a multi-organ classification, but now I want to do a single-organ classification. My numpy file shapes are image: [3600, 512, 512], mask: [3600, 512, 512]. The input tensors per batch are image: [4, 512, 512], mask: [4, 512, 512], but the output tensor is [4, 1, 512, 512]. I think the number of classes should be 2 because of background + organ, but I don't know how to include the background in the mask classes. Do you have a good idea?
st84553
Good to hear it's working now. Is the input shape defined as [batch_size, height, width] or [channels, height, width]? In either case, you should pass the input as [batch_size, channels, height, width] to conv layers. If you are dealing with single-channel images, just unsqueeze dim1:

input = input.unsqueeze(1)

For nn.CrossEntropyLoss (and nn.NLLLoss), the target should contain class indices in the shape [batch_size, height, width]. I assume your target already contains the background class as some class index (e.g. class 0)? If so, just transform all other classes to the background class:

background_class_index = 0
desired_class_index = 1
target[target != desired_class_index] = background_class_index
st84554
Thanks for your reply! Unfortunately my target doesn't contain the background class as some class index. All of my targets consist of the pixel value 0 (background) or another value (the desired target). E.g., one 3x3 target array consists of [0 1 0 0 1 1 0 0 1]; stacking 180 slices with batch_size 4 gives a target shape of [4, 3, 3], i.e. [batch_size, height, width]. In this case, if I apply the transform method that you suggested without having to process pixels with a value of 0, will it have two classes?
st84555
skyunyoo: All of my targets consist of the pixel value 0 (background) or another value (the desired target). That would mean that the background class is class index 0. skyunyoo: In this case, if I apply the transform method that you suggested without having to process pixels with a value of 0, will it have two classes? In my example I created a target containing only two valid classes: background with class index 0, and not-background with class index 1. If your non-background class uses another index than 1, you should also convert it to 1 for a binary classification use case.
st84556
The class index has been solved as per your reply, but I ran into another problem.

for x in [numpy_data, numpy_target]:
    print(x.min(), x.max())

When I loaded numpy files of shape (480, 512, 512), I got for the input min: 0, max: 499 and for the target min: 0, max: 3. I use a normalize function as follows:

def normalize(img):
    arr = img.copy().astype(np.float32)
    M = np.float32(np.max(img))
    if M != 0:
        arr *= 1. / M
    return arr

And I use the Dataset as follows:

class trainDataset(torch.utils.data.Dataset):
    def __init__(self, data, target, transform=None):
        self.data = data.astype(np.float32)
        self.data = normalize(data)
        self.target = target.astype(np.float32)
        self.target = normalize(target)
        self.transform = transform

    def __getitem__(self, index):
        x = self.data[index]
        y = self.target[index]
        if self.transform:
            x = self.transform(x)
        return x, y

    def __len__(self):
        return len(self.data)

transform = transforms.Compose([transforms.ToPILImage(mode=None),
                                transforms.Resize(512),
                                transforms.ToTensor()])

traindataset = trainDataset(train_numpy_data, train_numpy_target, transform=transform)
validdataset = trainDataset(valid_numpy_data, valid_numpy_target, transform=transform)
trainloader = torch.utils.data.DataLoader(traindataset, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=False)
validloader = torch.utils.data.DataLoader(validdataset, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=False)

inputs, masks = next(iter(trainloader))
print(inputs.shape, masks.shape)
print(inputs.min(), inputs.max())
print(masks.min(), masks.max())

I got:

torch.Size([4, 1, 512, 512]) torch.Size([4, 512, 512])
tensor(0.) tensor(0.7675)
tensor(0.) tensor(0.)

I don't know why the min-max values don't reach 0-1. It's annoying, but I'd really appreciate it if you could help me once more. Thank you!
st84557
Using your code, I get normalized values in the range [0, 1]:

train_numpy_data = np.random.randint(0, 500, (480, 512, 512))
train_numpy_target = np.random.randint(0, 4, (480, 512, 512))

traindataset = trainDataset(train_numpy_data, train_numpy_target, transform=transform)
trainloader = torch.utils.data.DataLoader(traindataset, batch_size=4, shuffle=True, num_workers=0, pin_memory=False)

inputs, masks = next(iter(trainloader))
print(inputs.shape, masks.shape)
> torch.Size([4, 1, 512, 512]) torch.Size([4, 512, 512])
print(inputs.min(), inputs.max())
> tensor(0.) tensor(1.)
print(masks.min(), masks.max())
> tensor(0.) tensor(1.)

However, you are also normalizing the masks, which would be wrong for a classification use case. If you would like to get rid of unwanted classes, you should follow the code snippet given before:

background_class_index = 0
desired_class_index = 3
train_numpy_target[train_numpy_target != desired_class_index] = background_class_index
train_numpy_target[train_numpy_target == desired_class_index] = 1
st84558
Oh, the problem was the numpy type. The type was int64, but after changing it to int32 it was normalized as you suggested. Thank you!!
st84559
I installed the PyTorch nightly for CUDA 10 using pip3 by running: pip3 install torch_nightly -f https://download.pytorch.org/whl/nightly/cu100/torch_nightly.html Now whenever I try to import torchvision I get:

from torchvision import _C
ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory

CUDA 10 is installed on my system and is symlinked to /usr/local/cuda/lib64. Here is the output of nvidia-smi (screenshot: image.png, 778×358). What went wrong? Why is torchvision looking for CUDA 9.0 files?
st84560
Solved by ptrblck in post #4 The mentioned package was maintained by @stas and, if I’m not mistaken, wasn’t updated anymore, so you would have to wait for the next official torchvision-nightly release or build from source.
st84561
This might be related to this issue. I'm not sure if there is a better way (torchvision-nightly?), but it seems that if you would like to use the PyTorch nightly binaries, you should build torchvision from source (or use a stable PyTorch binary).
st84562
Seems like there is a torchvision nightly, as discussed here. I should install that one with PyTorch nightly, yes?
st84563
The mentioned package was maintained by @stas and, if I’m not mistaken, wasn’t updated anymore, so you would have to wait for the next official torchvision-nightly release or build from source.
st84564
I wrote my own custom dataset class, but when I try to iterate through its data one by one I get an infinite loop. I went to the extreme and had the __len__ method always return 0, and that didn't stop it from continually looping through my dataset. How do I stop it? Why does:

for i, data in enumerate(dataset):
    print(i)
    print(data)

keep fetching the next item? How does it stop? In the hope of having a reproducible error, I coded this:

class MyDataset(Dataset):
    def __init__(self):
        self.bob = [0, 1, 2]

    def __len__(self):
        return 0

    def __getitem__(self, idx):
        print(f'idx = {idx}')
        return self.bob[idx]

Now it does stop, even though __len__ is zero…
st84565
Hi, I tried this:

class MyDataset(Dataset):
    def __init__(self):
        self.bob = [0, 1, 2]

    def __len__(self):
        return len(self.bob)

    def __getitem__(self, idx):
        print(f'idx = {idx}')
        return self.bob[idx]

dataset = DataLoader(MyDataset(), batch_size=1, shuffle=True, num_workers=0)

for i, data in enumerate(dataset):
    print(i)
    print(data)

And got the following output, which did not loop infinitely:

idx = 1
0
tensor([1])
idx = 0
1
tensor([0])
idx = 2
2
tensor([2])

Is your DataLoader() correct? Because your custom Dataset looks correct.
st84566
I think I figured it out. I think it's that the stopping condition of looping through the dataset has to use the len function. Though for some reason it DOES work on the strange example I cooked up without looping forever… so it might be a weird edge case. I expect once I wrap it with the DataLoader class everything should work fine (I hope).
st84567
Make sure your __getitem__ raises an IndexError for illegal indices. If your __getitem__ function never raises an exception, it will loop forever. The for loop doesn't make use of __len__. The example you posted will raise an IndexError at the correct time, because __getitem__ with 3 calls self.bob[3], which raises the error. https://docs.python.org/3/reference/datamodel.html
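A sketch making that explicit:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self):
        self.bob = [0, 1, 2]

    def __len__(self):
        return len(self.bob)

    def __getitem__(self, idx):
        # Indexing a Python list raises IndexError for us; with e.g. a
        # preallocated tensor you would need to raise it yourself:
        if idx >= len(self.bob):
            raise IndexError(f'index {idx} is out of range')
        return self.bob[idx]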