instruction | input | output | source
---|---|---|---|
How to calculate the 3x3 covariance matrix for RGB values across an image dataset? | I need to calculate the covariance matrix for RGB values across an image dataset, and then apply Cholesky decomposition to the final result.
The covariance matrix for RGB values is a 3x3 matrix M, where M_(i, i) is the variance of channel i and M_(i, j) is the covariance between channels i and j.
The end result should be something like this:
([[0.26, 0.09, 0.02],
[0.27, 0.00, -0.05],
[0.27, -0.09, 0.03]])
I'd prefer to stick to PyTorch functions even though NumPy has a cov function.
I attempted to recreate the NumPy cov function in PyTorch here, based on other cov implementations and clones:
def pytorch_cov(tensor, tensor2=None, rowvar=True):
if tensor2 is not None:
tensor = torch.cat((tensor, tensor2), dim=0)
tensor = tensor.view(1, -1) if tensor.dim() < 2 else tensor
tensor = tensor.t() if not rowvar and tensor.size(0) != 1 else tensor
tensor = tensor - torch.mean(tensor, dim=1, keepdim=True)
return 1 / (tensor.size(1) - 1) * tensor.mm(tensor.t())
def cov_vec(x):
c = x.size(0)
m1 = x - torch.sum(x, dim=[1],keepdims=True)/ c
out = torch.einsum('ijk,ilk->ijl',m1,m1) / (c - 1)
return out
The dataset loading would be like this:
dataset = torchvision.datasets.ImageFolder(data_path)
loader = torch.utils.data.DataLoader(dataset)
for images, _ in loader:
batch_size = images.size(0)
...
For the moment I'm just experimenting with images created with torch.randn(batch_size, 3, height, width).
Edit:
I'm attempting to replicate the matrix from Tensorflow's Lucid here, and somewhat explained on distill.pub here.
Second Edit:
In order to make the output resemble the example one, you have to do this instead of using Cholesky:
rgb_cov_tensor = rgb_cov_tensor / len(loader.dataset)
U,S,V = torch.svd(rgb_cov_tensor)
epsilon = 1e-10
svd_sqrt = U @ torch.diag(torch.sqrt(S + epsilon))
The resulting matrix can then be used to perform color decorrelation, which is useful for visualizing features (DeepDream). I've implemented it in my project here.
| Here is a function for computing the (unbiased) sample covariance matrix on a 3 channel image, named rgb_cov. Cholesky decomposition is straightforward with torch.cholesky:
import torch
def rgb_cov(im):
'''
    Assuming im is a torch.Tensor of shape (H, W, 3):
'''
    im_re = im.reshape(-1, 3)
    im_re = im_re - im_re.mean(0, keepdim=True)  # out-of-place, so the caller's tensor is not modified
    return 1/(im_re.shape[0]-1) * im_re.T @ im_re
#Test:
im = torch.randn(50,50,3)
cov = rgb_cov(im)
L_cholesky = torch.cholesky(cov)
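For the dataset-wide case in the question's second edit, here is a sketch (not part of the original answer; it assumes the loader from the question and CxHxW image tensors) that accumulates rgb_cov over every image and then takes the SVD square root:
import torch
rgb_cov_tensor = torch.zeros(3, 3)
for images, _ in loader:
    for im in images:
        rgb_cov_tensor += rgb_cov(im.permute(1, 2, 0))  # CxHxW -> HxWx3
rgb_cov_tensor = rgb_cov_tensor / len(loader.dataset)   # average over the dataset
U, S, V = torch.svd(rgb_cov_tensor)
svd_sqrt = U @ torch.diag(torch.sqrt(S + 1e-10))        # color decorrelation matrix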
| https://stackoverflow.com/questions/64015444/ |
Any reason to save a pretrained BERT tokenizer? | Say I am using tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True), and all I am doing with that tokenizer during fine-tuning of a new model is the standard tokenizer.encode()
I have seen in most places that people save that tokenizer at the same time that they save their model, but I am unclear on why it's necessary to save since it seems like an out-of-the-box tokenizer that does not get modified in any way during training.
| In your case, if you are using the tokenizer only to tokenize the text (encode()), then you don't need to save it; you can always load the tokenizer of the pretrained model.
However, sometimes you may want to use the tokenizer of the pretrained model and then add new tokens to its vocabulary, or redefine the special symbols such as '[CLS]', '[MASK]', '[SEP]', '[PAD]', or any other such special tokens. In this case, since you have made changes to the tokenizer, it is useful to save it for future use.
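For illustration, a minimal sketch of such a case (the added token strings are hypothetical):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenizer.add_tokens(['covid19', 'n95'])         # hypothetical domain-specific vocabulary
# model.resize_token_embeddings(len(tokenizer))  # the model's embedding table must grow to match
tokenizer.save_pretrained('./my_tokenizer')      # now the modified tokenizer is worth persisting
tokenizer = BertTokenizer.from_pretrained('./my_tokenizer')  # reload it later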
| https://stackoverflow.com/questions/64018723/ |
How to compute hessian matrix for all parameters in a network in pytorch? | Suppose vector \theta is all the parameters in a neural network, I wonder how to compute hessian matrix for \theta in pytorch.
Suppose the network is as follows:
class Net(Module):
def __init__(self, h, w):
super(Net, self).__init__()
self.c1 = torch.nn.Conv2d(1, 32, 3, 1, 1)
self.f2 = torch.nn.Linear(32 * h * w, 5)
def forward(self, x):
x = self.c1(x)
x = x.view(x.size(0), -1)
x = self.f2(x)
return x
I know the second derivative can be calculated by calling torch.autograd.grad() twice, but the parameters in pytorch is organized by net.parameters(), and I don't know how to compute the hessian for all parameters.
I have tried to use torch.autograd.functional.hessian() in pytorch 1.5 as follows:
import torch
import numpy as np
from torch.nn import Module
import torch.nn.functional as F
class Net(Module):
def __init__(self, h, w):
super(Net, self).__init__()
self.c1 = torch.nn.Conv2d(1, 32, 3, 1, 1)
self.f2 = torch.nn.Linear(32 * h * w, 5)
def forward(self, x):
x = self.c1(x)
x = x.view(x.size(0), -1)
x = self.f2(x)
return x
def func_(a, b, c, d):
p = [a, b, c, d]
x = torch.randn(size=[8, 1, 12, 12], dtype=torch.float32)
y = torch.randint(0, 5, [8])
x = F.conv2d(x, p[0], p[1], 1, 1)
x = x.view(x.size(0), -1)
x = F.linear(x, p[2], p[3])
loss = F.cross_entropy(x, y)
return loss
if __name__ == '__main__':
net = Net(12, 12)
h = torch.autograd.functional.hessian(func_, tuple([_ for _ in net.parameters()]))
print(type(h), len(h))
h is a tuple, and the results are in a strange shape. For example, the shape of ∂²Loss/∂(c1.weight)² is [32,1,3,3,32,1,3,3]. It seems like I can combine them into a complete H, but I don't know which part of the whole Hessian matrix each block corresponds to, or in what order.
| Here is one solution; I think it's a little too complex, but it could be instructive.
Consider these points:
First, about torch.autograd.functional.hessian(): the first argument must be a function, and the second argument should be a tuple or list of tensors. That means we cannot directly pass a scalar loss to it. (I don't know why, because I think there is little difference between a scalar loss and a function that returns a scalar.)
Second, I want to obtain a complete Hessian matrix, which is the second derivative of all parameters, and it should be in an appropriate order.
So here is the solution:
import torch
import numpy as np
from torch.nn import Module
import torch.nn.functional as F
class Net(Module):
def __init__(self, h, w):
super(Net, self).__init__()
self.c1 = torch.nn.Conv2d(1, 32, 3, 1, 1)
self.f2 = torch.nn.Linear(32 * h * w, 5)
def forward(self, x):
x = self.c1(x)
x = x.view(x.size(0), -1)
x = self.f2(x)
return x
def haha(a, b, c, d):
p = [a.view(32, 1, 3, 3), b, c.view(5, 32 * 12 * 12), d]
x = torch.randn(size=[8, 1, 12, 12], dtype=torch.float32)
y = torch.randint(0, 5, [8])
x = F.conv2d(x, p[0], p[1], 1, 1)
x = x.view(x.size(0), -1)
x = F.linear(x, p[2], p[3])
loss = F.cross_entropy(x, y)
return loss
if __name__ == '__main__':
net = Net(12, 12)
h = torch.autograd.functional.hessian(haha, tuple([_.view(-1) for _ in net.parameters()]))
# Then we just need to fix tensors in h into a big matrix
I built a new function haha that works in the same way as the neural network Net. Notice that the arguments a, b, c, d are all flattened into one-dimensional vectors, so that the tensors in h are all two-dimensional, in good order, and easy to combine into a large Hessian matrix.
In my example, the shapes of the tensors in h are
# with relation to c1.weight and c1.weight, c1.bias, f2.weight, f2.bias
[288,288]
[288,32]
[288,23040]
[288,5]
# with relation to c1.bias and c1.weight, c1.bias, f2.weight, f2.bias
[32, 288]
[32, 32]
[32, 23040]
[32, 5]
...
So it is easy to see what each tensor means and which part of the Hessian it is. All we need to do is allocate a (288+32+23040+5)*(288+32+23040+5) matrix and place the tensors of h into the corresponding locations.
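A sketch of that final assembly step, continuing from the script above (block sizes taken from the shapes listed):
sizes = [288, 32, 23040, 5]  # c1.weight, c1.bias, f2.weight, f2.bias (flattened)
offsets = [0]
for s in sizes:
    offsets.append(offsets[-1] + s)
H = torch.zeros(offsets[-1], offsets[-1])
for i in range(len(sizes)):
    for j in range(len(sizes)):
        H[offsets[i]:offsets[i + 1], offsets[j]:offsets[j + 1]] = h[i][j]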
I think the solution could still be improved; ideally we wouldn't need to build a function that mirrors the neural network, or transform the shape of the parameters twice. But for now I don't have better ideas, so if there is a better solution, please let me know.
| https://stackoverflow.com/questions/64024312/ |
Creating a generator over a zarr array with start and end for pytorch dataloader | I'm working on a pytorch project where my data is saved in zarr.
Random access on zarr is costly, but thanks to zarr using a blockwise cache, iteration is really quick. To harness this fact, I use an IterableDataset together with multiple workers:
class Data(IterableDataset):
def __init__(self, path, start=None, end=None):
super(Data, self).__init__()
store = zarr.DirectoryStore(path)
self.array = zarr.open(store, mode='r')
if start is None:
start = 0
if end is None:
end = self.array.shape[0]
assert end > start
self.start = start
self.end = end
def __iter__(self):
return islice(self.array, self.start, self.end)
The issue is that self.array has on the order of 10e9 rows and for consecutive workers, as self.start and self.end naturally get bigger, creating the generators like itertools.islice(array, start, end) takes a significant time out of my training/validation processes, because islice still has to iterate over the unneeded elements until it gets to start. Once a generator is created per each worker, this works like a charm, but to get there takes too long.
Is there a better way to create such a generator? Or maybe there's a smarter way to use zarr in pytorch?
| Update: As of March 2021 this change has been merged into zarr.
I took a small dive into zarr and it looks like this will most easily be enabled from inside zarr. I have opened an issue here, in the meantime I made a fork of zarr that implements the function array.islice(start, end).
The dataset __iter__ method then looks like this:
def __iter__(self):
return self.array.islice(self.start, self.end)
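With that in place, you can also shard the range between DataLoader workers inside __iter__. A sketch, assuming the islice method above and the Data class from the question:
import math
import torch

def __iter__(self):  # as a method of the Data class
    start, end = self.start, self.end
    worker_info = torch.utils.data.get_worker_info()
    if worker_info is not None:  # split the [start, end) range among the workers
        per_worker = int(math.ceil((end - start) / worker_info.num_workers))
        start = self.start + worker_info.id * per_worker
        end = min(start + per_worker, self.end)
    return self.array.islice(start, end)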
| https://stackoverflow.com/questions/64028213/ |
How to prepare this PyTorch official ImageNet example? | This is a technical question on preparing a dataset.
I'm trying to follow this official example
https://github.com/pytorch/examples/tree/master/imagenet
but I cannot even start with because I don't understand the requirements. It says
Install PyTorch (pytorch.org)
pip install -r requirements.txt
Download the ImageNet dataset from http://www.image-net.org/
Then, and move validation images to labeled subfolders, using the following shell script
For the first requirement, I'm working on Colab, so I don't think I need to install PyTorch again on my local pc.
The second one doesn't work, as there's obviously no module named "requirements.txt". This is where I'm beginning to realize there's something on this git repo that I completely don't understand how to use. Anyway, I could just open the text file from the git repo directly, and it just says use torch and torchvision. Okay, I have no problem importing them.
The third requirement. So I went to ImageNet website and signed the agreement for the research use. Now the requirement tells me to download THE ImageNet data, but I see bunch of various options there (like by published years, purposes like for a competition, resolution, etc.). Which one is THE DATASET?
I'm new to PyTorch, and I think I'm missing some protocol about how the PyTorch dev community provides examples via this way...
Any help will be appreciated. Thank you.
|
there's obviously no module named "requirements.txt"
It's the requirements.txt file in that repo. You can list package names in a file like this and install them all at once using pip; that's what pip install -r requirements.txt does. Of course, since it only contains torch and torchvision, you don't need to run it, as these are already installed on Google Colab.
Which one is THE DATASET?
I can't access that page without signing up, but you can download any of the datasets (of any year, purpose, etc.). The important thing is that, in order to train on it with PyTorch using the ImageFolder API (which is the one used in the repo you mentioned), its structure should be like this:
train/
dog/
xxx.png
xxy.png
cat/
xxz.png
val/
...
You can use the script they mentioned to arrange the ImageNet validation data that way, then load it as sketched below.
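Once the folders are arranged like that, loading is straightforward (a sketch; the paths are assumed):
import torchvision

transform = torchvision.transforms.ToTensor()
train_set = torchvision.datasets.ImageFolder('train/', transform=transform)
val_set = torchvision.datasets.ImageFolder('val/', transform=transform)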
If you're just getting started with pytorch, I'd advise you to go through pytorch tutorials such as this one.
| https://stackoverflow.com/questions/64038769/ |
Pytorch BERT: Misshaped inputs | I am running into issues of evaluating huggingface's BERT model ('bert-base-uncased') on large input sequences.
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
token_ids = [101, 1014, 1016, ...] # len(token_ids) == 33286
token_tensors = torch.tensor([token_ids]) # shape == [1, 33286]
segment_tensors = torch.tensor([[1] * len(token_ids)]) # shape == [1, 33286]
model(token_tensors, segment_tensors)
Traceback
self.model(token_tensors, segment_tensors)
File "/home/.../python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/.../python3.8/site-packages/transformers/modeling_bert.py", line 824, in forward
embedding_output = self.embeddings(
File "/home/.../python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/.../python3.8/site-packages/transformers/modeling_bert.py", line 211, in forward
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
RuntimeError: The size of tensor a (33286) must match the size of tensor b (512) at non-singleton dimension 1
I noticed that model.embeddings.position_embeddings.weight.shape == (512, 768), i.e. when I restrict the input size to model(token_tensors[:, :10], segment_tensors[:, :10]) it works. Am I misunderstanding how the token_tensors and segment_tensors should be shaped? I thought they should be sized (batch_size, sequence_length).
Thanks for the help
| I just discovered that pretrained BERT models from huggingface have a maximum input length of 512 ( https://github.com/huggingface/transformers/issues/225 )
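A common workaround (a sketch, not part of the original answer) is to split the sequence into windows of at most 512 tokens and run the model on each chunk separately:
max_len = 512
chunks = [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]
outputs = [
    model(input_ids=torch.tensor([chunk]),
          token_type_ids=torch.tensor([[1] * len(chunk)]))
    for chunk in chunks
]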
| https://stackoverflow.com/questions/64044200/ |
Pytorch weighted Tensor | I'm porting a somewhat complex TF2 code to PyTorch. Since TF2 does not distinguish between a Tensor and a numpy array, this was straightforward there. However, I feel like I came back to the TF1 era when I encountered several errors saying 'you cannot mix Tensor and numpy array here in Pytorch!'. Here is the original TF2 code:
def get_weighted_imgs(points, centers, imgs):
weights = np.array([[tf.norm(p - c) for c in centers] for p in points], dtype=np.float32)
weighted_imgs = np.array([[w * img for w, img in zip(weight, imgs)] for weight in weights])
weights = tf.expand_dims(1 / tf.reduce_sum(weights, axis=1), axis=-1)
weighted_imgs = tf.reshape(tf.reduce_sum(weighted_imgs, axis=1), [len(weights), 64*64*3])
return weights * weighted_imgs
And my problematic Pytorch code:
def get_weighted_imgs(points, centers, imgs):
weights = torch.Tensor([[torch.norm(p - c) for c in centers] for p in points])
weighted_imgs = torch.Tensor([[w * img for w, img in zip(weight, imgs)] for weight in weights])
weights = torch.unsqueeze(1 / torch.sum(weights, dim=1), dim=-1)
weighted_imgs = torch.sum(weighted_imgs, dim=1).view([len(weights), 64*64*3])
return weights * weighted_imgs
def reproducible():
points = torch.Tensor(np.random.random((128, 5)))
centers = torch.Tensor(np.random.random((10, 5)))
imgs = torch.Tensor(np.random.random((10, 64, 64, 3)))
weighted_imgs = get_weighted_imgs(points, centers, imgs)
I can guarantee that there is no issue with the dimension order or shape of the tensors/arrays. The error message I got is
ValueError: only one element tensors can be converted to Python scalars
which comes from
weighted_imgs = torch.Tensor([[w * img for w, img in zip(weight, imgs)] for weight in weights])
Could someone help me to solve this problem? That would be greatly appreciated.
| Perhaps this will help you, but I'm not sure about your final multiplication between weights and weighted_imgs, since they don't have the same shape even after the reshaping you probably intended. I'm not sure I understood your logic correctly:
import torch
import numpy as np
def get_weighted_imgs(points, centers, imgs):
weights = torch.Tensor([[torch.norm(p - c) for c in centers] for p in points])
imgs = imgs.unsqueeze(0).repeat(weights.shape[0],1,1,1,1)
dims_to_rep = list(imgs.shape[-3:])
weights = weights.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1).repeat(1,1,*dims_to_rep)
weights /= torch.sum(weights[...,0:1,0:1,0:1],dim=1, keepdim=True)
weighted_imgs = torch.sum(imgs * weights, dim=1).view(weights.shape[0], -1)
return weighted_imgs #weights.view(weighted_imgs.shape[0],-1) *\
#weighted_imgs # Shapes are torch.Size([128, 122880]) and torch.Size([128, 12288])
def reproducible():
points = torch.Tensor(np.random.random((128, 5)))
centers = torch.Tensor(np.random.random((10, 5)))
imgs = torch.Tensor(np.random.random((10, 64, 64, 3)))
weighted_imgs = get_weighted_imgs(points, centers, imgs)
#Test:
reproducible()
| https://stackoverflow.com/questions/64048508/ |
Pytorch Unfold and Fold: How do I put this image tensor back together again? | I am trying to filter a single channel 2D image of size 256x256 using unfold to create 16x16 blocks with an overlap of 8. This is shown below:
# I = [256, 256] image
kernel_size = 16
stride = bx/2
patches = I.unfold(1, kernel_size,
int(stride)).unfold(0, kernel_size, int(stride)) # size = [31, 31, 16, 16]
I have started to attempt to put the image back together with fold but I’m not quite there yet. I’ve tried to use view to get the image to ‘fit’ the way it’s supposed to but I don’t see how this would preserve the original image. Perhaps I’m overthinking this.
# patches.shape = [31, 31, 16, 16]
patches = filt_data_block.contiguous().view(-1, kernel_size*kernel_size) # [961, 256]
patches = patches.permute(1, 0) # size = [256, 961]
Any help would be greatly appreciated. Thanks very much.
| A slightly less elegant solution than that proposed by Gil:
I took inspiration from this post on the Pytorch forums, formatting my image tensor to be of standard shape B x C x H x W (1 x 1 x 256 x 256). Unfolding:
# CREATE THE UNFOLDED IMAGE SLICES
I = image # shape [256, 256]
kernel_size = bx #shape [16]
stride = int(bx/2) #shape [8]
I2 = I.unsqueeze(0).unsqueeze(0) #shape [1, 1, 256, 256]
patches2 = I2.unfold(2, kernel_size, stride).unfold(3, kernel_size, stride)
#shape [1, 1, 31, 31, 16, 16]
Following this, I do some transforms and filtering to my tensor stack. Before doing this I apply a cosine window and normalise:
# NORMALISE AND WINDOW
Pvv = torch.mean(torch.pow(win, 2))*torch.numel(win)*(noise_std**2)
Pvv = Pvv.double()
mean_patches = torch.mean(patches2, (4, 5), keepdim=True)
mean_patches = mean_patches.repeat(1, 1, 1, 1, 16, 16)
window_patches = win.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0).repeat(1, 1, 31, 31, 1, 1)
zero_mean = patches2 - mean_patches
windowed_patches = zero_mean * window_patches
#SOME FILTERING ....
#ADD MEAN AND WINDOW BEFORE FOLDING BACK TOGETHER.
filt_data_block = (filt_data_block + mean_patches*window_patches) * window_patches
The above code works for me, but a mask would be more simple. Next, I prepare my tensor of form [1, 1, 31, 31, 16, 16] to be transformed back into the original [1, 1, 256, 256]:
# REASSEMBLE THE IMAGE USING FOLD
patches = filt_data_block.contiguous().view(1, 1, -1, kernel_size*kernel_size)
patches = patches.permute(0, 1, 3, 2)
patches = patches.contiguous().view(1, kernel_size*kernel_size, -1)
IR = F.fold(patches, output_size=(256, 256), kernel_size=kernel_size, stride=stride)
IR = IR.squeeze()
This allowed me to create an overlapping sliding window and seamlessly stitch the image back together. Cutting out the filtering makes for an identical image.
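If you don't apply the cosine window, a common alternative (a sketch, not part of my original solution) is to divide by a fold of all-ones patches, so that the overlapping regions are averaged instead of summed:
ones = torch.ones_like(patches)  # patches shaped [1, 256, 961] as above
divisor = F.fold(ones, output_size=(256, 256), kernel_size=kernel_size, stride=stride)
IR = F.fold(patches, output_size=(256, 256), kernel_size=kernel_size, stride=stride) / divisor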
| https://stackoverflow.com/questions/64048720/ |
What does Pytorch's nn.Linear(x,y) return? | I am new to object orientation, and I am having trouble understanding the following:
import torch.nn as nn
class mynet(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(20, 64)
def forward(self, x):
x = self.fc1(x)
The line self.fc1 = nn.Linear(20, 64) is supposed to create a member variable fc1 to my class, right? But what is the return value of nn.Linear(20, 64)?
According to the documentation, nn.Linear is defined as
class torch.nn.Linear(in_features: int, out_features: int, bias: bool = True).
However, in my basic OOP tutorial I have only seen something like class CLASSNAME(BASECLASS) so that the class CLASSNAME inherits from BASECLASS. What does the documentation mean with its way of writing all that stuff in between the brackets?
Also, the line x=fc1(x) somehow makes it look as if fc1 was a function now.
I seem to lack OOP knowledge here... Any help appreciated!
| You can create a little examination:
import torch
import torch.nn as nn
fc1 = nn.Linear(20, 64)
print(fc1, type(fc1))
ret = fc1(torch.randn(20))
print(ret, type(ret), ret.shape)
Out:
Linear(in_features=20, out_features=64, bias=True) <class 'torch.nn.modules.linear.Linear'>
tensor([-0.2795, 0.8476, -0.8207, 0.3943, 0.1464, -0.2174, 0.6605, 0.6072,
-0.6881, -0.1118, 0.8226, 0.1515, 1.3658, 0.0814, -0.8751, -0.9587,
0.1310, 0.2539, -0.3072, -0.0225, 0.4663, -0.0019, 0.0404, 0.9279,
0.4948, -0.3420, 0.9061, 0.1752, 0.1809, 0.5917, -0.1010, -0.3210,
1.1910, 0.5145, 0.2254, 0.2077, -0.0040, -0.6406, -0.1885, 0.5270,
0.0824, -0.0787, 1.5140, -0.7958, 1.1727, 0.1862, -1.0700, 0.0431,
0.6849, 0.1393, 0.7547, 0.0917, -0.3264, -0.2152, -0.0728, -0.6441,
-0.1162, 0.4154, 0.3486, -0.1693, 0.6697, 0.0229, 0.0311, 0.1433],
grad_fn=<AddBackward0>) <class 'torch.Tensor'> torch.Size([64])
fc1 is of type class 'torch.nn.modules.linear.Linear'.
It needs some "juice" to work. In your case it needs the input tensor torch.randn(20) to return the output of torch.Size([64]).
So fc1 is a class instance that you can run with () in which case the forward() method of a class nn.Linear will be called.
In most cases when working with your modules (like mynet in your case) you will list the modules in __init__, and then in the forward of your module you will be defining what will happen (the behavior).
The three kind of modules in PyTorch are:
Functional modules
Default modules
Custom modules
Custom modules like mynet you created typically use default modules:
nn.Identity()
nn.Embedding()
nn.Linear()
nn.Conv2d()
nn.BatchNorm() (BxHxW)
nn.LayerNorm() (CxHxW)
nn.Dropout()
nn.ReLU()
And many, many other modules that I haven't listed. But of course, you can create custom modules without any default modules, just by using nn.Parameter(); see the last example.
The third kind functional modules are defined here.
Also check nn.Linear implementation. You may note the F.linear() functional module is used.
You may test the naive implementation of Linear from Fastai Book:
import torch
import torch.nn as nn
import math
class Linear(nn.Module):
def __init__(self, n_in, n_out):
super().__init__()
self.weight = nn.Parameter(torch.randn(n_out, n_in) * math.sqrt(2/n_in))
self.bias = nn.Parameter(torch.zeros(n_out))
def forward(self, x): return x @ self.weight.T + self.bias
fc = Linear(20,64)
ret = fc(torch.randn(20))
print(ret.shape) # 64
You may try to understand the difference between this naive implementation and the one provided inside PyTorch.
| https://stackoverflow.com/questions/64054961/ |
Install specific PyTorch version (pytorch==1.0.1) | I'm trying to install specific PyTorch version under conda env:
Using pip:
pip3 install pytorch==1.0.1
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement pytorch==1.0.1 (from versions: 0.1.2, 1.0.2)
ERROR: No matching distribution found for pytorch==1.0.1
Using Conda:
conda install pytorch==1.0.1
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- pytorch==1.0.1
Current channels:
- https://repo.anaconda.com/pkgs/main/osx-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/osx-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
I was able to find this version under https://anaconda.org/soumith/pytorch but is there a way to find it and install from console?
| You can download/install the version you like from the official PyTorch Conda package; the link you specified is an old channel that has not been supported/updated for quite some time now.
Install your desired version like this :
conda install pytorch==1.0.1 torchvision==0.2.2 -c pytorch
If you are looking for a pip version, you can view and access all versions from here as well.
and simply do :
pip install torch===1.0.1 -f https://download.pytorch.org/whl/torch_stable.html
You can always check the previous versions here as well.
| https://stackoverflow.com/questions/64062637/ |
Pytorch reading tensors from file of tensors (stream training from disk) | I have some really big input tensors and I was running into memory issues while building them, so I read them one by one into a .pt file. As I run the script that generates and saves the file, the file gets bigger and bigger, so I am assuming that the tensors are saving correctly. Here is that code:
with open(a_sync_save, "ab") as f:
print("saved")
torch.save(torch.unsqueeze(torch.cat(tensors, dim=0), dim=0), f)
I want to read a certain amount of these tensors from the file at a time, because I do not want to run into a memory issue again. When I try to read each tensor saved to the file I can only manage to get the first tensor.
with open(a_sync_save, "rb") as f:
for tensor in torch.load(f):
print(tensor.shape)
The output here is the shape of the first tensor, and then it quits peacefully.
| Here is some code that I used to answer this question. A lot of it is specific to what I am doing, but the gist of it can be used by others who are facing the same problem I was.
def stream_training(filepath, epochs=100):
"""
:param filepath: file path of pkl file
:param epochs: number of epochs to run
"""
def training(train_dataloader, model_obj, criterion, optimizer):
for j, data in enumerate(train_dataloader, start=0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs, labels = inputs.cuda(), labels.cuda()
outputs = model_obj(inputs.float())
outputs = torch.flatten(outputs)
loss = criterion(outputs, labels.float())
print(loss)
# zero the parameter gradients
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model_obj.parameters(), max_norm=1)
optimizer.step()
tensors = []
expected_values = []
model= Model(1000, 1, 256, 1)
model.cuda()
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.00001, betas=(0.9, 0.99999), eps=1e-08, weight_decay=0.001,
amsgrad=True)
for i in range(epochs):
with (open(filepath, 'rb')) as openfile:
while True:
try:
data_list = pickle.load(openfile)
tensors.append(data_list[0])
expected_values.append(data_list[1])
if len(tensors) % BATCH_SIZE == 0:
tensors = torch.cat(tensors, dim=0)
tensors = torch.reshape(tensors, (tensors.shape[0], tensors.shape[1], -1))
train_loader = make_dataset(tensors, expected_values) # makes a dataloader for the batch that comes in
training(train_loader, model, criterion, optimizer) #Performs forward and back prop
tensors = [] # washes out the batch to conserve memory on my computer.
expected_values = []
except EOFError:
print("This file has finished training")
break
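For reference, here is a save-side sketch that pairs with this reader; each record must be a [tensor, expected_value] list so the loop above can unpack it (the variable names are assumptions):
import pickle

with open(filepath, 'ab') as f:  # append one record at a time
    pickle.dump([input_tensor, expected_value], f)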
Here is the model for fun.
class Model(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(Model, self).__init__()
# dimensions
self.hidden_dim = hidden_dim
self.n_layers = n_layers
#Define the layers
#GRU
self.gru = nn.GRU(input_size, hidden_dim, n_layers, batch_first=True)
self.fc1 = nn.Linear(hidden_dim, hidden_dim)
self.bn1 = nn.BatchNorm1d(num_features=hidden_dim)
self.fc2 = nn.Linear(hidden_dim, hidden_dim)
self.bn2 = nn.BatchNorm1d(num_features=hidden_dim)
self.fc3 = nn.Linear(hidden_dim, hidden_dim)
self.bn3 = nn.BatchNorm1d(num_features=hidden_dim)
self.fc4 = nn.Linear(hidden_dim, hidden_dim)
self.bn4 = nn.BatchNorm1d(num_features=hidden_dim)
self.fc5 = nn.Linear(hidden_dim, hidden_dim)
self.output = nn.Linear(hidden_dim, output_size)
def forward(self, x):
x = x.float()
x = F.relu(self.gru(x)[1])
x = x[-1,:,:] # eliminates first dim
x = F.dropout(x, 0.5)
x = F.relu(self.bn1(self.fc1(x)))
x = F.dropout(x, 0.5)
x = F.relu(self.bn2(self.fc2(x)))
x = F.dropout(x, 0.5)
x = F.relu(self.bn3(self.fc3(x)))
x = F.dropout(x, 0.5)
x = F.relu(self.bn4(self.fc4(x)))
x = F.dropout(x, 0.5)
x = F.relu(self.fc5(x))
return torch.sigmoid(self.output(x))
def init_hidden(self, batch_size):
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
return hidden
| https://stackoverflow.com/questions/64073517/ |
Training custom model | I am trying to train YOLOv5 on my dataset. I normalized the data as discussed in the docs on GitHub, but I always end up with this error.
from n params module arguments
0 -1 1 8800 models.common.Focus [3, 80, 3]
1 -1 1 115520 models.common.Conv [80, 160, 3, 2]
2 -1 1 315680 models.common.BottleneckCSP [160, 160, 4]
3 -1 1 461440 models.common.Conv [160, 320, 3, 2]
4 -1 1 3311680 models.common.BottleneckCSP [320, 320, 12]
5 -1 1 1844480 models.common.Conv [320, 640, 3, 2]
6 -1 1 13228160 models.common.BottleneckCSP [640, 640, 12]
7 -1 1 7375360 models.common.Conv [640, 1280, 3, 2]
8 -1 1 4099840 models.common.SPP [1280, 1280, [5, 9, 13]]
9 -1 1 20087040 models.common.BottleneckCSP [1280, 1280, 4, False]
10 -1 1 820480 models.common.Conv [1280, 640, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 5435520 models.common.BottleneckCSP [1280, 640, 4, False]
14 -1 1 205440 models.common.Conv [640, 320, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 1360960 models.common.BottleneckCSP [640, 320, 4, False]
18 -1 1 922240 models.common.Conv [320, 320, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 5025920 models.common.BottleneckCSP [640, 640, 4, False]
21 -1 1 3687680 models.common.Conv [640, 640, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 20087040 models.common.BottleneckCSP [1280, 1280, 4, False]
24 [17, 20, 23] 1 0 models.yolo.Detect [3, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]]]
Traceback (most recent call last):
File "train.py", line 404, in <module>
train(hyp)
File "train.py", line 80, in train
model = Model(opt.cfg).to(device)
File "/content/yolov5/models/yolo.py", line 62, in __init__
m.stride = torch.tensor([128 / x.shape[-2] for x in self.forward(torch.zeros(1, ch, 128, 128))]) # forward
File "/content/yolov5/models/yolo.py", line 90, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/content/yolov5/models/yolo.py", line 107, in forward_once
x = m(x) # run
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/content/yolov5/models/yolo.py", line 26, in forward
x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
RuntimeError: shape '[1, 3, 8, 16, 16]' is invalid for input of size 81920
These are the flags used
!python train.py --img 1024 --batch 4 --epochs 30 \
--data ./data/mask.yaml --cfg ./models/yolov5x.yaml --weights yolov5x.pt \
--cache --name maskmodel
this is the file structure
| For anyone who is facing the same problem: I found my issue. When splitting data for training and validation, make sure you pick a seed. Furthermore, when preparing label files for YOLOv5 input, make sure the class IDs are numbers without any text in them. Thank you.
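For example, a seeded split might look like this (a sketch; the image_files variable is an assumption):
from sklearn.model_selection import train_test_split

train_files, val_files = train_test_split(image_files, test_size=0.2, random_state=42)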
| https://stackoverflow.com/questions/64079249/ |
How To Import The MNIST Dataset From Local Directory Using PyTorch | I am writing code for the well-known problem of the MNIST database of handwritten digits in PyTorch. I downloaded the train and testing datasets (from the main website), including the labeled data. The dataset format is t10k-images-idx3-ubyte.gz, and after extraction, t10k-images-idx3-ubyte. My dataset folder looks like
MINST
Data
train-images-idx3-ubyte.gz
train-labels-idx1-ubyte.gz
t10k-images-idx3-ubyte.gz
t10k-labels-idx1-ubyte.gz
Now, I wrote a code to load data like bellow
def load_dataset():
data_path = "/home/MNIST/Data/"
xy_trainPT = torchvision.datasets.ImageFolder(
root=data_path, transform=torchvision.transforms.ToTensor()
)
train_loader = torch.utils.data.DataLoader(
xy_trainPT, batch_size=64, num_workers=0, shuffle=True
)
return train_loader
My code fails with: Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif,.tiff,.webp
How can I solve this problem? I also want to check that my images are loaded correctly (say, a figure containing the first 5 images from the dataset).
| Read this Extract images from .idx3-ubyte file or GZIP via Python
Update
You can import data using this format
xy_trainPT = torchvision.datasets.MNIST(
root="~/Handwritten_Deep_L/",
train=True,
download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
)
Now, what happens with download=True: first your code will check whether the root directory (your given path) already contains the dataset.
If not, the dataset will be downloaded from the web.
If it does, your code will use the existing dataset and will not download it from the internet.
You can verify this: first give a path without any dataset (the data will be downloaded from the internet), and then give another path which already contains the dataset; the data will not be downloaded again.
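To also display the first 5 images, as the question asks, here is a minimal matplotlib sketch using the dataset above:
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    image, label = xy_trainPT[i]  # (1x28x28 tensor, int label)
    ax.imshow(image.squeeze(), cmap='gray')
    ax.set_title(str(label))
    ax.axis('off')
plt.show()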
| https://stackoverflow.com/questions/64080130/ |
n_jobs > 1 using sklearn and pytorch is possible inside Neuraxle? | I built my own sklearn-like estimator using pytorch training inside GPU (cuda) and it works fine with RandomizedSearchCV when n_jobs==1. When n_jobs > 1, I get the following error:
PicklingError: Can't pickle <class '__main__.LSTM'>: attribute lookup LSTM on __main__ failed
This is the piece of code giving me the error:
model = my_model(input_size=1, hidden_layer_size=80, n_lstm_units=3, bidirectional=False,
output_size=1, training_batch_size=60, epochs=7500, device=device)
model.to(device)
hidden_layer_size = random.uniform(40, 200, 20).astype("int")
n_lstm_units = arange(1, 4)
parametros = {'hidden_layer_size': hidden_layer_size, 'n_lstm_units': n_lstm_units}
splitter = ShuffleSplit()
regressor = model
cv_search = \
RandomizedSearchCV(estimator=regressor, cv=splitter,
search_spaces=parametros,
refit=True,
n_iter=4,
verbose=1,
n_jobs=2,
scoring=make_scorer(mean_squared_error,
greater_is_better=False,
needs_proba=False))
cv_search = MetaSKLearnWrapper(cv_search)
cv_search.fit(X, y)
Using Neuraxle wrapper leads to exactly same error, changes nothing.
I found closest solution here, but still don't know how to use RandomizedSearchCV within Neuraxle. It is a brand new project, so I couldn't find an answer on their docs or community examples. If anyone can give me an example or a good indication it will save my life. Thank you
Ps: Any way to run RandomizedSearchCV with my pytorch model on the gpu without Neuraxle also helps, I just need n_jobs>1.
Ps2: My model has a fit() method that creates and moves tensors to the gpu and works already tested.
| There are multiple criteria that must be respected here for your code to work:
You need to use Neuraxle's RandomSearch instead of sklearn's random search for this to work. Use Neuraxle's base classes when possible.
Make sure that you use a Neuraxle BaseStep for your PyTorch model, instead of a sklearn base class.
Also, you should create your PyTorch code only in the setup() method or later. You can't create a PyTorch model in the __init__ of the BaseStep that contains pytorch code. You will want to read this page.
You will probably have to create a Saver for your BaseStep that contains PyTorch code if you want to serialize and then load your trained pipeline again. You can see how we created our TensorFlow Saver for our TensorFlow BaseStep and do something similar. Your saver will probably be much simpler than ours due to the more eager nature of PyTorch. For instance, you could have self.model inside your extension of the BaseStep class. The role of the saver would be to save and strip away this simple variable from self, and to be able to reload it whenever needed.
To sum up: you'd need to create two classes, and your two classes should look very similar to our two TensorFlow step and saver classes here, with the exception that your PyTorch model is in a self.model variable of your step.
I'd be glad to see your implementation of your PyTorch base step and of your PyTorch saver!
You could then also even use the AutoML class (see AutoML example here) to save experiments in a Hyperparameter Repository as seen in the example.
| https://stackoverflow.com/questions/64084881/ |
Is the gradient of the sum equal to the sum of the gradients for a neural network in pytorch? | Let's suppose I have the code below, and I want to calculate the Jacobian of L, which is the prediction made by a neural network in PyTorch; L is of size nx1, where n is the number of samples in a mini-batch. In order to avoid a for loop over each entry of L (n entries) to compute the Jacobian for each sample in the mini-batch, some code I found just sums the n predictions of the neural network (L) and then calculates the gradient of the sum with respect to the inputs. First, I can't understand why the gradient of the sum is the same as the sum of the gradients for each sample in the PyTorch architecture. Second, I tried both with the sum and with a for loop, and the results diverge. Could it be due to numerical approximations, or because the sum just doesn't make sense?
The code is below, where both functions belong to a nn.module:
def forward(self, x):
with torch.set_grad_enabled(True):
def function(x,t):
self.n = n = x.shape[1]//2
qqd = x.requires_grad_(True)
L = self._lagrangian(qqd).sum()
J = grad(L, qqd, create_graph=True)[0]
def _lagrangian(self, qqd):
x = F.softplus(self.fc1(qqd))
x = F.softplus(self.fc2(x))
x = F.softplus(self.fc3(x))
L = self.fc_last(x)
return L
| Yes, it should: the gradient of the sum equals the sum of the gradients, because differentiation is linear. If both versions are computed correctly, any difference between summing and looping should be at the level of floating-point rounding. Here is a toy example:
w = torch.tensor([2.], requires_grad=True)
x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([4.], requires_grad=True)
y = w * x1 + w * x2
y.backward() # calculate gradient
>>> w.grad
tensor([7.])
| https://stackoverflow.com/questions/64086949/ |
How to downgrade CUDA to 10.0.10 with conda, without conflicts? | I would like to go to CUDA (cudatoolkit) version compatible with
Nvidie-430 driver, i.e., 10.0.130 as recommended by the Nvidias site.
Based on this answer I did,
conda install -c pytorch cudatoolkit=10.0.130
And then I get this error (pastebin
link). (very-short version below):
(fastaiclean) eghx@eghx-nitro:~$ conda install -c pytorch cudatoolkit=10.0.130
...
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package _libgcc_mutex conflicts for:
pyzmq -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
libgcc-ng -> _libgcc_mutex=[build=main]
lcms2 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
...
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==10.1=0
- feature:|@/linux-64::__cuda==10.1=0
Your installed version is: 10.1
Why am I getting conflicts? Why does it say 10.1 when the cudatoolkit is 10.2.89 (conda list)? How should I handle the conflicts? What can I do with this error? The conflicts are so numerous, I don't know where to start.
Other
Nvidia driver 430
current cudatoolkit: 10.2.89
| Check current version with
torch.version.cuda
I had 10.2. But I need 10.1 according to: table 1 here and my 430 NVIDIA driver installed.
Uninstall and Install
conda remove pytorch torchvision cudatoolkit
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1.168 -c pytorch
Say yes to everything for the above commands.
| https://stackoverflow.com/questions/64088157/ |
Pytorch detection of CUDA | Which is the command to see the "correct" CUDA version that PyTorch in a conda env is seeing? This is a similar question, but it doesn't get me far.
nvidia-smi says I have cuda version 10.1
conda list tells me cudatoolkit version is 10.2.89
torch.cuda.is_available() shows FALSE, so it sees No CUDA?
print(torch.cuda.current_device()), I get 10.0.10 (10010??) (it
looks like):
AssertionError: The NVIDIA driver on your system is too old
(found version 10010)
print(torch._C._cuda_getCompiledVersion(), 'cuda compiled version') tells me my version is 10.0.20 (10020??)?
10020 cuda compiled version
Why are there so many different versions? What am I missing?
P.S
I have Nvidia driver 430 on Ubuntu 16.04 with Geforce 1050. It comes
with libcuda1-430 when I installed the driver from additional drivers tab in ubuntu (Software and Updates). I installed pytorch
with conda which also installed the cudatoolkit using conda install -c fastai -c pytorch -c anaconda fastai
| In the conda env (myenv) where pytorch is installed do the following:
conda activate myenv
python -c "import torch; print(torch.version.cuda)"
nvidia-smi only shows the highest CUDA version the driver supports; it does not tell you which CUDA version PyTorch itself was built against.
| https://stackoverflow.com/questions/64089854/ |
Validation dataset in PyTorch using DataLoaders | I want to load MNIST dataset in PyTorch and Torchvision, dividing it into train, validation and test parts. So far I have:
def load_dataset():
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
'/data/', train=True, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor()])),
batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
'/data/', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor()])),
batch_size=batch_size_test, shuffle=True)
How can I divide the training dataset into training and validation if it's in the DataLoader? I want to use last 10000 examples from the training dataset as a validation dataset (I know that I should do CV for more accurate results, I just want a quick validation here).
| Splitting the training dataset into training and validation in PyTorch turns out to be much harder than it should be.
First, split the training set into training and validation subsets (class Subset), which are not datasets (class Dataset):
train_subset, val_subset = torch.utils.data.random_split(
train, [50000, 10000], generator=torch.Generator().manual_seed(1))
Then get actual data from those datasets:
X_train = train_subset.dataset.data[train_subset.indices]
y_train = train_subset.dataset.targets[train_subset.indices]
X_val = val_subset.dataset.data[val_subset.indices]
y_val = val_subset.dataset.targets[val_subset.indices]
Note that this way we get plain tensors rather than Dataset objects, so we can't use them with DataLoader objects for batch training directly. If you want to use DataLoaders, they work directly with the Subsets (a Subset is itself a dataset):
train_loader = DataLoader(dataset=train_subset, shuffle=True, batch_size=BATCH_SIZE)
val_loader = DataLoader(dataset=val_subset, shuffle=False, batch_size=BATCH_SIZE)
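If you specifically want the last 10,000 examples as validation (as stated in the question) rather than a random split, here is a sketch using Subset with explicit indices (train is the full training dataset from above):
from torch.utils.data import Subset

n = len(train)
train_subset = Subset(train, range(n - 10000))
val_subset = Subset(train, range(n - 10000, n))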
| https://stackoverflow.com/questions/64092369/ |
Computing mean of list of generator objects | // Initialize 2-D array, each entry is some neural network
phi = [[None] * n for _ in range(m)]
for i in range(m):
for j in range(n):
phi[i][j] = NeuralNetwork()
// Let k, i be arbitrary indices
p1 = torch.nn.utils.parameters_to_vector(phi[k][i - 1].parameters())
p2 = torch.nn.utils.parameters_to_vector(mean of phi[:][i-1])
I want to basically compute the mean squared error between the parameters phi[k][i-1] and average of the entire column phi[:][i-1] i.e. ((p1 - p2)**2).sum() I tried in the following way:
tmp = [x.parameters() for x in self.phi[:][i - 1]]
mean_params = torch.mean(torch.stack(tmp), dim=0)
p2 = torch.nn.utils.parameters_to_vector(mean_params)
But this doesn't work out because tmp is a list of generator objects. More specifically, I guess my problem is to compute the mean from that generator object.
| First we can define a function that computes the average parameters for a list of models. To avoid creating a copy of the parameters of each model all at the same time we probably want to compute this as a running sum. For example
def average_parameters_vector(model_list):
n = len(model_list)
avg = 0
for model in model_list:
avg = avg + torch.nn.utils.parameters_to_vector(model.parameters()) / n
return avg
Then you can just create p1 and p2 and compute the mean-squared error
p1 = torch.nn.utils.parameters_to_vector(phi[k][i - 1].parameters())
p2 = average_parameters_vector(phi[:][i - 1])
mse = ((p1 - p2)**2).mean()
If you really want a one-line solution that's also possibly the fastest you could compute this by making a single tensor containing all the parameters of the models in phi[:][i - 1], then mean reducing them. But as mentioned earlier this will significantly increase memory usage, especially if your models have millions of parameters as is often the case.
# Uses lots of memory but potentially the fastest solution
def average_parameters_vector(model_list):
return torch.stack([torch.nn.utils.parameters_to_vector(model.parameters()) for model in model_list]).mean(dim=0)
On the other extreme, if you're very concerned about memory usage then you could compute the average of each individual parameter at a time.
# more memory efficient than original solution but probably slower
def average_parameters_vector(model_list):
n = len(model_list)
num_params = len(list(model_list[0].parameters()))
averages = [0] * num_params
for model in model_list:
for pidx, p in enumerate(model.parameters()):
averages[pidx] = averages[pidx] + p.data.flatten() / n
return torch.cat(averages)
| https://stackoverflow.com/questions/64093982/ |
Shouldn't same neural network weights produce same results? | So I am working with different deep learning frameworks as part of my research and have observed something weird (at least I cannot explain the cause of it).
I trained a fairly simple MLP model (on mnist dataset) in Tensorflow, extracted trained weights, created the same model architecture in PyTorch and applied the trained weights to PyTorch model. Now my expectation is to get same test accuracy from both Tensorflow and PyTorch models but this isn't the case. I get different results.
So my question is: If a model is trained to some optimal value, shouldn't the trained weights produce same results every time testing is done on the same dataset (regardless of the framework used)?
PyTorch Model:
class Net(nn.Module):
def __init__(self) -> None:
super(Net, self).__init__()
self.fc1 = nn.Linear(784, 24)
self.fc2 = nn.Linear(24, 10)
def forward(self, x: Tensor) -> Tensor:
x = torch.flatten(x, 1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
Tensorflow Model:
def build_model() -> tf.keras.Model:
# Build model layers
model = models.Sequential()
# Flatten Layer
model.add(layers.Flatten(input_shape=(28,28)))
# Fully connected layer
model.add(layers.Dense(24, activation='relu'))
model.add(layers.Dense(10))
# compile the model
model.compile(
optimizer='sgd',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy']
)
# return newly built model
return model
To extract weights from Tensorflow model and apply them to Pytorch model I use following functions:
Extract Weights:
def get_weights(model):
# fetch latest weights
weights = model.get_weights()
# transpose weights
t_weights = []
for w in weights:
t_weights.append(np.transpose(w))
# return
return t_weights
Apply Weights:
def set_weights(model, weights):
"""Set model weights from a list of NumPy ndarrays."""
state_dict = OrderedDict(
{k: torch.Tensor(v) for k, v in zip(model.state_dict().keys(), weights)}
)
    model.load_state_dict(state_dict, strict=True)
| Providing the solution in the answer section for the benefit of the community. From the comments:
If you are using the same weights in the same manner, then the results
should be the same, though floating-point rounding error should also be
accounted for. Also, it doesn't matter whether the model is trained at all. You can
think of your model architecture as a chain of matrix multiplications
with element-wise nonlinearities in between. How big is the
difference? Are you comparing model outputs, or metrics computed over the
dataset? As a suggestion, initialize the model with some random values in
Keras and do a forward pass for a single batch (paraphrased from jdehesa and Taras Sereda)
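Acting on that suggestion, a quick comparison sketch (the tf_model and torch_model variable names are assumptions):
import numpy as np
import torch

x = np.random.rand(1, 28, 28).astype(np.float32)
tf_logits = tf_model(x).numpy()                                   # Keras model
torch_logits = torch_model(torch.from_numpy(x)).detach().numpy()  # PyTorch model
print(np.abs(tf_logits - torch_logits).max())  # should be on the order of float rounding error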
| https://stackoverflow.com/questions/64099580/ |
Predict only one class (person) in YOLACT/YOLACT++ | I want to predict only one class i.e. person from all the 84 classes that are being checked for and predicted.
For YOLACT reference https://github.com/dbolya/yolact
The results are pretty good, but I guess I just need to modify one of the code files, probably in a very small way, but I can't manage to find out where.
There is one issue related to this in which I did what he mentioned like adding the 4 lines in Yolact/layers/output_utils.py and changing nothing else. Those lines are as following:
boxes = torch.cat((boxes[classes==0], boxes[classes==2]),dim=0)
scores = torch.cat((scores[classes==0], scores[classes==2]),dim=0)
masks = torch.cat((masks[classes==0], masks[classes==2]),dim=0)
classes = torch.cat((classes[classes==0], classes[classes==2]),dim=0)
But it gives the following error:
RuntimeError: strides[cur - 1] == sizes[cur] * strides[cur] INTERNAL ASSERT FAILED at
/opt/conda/conda-bld/pytorch_1573049310284/work/torch/csrc/jit/fuser/executor.cpp:175,
please report a bug to PyTorch.
The above operation failed in interpreter, with the following stack trace:
terminate called without an active exception
Aborted (core dumped)
I tried adding the if condition as mentioned, but it still gives the error. I am using PyTorch 1.3.
| In order to show a single class (person, id:0) output at the time of inference, you simply need to add
cur_scores[1:] *= 0
after cur_scores = conf_preds[batch_idx, 1:, :] in line 83 of yolact/layers/functions/detection.py.
Then running
!python eval.py --trained_model=weights/yolact_resnet50_54_800000.pth --score_threshold=0.15 --top_k=15 --image=input_image.png:output_image.png
will give you single class inference.
As mentioned by the author in issue#218:
you can make the change to save on NMS computation,
simply add cur_scores[<everything but your desired class>] *= 0
For the index, if you wanted only person (class 0), you can put 1:, but if you wanted another class than that you'd need to do 2 statements: one with :<class_idx> and the other with <class_idx>+1:. Then when you run eval, run it with --cross_class_nms=True and that'll remove all the other classes from NMS.
Other method is to modify the output in output_utils.py.
| https://stackoverflow.com/questions/64104148/ |
How to convert a list of Torch tensor with grad to tensor | I have a variable called pts which is shaped [batch, ch, h, w]. This is a heatmap, and I want to convert it to 2D coordinates. The goal is pts_o = heatmap_to_pts(pts), where pts_o will be [batch, ch, 2]. I have written this function so far,
def heatmap_to_pts(self, pts):  # <- pts [batch, 68, 128, 128]
    pt_num = []
    for i in range(len(pts)):
        pt = pts[i]
        if type(pt) == torch.Tensor:
            d = torch.tensor(128)          # get the
            m = pt.view(68, -1).argmax(1)  # indices
            indices = torch.cat(((m / d).view(-1, 1), (m % d).view(-1, 1)), dim=1)  # from heatmaps
            pt_num.append(indices.type(torch.DoubleTensor))  # <- store the indices in a list
    b = torch.Tensor(68, 2)       # trying to convert
    c = torch.cat(pt_num, out=b)  # *error*  a list of tensors with grad
    c = c.reshape(68, 2)          # to a tensor like [batch, 68, 2]
    return c
The error says "cat(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad." It's unable to do the operation because the tensors in pt_num require grad.
How can I convert that list to a tensor?
| The error says,
cat(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad.
What that means is that the output of functions such as torch.cat(), when called with the out= kwarg, cannot be used as input to the autograd engine (which performs automatic differentiation).
The reason the error fires here is that at least one of the tensors in your Python list pt_num has requires_grad=True, so autograd would need to differentiate through the concatenation, which the out= variant does not support.
c = torch.cat(pt_num, out=b)
The return value of torch.cat(), irrespective of whether you use out= kwarg or not, is the concatenation of tensors along the mentioned dimension.
So, the tensor c is already the concatenated version of the individual tensors in pt_num. Using out=b is redundant. Thus, you can simply get rid of the out=b and everything should be fine.
c = torch.cat(pt_num)
| https://stackoverflow.com/questions/64104251/ |
How to understand tensors with multiple dimensions? | I've been recently a bit confused about tensors. Say if we have a tensor with shape (3,2,3,4), we mean that in the first dimension there are 3 groups of numbers? OR it just means that there are exactly just 3 numbers in the first dimension?
Then here comes the second question, with the tensor A that has the shape (3,2), why is that the output of torch.max(A,0) returns a group of max values that contains 2 max values instead of 3,considering the fact that there are 3 numbers in the first dimension.
>>>a = torch.randn(3,2)
a tensor([[-1.1254, -0.1549],
[-0.5308, 1.0427],
[-0.1268, 1.0866]])
>>>torch.max(a,0)
torch.return_types.max(
values=tensor([-0.1268, 1.0866]),
indices=tensor([2, 2]))
I mean why doesn't it return a list of 3 max values?
Then the third question, if we have two tensors with shape(3,3,10,2) and (2,4,10,1), can we just concatenate these two tensors on the third dimension considering they have the same size on that dimension? If it is feasible, what is the reason behind it?
I'll be much appreciated if you help me understand this!
| Tensors are just higher-dimensional generalizations of vectors. You should start with a vector first. Ex: a 4-element vector A = [1,2,3,4]. Six vectors A stacked together form B = [[1,2,3,4],[1,2,3,4],...,[1,2,3,4]]. This is called a matrix (2 dimensions); its shape is 6x4. Now if you stack 2 matrices B, they form a tensor C of shape (2,6,4). Extend C to a list of 3 Cs and you get a 4-dimensional tensor of shape (3,2,6,4), and so on. You can take this picture for better illustration.
For torch.max(input, dim, keepdim=False, out=None): when you choose dim=0 (the dimension to reduce), it finds the max along the 0 axis (i.e. the first entry of the shape). That is, for each position in the remaining dimensions it compares the 3 entries along that axis; with your (3,2) tensor, that leaves 2 maximum values, one per column. If you extend this to a higher dimension, like a (3,4,5) shape, the max over dim 0 will have shape (4,5).
For the last question, the answer is no for those tensors. When you look at the illustration figure, there is no way to concatenate two tensors that match in only one dimension: you have to maintain the same size in all dimensions except the one you are going to concatenate along.
>>> b = np.random.randint(0,2,size=(2,3,2,3))
>>> a = np.random.randint(0,2,size=(2,3,1,3))
>>> np.concatenate([a,b],axis=2)
Else, the only way for you to do it is flattening both of the 2 tensors so that you only need a vector represent for both a and b.
| https://stackoverflow.com/questions/64111559/ |
How can we check if a matrix is PSD in PyTorch? | There is a post on checking if a matrix is PSD in Python. I am wondering how we can check it in PyTorch? Is there a function for that?
| Haven't found a PyTorch function for that, but similarly to the post you've linked, you should be able to determine it easily by checking whether the matrix is symmetric and all eigenvalues are non-negative:
def is_psd(mat):
return bool((mat == mat.T).all() and (torch.eig(mat)[0][:,0]>=0).all())
#Test:
is_psd(torch.randn(2,2))
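On newer PyTorch versions (1.8+), torch.linalg.eigvalsh may be a cleaner alternative (torch.eig was later deprecated); a sketch, assuming a real-valued square matrix:
def is_psd_v2(mat):
    # eigvalsh assumes a symmetric input, so the symmetry check stays explicit
    return bool((mat == mat.T).all() and (torch.linalg.eigvalsh(mat) >= 0).all())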
| https://stackoverflow.com/questions/64113700/ |
PyTorch LSTM categorical model - output to target mapping | I have a network which outputs a vector of length two. My targets are in the form of 1 or zeros, referring to two possible categories. What is the best way to get the loss - i.e. should I transform the targets, for example into a dimension 2 vector, or should I transform the output of the network, e.g. take the location of the max number as the output?
My network looks like:
class LSTMClassifier(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.lstm1 = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.lstm2 = nn.LSTM(hidden_dim, hidden_dim, layer_dim, batch_first=True)
self.fc1 = nn.Linear(hidden_dim, 32)
self.fc2 = nn.Linear(32, 1)
self.dropout = nn.Dropout(p=0.2)
self.batch_normalisation1 = nn.BatchNorm1d(layer_dim)
self.batch_normalisation2 = nn.BatchNorm1d(2)
self.activation = nn.Softmax(dim=2)
def forward(self, x):
h0, c0 = self.init_hidden(x)
out, (hn1, cn1) = self.lstm1(x, (h0, c0))
out = self.dropout(out,)
out = self.batch_normalisation1(out)
h1, c1 = self.init_hidden(out)
out, (hn2, cn2) = self.lstm2(out, (h1, c1))
out = self.dropout(out)
out = self.batch_normalisation1(out)
h2, c2 = self.init_hidden(out)
out, (hn3, cn3) = self.lstm2(out, (h2, c2))
out = self.dropout(out)
out = self.batch_normalisation1(out)
out = self.fc1(out[:, -1, :])
out = self.dropout(out)
out = self.fc2(out)
return out
def init_hidden(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
return [t for t in (h0, c0)]
def pred(self, x):
out = self(x)
return out > 0
An example of input to this network is:
tensor([[[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[2.3597e-04, 1.1507e-02, 8.7719e-02, 6.1093e-02, 9.5556e-01],
[2.1474e-03, 5.3805e-03, 9.6491e-02, 2.2508e-01, 8.2222e-01]]])
which has shape torch.Size([1, 3, 5]). The target is currently 1 or 0. However, the network outputs a vector such as:
tensor([[0.5293, 0.4707]], grad_fn=<SoftmaxBackward>)
What would be the best way to set up the loss between these target and the network output?
Update:
I can now train the model as suggested in the answers as:
model = LSTMClassifier(5, 128, 3, 1)
Epochs = 10
batch_size = 32
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-6)
for epoch in range(Epochs):
if epoch == 0:
accurate = 0
for X_instance, y_instance in zip(val_x, val_y):
if int(y_instance) == 1 and model.pred(X_instance.view(-1, 3, 5)).item():
accurate += 1
print(f"Untrained accuracy test set: {accurate/len(val_x)}")
print(f"Epoch {epoch + 1}")
for n, (X, y) in enumerate(train_batches):
model.train()
optimizer.zero_grad()
y_pred = model(X)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
model.eval()
accurate = 0
for X_instance, y_instance in zip(val_x, val_y):
if int(y_instance) == 1 and model.pred(X_instance.view(-1, 3, 5)).item():
accurate += 1
print(f"Accuracy test set: {accurate/len(val_x)}")
| You shouldn't use any activation at the end of your network; output only a single neuron instead of two, trained with BCEWithLogitsLoss.
Below is your neural network code with commentary and removal of unnecessary parts:
class LSTMClassifier(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.lstm1 = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.lstm2 = nn.LSTM(hidden_dim, hidden_dim, layer_dim, batch_first=True)
self.fc1 = nn.Linear(hidden_dim, 32)
# Output 1 neuron instead of two
self.fc2 = nn.Linear(32, 1)
# Model should not depend on batch size
# self.batch_size = None
# You are not using this variable
# self.hidden = None
self.dropout = nn.Dropout(p=0.2)
self.batch_normalisation1 = nn.BatchNorm1d(layer_dim)
self.batch_normalisation2 = nn.BatchNorm1d(2)
def forward(self, x):
# Hidden are initialized with 0 explicitly
# h0, c0 = self.init_hidden(x)
out, _ = self.lstm1(x)
# No need for initial values
# out, (hn1, cn1) = self.lstm1(x, (h0, c0))
out = self.dropout(out)
out = self.batch_normalisation1(out)
# Same for all other cells you re-init with zeros, it's implicit
out, _ = self.lstm2(out)
out = self.dropout(out)
out = self.batch_normalisation1(out)
out, _ = self.lstm2(out)
out = self.dropout(out)
out = self.batch_normalisation1(out)
out = self.fc1(out[:, -1, :])
out = self.dropout(out)
# No need for activation
# out = F.softmax(self.fc2(out))
out = self.fc2(out)
return out
# Return True (1) or False (0)
def pred(self, x):
return self(x) > 0
I have also added pred method which transforms logits into targets (e.g. to use with some metrics).
Basically, if your logit is lower than 0 it is False, otherwise it is True. No need for activation in this case.
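For reference, a minimal sketch of how the single-logit output pairs with BCEWithLogitsLoss; the loss expects float targets with the same shape as the logits (x and labels below stand in for a batch of inputs and 0/1 labels):
criterion = nn.BCEWithLogitsLoss()
logits = model(x)                      # shape (batch, 1), raw scores
targets = labels.float().view(-1, 1)   # 0.0 / 1.0, same shape as the logits
loss = criterion(logits, targets)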
| https://stackoverflow.com/questions/64117649/ |
Pytorch custom randomcrop for semantic segmentation | I am trying to implement a custom dataset loader. Firstly I resize the images and labels with the same ratio between (0.98, 1,1) then I randomly crop both images and labels with same parameters so that I can feed them into NN. However, I am getting an error from PyTorch functional. Here is my code:
class RandomCrop(object):
def __init__(self, size, padding=None, pad_if_needed=True, fill=0, padding_mode='constant'):
self.size = size
self.padding = padding
self.pad_if_needed = pad_if_needed
self.fill = fill
self.padding_mode = padding_mode
@staticmethod
def get_params(img, output_size):
w, h = img.size
th, tw = output_size
if w == tw and h == th:
return 0, 0, h, w
i = random.randint(0, h - th)
j = random.randint(0, w - tw)
return i, j, th, tw
def __call__(self, data):
img,mask = data["image"],data["mask"]
# pad the width if needed
if self.pad_if_needed and img.size[0] < self.size[1]:
img = F.pad(img, (self.size[1] - img.size[0], 0), self.fill, self.padding_mode)
mask = F.pad(mask, (self.size[1] - mask.size[0], 0), self.fill, self.padding_mode)
# pad the height if needed
if self.pad_if_needed and img.size[1] < self.size[0]:
img = F.pad(img, (0, self.size[0] - img.size[1]), self.fill, self.padding_mode)
mask = F.pad(mask, (0, self.size[0] - mask.size[1]), self.fill, self.padding_mode)
i, j, h, w = self.get_params(img, self.size)
crop_image = transforms.functional.crop(img, i, j, h, w)
crop_mask = transforms.functional.crop(mask, i, j, h, w)
return{"image": crop_image, "mask": crop_mask }
Here is the error:
AttributeError: 'Image' object has no attribute 'dim'
| Mistakenly, I imported nn.functional.pad instead of transforms.functional.pad. After changing it, everything went smoothly.
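For anyone hitting the same error: the padding should go through the torchvision functional API, which operates on PIL Images, rather than torch.nn.functional, which expects tensors. Roughly, inside __call__:
import torchvision.transforms.functional as TF   # instead of torch.nn.functional as F

img = TF.pad(img, padding, fill=self.fill, padding_mode=self.padding_mode)
mask = TF.pad(mask, padding, fill=self.fill, padding_mode=self.padding_mode)
Here padding is whatever tuple the original code computed (e.g. (self.size[1] - img.size[0], 0)).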
| https://stackoverflow.com/questions/64120309/ |
Defining python class method using arguments from __init__ | What I want to achieve
I want to define a class with two modes A and B, so that the forward method of the class changes accordingly.
class MyClass():
def __init__(self, constant):
self.constant=constant
def forward(self, x1,x2,function):
if function=='A':
return x1+self.constant
elif function=='B':
return x1*x2+self.constant
else:
print('please provide the correct function')
model1 = MyClass(2)
model1.forward(2, None, 'A')
output>>>4
model2 = MyClass(2)
model2.forward(2, 2, 'B')
output>>>6
It works, but it is not optimal, since every time the forward method is called, it checks which function to use. However, the forward function is already set and will never change once the class is defined; therefore, checking which function to use inside forward is super redundant in my case. (For those who notice this, I am writing my neural network model using PyTorch; the two models share 90% of the network architecture, and the only 10% difference is the way they do the feedforward.)
My desired version
I want to set the forward method when the class is defined, so that I can achieve this
model1 = MyClass(2, 'A')
model1.forward(2)
output>>>4
model2 = MyClass(2, 'B')
model2.forward(2, 2)
output>>>6
So I rewrote my class to be:
class MyClass():
def __init__(self, constant, function):
self.constant=constant # There would be a lot of shared parameters for the two methods
self.function=function # This controls the feedforward method of this class
if self.function=='A':
def forward(self, x1):
return x1+self.constant
elif self.function=='B':
def forward(self, x1, x2):
return x1*x2+self.constant
else:
print('please provide the correct function')
However, it gives me the following error.
NameError: name 'self' is not defined
How do I write the class so that it defines a different forward method based on the args from __init__?
| You have been trying to redefine the class with your code, such that each new object would change the definition of forward for all objects, before and after.
Fortunately, you didn't figure out how to do that.
Instead, make the chosen function an attribute of the object. Code the two functions you want, and then assign the desired variant as you create each instance.
class MyClass():
def __init__(self, constant, function):
self.constant=constant
if function == 'A':
self.forward = self.forwardA
elif function=='B':
self.forward = self.forwardB
else:
print('please provide the correct function')
def forwardA(self, x1):
return x1+self.constant
def forwardB(self, x1, x2):
return x1*x2+self.constant
# Main
model1 = MyClass(2, 'A')
print(model1.forward(2))
model2 = MyClass(2, 'B')
print(model2.forward(2, 2))
Output:
4
6
| https://stackoverflow.com/questions/64130632/ |
Unable to load model from checkpoint in Pytorch-Lightning | I am working with a U-Net in Pytorch Lightning. I am able to train the model successfully but after training when I try to load the model from checkpoint I get this error:
Complete Traceback:
Traceback (most recent call last):
File "src/train.py", line 269, in <module>
main(sys.argv[1:])
File "src/train.py", line 263, in main
model = Unet.load_from_checkpoint(checkpoint_callback.best_model_path)
File "/home/africa_wikilimo/miniconda3/envs/xarray_test/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 153, in load_from_checkpoint
model = cls._load_model_state(checkpoint, *args, strict=strict, **kwargs)
File "/home/africa_wikilimo/miniconda3/envs/xarray_test/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 190, in _load_model_state
model = cls(*cls_args, **cls_kwargs)
File "src/train.py", line 162, in __init__
self.inc = double_conv(self.n_channels, 64)
File "src/train.py", line 122, in double_conv
nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
File "/home/africa_wikilimo/miniconda3/envs/xarray_test/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 406, in __init__
super(Conv2d, self).__init__(
File "/home/africa_wikilimo/miniconda3/envs/xarray_test/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 50, in __init__
if in_channels % groups != 0:
TypeError: unsupported operand type(s) for %: 'dict' and 'int'
I tried searching the GitHub issues and forums, but I am not able to figure out what the issue is. Please help.
Here's the code of my model and the checkpoint loading step:
Model:
class Unet(pl.LightningModule):
def __init__(self, n_channels, n_classes=5):
super(Unet, self).__init__()
# self.hparams = hparams
self.n_channels = n_channels
self.n_classes = n_classes
self.bilinear = True
self.logger = WandbLogger(name="Adam", project="pytorchlightning")
def double_conv(in_channels, out_channels):
return nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
)
def down(in_channels, out_channels):
return nn.Sequential(
nn.MaxPool2d(2), double_conv(in_channels, out_channels)
)
class up(nn.Module):
def __init__(self, in_channels, out_channels, bilinear=False):
super().__init__()
if bilinear:
self.up = nn.Upsample(
scale_factor=2, mode="bilinear", align_corners=True
)
else:
self.up = nn.ConvTranspose2d(
in_channels // 2, in_channels // 2, kernel_size=2, stride=2
)
self.conv = double_conv(in_channels, out_channels)
def forward(self, x1, x2):
x1 = self.up(x1)
# [?, C, H, W]
diffY = x2.size()[2] - x1.size()[2]
diffX = x2.size()[3] - x1.size()[3]
x1 = F.pad(
x1, [diffX // 2, diffX - diffX // 2, diffY // 2, diffY - diffY // 2]
)
x = torch.cat([x2, x1], dim=1)
return self.conv(x)
self.inc = double_conv(self.n_channels, 64)
self.down1 = down(64, 128)
self.down2 = down(128, 256)
self.down3 = down(256, 512)
self.down4 = down(512, 512)
self.up1 = up(1024, 256)
self.up2 = up(512, 128)
self.up3 = up(256, 64)
self.up4 = up(128, 64)
self.out = nn.Conv2d(64, self.n_classes, kernel_size=1)
def forward(self, x):
x1 = self.inc(x)
x2 = self.down1(x1)
x3 = self.down2(x2)
x4 = self.down3(x3)
x5 = self.down4(x4)
x = self.up1(x5, x4)
x = self.up2(x, x3)
x = self.up3(x, x2)
x = self.up4(x, x1)
return self.out(x)
def training_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
loss = self.MSE(y_hat, y)
# wandb_logger.log_metrics({"loss":loss})
return {"loss": loss}
def training_epoch_end(self, outputs):
avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.logger.log_metrics({"train_loss": avg_train_loss})
return {"average_loss": avg_train_loss}
def test_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
loss = self.MSE(y_hat, y)
return {"test_loss": loss, "pred": y_hat}
def test_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
return {"avg_test_loss": avg_loss}
def MSE(self, logits, labels):
return torch.mean((logits - labels) ** 2)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.1, weight_decay=1e-8)
Main Function:
def main(expconfig):
# Define checkpoint callback
checkpoint_callback = ModelCheckpoint(
filepath="/home/africa_wikilimo/data/model_checkpoint/",
save_top_k=1,
verbose=True,
monitor="loss",
mode="min",
prefix="",
)
# Initialise datasets
print("Initializing Climate Dataset....")
clima_train = Clima_Dataset(expconfig[0])
# Initialise dataloaders
print("Initializing train_loader....")
train_dataloader = DataLoader(clima_train, batch_size=2, num_workers=4)
# Initialise model and trainer
print("Initializing model...")
model = Unet(n_channels=9, n_classes=5)
print("Initializing Trainer....")
if torch.cuda.is_available():
model.cuda()
trainer = pl.Trainer(
max_epochs=1,
gpus=1,
checkpoint_callback=checkpoint_callback,
early_stop_callback=None,
)
else:
trainer = pl.Trainer(max_epochs=1, checkpoint_callback=checkpoint_callback)
trainer.fit(model, train_dataloader=train_dataloader)
print(checkpoint_callback.best_model_path)
model = Unet.load_from_checkpoint(checkpoint_callback.best_model_path)
| Cause
This happens because your model is unable to load its hyperparameters (n_channels, n_classes=5) from the checkpoint, as you do not save them explicitly.
Fix
You can resolve it by calling the self.save_hyperparameters('n_channels', 'n_classes') method in your Unet class's __init__ method.
Refer to the PyTorch Lightning hyperparams docs for more details on this method. Using save_hyperparameters lets the selected params be saved in hparams.yaml along with the checkpoint.
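A minimal sketch of the fix applied to the model from the question:
class Unet(pl.LightningModule):
    def __init__(self, n_channels, n_classes=5):
        super(Unet, self).__init__()
        # stores n_channels and n_classes inside the checkpoint so that
        # load_from_checkpoint can re-instantiate the model with them
        self.save_hyperparameters('n_channels', 'n_classes')
        self.n_channels = n_channels
        self.n_classes = n_classes
        # rest of __init__ unchanged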
Thanks @Adrian Wälchli
(awaelchli) from the PyTorch Lightning core contributors team who suggested this fix, when I faced the same issue.
| https://stackoverflow.com/questions/64131993/ |
Fill regions from contour based on cumsum | I got an image for scene classification:
And using cumsum, I want to segment the three parts of it.
I'd like to do this as a simple tensor operation in PyTorch (TensorFlow or plain Python would of course work too).
| You can use torch.flipud to perform the cumsum in both directions:
mask = (src_img.cumsum(dim=0) >0 ) + 2* torch.flipud(torch.flipud(src_img).cumsum(dim=0)>0)
This works because the forward cumsum is non-zero from the first contour pixel downward in each column, while the flipped cumsum is non-zero from the last contour pixel upward; summing 1x the first boolean mask and 2x the second assigns a distinct label to each of the three regions. Resulting with:
| https://stackoverflow.com/questions/64134515/ |
What does PyTorch classifier output? | So I am new to deep learning and started learning PyTorch. I created a classifier model with the following structure.
class model(nn.Module):
def __init__(self):
super(model, self).__init__()
resnet = models.resnet34(pretrained=True)
layers = list(resnet.children())[:8]
self.features1 = nn.Sequential(*layers[:6])
self.features2 = nn.Sequential(*layers[6:])
self.classifier = nn.Sequential(nn.BatchNorm1d(512), nn.Linear(512, 3))
def forward(self, x):
x = self.features1(x)
x = self.features2(x)
x = F.relu(x)
x = nn.AdaptiveAvgPool2d((1,1))(x)
x = x.view(x.shape[0], -1)
return self.classifier(x)
So basically I wanted to classify among three things {0,1,2}. While evaluating, I passed the image it returned a Tensor with three values like below
(tensor([[-0.1526, 1.3511, -1.0384]], device='cuda:0', grad_fn=<AddmmBackward>)
So my question is: what are these three numbers? Are they probabilities?
P.S. Please pardon me if I asked something too silly.
| The final layer nn.Linear (fully connected layer) of self.classifier of your model produces values, that we can call a scores, for example, it may be: [10.3, -3.5, -12.0], the same you can see in your example as well: [-0.1526, 1.3511, -1.0384] which are not normalized and cannot be interpreted as probabilities.
As you can see, it's just "raw, unscaled" network output. In other words, these values are not normalized, which makes them hard to use or interpret directly. That's why the common practice is to convert them to a normalized probability distribution with softmax after the final layer, as @skinny_func has already described. After that you will get probabilities in the range 0 to 1, which is a more intuitive representation.
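To turn those scores into probabilities, a minimal sketch using the output from the question:
import torch
import torch.nn.functional as F

scores = torch.tensor([[-0.1526, 1.3511, -1.0384]])
probs = F.softmax(scores, dim=1)   # rows now sum to 1
pred = probs.argmax(dim=1)         # index of the most likely class (here: 1)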
| https://stackoverflow.com/questions/64136822/ |
Neural Networks & ML without GPU. what are my options? | I am starting to work with Neural Networks, ML, etc. I don't have a GPU; what are my options?
Are Pytorch and Tensorflow good options? If so, which are the pros and cons of each, and which should I choose?
I will be using Python and Linux (ubuntu 20.04) if that makes any difference.
(I can't afford a GPU or cloud services because I'm broke and 15 y.o.)
| It depends on what you want to do. If you are working on images, you will need a GPU. If you are working on other types of data, with classification or regression, you can use your CPU.
Working on Ubuntu is perfect; it is my choice too.
Now, for neural networks, TensorFlow and PyTorch ARE the options; you may try either of them. I have worked mostly with TensorFlow.
If you want to use a GPU, you can try the Kaggle competitions, which give you a good amount of free GPU time every week. You can also try Google Colab notebooks, which provide free GPU sessions; you connect with a Google account and can mount your Google Drive to store your data.
| https://stackoverflow.com/questions/64137099/ |
Why can't I use Cross Entropy Loss for multilabel? | I'm in the process of finetuning a BERT model to the long answer task in the Natural Questions dataset. I'm training the model just like a SQuAD model (predicting start and end tokens).
I use Huggingface and PyTorch.
So the targets and labels have a shape/size of [batch, 2]. My problem is that I can't input "multi-targets" which I think is refering to the fact that the last shape is 2.
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:18
Should I choose another loss function or is there another way to bypass this problem?
This code I'm using:
def loss_fn(preds, targets):
return nn.CrossEntropyLoss()(preds,labels)
class DecoderModel(nn.Module):
def __init__(self, model_args, encoder_config, loss_fn):
super(DecoderModel, self).__init__()
# ...
def forward(self, pooled_output, labels):
pooled_output = self.dropout(pooled_output)
logits = self.linear(pooled_output)
start_logits, end_logits = logits.split(1, dim = -1)
start_logit = torch.squeeze(start_logits, axis=-1)
end_logit = torch.squeeze(end_logits, axis=-1)
# Concatenate into a "label"
preds = torch.cat((start_logits, end_logits), -1)
# Calculate loss
loss = self.loss_fn(
preds = preds,
labels = labels)
return loss, preds
The targets properties are:
torch.int64 & [3,2]
The predictions properties are:
torch.float32 & [3,2]
SOLVED - this is my solution
def loss_fn(preds:list, labels):
start_token_labels, end_token_labels = labels.split(1, dim = -1)
start_token_labels = start_token_labels.squeeze(-1)
end_token_labels = end_token_labels.squeeze(-1)
print('*'*50)
print(preds[0].shape) # preds [0] and [1] has the same shape and dtype
print(preds[0].dtype) # preds [0] and [1] has the same shape and dtype
print(start_token_labels.shape) # labels [0] and [1] has the same shape and dtype
print(start_token_labels.dtype) # labels [0] and [1] has the same shape and dtype
start_loss = nn.CrossEntropyLoss()(preds[0], start_token_labels)
end_loss = nn.CrossEntropyLoss()(preds[1], end_token_labels)
avg_loss = (start_loss + end_loss) / 2
return avg_loss
Basically, I'm splitting the logits (just not concatenating them) and the labels. I then compute the cross-entropy loss on both of them and finally take the average of the two losses. Hope this gives you an idea to solve your own problem!
| You should not give one-hot vectors to CrossEntropyLoss; pass the class-index labels directly. From the docs:
Target: (N) where each value is 0≤targets[i]≤C−1 , or (N, d_1, d_2, ..., d_K) with K≥1 in the case of K-dimensional loss.
You can reproduce your error looking at the docs:
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
but if you change target to target = torch.empty((3, 5), dtype=torch.long).random_(5) then you get the error:
RuntimeError: 1D target tensor expected, multi-target not supported
Use nn.BCELoss on softmax-ed outputs instead; see this example: https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/41
>>> nn.BCELoss()(torch.softmax(input, axis=1), torch.softmax(target.float(), axis=1))
>>> tensor(0.6376, grad_fn=<BinaryCrossEntropyBackward>)
| https://stackoverflow.com/questions/64138426/ |
PyTorch expected CPU got CUDA tensor | I've been struggling to find what's wrong in my code. I'm trying to implement DCGAN paper and from the past 1 hour, I'm going through these errors. Could anyone please help me fix this?
I'm training this on Google Colab with a GPU runtime, but I'm getting this error. Yesterday, I implemented the first GAN paper by Ian Goodfellow and I did not get this error. I don't know what's happening; any help would be appreciated. Also, please check whether the gen_input is correct or not.
Here is the code:
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import save_image
import torch.optim as optim
#---------------configuration part------------------#
lr = 0.00002 #learning rate
nc = 3 #color channels
nz = 100 #size of latent vector or size of generator input
ngf = 64 #size of feature maps in generator
ndf = 64 #size of feature maps in discriminator
height = 128 #height of the image
width = 128 #width of the image
num_epochs = 100 #the variable name tells everything
workers = 2 #number of workers to load the data in batches
batch_size = 64 #batch size
image_size = 128 #resizing parameter
root = '/content/gdrive/My Drive/sharingans/' #path to the training directory
beta1 = 0.4
#---------------------------------------------------#
#define the shape of the image
img_shape = (nc, height, width)
#---------------------------------------------------#
#define the weights initialization function
#in the DCGAN paper they state that all weights should be
#randomly initialize weights from normal distribution
#the following function does that
def weights_init(m):
classname = m.__class__.__name__ #returns the class name(eg: Conv2d or ConvTranspose2d)
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02) #0.0 is mean and 0.02 is standard deviation
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1, 0.02) #1 is mean and 0.02 is standard deviation
nn.init.constant_(m.bias.data, 0.0)
#---------------------------------------------------#
#implement the data loader function to load images
def load_data(image_size, root):
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.ToTensor(),
transforms.Normalize((0.486, 0.486, 0.486), (0.486, 0.486, 0.486))
])
train_set = torchvision.datasets.ImageFolder(root = root, transform = transform)
return train_set
train_set = load_data(128, root)
#getting the batches of data
train_data = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True, num_workers = workers)
#---------------------------------------------------#
#implement the generator network
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.convt1 = nn.ConvTranspose2d(in_channels = nz, out_channels = ngf*8, kernel_size = 4, stride = 1, padding = 0, bias = False)
self.convt2 = nn.ConvTranspose2d(in_channels = ngf*8, out_channels = ngf*4, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt3 = nn.ConvTranspose2d(in_channels = ngf*4, out_channels = ngf*2, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt4 = nn.ConvTranspose2d(in_channels = ngf*2, out_channels = ngf, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt5 = nn.ConvTranspose2d(in_channels = ngf, out_channels = 3, kernel_size=4, stride = 2, padding = 1, bias = False)
def forward(self, t):
t = self.convt1(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt2(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt3(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt4(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt5(t)
t = F.tanh(t)
return t
#---------------------------------------------------#
#implement the discriminator network
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 3, out_channels = ndf, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv2 = nn.Conv2d(in_channels = ndf, out_channels = ndf*2, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv3 = nn.Conv2d(in_channels = ndf*2, out_channels = ndf*4, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv4 = nn.Conv2d(in_channels = ndf*4, out_channels = ndf*8, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv5 = nn.Conv2d(in_channels = ndf*8, out_channels = 1, kernel_size = 4, stride = 1, padding = 0, bias = False)
def forward(self, t):
t = self.conv1(t)
t = F.leaky_relu(t, 0.2)
t = self.conv2(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv3(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv4(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv5(t)
t = F.sigmoid(t)
return t
#---------------------------------------------------#
#create the instances of networks
generator = Generator()
discriminator = Discriminator()
#apply the weights_init function to randomly initialize weights to mean = 0 and std = 0.02
generator.apply(weights_init)
discriminator.apply(weights_init)
print(generator)
print(discriminator)
#---------------------------------------------------#
#define the loss function
criterion = nn.BCELoss()
#fixed noise
noise = torch.randn(64, nz, 1, 1).cuda()
#conventions for fake and real labels
real_label = 1
fake_label = 0
#create the optimizer instances
optimizer_d = optim.Adam(discriminator.parameters(), lr = lr, betas = (beta1, 0.999))
optimizer_g = optim.Adam(generator.parameters(), lr = lr, betas = (beta1, 0.999))
#---------------------------------------------------#
if torch.cuda.is_available():
generator = generator.cuda()
discriminator = discriminator.cuda()
criterion = criterion.cuda()
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
#---------------------------------------------------#
#Training loop
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_data):
#ones is passed when the data is coming from original dataset
#zeros is passed when the data is coming from generator
ones = Tensor(images.size(0), 1).fill_(1.0)
zeros = Tensor(images.size(0),1).fill_(0.0)
real_images = images.cuda()
optimizer_g.zero_grad()
#following is the input to the generator
#we create tensor with random noise of size 100
gen_input = np.random.normal(0,3,(512,100,4,4))
gen_input = torch.tensor(gen_input, dtype = torch.float32)
gen_input = gen_input.cuda()
#we then pass it to generator()
gen = generator(gen_input) #this returns a image
#now calculate the loss wrt to discriminator output
g_loss = criterion(discriminator(gen), ones)
#backpropagation
g_loss.backward()
#update weights
optimizer_g.step()
#above was for generator network
#now for the discriminator network
optimizer_d.zero_grad()
#calculate the real loss
real_loss = criterion(discriminator(real_images), ones)
#calculate the fake loss from the generated image
fake_loss = criterion(discriminator(gen.detach()),zeros)
#average out the losses
d_loss = (real_loss + fake_loss)/2
#backpropagation
d_loss.backward()
#update weights
optimizer_d.step()
if i%100 == 0:
print("[EPOCH %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"%(epoch, epochs, i, len(dataset), d_loss.item(), g_loss.item()))
total_batch = epoch * len(dataset) + i
if total_batch%20 == 0:
save_image(gen.data[:5], '/content/gdrive/My Drive/tttt/%d.png' % total_batch, nrow=5)
And here's the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-36-0af32f223344> in <module>()
18 gen_input = gen_input.cuda()
19 #we then pass it to generator()
---> 20 gen = generator(gen_input) #this returns a image
21
22 #now calculate the loss wrt to discriminator output
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py in __init__(self, num_features, eps, momentum, affine, track_running_stats)
40 self.track_running_stats = track_running_stats
41 if self.affine:
---> 42 self.weight = Parameter(torch.Tensor(num_features))
43 self.bias = Parameter(torch.Tensor(num_features))
44 else:
TypeError: expected CPU (got CUDA)
Any help would be appreciated. Thank you!
| Do you use colab? Then you should activate the GPU. But if you want to stay on the CPU:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Now do this on EVERY model or tensor you create, for example:
x = torch.tensor(...).to(device=device)
model = Model(...).to(device=device)
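The same applies inside the training loop: the data batches and the noise you feed the generator must live on the same device as the model, e.g. (reusing batch_size and nz from the question):
real_images = images.to(device=device)
gen_input = torch.randn(batch_size, nz, 1, 1, device=device)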
Then, if you switch around between CPU and GPU, it handles it automatically for you. But as I said, you probably want to activate CUDA by switching to Colab's GPU.
| https://stackoverflow.com/questions/64139164/ |
Difficulties calculating mean square error between 2 tensors | I'm trying to build a loss function which will calculate the mean squared error of 2 tenors of the same size.
In other words, I need a function that calculates the difference between every pair of cells (with the same row and column) in matrix A and matrix B, squares it, and computes the mean of the differences.
As far as I understand nn.MSELoss should do exactly that.
When I pass the 2 tensors to nn.MSELoss I'm getting the following error message:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
Here's my code
nn.MSELoss(stack_3[0,:],stack_7[0,:])
The tensors are floats of same shape.
stack_3.shape, stack_7.shape
(torch.Size([6131, 784]), torch.Size([6131, 784]))
| nn.MSELoss is a callable class, not a function. You need to first define an instance of nn.MSELoss, then you can call it. Alternatively you can directly use torch.nn.functional.mse_loss.
from torch import nn
criterion = nn.MSELoss()
loss = criterion(stack_3[0, :], stack_7[0, :])
or
import torch.nn.functional as F
loss = F.mse_loss(stack_3[0, :], stack_7[0, :])
| https://stackoverflow.com/questions/64142033/ |
Pytorch: Overfitting on a small batch: Debugging | I am building a multi-class image classifier.
There is a debugging trick to overfit on a single batch to check if there any deeper bugs in the program.
How to design the code in a way that can do it in a much portable format?
One arduous and inelegant way is to build a hold-out train/test folder for a small batch, where the test set consists of two distributions (seen data and unseen data); if the model performs well on the seen data and poorly on the unseen data, we can conclude that the network doesn't have any deeper structural bug.
But this does not seem like a smart, portable way, and we'd have to do it for every problem.
Currently, I have a dataset class where I am partitioning the data in train/dev/test in the below way -
def split_equal_into_val_test(csv_file=None, stratify_colname='y',
frac_train=0.6, frac_val=0.15, frac_test=0.25,
):
"""
Split a Pandas dataframe into three subsets (train, val, and test).
Following fractional ratios provided by the user, where val and
test set have the same number of each classes while train set have
the remaining number of left classes
Parameters
----------
csv_file : Input data csv file to be passed
stratify_colname : str
The name of the column that will be used for stratification. Usually
this column would be for the label.
frac_train : float
frac_val : float
frac_test : float
The ratios with which the dataframe will be split into train, val, and
test data. The values should be expressed as float fractions and should
sum to 1.0.
random_state : int, None, or RandomStateInstance
Value to be passed to train_test_split().
Returns
-------
df_train, df_val, df_test :
Dataframes containing the three splits.
"""
df = pd.read_csv(csv_file).iloc[:, 1:]
if frac_train + frac_val + frac_test != 1.0:
raise ValueError('fractions %f, %f, %f do not add up to 1.0' %
(frac_train, frac_val, frac_test))
if stratify_colname not in df.columns:
raise ValueError('%s is not a column in the dataframe' %
(stratify_colname))
df_input = df
no_of_classes = 4
sfact = int((0.1*len(df))/no_of_classes)
# Shuffling the data frame
df_input = df_input.sample(frac=1)
df_temp_1 = df_input[df_input['labels'] == 1][:sfact]
df_temp_2 = df_input[df_input['labels'] == 2][:sfact]
df_temp_3 = df_input[df_input['labels'] == 3][:sfact]
df_temp_4 = df_input[df_input['labels'] == 4][:sfact]
dev_test_df = pd.concat([df_temp_1, df_temp_2, df_temp_3, df_temp_4])
dev_test_y = dev_test_df['labels']
# Split the temp dataframe into val and test dataframes.
df_val, df_test, dev_Y, test_Y = train_test_split(
dev_test_df, dev_test_y,
stratify=dev_test_y,
test_size=0.5,
)
df_train = df[~df['img'].isin(dev_test_df['img'])]
assert len(df_input) == len(df_train) + len(df_val) + len(df_test)
return df_train, df_val, df_test
def train_val_to_ids(train, val, test, stratify_columns='labels'): # noqa
"""
Convert the stratified dataset in the form of dictionary : partition['train] and labels.
To generate the parallel code according to https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
Parameters
-----------
csv_file : Input data csv file to be passed
stratify_columns : The label column
Returns
-----------
partition, labels:
partition dictionary containing train and validation ids and label dictionary containing ids and their labels # noqa
"""
train_list, val_list, test_list = train['img'].to_list(), val['img'].to_list(), test['img'].to_list() # noqa
partition = {"train_set": train_list,
"val_set": val_list,
}
labels = dict(zip(train.img, train.labels))
labels.update(dict(zip(val.img, val.labels)))
return partition, labels
P.S. - I know about PyTorch Lightning and that it has an overfitting feature which can be used easily, but I don't want to move to PyTorch Lightning.
| I don't know how portable it will be, but a trick that I use is to modify the __len__ function in the Dataset.
If I modified it from
def __len__(self):
return len(self.data_list)
to
def __len__(self):
return 20
It will only output the first 20 elements in the dataset (regardless of shuffle). You only need to change one line of code and the rest should work just fine so I think it's pretty neat.
| https://stackoverflow.com/questions/64143366/ |
Lstm with different input sizes for each batch (pyTorch) | I am switching from TensorFlow to PyTorch and I having some troubles with my net.
I have made a Collator (for the DataLoader) that pads each tensor (originally a sentence) in each batch to the max length of that batch.
so I have different input sizes per each batch.
my network consists of LSTM -> LSTM -> DENSE
my question is, how can I specify this variable input size to the LSTM?
I assume that in TensorFlow I would do Input((None, x)) before the LSTM.
Thank you in advance
| The input size of the LSTM is not how long a sample is. So let's say you have a batch with three samples: the first one has length 10, the second 12 and the third 15. What you already did is pad them all with zeros so that all three have length 15. But the next batch may have been padded to 16. Sure.
But this 15 is not the input size of the LSTM. The input size in the size of one element of a sample in the batch. And that should always be the same.
For example when you want to classify names:
the inputs are names, for example "joe", "mark", "lucas".
But what the LSTM takes as input are the characters. So "j", then "o", and so on. So the input size you have to specify is how many dimensions one character has.
If you use embeddings, the embedding size. When you use one-hot encoding, the vector size (probably 26). An LSTM takes iteratively the characters of the words. Not the entire word at once.
self.lstm = nn.LSTM(input_size=embedding_size, ...)
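A minimal sketch of what this means in practice; embedding_size is the fixed per-step feature dimension, and only the padded length varies between batches:
import torch
import torch.nn as nn

embedding_size = 26   # e.g. one-hot over a 26-letter alphabet
lstm = nn.LSTM(input_size=embedding_size, hidden_size=64, batch_first=True)

batch1 = torch.randn(8, 15, embedding_size)   # 8 names padded to length 15
batch2 = torch.randn(8, 16, embedding_size)   # next batch padded to length 16
out1, _ = lstm(batch1)                        # the same LSTM handles both
out2, _ = lstm(batch2)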
I hope this answered your question; if not, please clarify it! Good luck!
| https://stackoverflow.com/questions/64146213/ |
How can I fix this expected CUDA got CPU error in PyTorch? | I've been struggling to find what's wrong in my code. I'm trying to implement DCGAN paper and from the past 2 days, I'm going through these errors. Could anyone please help me fix this?
I'm training this on Google Colab with a GPU runtime, but I'm getting this error. Yesterday, I implemented the first GAN paper by Ian Goodfellow and I did not get this error. I don't know what's happening; any help would be appreciated. Also, please check whether the gen_input is correct or not.
I already asked this question and no one replied to the old post. One person replied, but all they said was to move some lines up and down, which gives the same error. Please help me.
Here is the code:
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import save_image
import torch.optim as optim
lr = 0.00002 #learning rate
nc = 3 #color channels
nz = 100 #size of latent vector or size of generator input
ngf = 64 #size of feature maps in generator
ndf = 64 #size of feature maps in discriminator
height = 128 #height of the image
width = 128 #width of the image
num_epochs = 5 #the variable name tells everything
workers = 2 #number of workers to load the data in batches
batch_size = 64 #batch size
image_size = 128 #resizing parameter
root = './simpsons/' #path to the training directory
beta1 = 0.5
img_shape = (nc, height, width)
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.convt1 = nn.ConvTranspose2d(in_channels = nz, out_channels = ngf*8, kernel_size = 4, stride = 1, padding = 0, bias = False)
self.convt2 = nn.ConvTranspose2d(in_channels = ngf*8, out_channels = ngf*4, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt3 = nn.ConvTranspose2d(in_channels = ngf*4, out_channels = ngf*2, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt4 = nn.ConvTranspose2d(in_channels = ngf*2, out_channels = ngf, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.convt5 = nn.ConvTranspose2d(in_channels = ngf, out_channels = 3, kernel_size=4, stride = 2, padding = 1, bias = False)
def forward(self, t):
t = self.convt1(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt2(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt3(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt4(t)
t = nn.BatchNorm2d(t)
t = F.relu(t)
t = self.convt5(t)
t = F.tanh(t)
return t
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 3, out_channels = ndf, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv2 = nn.Conv2d(in_channels = ndf, out_channels = ndf*2, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv3 = nn.Conv2d(in_channels = ndf*2, out_channels = ndf*4, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv4 = nn.Conv2d(in_channels = ndf*4, out_channels = ndf*8, kernel_size = 4, stride = 2, padding = 1, bias = False)
self.conv5 = nn.Conv2d(in_channels = ndf*8, out_channels = 1, kernel_size = 4, stride = 1, padding = 0, bias = False)
def forward(self, t):
t = self.conv1(t)
t = F.leaky_relu(t, 0.2)
t = self.conv2(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv3(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv4(t)
t = nn.BatchNorm2d(t)
t = F.leaky_relu(t, 0.2)
t = self.conv5(t)
t = F.sigmoid(t)
return t
def weights_init(m):
classname = m.__class__.__name__ #returns the class name(eg: Conv2d or ConvTranspose2d)
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02) #0.0 is mean and 0.02 is standard deviation
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1, 0.02) #1 is mean and 0.02 is standard deviation
nn.init.constant_(m.bias.data, 0.0)
def load_data(image_size, root):
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.ToTensor(),
transforms.Normalize((0.486, 0.486, 0.486), (0.486, 0.486, 0.486))
])
train_set = torchvision.datasets.ImageFolder(root = root, transform = transform)
return train_set
#getting the batches of data
train_set = load_data(image_size, root)
dataloader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True, num_workers = workers)
generator = Generator()
discriminator = Discriminator()
generator.apply(weights_init)
discriminator.apply(weights_init)
print(generator)
print(discriminator)
criterion = nn.BCELoss()
noise = torch.randn(64, nz, 1, 1)
optimizer_G = optim.Adam(generator.parameters(), lr = lr, betas=(beta1, 0.999))
optimizer_D = optim.Adam(discriminator.parameters(), lr = lr, betas=(beta1, 0.999))
if torch.cuda.is_available():
print("CUDA available")
generator = generator.to('cuda')
discriminator = discriminator.to('cuda')
criterion = criterion.cuda('cuda')
Tensor = torch.cuda.FloatTensor
print("Networks moved on to cuda")
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(dataloader):
val = Tensor(images.size(0), 1).fill_(1.0)
fake = Tensor(images.size(0),1).fill_(0.0)
real_images = images
optimizer_G.zero_grad()
gen_input = Tensor(np.random.normal(0,1,(512,100,4,4)))
gen = generator(gen_input)
        g_loss = criterion(discriminator(gen), val)
g_loss.backward()
optimizer_G.step()
optimizer_D.zero_grad()
        real_loss = criterion(discriminator(real_images), val)
        fake_loss = criterion(discriminator(gen.detach()), fake)
d_loss = (real_loss + fake_loss)/2
d_loss.backward()
optimizer_D.step()
if i%900 == 0:
print("[EPOCH %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"%(epoch, num_epochs, i, len(dataset), d_loss.item(), g_loss.item()))
total_batch = epoch * len(dataset) + i
if total_batch%400 == 0:
save_image(gen.data[:25], 'output/%d.png' % total_batch, nrow=5)
And here's the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-36-0af32f223344> in <module>()
18 gen_input = gen_input.cuda()
19 #we then pass it to generator()
---> 20 gen = generator(gen_input) #this returns a image
21
22 #now calculate the loss wrt to discriminator output
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py in __init__(self, num_features, eps, momentum, affine, track_running_stats)
40 self.track_running_stats = track_running_stats
41 if self.affine:
---> 42 self.weight = Parameter(torch.Tensor(num_features))
43 self.bias = Parameter(torch.Tensor(num_features))
44 else:
TypeError: expected CPU (got CUDA)
Any help would be appreciated. Thank you!
| You are using nn.BatchNorm2d in a wrong way.
BatchNorm is a layer, just like Conv2d. It has internal parameters and buffers.
Therefore, you must define these layers in the __init__ of your generator/discriminator.
Right now, you define the layer in your forward pass - this is wrong in so many ways...
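A minimal sketch of the fix for the generator (the discriminator needs the same treatment); nz, ngf and the F alias come from the question's code:
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.convt1 = nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False)
        self.bn1 = nn.BatchNorm2d(ngf * 8)   # constructed once, with its num_features
        # ... define convt2/bn2 and the rest the same way
    def forward(self, t):
        t = F.relu(self.bn1(self.convt1(t)))  # apply the existing layer, don't build a new one
        # ... and so on for the remaining blocks
        return t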
| https://stackoverflow.com/questions/64152094/ |
Torch installed but repeat_interleave not found | I have PyTorch installed.
import torch runs without error. However, the function torch.repeat_interleave() is not found:
x = torch.tensor([1, 2, 3])
x.repeat_interleave(2)
gives AttributeError: 'Tensor' object has no attribute 'repeat_interleave'
Why?
| The torch.repeat_interleave operator was introduced in 1.1.0 release of pytorch, so please, consider updating to 1.1.0+ version of pytorch to use this method smoothly.
| https://stackoverflow.com/questions/64152866/ |
Generic Computation of Distance Matrices in Pytorch | I have two tensors a & b of shape (m,n), and I would like to compute a distance matrix m using some distance metric d. That is, I want m[i][j] = d(a[i], b[j]). This is somewhat like cdist(a,b) but assuming a generic distance function d which is not necessarily a p-norm distance. Is there a generic way to implement this in PyTorch?
And a more specific side question: Is there an efficient way to perform this with the following metric
d(x,y) = 1 - cos(x,y)
edit
I've solved the specific case above using this answer:
def metric(a, b, eps=1e-8):
a_norm, b_norm = a.norm(dim=1)[:, None], b.norm(dim=1)[:, None]
a_norm = a / torch.max(a_norm, eps * torch.ones_like(a_norm))
b_norm = b / torch.max(b_norm, eps * torch.ones_like(b_norm))
similarity_matrix = torch.mm(a_norm, b_norm.transpose(0, 1))
return 1 - similarity_matrix
| I'd suggest using broadcasting: since a,b both have shape (m,n) you can compute
m = d(a[:, None, :], b[None, :, :])   # so that m[i][j] = d(a[i], b[j])
where d needs to operate on the last dimension, so for instance
def d(a,b): return 1 - (a * b).sum(dim=2) / a.pow(2).sum(dim=2).sqrt() / b.pow(2).sum(dim=2).sqrt()
(here I assume that cos(x,y) represents the normalized inner product between x and y)
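A quick sanity check of the broadcast version against the metric from the question (shapes assumed (m, n)):
a, b = torch.randn(5, 3), torch.randn(5, 3)
M = d(a[:, None, :], b[None, :, :])
# M[i, j] == 1 - cos(a[i], b[j]), matching metric(a, b) up to the eps clamping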
| https://stackoverflow.com/questions/64153684/ |
Add dense layer on top of Huggingface BERT model | I want to add a dense layer on top of the bare BERT Model transformer outputting raw hidden-states, and then fine tune the resulting model. Specifically, I am using this base model. This is what the model should do:
Encode the sentence (a vector with 768 elements for each token of the sentence)
Keep only the first vector (related to the first token)
Add a dense layer on top of this vector, to get the desired transformation
So far, I have successfully encoded the sentences:
from sklearn.neural_network import MLPRegressor
import torch
from transformers import AutoModel, AutoTokenizer
# List of strings
sentences = [...]
# List of numbers
labels = [...]
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
# 2D array, one line per sentence containing the embedding of the first token
encoded_sentences = torch.stack([model(**tokenizer(s, return_tensors='pt'))[0][0][0]
for s in sentences]).detach().numpy()
regr = MLPRegressor()
regr.fit(encoded_sentences, labels)
In this way I can train a neural network by feeding it with the encoded sentences. However, this approach clearly does not fine tune the base BERT model. Can anybody help me? How can I build a model (possibly in pytorch or using the Huggingface library) that can be entirely fine tuned?
| There are two ways to do it. Since you are looking to fine-tune the model for a downstream task similar to classification, you can directly use the BertForSequenceClassification class, which fine-tunes a classification head (a logistic-regression layer) on top of the 768-dimensional output.
Alternatively, you can define a custom module, that created a bert model based on the pre-trained weights and adds layers on top of it.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel
class CustomBERTModel(nn.Module):
def __init__(self):
super(CustomBERTModel, self).__init__()
self.bert = BertModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
### New layers:
self.linear1 = nn.Linear(768, 256)
self.linear2 = nn.Linear(256, 3) ## 3 is the number of classes in this example
def forward(self, ids, mask):
sequence_output, pooled_output = self.bert(
ids,
attention_mask=mask)
# sequence_output has the following shape: (batch_size, sequence_length, 768)
linear1_output = self.linear1(sequence_output[:,0,:].view(-1,768)) ## extract the 1st token's embeddings
        linear2_output = self.linear2(linear1_output)
return linear2_output
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = CustomBERTModel() # You can pass the parameters if required to have more flexible model
model.to(torch.device("cpu")) ## can be gpu
criterion = nn.CrossEntropyLoss() ## If required define your own criterion
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))
for epoch in range(epochs):  # epochs assumed to be the desired number of epochs
for batch in data_loader: ## If you have a DataLoader() object to get the data.
data = batch[0]
targets = batch[1] ## assuming that data loader returns a tuple of data and its targets
optimizer.zero_grad()
        encoding = tokenizer.batch_encode_plus(data, return_tensors='pt', padding=True, truncation=True, max_length=50, add_special_tokens=True)
        input_ids = encoding['input_ids']
        attention_mask = encoding['attention_mask']
        outputs = model(input_ids, attention_mask)  # matches forward(self, ids, mask)
        outputs = F.log_softmax(outputs, dim=1)  # note: nn.CrossEntropyLoss already applies log_softmax internally, so you could also feed it the raw outputs
        loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/64156202/ |
PyTorch Factorial Function | There does not seem to be a PyTorch function for computing a factorial. Is there a method to do this in PyTorch? I am looking to manually compute a Poisson distribution in Torch (I am aware this exists: https://pytorch.org/docs/stable/generated/torch.poisson.html) and the formula requires a factorial in the denominator.
Poisson Distribution: https://en.wikipedia.org/wiki/Poisson_distribution
| I think you can find it as torch.jit._builtins.math.factorial, BUT PyTorch, as well as NumPy and SciPy (Factorial in numpy and scipy), uses Python's builtin math.factorial:
import math
import numpy as np
import scipy as sp
import torch
print(torch.jit._builtins.math.factorial is math.factorial)
print(np.math.factorial is math.factorial)
print(sp.math.factorial is math.factorial)
True
True
True
But, in contrast, scipy in addition to "mainstream" math.factorial contains the very "special" factorial function scipy.special.factorial. Unlike function from math module it operates on arrays:
from scipy import special
print(special.factorial is math.factorial)
False
# the all known factorial functions
factorials = (
math.factorial,
torch.jit._builtins.math.factorial,
np.math.factorial,
sp.math.factorial,
special.factorial,
)
# Let's run some tests
tnsr = torch.tensor(3)
for fn in factorials:
try:
out = fn(tnsr)
except Exception as err:
print(fn.__name__, fn.__module__, ':', err)
else:
print(fn.__name__, fn.__module__, ':', out)
factorial math : 6
factorial math : 6
factorial math : 6
factorial math : 6
factorial scipy.special._basic : tensor(6., dtype=torch.float64)
tnsr = torch.tensor([1, 2, 3])
for fn in factorials:
try:
out = fn(tnsr)
except Exception as err:
print(fn.__name__, fn.__module__, ':', err)
else:
print(fn.__name__, fn.__module__, ':', out)
factorial math : only integer tensors of a single element can be converted to an index
factorial math : only integer tensors of a single element can be converted to an index
factorial math : only integer tensors of a single element can be converted to an index
factorial math : only integer tensors of a single element can be converted to an index
factorial scipy.special._basic : tensor([1., 2., 6.], dtype=torch.float64)
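Since the end goal here is a Poisson PMF, a sketch of a fully tensorized route that avoids explicit factorials, using the identity log(k!) = lgamma(k + 1):
import math
import torch

def poisson_pmf(k, rate):
    # k: tensor of non-negative integer counts, rate: the Poisson lambda
    k = k.float()
    log_pmf = k * math.log(rate) - rate - torch.lgamma(k + 1)
    return torch.exp(log_pmf)

poisson_pmf(torch.tensor([0, 1, 2, 3]), rate=2.0)
# tensor([0.1353, 0.2707, 0.2707, 0.1804])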
| https://stackoverflow.com/questions/64157192/ |
How to randomly set a fixed number of elements in each row of a tensor in PyTorch | I was wondering if there is any more efficient alternative for the below code, without using the "for" loop in the 4th line?
import torch
n, d = 37700, 7842
k = 4
sample = torch.cat([torch.randperm(d)[:k] for _ in range(n)]).view(n, k)
mask = torch.zeros(n, d, dtype=torch.bool)
mask.scatter_(dim=1, index=sample, value=True)
Basically, what I am trying to do is to create an n by d mask tensor, such that in each row exactly k random elements are True.
| Here's a way to do this with no loop. Let's start with a random matrix where all elements are drawn i.i.d., in this case uniformly on [0,1]. Then, for each row, we take the k-th smallest value and set all elements smaller than or equal to it to True and the rest to False:
rand_mat = torch.rand(n, d)
k_th_quant = torch.topk(rand_mat, k, largest = False)[0][:,-1:]
mask = rand_mat <= k_th_quant
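A quick check that each row ends up with exactly k True entries (ties among continuous uniforms have probability zero, so the comparison is safe):
assert (mask.sum(dim=1) == k).all()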
No loop needed :) x2.1598 faster than the code you attached on my CPU.
| https://stackoverflow.com/questions/64162672/ |
PyTorch installation problem- package not found using Jupyter notebook and Conda navigator | I have tried to install PyTorch using the installation code form official PyTorch website.
I run it locally in the Jupyter notebook on Conda navigator
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
I received the following mistake
PackagesNotFoundError: The following packages are not available from current channels:
- pytorch
- cudatoolkit=10.2
Current channels:
- https://conda.anaconda.org/pytorch/win-32
- https://conda.anaconda.org/pytorch/noarch
- https://repo.anaconda.com/pkgs/main/win-32
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-32
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-32
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
What to do?
| TL;DR Use 64-bit Anaconda
conda manages packages for one platform and architecture.
It looks like you installed the 32-bit (x86) Windows Anaconda version; note the channel:
- https://conda.anaconda.org/pytorch/win-32
You can check that channel under that link (https://conda.anaconda.org/pytorch/win-32) does not contain any pytorch package, and same is for this one: https://conda.anaconda.org/pytorch/noarch
If you look at the win64: https://conda.anaconda.org/pytorch/win-64 it actually contains pytorch packages.
So, there are no pytorch x86 packages in the pytorch channels, and in addition it is not possible to create an environment of another architecture, meaning that you need to install 64-bit Anaconda to use pytorch.
| https://stackoverflow.com/questions/64170444/ |
IndexError: Target 60972032 is out of bounds | I am trying to call cross entropy loss but it says the index is out of range
loss = nn.CrossEntropyLoss()
target = torch.empty(128, dtype=torch.long)
result = loss(output, target)
Note the output has the shape torch.Size([128, 10])
| The target tensor in the provided example is not initialized; see torch.empty. torch.empty returns uninitialized memory, so the tensor can contain arbitrary values (such as 60972032), which CrossEntropyLoss then treats as an out-of-bounds class index.
To fix that, fill it with valid class indices, for example with the .random_ method, as in the CrossEntropyLoss docs example:
...
target = torch.empty(128, dtype=torch.long).random_(10)
...
| https://stackoverflow.com/questions/64180233/ |
Modify existing Pytorch code to run on multiple GPUs | I'm trying to run Pytoch UNet from the following link on 2 or more GPUs
Pytorch-UNet github
the changes the I did till now is:
1.
from:
net = UNet(n_channels=3, n_classes=1, bilinear=True)
logging.info(f'Network:\n'
f'\t{net.module.n_channels} input channels\n'
f'\t{net.module.n_classes} output channels (classes)\n'
f'\t{"Bilinear" if net.module.bilinear else "Transposed conv"} upscaling')
to:
net = UNet(n_channels=3, n_classes=1, bilinear=True)
net = nn.DataParallel(net)
logging.info(f'Network:\n'
f'\t{net.module.n_channels} input channels\n'
f'\t{net.module.n_classes} output channels (classes)\n'
f'\t{"Bilinear" if net.module.bilinear else "Transposed conv"} upscaling')
in each place where was:
net.<something>
replaced to:
net.module.<something>
I know that pytorch sees more than 1 GPU, because torch.cuda.device_count() returns 2.
But as soon as I try to run a training that needs more memory than the first GPU has, I'm getting:
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0;
11.91 GiB total capacity; 10.51 GiB already allocated; 82.56 MiB free; 818.92 MiB cached)
I change required memory to train by changing the batch size.
Any help welcome
EDIT
I see that training runs twice as fast with 2 GPUs, but the max batch size for a run with a single GPU is the same as for two GPUs. Is there any way to use the memory of 2 GPUs together during a single training run?
| My mistake was changing output = net(input) (the wrapped network is commonly named model) to:
output = net.module(input)
Calling the wrapped module directly like this bypasses the DataParallel wrapper, so the batch is never scattered across the GPUs. You can find more information here.
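For reference, a minimal sketch of the correct call pattern (device placement shown for completeness; UNet and the variable names come from the question):
net = UNet(n_channels=3, n_classes=1, bilinear=True)
net = nn.DataParallel(net)
net.to(device)
output = net(input) # correct: DataParallel scatters the batch across the GPUs
# output = net.module(input) # wrong: bypasses the wrapper, everything runs on one GPU
Regarding the edit: nn.DataParallel replicates the whole model on every GPU and splits each batch between them, so each card still has to hold a full model replica plus its share of the batch. It speeds training up, but it does not pool the memory of the two GPUs for a single run; for that you would need model parallelism (placing different layers on different devices).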
| https://stackoverflow.com/questions/64185568/ |
Extracting blocks from block diagonal PyTorch tensor | I have a tensor of shape (m*n, m*n) and I want to extract a tensor of size (n, m*n) containing the m blocks of size n*n that are on the diagonal. For example:
>>> a
tensor([[1, 2, 0, 0],
[3, 4, 0, 0],
[0, 0, 5, 6],
[0, 0, 7, 8]])
I want to have a function extract(a, m, n) that will output:
>>> extract(a, 2, 2)
tensor([[1, 2, 5, 6],
[3, 4, 7, 8]])
I've thought of using some kind of slicing, because the blocks can be expressed by:
>>> for i in range(m):
...     print(a[i*n: (i+1)*n, i*n: (i+1)*n])
tensor([[1, 2],
[3, 4]])
tensor([[5, 6],
[7, 8]])
| You can take advantage of reshape and slicing:
import torch
import numpy as np
def extract(a, m, n):
s=(range(m), np.s_[:], range(m), np.s_[:]) # the slices of the blocks
    a = a.reshape(m, n, m, n)[s] # index the m diagonal (n, n) blocks; result has shape (m, n, n)
return a.reshape(m*n, n).T # reshape to desired output format
Example:
a = torch.arange(36).reshape(6,6)
a
tensor([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
extract(a, 3, 2)
tensor([[ 0, 6, 14, 20, 28, 34],
[ 1, 7, 15, 21, 29, 35]])
extract(a, 2, 3)
tensor([[ 0, 6, 12, 21, 27, 33],
[ 1, 7, 13, 22, 28, 34],
[ 2, 8, 14, 23, 29, 35]])
| https://stackoverflow.com/questions/64195225/ |
How can I create a 2D tensor with one element a sine and the other a cosine wave? | I'm doing
train_data = np.array([np.sin(time), np.cos(time)])
This gives me something with a shape of (2, 4000). I think I need it to be (4000, 2), so for each timestep, I can get the sin and cos.
How can I do that?
| You can use numpy transpose to switch the array around.
Try this:
train_data = np.array([np.sin(time), np.cos(time)]).T
Here's what I did to get (4000,2).
import numpy as np
time = np.arange(4000)
train_data = np.array([np.sin(time), np.cos(time)])
print (train_data.shape)
print (train_data.T.shape)
The output for this was:
(2, 4000)
(4000, 2)
| https://stackoverflow.com/questions/64197388/ |
How do I rotate a PyTorch image tensor around its center in a way that supports autograd? | I'd like to randomly rotate an image tensor (B, C, H, W) around its center (2d rotation I think?). I would like to avoid using NumPy and Kornia, so that I basically only need to import from the torch module. I'm also not using torchvision.transforms, because I need it to be autograd compatible. Essentially I'm trying to create an autograd compatible version of torchvision.transforms.RandomRotation() for visualization techniques like DeepDream (so I need to avoid artifacts as much as possible).
import torch
import math
import random
import torchvision.transforms as transforms
from PIL import Image
# Load image
def preprocess_simple(image_name, image_size):
Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
image = Image.open(image_name).convert('RGB')
return Loader(image).unsqueeze(0)
# Save image
def deprocess_simple(output_tensor, output_name):
output_tensor.clamp_(0, 1)
Image2PIL = transforms.ToPILImage()
image = Image2PIL(output_tensor.squeeze(0))
image.save(output_name)
# Somehow rotate tensor around it's center
def rotate_tensor(tensor, radians):
...
return rotated_tensor
# Get a random angle within a specified range
r_degrees = 5
angle_range = list(range(-r_degrees, r_degrees))
n = random.randint(angle_range[0], angle_range[len(angle_range)-1])
# Convert angle from degrees to radians
ang_rad = n * math.pi / 180
# test_tensor = preprocess_simple('path/to/file', (512,512))
test_tensor = torch.randn(1,3,512,512)
# Rotate input tensor somehow
output_tensor = rotate_tensor(test_tensor, ang_rad)
# Optionally use this to check rotated image
# deprocess_simple(output_tensor, 'rotated_image.jpg')
Some example outputs of what I'm trying to accomplish:
| So the grid generator and the sampler are sub-modules of the Spatial Transformer (JADERBERG, Max, et al.). These sub-modules are not trainable, they let you apply a learnable, as well as non-learnable, spatial transformation.
Here I take these two submodules and use them to rotate an image by theta using PyTorch's functions torch.nn.functional.affine_grid and torch.nn.functional.grid_sample (these functions are implementations of the generator and the sampler, respectively):
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
def get_rot_mat(theta):
theta = torch.tensor(theta)
return torch.tensor([[torch.cos(theta), -torch.sin(theta), 0],
[torch.sin(theta), torch.cos(theta), 0]])
def rot_img(x, theta, dtype):
rot_mat = get_rot_mat(theta)[None, ...].type(dtype).repeat(x.shape[0],1,1)
grid = F.affine_grid(rot_mat, x.size()).type(dtype)
x = F.grid_sample(x, grid)
return x
#Test:
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
#im should be a 4D tensor of shape B x C x H x W with type dtype, range [0,255]:
plt.imshow(im.squeeze(0).permute(1,2,0)/255) #To plot it im should be 1 x C x H x W
plt.figure()
#Rotation by np.pi/2 with autograd support:
rotated_im = rot_img(im, np.pi/2, dtype) # Rotate image by 90 degrees.
plt.imshow(rotated_im.squeeze(0).permute(1,2,0)/255)
In the example above, assume we take our image, im, to be a dancing cat in a skirt:
rotated_im will be a 90-degrees CCW rotated dancing cat in a skirt:
And this is what we get if we call rot_img with theta equal to np.pi/4:
And the best part is that it's differentiable w.r.t. the input and has autograd support! Hooray!
| https://stackoverflow.com/questions/64197754/ |
Pytorch Transform library explanation | transforms.Normalize([0.5]*3, [0.5]*3)
Can someone help me to understand what this is and how it works?
| You have the documentation for the Normalize transform here. It says: Normalize a tensor image with mean and standard deviation. Given mean: (mean[1],...,mean[n]) and std: (std[1],..,std[n]) for n channels, this transform will normalize each channel of the input torch.Tensor i.e., output[channel] = (input[channel] - mean[channel]) / std[channel]
So in your case, you are constructing a Normalize transform with mean=std=[0.5,0.5,0.5]. This means that you are expecting an input with 3 channels, and for each channel you want to normalize with the function
x -> (x-0.5)/0.5 = 2x-1
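For illustration, a minimal runnable sketch of that effect on a random 3-channel image (the sizes are arbitrary):
import torch
import torchvision.transforms as transforms

norm = transforms.Normalize([0.5]*3, [0.5]*3)
img = torch.rand(3, 4, 4) # 3-channel image with values in [0, 1]
out = norm(img) # values now lie in [-1, 1], since out = 2*img - 1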
| https://stackoverflow.com/questions/64203642/ |
How does nn.Embedding work for developing an encoder-decoder model? | In this tutorial, it teaches how to develop a simple encoder-decoder model with attention using pytorch.
However, in the encoder or decoder, self.embedding = nn.Embedding(input_size, hidden_size) (or similar) is defined. In pytorch documents, nn.Embedding is defined as "A simple lookup table that stores embeddings of a fixed dictionary and size."
So I am confused: in the initialization, where does this lookup table come from? Does it initialize some random embeddings for the indices which will then be trained? Is it really necessary in the encoder/decoder part?
Thanks in advance.
| Answering the last bit first: Yes, we do need Embedding or an equivalent. At least when dealing with discrete inputs (e.g. letters or words of a language), because these tokens come encoded as integers (e.g. 'a' -> 1, 'b' -> 2, etc.), but those numbers do not carry meaning: The letter 'b' is not "like 'a', but more", which its original encoding would suggest. So we provide the Embedding so that the network can learn how to represent these letters by something useful, e.g. making vowels similar to one another in some way.
During initialization, the embedding vectors are sampled randomly, in the same fashion as the other weights in the model, and they are also optimized with the rest of the model. It is also possible to initialize them from some pretrained embeddings (e.g. from word2vec, GloVe, FastText), but caution must then be exercised not to destroy them by backprop through a randomly initialized model.
Embeddings are not strictly necessary, but it would be very wasteful to force the network to learn that 13314 ('items') is very similar to 89137 ('values'), but completely different from 13315 ('japan'). And it would probably not even remotely converge anyway.
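For illustration, a minimal sketch of the lookup behaviour (the sizes are arbitrary):
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4) # a randomly initialized 10 x 4 table
idx = torch.tensor([1, 3, 3]) # token indices
vecs = emb(idx) # shape (3, 4): rows 1, 3 and 3 of the table
print(emb.weight.requires_grad) # True: the table is a parameter, trained with the model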
| https://stackoverflow.com/questions/64204703/ |
What can I use in Pytorch to remplace caffe's weight filler | I'm writing a pytorch model based on the caffe model below. Do you know how I can write weight filler and bias filler in pytorch ?
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param {
lr_mult: 1
decay_mult: 1
}
convolution_param {
num_output: 128
pad_h: 0
pad_w: 1
kernel_h: 1
kernel_w: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
value: 0
}
}
}
Thank you
| Pytorch has torch.nn.init library to help with init weights of a network.
You probably want to use nn.init.normal_ for the "gaussian" filler, and nn.init.constant_ for the "constant" filler of the bias.
You can use a function to fill the weights of a module m:
def init_weights(m):
if type(m) == nn.Conv2d:
torch.nn.init.normal_(m.weight, std=0.1)
if m.bias is not None:
torch.nn.init.constant_(m.bias, val=0)
# define the net
net = MyCaffeLikeNetwork()
# use the function to init all weights of the net
net.apply(init_weights)
For more information on weight init in pytorch you can look at this detailed answer.
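For completeness, a sketch of the layer from the prototxt with the fillers applied directly (the input channel count is an assumption, since it is not shown in the snippet):
import torch
import torch.nn as nn

conv3 = nn.Conv2d(in_channels=64,     # assumed; not given in the prototxt snippet
                  out_channels=128,   # num_output
                  kernel_size=(1, 3), # kernel_h, kernel_w
                  stride=1,
                  padding=(0, 1))     # pad_h, pad_w
nn.init.normal_(conv3.weight, std=0.1) # weight_filler: gaussian, std 0.1
nn.init.constant_(conv3.bias, val=0)   # bias_filler: constant, value 0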
| https://stackoverflow.com/questions/64205755/ |
PyTorch - RuntimeError: [enforce fail at inline_container.cc:209] . file not found: archive/data.pkl | Problem
I'm trying to load a file using PyTorch, but the error states archive/data.pkl does not exist.
Code
import torch
cachefile = 'cacheddata.pth'
torch.load(cachefile)
Output
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-8edf1f27a4bd> in <module>
1 import torch
2 cachefile = 'cacheddata.pth'
----> 3 torch.load(cachefile)
~/opt/anaconda3/envs/matching/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
582 opened_file.seek(orig_position)
583 return torch.jit.load(opened_file)
--> 584 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
585 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
586
~/opt/anaconda3/envs/matching/lib/python3.8/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, **pickle_load_args)
837
838 # Load the data (which may in turn use `persistent_load` to load tensors)
--> 839 data_file = io.BytesIO(zip_file.get_record('data.pkl'))
840 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
841 unpickler.persistent_load = persistent_load
RuntimeError: [enforce fail at inline_container.cc:209] . file not found: archive/data.pkl
Hypothesis
I'm guessing this has something to do with pickle, from the docs:
This save/load process uses the most intuitive syntax and involves the
least amount of code. Saving a model in this way will save the entire
module using Python’s pickle module. The disadvantage of this approach
is that the serialized data is bound to the specific classes and the
exact directory structure used when the model is saved. The reason for
this is because pickle does not save the model class itself. Rather,
it saves a path to the file containing the class, which is used during
load time. Because of this, your code can break in various ways when
used in other projects or after refactors.
Versions
PyTorch version: 1.6.0
Python version: 3.8.0
| Turned out the file was somehow corrupted. After generating it again it loaded without issue.
| https://stackoverflow.com/questions/64206070/ |
Pytorch model doesn't learn identity function? | I wrote some models in pytorch which was not able to learn anything even after many epochs. In order to debug the problem I made a simple model which models identity function of an input. The difficulty is this model also doesn't learn nothing despite training for 50k epochs,
import torch
import torch.nn as nn
torch.manual_seed(1)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.input = nn.Linear(2,4)
self.hidden = nn.Linear(4,4)
self.output = nn.Linear(4,2)
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
self.dropout = nn.Dropout(0.5)
def forward(self,x):
x = self.input(x)
x = self.dropout(x)
x = self.relu(x)
x = self.hidden(x)
x = self.dropout(x)
x = self.relu(x)
x = self.output(x)
x = self.softmax(x)
return x
X = torch.tensor([[1,0],[1,0],[0,1],[0,1]],dtype=torch.float)
net = Net()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(), lr=0.001)
for i in range(100000):
opt.zero_grad()
y = net(X)
loss = criterion(y,torch.argmax(X,dim=1))
loss.backward()
if i%500 ==0:
print("Epoch: ",i)
print(torch.argmax(y,dim=1).detach().numpy().tolist())
print("Loss: ",loss.item())
print()
Output
Epoch: 52500
[0, 0, 1, 0]
Loss: 0.6554909944534302
Epoch: 53000
[0, 0, 0, 0]
Loss: 0.7004914283752441
Epoch: 53500
[0, 0, 0, 0]
Loss: 0.7156486511230469
Epoch: 54000
[0, 0, 0, 0]
Loss: 0.7171240448951721
Epoch: 54500
[0, 0, 0, 0]
Loss: 0.691678524017334
Epoch: 55000
[0, 0, 0, 0]
Loss: 0.7301554679870605
Epoch: 55500
[0, 0, 0, 0]
Loss: 0.728650689125061
What is wrong with my implementation?
| There are a few mistakes:
Missing optimizer.step():
optimizer.step() updates the parameters based on backpropagated gradients and other accumulated momentum and all.
Usage of softmax with CrossEntropy Loss:
Pytorch's CrossEntropyLoss criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class, i.e. it applies log-softmax to the logits internally. So in your case you are effectively applying softmax twice: log_softmax(softmax(output)). The correct way is to use a linear output layer while training, and apply a softmax layer (or just take the argmax) for prediction.
High dropout value for small network:
Which results in underfitting.
Here's the corrected code:
import torch
import torch.nn as nn
torch.manual_seed(1)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.input = nn.Linear(2,4)
self.hidden = nn.Linear(4,4)
self.output = nn.Linear(4,2)
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
# self.dropout = nn.Dropout(0.0)
def forward(self,x):
x = self.input(x)
# x = self.dropout(x)
x = self.relu(x)
x = self.hidden(x)
# x = self.dropout(x)
x = self.relu(x)
x = self.output(x)
# x = self.softmax(x)
return x
def predict(self, x):
with torch.no_grad():
out = self.forward(x)
return self.softmax(out)
X = torch.tensor([[1,0],[1,0],[0,1],[0,1]],dtype=torch.float)
net = Net()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(), lr=0.001)
for i in range(100000):
opt.zero_grad()
y = net(X)
loss = criterion(y,torch.argmax(X,dim=1))
loss.backward()
# This was missing before
opt.step()
if i%500 ==0:
print("Epoch: ",i)
pred = net.predict(X)
print(f'prediction: {torch.argmax(pred, dim=1).detach().numpy().tolist()}, actual: {torch.argmax(X,dim=1)}')
print("Loss: ", loss.item())
Output:
Epoch: 0
prediction: [0, 0, 0, 0], actual: tensor([0, 0, 1, 1])
Loss: 0.7042869329452515
Epoch: 500
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.1166711300611496
Epoch: 1000
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.05215628445148468
Epoch: 1500
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.02993333339691162
Epoch: 2000
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.01916157826781273
Epoch: 2500
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.01306679006665945
Epoch: 3000
prediction: [0, 0, 1, 1], actual: tensor([0, 0, 1, 1])
Loss: 0.009280549362301826
.
.
.
| https://stackoverflow.com/questions/64206560/ |
Is there a training/validation split happening internally, or is there just one training set and testing set? | So recently I've been following the tutorial in https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html and I came up with the following question: is there a training/validation split happening internally?
The thing is, in this tutorial, the main dataset is split into training and testing. Here, the training set is used for training and the test set in the evaluate() function.
To my knowledge, when dealing with neural networks the data is usually split into 3 sets: training, validation and testing. In this tutorial though, it is only split into training and testing. As far as I know, usually the model is trained and then evaluated, and the weights are then updated according to what was learnt in the evaluation step. However, I can't seem to find any connection between the evaluate function and training. Therefore, in this example the model is being evaluated AND tested using the same dataset.
Is there something here that I might be missing? Is there an internal split of the training dataset happening during training (into training and validation) and the function evaluate() is simply used for testing the performance of the model?
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
|
Is there a training/validation split happening internally?
Is there an internal split of the training dataset happening during
training (into training and validation) and the function evaluate() is
simply used for testing the performance of the model?
No, you are not missing anything. What you see is exactly what's being done there. There is no internal splitting happening. It's just an example to show how something is done in pytorch without cluttering it unncessarily.
Some datasets such as CIFAR10/CIFAR100, come only with a train/test set and usually it has been the norm to just train and then evaluate on the test set in examples. However, nothing stops you from splitting the training set however you like, it's up to you. In such tutorials they just tried to keep everything as simple as possible.
| https://stackoverflow.com/questions/64208188/ |
what's the difference between "self-attention mechanism" and "full-connection" layer? | I am confused with these two structures. In theory, the output of them are all connected to their input. what magic make 'self-attention mechanism' is more powerful than the full-connection layer?
| Ignoring details like normalization, biases, and such, fully connected networks have fixed weights:
f(x) = σ(Wx)
where W is learned in training, and fixed in inference.
Self-attention layers are dynamic, changing the weights as the input changes:
attn(x) = σ(Wx)
f(x) = σ(attn(x) * x)
Again, this is ignoring a lot of details, but there are many different implementations for different applications and you should really check a paper for that.
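To make the contrast concrete, a minimal sketch (the shapes and the scaled dot-product form are illustrative choices, not taken from the answer above):
import torch
import torch.nn as nn

x = torch.randn(5, 16) # 5 tokens, 16 features each

# Fully connected: the mixing weights are fixed after training.
fc = nn.Linear(16, 16)
out_fc = fc(x)

# Self-attention: the mixing weights are computed from the input itself.
wq, wk, wv = nn.Linear(16, 16), nn.Linear(16, 16), nn.Linear(16, 16)
attn = torch.softmax(wq(x) @ wk(x).T / 16 ** 0.5, dim=-1) # (5, 5), input-dependent
out_attn = attn @ wv(x)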
| https://stackoverflow.com/questions/64218678/ |
What is the idea behind using nn.Identity for residual learning? | So, I've read about half the original ResNet paper, and am trying to figure out how to make my version for tabular data.
I've read a few blog posts on how it works in PyTorch, and I see heavy use of nn.Identity(). Now, the paper also frequently uses the term identity mapping. However, it just refers to adding the input for a stack of layers the output of that same stack in an element-wise fashion. If the in and out dimensions are different, then the paper talks about padding the input with zeros or using a matrix W_s to project the input to a different dimension.
Here is an abstraction of a residual block I found in a blog post:
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, activation='relu'):
super().__init__()
self.in_channels, self.out_channels, self.activation = in_channels, out_channels, activation
self.blocks = nn.Identity()
self.shortcut = nn.Identity()
def forward(self, x):
residual = x
if self.should_apply_shortcut: residual = self.shortcut(x)
x = self.blocks(x)
x += residual
return x
@property
def should_apply_shortcut(self):
return self.in_channels != self.out_channels
block1 = ResidualBlock(4, 4)
And my own application to a dummy tensor:
x = tensor([1, 1, 2, 2])
block1 = ResidualBlock(4, 4)
block2 = ResidualBlock(4, 6)
x = block1(x)
print(x)
x = block2(x)
print(x)
>>> tensor([2, 2, 4, 4])
>>> tensor([4, 4, 8, 8])
So at the end of it, x = nn.Identity(x) and I'm not sure the point of its use except to mimic math lingo found in the original paper. I'm sure that's not the case though, and that it has some hidden use that I'm just not seeing yet. What could it be?
EDIT Here is another example of implementing residual learning, this time in Keras. It does just what I suggested above and just keeps a copy of the input for adding to the output:
def residual_block(x: Tensor, downsample: bool, filters: int, kernel_size: int = 3) -> Tensor:
y = Conv2D(kernel_size=kernel_size,
strides= (1 if not downsample else 2),
filters=filters,
padding="same")(x)
y = relu_bn(y)
y = Conv2D(kernel_size=kernel_size,
strides=1,
filters=filters,
padding="same")(y)
if downsample:
x = Conv2D(kernel_size=1,
strides=2,
filters=filters,
padding="same")(x)
out = Add()([x, y])
out = relu_bn(out)
return out
|
What is the idea behind using nn.Identity for residual learning?
There is none (almost, see the end of the post); all nn.Identity does is forward the input given to it (basically a no-op).
As shown in the PyTorch repo issue you linked in a comment, this idea was first rejected and later merged into PyTorch due to other uses (see the rationale in this PR). This rationale is not connected to the ResNet block itself, see the end of the answer.
ResNet implementation
Easiest generic version I can think of with projection would be something along those lines:
class Residual(torch.nn.Module):
def __init__(self, module: torch.nn.Module, projection: torch.nn.Module = None):
super().__init__()
self.module = module
self.projection = projection
def forward(self, inputs):
output = self.module(inputs)
if self.projection is not None:
inputs = self.projection(inputs)
return output + inputs
You can pass as module things like two stacked convolutions and add 1x1 convolution (with padding or with strides or something) as projection module.
For tabular data you could use this as module (assuming your input has 50 features):
torch.nn.Sequential(
torch.nn.Linear(50, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 50),
)
Basically, all you have to do is add the input of some module to its output, and that is it.
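Putting the pieces together, a minimal sketch of using the Residual class above (the 50-feature size just follows the running example):
import torch

block = Residual(
    torch.nn.Sequential(
        torch.nn.Linear(50, 50),
        torch.nn.ReLU(),
        torch.nn.Linear(50, 50),
    )
)
out = block(torch.randn(8, 50)) # out = module(input) + input, shape (8, 50)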
Rationale behind nn.Identity
It might be easier to construct neural networks (and read them afterwards), example for batch norm (taken from aforementioned PR):
batch_norm = nn.BatchNorm2d
if dont_use_batch_norm:
batch_norm = Identity
Now you can use it with nn.Sequential easily:
nn.Sequential(
...
batch_norm(N, momentum=0.05),
...
)
And when printing the network it always has the same number of submodules (with either BatchNorm or Identity) which also makes the whole thing a little smoother IMO.
Another use case, mentioned here might be removing parts of existing neural networks:
net = tv.models.alexnet(pretrained=True)
# Assume net has two parts
# features and classifier
net.classifier = Identity()
Now, instead of running net.features(input) you can run net(input) which might be easier for others to read as well.
| https://stackoverflow.com/questions/64229717/ |
PyTorch VERY different results on different machines using docker and CPU | I am trying to create some CI tests for a deep learning model and therefore created a Dockerfile to install PyTorch and all my requirements and finally run the tests.
FROM pytorch/pytorch
ADD . / project/
RUN (cd project/; pip install -r requirements.txt)
CMD ( cd project/; pytest -v --cov=my_project)
The tests are basically computing an image from 0 - 1 and comparing it to a reference image (saved as npy). The test is checking
if the avg L2 norm of the pixels is below a threshold of 1e-7.
diff_image = np.linalg.norm(target_image_np - reference_image_np, axis=2)
avg_error = np.mean(diff_image)
assert avg_error < 1e-7
The tests pass 12/15 test cases. However, 3 cases are failing quite badly.
=========================== short test summary info ============================
FAILED test_nst.py::test_nst_gatys - assert 0.0021541715 < 1e-07
FAILED test_nst.py::test_nst_gatys_style - assert 0.12900369 < 1e-07
FAILED test_nst.py::test_nst_wct - assert 0.027357593 < 1e-07
=================== 3 failed, 12 passed in 670.27s (0:11:10) ===================
The weird thing is that this ONLY happens on the CI server. On my local machine, all tests pass. Does anybody have an idea why this is happening? As far as I know, using the CPU as well as fixed seeds should return at least results which differ only numerically.
Thanks for any feedback!
| I found a solution myself. It is required to set the number of threads to one. Finally, this is all the code required to get reproducible results on different machines.
np.random.seed(42)
torch.manual_seed(42)
os.environ["PYTHONHASHSEED"] = "42"
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.set_num_threads(1)
| https://stackoverflow.com/questions/64240440/ |
Top K indices of a multi-dimensional tensor | I have a 2D tensor and I want to get the indices of the top k values. I know about pytorch's topk function. The problem with pytorch's topk function is, it computes the topk values over some dimension. I want to get topk values over both dimensions.
For example for the following tensor
a = torch.tensor([[4, 9, 7, 4, 0],
[8, 1, 3, 1, 0],
[9, 8, 4, 4, 8],
[0, 9, 4, 7, 8],
[8, 8, 0, 1, 4]])
pytorch's topk function will give me the following.
values, indices = torch.topk(a, 3)
print(indices)
# tensor([[1, 2, 0],
# [0, 2, 1],
# [0, 1, 4],
# [1, 4, 3],
# [1, 0, 4]])
But I want to get the following
tensor([[0, 1],
[2, 0],
[3, 1]])
This is the indices of 9 in the 2D tensor.
Is there any approach to achieve this using pytorch?
| import torch
import numpy as np
v, i = torch.topk(a.flatten(), 3)
print(np.array(np.unravel_index(i.numpy(), a.shape)).T)
Output:
[[3 1]
[2 0]
[0 1]]
Flatten and find top k
Convert 1D indices to 2D using unravel_index
| https://stackoverflow.com/questions/64241325/ |
Loading images in PyTorch | I am new to PyTorch and working on a GAN model. I want to load my image dataset. The way its done using Keras is:
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
def load_images(path, size=(128,128)):
data_list = list()
# enumerate filenames in directory, assume all are images
for filename in listdir(path):
# load and resize the image
pixels = load_img(path + filename, target_size=size)
# convert to numpy array
pixels = img_to_array(pixels)
# store.
data_list.append(pixels)
return asarray(data_list)
# dataset path
path = 'mypath/'
# load dataset A
dataA = load_images(path + 'A/')
dataAB = load_images(path + 'B/')
I want to know how to do the same in PyTorch.
Any help is appreciated. Thanks
| import torchvision, torch
from torchvision import datasets, models, transforms
def load_training(root_path, dir, batch_size, kwargs):
transform = transforms.Compose(
[transforms.Resize([256, 256]),
transforms.RandomCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor()])
data = datasets.ImageFolder(root=root_path + dir, transform=transform)
train_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True, drop_last=True, **kwargs)
return train_loader
I hope it'll work ...
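For example, a hypothetical call mirroring the Keras snippet (the path is a placeholder, and ImageFolder expects class subfolders inside mypath/A/):
kwargs = {'num_workers': 1, 'pin_memory': True}
dataA_loader = load_training('mypath/', 'A/', 32, kwargs)
for images, labels in dataA_loader:
    print(images.shape) # torch.Size([32, 3, 224, 224])
    break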
| https://stackoverflow.com/questions/64245159/ |
what is the function of default_loader in torch? | import os
import pandas as pd
import numpy as np
from torchvision.datasets.folder import default_loader
from torchvision.datasets.utils import download_url
from torch.utils.data import Dataset
class Sample_Class(Dataset):
def __init__(self,root,train=True,transform=None,loader=default_loader):
self.root = os.path.expanduser(root)
self.transform = transform
self.loader = default_loader
In the above code snippet, what is the significance of loader=default_loader, and what exactly does it do?
| This Sample_Class is likely imitating the behavior of ImageFolder, DatasetFolder, and ImageNet. The function should take a filename as input and return either a PIL.Image or accimage.Image depending on the selected image backend.
The default_loader function is defined in torchvision/datasets/folder.py
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
def accimage_loader(path):
import accimage
try:
return accimage.Image(path)
except IOError:
# Potentially a decoding problem, fall back to PIL.Image
return pil_loader(path)
def default_loader(path):
from torchvision import get_image_backend
if get_image_backend() == 'accimage':
return accimage_loader(path)
else:
return pil_loader(path)
Note: default_loader will use the PIL reader by default (unless the accimage backend has been selected).
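For instance, a minimal sketch of calling the loader directly (the file path is a placeholder):
from torchvision.datasets.folder import default_loader

img = default_loader('data/cat.png') # a PIL.Image in RGB mode with the default backend
print(img.size, img.mode)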
| https://stackoverflow.com/questions/64251669/ |
Sliding Window on 2D Tensor using PyTorch | How can we use a sliding window on a 2D PyTorch tensor t with shape (6, 10) such that we end up with a 3D PyTorch tensor with shape (3, 4, 10)?
For example, if we have the tensor t:
t = torch.range(1, 6*10).reshape((6, 10))
tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[11., 12., 13., 14., 15., 16., 17., 18., 19., 20.],
[21., 22., 23., 24., 25., 26., 27., 28., 29., 30.],
[31., 32., 33., 34., 35., 36., 37., 38., 39., 40.],
[41., 42., 43., 44., 45., 46., 47., 48., 49., 50.],
[51., 52., 53., 54., 55., 56., 57., 58., 59., 60.]]
how can we reshape it (using PyTorch) such that we get the tensor:
tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[11., 12., 13., 14., 15., 16., 17., 18., 19., 20.],
[21., 22., 23., 24., 25., 26., 27., 28., 29., 30.],
[31., 32., 33., 34., 35., 36., 37., 38., 39., 40.]],
[[11., 12., 13., 14., 15., 16., 17., 18., 19., 20.],
[21., 22., 23., 24., 25., 26., 27., 28., 29., 30.],
[31., 32., 33., 34., 35., 36., 37., 38., 39., 40.],
[41., 42., 43., 44., 45., 46., 47., 48., 49., 50.]],
[[21., 22., 23., 24., 25., 26., 27., 28., 29., 30.],
[31., 32., 33., 34., 35., 36., 37., 38., 39., 40.],
[41., 42., 43., 44., 45., 46., 47., 48., 49., 50.],
[51., 52., 53., 54., 55., 56., 57., 58., 59., 60.]])
| Looks like a case for unfold:
t.unfold(0, 4, 1).transpose(2, 1) # unfold: windows of size 4 along dim 0, step 1 -> (3, 10, 4); transpose -> (3, 4, 10)
Output:
tensor([[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]],
[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]],
[[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59]]])
| https://stackoverflow.com/questions/64252131/ |
Found no NVIDIA driver on your system error on WSL2 conda environment with Python 3.7 | I have a Nvidia 1080Ti GPU, and I want to run Pytorch on WSL2, but I got error "Found no NVIDIA driver on your system" but I did installed the NVIDIA driver. Here is the step I did.
I installed WSL2, and installed NVIDIA driver for Cuda on WSL from GeForce Driver:
https://developer.nvidia.com/cuda/wsl/download
I activate a clean conda environment with Python 3.7
Then I run the Pytorch installation:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
Then the error occurred saying Found no NVIDIA driver. I came across a post on Pytorch forum, and someone did get it to run in a similar settings: Ubuntu 18.04 + Conda + Pytorch
https://discuss.pytorch.org/t/found-no-nvidia-driver-on-your-system-but-its-there/35063/4
I don't have multiple GPUs, so I don't know how to get my driver recognized in WSL2. Thank you for any thoughts!
| What's your Windows version? (Run winver.exe)
You need to run Windows Insider build 20145 or later in order to use CUDA in WSL2.
You will know the GPU is detected if the /dev/dxg file exists.
| https://stackoverflow.com/questions/64256241/ |
Libtorch C++ - no matching member function for call to 'size' for InterpolateFuncOptions | Using Libtorch 1.6.0 in C++, I get the following error:
error: no matching member function for call to 'size'
My line is the following:
image = F::interpolate(image, F::InterpolateFuncOptions().size({target_height, target_width}).mode(torch::kNearest));
But in the documentation it seems correct... Any idea?
Thanks in advance
| You should wrap it with std::vector like this:
image = F::interpolate(image,
F::InterpolateFuncOptions()
           .size(std::vector<int64_t>{target_height, target_width})
.mode(torch::kNearest));
Reason for this is size has no overloaded call for std::initializer_list that you were trying to use (see size docs here).
| https://stackoverflow.com/questions/64258996/ |
Using Captum with Pytorch Lightning? | So I tried to use Captum with PyTorch Lightning. I am having issues when passing the Module to Captum, since it seems to do weird reshaping of the tensors.
For example in the below minimal example, the lightning code works easy and well.
But when I use IntegratedGradient with "n_step>=1" I get an issue.
The code of the LighningModule is not that important I would say, I wonder more at the code line at the very bottom.
Does anyone know how to work around this?
from captum.attr import IntegratedGradients
from torch import nn, optim, rand, sum as tsum, reshape, device
import torch.nn.functional as F
from pytorch_lightning import seed_everything, LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset
SAMPLE_DIM = 3
class CustomDataset(Dataset):
def __init__(self, samples=42):
self.dataset = rand(samples, SAMPLE_DIM).cuda().float() * 2 - 1
def __getitem__(self, index):
return (self.dataset[index], (tsum(self.dataset[index]) > 0).cuda().float())
def __len__(self):
return self.dataset.size()[0]
class OurModel(LightningModule):
def __init__(self):
super(OurModel, self).__init__()
# Network layers
self.linear = nn.Linear(SAMPLE_DIM, 2048)
self.linear2 = nn.Linear(2048, 1)
self.output = nn.Sigmoid()
# Hyper-parameters, that we will auto-tune using lightning!
self.lr = 0.001
self.batch_size = 512
def forward(self, x):
x = self.linear(x)
x = self.linear2(x)
output = self.output(x)
return reshape(output, (-1,))
def configure_optimizers(self):
return optim.Adam(self.parameters(), lr=self.lr)
def train_dataloader(self):
loader = DataLoader(CustomDataset(samples=1000), batch_size=self.batch_size, shuffle=True)
return loader
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.binary_cross_entropy(self(x), y)
return {'loss': loss, 'log': {'train_loss': loss}}
if __name__ == '__main__':
seed_everything(42)
device = device("cuda")
model = OurModel().to(device)
trainer = Trainer(max_epochs=2, min_epochs=1, auto_lr_find=False,
progress_bar_refresh_rate=10)
trainer.fit(model)
# ok Now the Problem
test_input = CustomDataset(samples=1).__getitem__(0)[0].requires_grad_()
ig = IntegratedGradients(model)
attr, delta = ig.attribute(test_input, target=1, return_convergence_delta=True)
| The solution was to wrap the forward function. Make sure that the shape going into model.forward() is correct!
# Solution is this wrapper function
def modified_f(in_vec):
# Shape here is wrong
print("IN:", in_vec.size())
x = torch.reshape(in_vec, (int(in_vec.size()[0]/SAMPLE_DIM), SAMPLE_DIM))
print("x:", x.size())
res = model.forward(x)
print("res:", res.size())
res = torch.reshape(res, (res.size()[0], 1))
print("res2:", res.size())
return res
ig = IntegratedGradients(modified_f)
attr, delta = ig.attribute(test_input, return_convergence_delta=True, n_steps=STEP_AMOUNT) # STEP_AMOUNT: the desired number of integration steps, e.g. 50
| https://stackoverflow.com/questions/64259774/ |
pytorch: compute vector-Jacobian product for vector function | Good day!
I am trying to grasp torch.autograd basics. In particular I want to test this statement from https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py
So my idea is to construct a vector function, say:
(y_1; y_2; y_3) = (x_1*x_1 + x_2; x_2 * x_2 + x_3; x_3 * x_3)
Then count Jacobian matrix at point (1,1,1) and multiply it on vector (3, 5, 7).
Jacobian = (2x_1; 1. ; 0. )
(0. ; 2x_2 ; 1. )
(0. ; 0. ; 2x_3)
I am expecting result Jacobian(x=(1,1,1)) * v = (6+5, 10 + 7, 2 * 7) = (11, 17, 14).
Now below is my attempt to do it in pytorch:
import torch
x = torch.ones(3, requires_grad=True)
print(x)
y = torch.tensor([x[0]**2 + x [1], x[1]**2 + x[2], x[2]**2], requires_grad=True)
print(y)
v = torch.tensor([3, 5, 7])
y.backward(v)
x.grad
which gives not expected result (2., 2., 1.). I think I define tensor y in a wrong way. If I simply do y = x * 2, then gradient will work, but what about creating more complex tensor like in this case?
Thank you.
| You should not define tensor y with torch.tensor(); torch.tensor() is a tensor constructor, not a differentiable operation, so it is not tracked in the computation graph. You should use torch.stack() instead.
Just change that line to:
y = torch.stack((x[0]**2+x[1], x[1]**2+x[2], x[2]**2))
the result of x.grad should be tensor([ 6., 13., 19.])
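For reference, a minimal sketch of the full corrected snippet (note that v must be a float tensor for backward):
import torch

x = torch.ones(3, requires_grad=True)
y = torch.stack((x[0]**2 + x[1], x[1]**2 + x[2], x[2]**2))
v = torch.tensor([3., 5., 7.])
y.backward(v) # computes the vector-Jacobian product v^T J
print(x.grad) # tensor([ 6., 13., 19.])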
| https://stackoverflow.com/questions/64260561/ |
How to restructure the output tensor of a cnn layer for use by a linear layer in a simple pytorch model | Given a pytorch input dataset with dimensions:
dat.shape = torch.Size([128, 3, 64, 64])
This is a supervised learning problem: we have a separate labels.txt file containing one of C classes for each input observation. The value of C is calculated from the number of distinct values in the labels file and is presently in the single digits.
I could use assistance on how to mesh the layers of a simple mix of convolutional and linear layers network that is performing multiclass classification. The intent is to pass through:
two cnn layers with maxpooling after each
a linear "readout" layer
softmax activation before the output/labels
Here is the core of my (faulty/broken) network. I am unable to determine the proper size/shape required of:
Output of Convolutional layer -> Input of Linear [Readout] layer
class CNNClassifier(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, 3)
self.maxpool = nn.MaxPool2d(kernel_size=3,padding=1)
self.conv2 = nn.Conv2d(16, 32, 3)
self.linear1 = nn.Linear(32*16*16, C)
self.softmax1 = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.conv1(x)
x = self.maxpool(F.leaky_relu(x))
x = self.conv2(x)
x = self.maxpool(F.leaky_relu(x))
x = self.linear1(x) # Size mismatch error HERE
x = self.softmax1(x)
return x
Training of the model is started by :
Xout = model(dat)
This results in :
RuntimeError: size mismatch, m1: [128 x 1568], m2: [8192 x 6]
at the linear1 input. What is needed here? Note I have seen uses of wildcard input sizes, e.g. via a view:
..
x = x.view(x.size(0), -1)
x = self.linear1(x) # Size mismatch error HERE
If that is included then the error changes to
RuntimeError: size mismatch, m1: [28672 x 7], m2: [8192 x 6]
Some pointers on how to think about and calculate the cnn layer / linear layer input/output sizes would be much appreciated.
| The error
You have miscalculated the output size from convolutional stack. It is actually [batch, 32, 7, 7] instead of [batch, 32, 16, 16].
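For reference, with these settings each 3x3 convolution without padding shrinks the spatial size by 2, and nn.MaxPool2d(kernel_size=3, padding=1) (stride defaults to the kernel size, 3) maps a size H to floor((H + 2 - 3)/3) + 1. So: 64 -> conv1 -> 62 -> maxpool -> 21 -> conv2 -> 19 -> maxpool -> 7, giving 32 * 7 * 7 = 1568 features per sample, which matches the m1: [128 x 1568] in the error message.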
You have to use reshape (or view) as output from Conv2d has 4 dimensions ([batch, channels, width, height]), while input to nn.Linear is required to have 2 dimensions ([batch, features]).
Use this for nn.Linear:
self.linear1 = nn.Linear(32 * 7 * 7, C)
And this in forward:
x = self.linear1(x.view(x.shape[0], -1))
Other possibilities
Current new architectures use pooling across channels (usually called global pooling). In PyTorch there is an torch.nn.AdaptiveAvgPool2d (or Max pooling). Using this approach allows you to have variable size of height and width of your input image as only one value per channel is used as input to nn.Linear. This is how it looks:
class CNNClassifier(torch.nn.Module):
def __init__(self, C=10):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, 3)
self.maxpool = nn.MaxPool2d(kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3)
self.pooling = torch.nn.AdaptiveAvgPool2d(output_size=1)
self.linear1 = nn.Linear(32, C)
self.softmax1 = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.conv1(x)
x = self.maxpool(F.leaky_relu(x))
x = self.conv2(x)
x = self.maxpool(F.leaky_relu(x))
x = self.linear1(self.pooling(x).view(x.shape[0], -1))
x = self.softmax1(x)
return x
So now images of torch.Size([128, 3, 64, 64]) and torch.Size([128, 3, 128, 128]) can be passed to the network.
| https://stackoverflow.com/questions/64263839/ |
Restoring mask to image | My PyTorch model outputs a segmented image with values (0,1,2) for each one of the three classes. During the preparation of the set, I mapped black to 0, red to 1 and white to 2. I have two questions:
How can I show what each class represents? For example, take a look at the image:
I am currently using the following method to show each class:
output = net(input)
input = input.cpu().squeeze()
input = transforms.ToPILImage()(input)
probs = F.softmax(output, dim=1)
probs = probs.squeeze(0)
full_mask = probs.squeeze().cpu().numpy()
fig, (ax0, ax1, ax2, ax3, ax4) = plt.subplots(1, 5, figsize=(20,10), sharey=True)
ax0.set_title('Input Image')
ax1.set_title('Background Class')
ax2.set_title('Neuron Class')
ax3.set_title('Dendrite Class')
ax4.set_title('Predicted Mask')
ax0.imshow(input)
ax1.imshow(full_mask[0, :, :].squeeze())
ax2.imshow(full_mask[1, :, :].squeeze())
ax3.imshow(full_mask[2, :, :].squeeze())
full_mask = np.argmax(full_mask, 0)
img = mask_to_image(full_mask)
But there appear to be shared pixels between the classes. Is there a better way to show this? (I want the first image to be only the background class, the second only the neuron class, and the third only the dendrite class.)
2. My second question is about generating a black, red and white image from the mask. Currently the mask is of shape (512,512) and has the following values:
[[0 0 0 ... 0 0 0]
[0 0 0 ... 2 0 0]
[0 0 0 ... 2 2 0]
...
[2 1 2 ... 2 2 2]
[2 1 2 ... 2 2 2]
[0 2 0 ... 2 2 2]]
And the results look like this:
Since I am using this code to convert to image:
def mask_to_image(mask):
return Image.fromarray((mask).astype(np.uint8))
|
But there appear to be shared pixels between the classes. Is there a
better way to show this? (I want the first image to be only the
background class, the second only the neuron class, and the third
only the dendrite class.)
Yes, you can take the argmax along the 0th dimension, so each pixel gets the index of the class with the highest logit (unnormalized probability); comparing against a class index then yields a clean, non-overlapping binary mask per class:
output = net(input)
class_mask = torch.argmax(output, dim=0).cpu().numpy()
ax.set_title('Background Class')
ax.imshow(class_mask == 0)
My second question is about generating a black, red and white image
from the mask. Currently the mask is of shape (512,512) and has the
following values
You can spread the [0, 1, 2] values along a new zero-th axis, making the mask channel-wise. Then [0, 0, 0] across the channels for a single pixel will be black, [255, 255, 255] will be white, and [255, 0, 0] will be red (as PIL uses the RGB format):
import numpy as np
import torch
from PIL import Image

tensor = torch.randint(high=3, size=(512, 512)) # stand-in for the predicted mask
red = tensor == 1   # per the question's mapping: black -> 0, red -> 1, white -> 2
white = tensor == 2
red_channel = red | white # channel 0 (R) must be 255 for both red and white pixels
image = torch.stack([red_channel, white, white]).int().numpy() * 255
Image.fromarray(image.transpose(1, 2, 0).astype(np.uint8)) # HWC layout for PIL
| https://stackoverflow.com/questions/64264678/ |
Pytorch why is .float() needed here for RuntimeError: expected scalar type Float but found Double | Simple question: I wanted to experiment with the simplest possible network, but I kept running into RuntimeError: expected scalar type Float but found Double unless I casted data into .float() (see the code below with the comment)
What I don't understand is: why is this casting needed? data is already a torch.float64 type. Why is the explicit re-casting in the output = model(data.float()) line needed?
Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
from sklearn.datasets import make_classification
from torch.utils.data import TensorDataset, DataLoader
# =============================================================================
# Simplest Example
# =============================================================================
X, y = make_classification()
X, y = torch.tensor(X), torch.tensor(y)
print("X Shape :{}".format(X.shape))
print("y Shape :{}".format(y.shape))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(X.shape[1], 128)
self.fc2 = nn.Linear(128, 10)
self.fc3 = nn.Linear(10, 2)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
device = torch.device("cuda")
lr = 1
batch_size = 32
gamma = 0.7
epochs = 14
args = {'log_interval': 10, 'dry_run':False}
kwargs = {'batch_size': batch_size}
kwargs.update({'num_workers': 1,
'pin_memory': True,
'shuffle': True},
)
model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = StepLR(optimizer, step_size=1, gamma=gamma)
my_dataset = TensorDataset(X,y) # create dataset
train_loader = DataLoader(my_dataset,**kwargs) #generate dataloader
cross_entropy_loss = torch.nn.CrossEntropyLoss()
for epoch in range(1, epochs + 1):
## Train step ##
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data.float()) #HERE: why is .float() needed here?
loss = cross_entropy_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args['log_interval'] == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
if args['dry_run']:
break
scheduler.step()
| In PyTorch, 64-bit floating point corresponds to torch.float64 or torch.double,
while 32-bit floating point corresponds to torch.float32 or torch.float.
Thus,
data is already a torch.float64 type
i.e. data is a 64-bit floating point type (torch.double).
By casting it using .float(), you convert it into 32-bit floating point.
a = torch.tensor([[1., -1.], [1., -1.]], dtype=torch.double)
print(a.dtype)
# torch.float64
print(a.float().dtype)
# torch.float32
Check different data types in PyTorch.
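As an aside (not part of the original answer), a common alternative is to fix the dtype once when creating the tensors, instead of casting at every forward call:
X, y = make_classification()
X = torch.tensor(X, dtype=torch.float32) # match the model's default float32 parameters
y = torch.tensor(y, dtype=torch.long)    # CrossEntropyLoss expects integer class targets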
| https://stackoverflow.com/questions/64268046/ |
Understanding the reason behind 'RuntimeError: leaf variable has been moved into the graph interior' | I am trying to understand pytorch and how autograd works in it. I tried creating a tensor by filling it with values from other tensors and then checking the gradients. However, I am running into RuntimeError: leaf variable has been moved into the graph interior if I don't set requires_grad equal to False.
code:
x = torch.ones(3,5,requires_grad=True)
y = x+2
z = y*y*3
out1 = z.mean()
out2 = 2*z.mean()
outi = torch.empty(2,requires_grad=True)
outi[0] = out1
outi[1] = out2
outi.backward(torch.tensor([0.,1.]))
output:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-22-1000fc52a64c> in <module>
13 outi[1] = out2
14
---> 15 outi.backward(torch.tensor([0.,1.]))
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
183 products. Defaults to ``False``.
184 """
--> 185 torch.autograd.backward(self, gradient, retain_graph, create_graph)
186
187 def register_hook(self, hook):
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
123 retain_graph = create_graph
124
--> 125 Variable._execution_engine.run_backward(
126 tensors, grad_tensors, retain_graph, create_graph,
127 allow_unreachable=True) # allow_unreachable flag
RuntimeError: leaf variable has been moved into the graph interior
however, I can change the requires_grad to False and it will all work just fine
x = torch.ones(3,5,requires_grad=True)
y = x+2
z = y*y*3
out1 = z.mean()
out2 = 2*z.mean()
outi = torch.empty(2,requires_grad=False)
outi[0] = out1
outi[1] = out2
outi.backward(torch.tensor([0.,1.]))
output:
empty. it worked
Can somebody please help me understand what happened under the hood, and what changed when setting requires_grad to True that resulted in this behavior? Thank you for reading.
| Intro
First, definition of what a leaf variable in PyTorch is, you can check official documentation for tensor.is_leaf (emphasis mine):
All Tensors that have requires_grad which is False will be leaf
Tensors by convention.
For Tensors that have requires_grad which is True, they will be
leaf Tensors if they were created by the user. This means that they
are not the result of an operation and so grad_fn is None.
So let's see how this looks for the outi variable in the original code. Immediately after creation, running this snippet:
outi = torch.empty(2, requires_grad=True)
print(outi.is_leaf, outi.grad_fn, outi.requires_grad)
gives:
True, None, True
as it was created by user and there is no previous operation creating it so it should be the second bolded case from the above citation.
Now this line:
outi[0] = out1
outi[1] = out2
Uses two nodes which are not leafs and are part of the graph that goes back to x (which is the only leaf in it). By doing this, outi is also part of the original x graph and would have to be backpropagated through, yet you specified it as a leaf (more on that later), and leaves cannot be backpropagated through (by definition they either don't require gradient or are created by the user). The leaf version of outi was already put on the graph; after the above assignment, this snippet:
print(outi.is_leaf, outi.grad_fn, outi.requires_grad)
changes to:
False <CopySlices object at 0x7f2dfa83a3d0> True
Error
Now, I agree it's a pretty uninformative error given that changing requires_grad=False does not make it a non-leaf variable (requires_grad=False is implicit):
outi = torch.empty(2)
print(outi.is_leaf, outi.grad_fn, outi.requires_grad)
# > True None False
But this tensor could be "upgraded" to non-leaf tensor if you use assignment as you did without breaking the expected behaviour.
Why? Because you implicitly (or explicitly in case of your code) said you don't need gradient for this variable and PyTorch retains gradient only for leaf variables (unless you specify .retain_grad for specific tensor) due to memory optimization. So the only change here would be it will no longer be a leaf, but this would not break promises as .grad would be None anyway.
If you were to have requires_grad=True as you originally did you could, reasonably, according to PyTorch semantics, think that this:
outi.grad
Will give you a tensor with gradient. But if this requires_grad=True tensor were to be changed to non-leaf tensor, then, by definition it wouldn't have this field (as non-leaf tensors have .grad=None).
To me it seems like a design decision on their part to avoid confusion with requires_grad=True and breaking expected user experience.
BTW. If they were to disallow leaf variables inside the graph, then the operation which works fine now (requires_grad=False) should be disallowed as well. But as requires_grad=False is implicit and often used (creating tensors or something like you did), it seems not too much of a stretch to allow it. Disallowing it would be much more severe. On the other hand, if you specify requires_grad=True it could be assumed you know better what you are doing and really need that gradient here.
BTW2. This explanation might be a stretch but hopefully will shed some light. I haven't found anything official regarding this error (admittedly though I didn't dig too deep).
Some resources here, here (this one is important, someone was asking for justification of some design decisions though didn't get one AFAIK).
Comments
Comment 1
I think the requires_grad is getting inherited from the slice and also
.grad is available.
Yes, it has requires_grad as True also as it's part of the graph now, BUT grad is not available as it is no longer a leaf. Printing outi.grad after backward gives you None and the following warning:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor
is being accessed. Its .grad attribute won't be populated during
autograd.backward(). If you indeed want the gradient for a non-leaf
Tensor, use .retain_grad() on the non-leaf Tensor. If you access the
non-leaf Tensor by mistake, make sure you access the leaf Tensor
instead. See github.com/pytorch/pytorch/pull/30531 for more
informations.
So the .grad attribute is None anyway, as the user would expect after giving requires_grad=False as a creation argument. The user could expect the gradient to be not None if they set requires_grad=True, and that's when PyTorch raises the error, IMO due to the possible inconsistency with user expectations in that case.
Comment 2
For example:
a = torch.ones(2,requires_grad=False)
b = 2*a
b.requires_grad=True
print(b.is_leaf) #True
I have changed your code a little to go through it step by step:
a = torch.ones(2, requires_grad=False)
print(a.is_leaf) # True
We should start with a here, a is a leaf according to docs as:
All Tensors that have requires_grad which is False will be leaf
Tensors by convention.
b = a * 2
print(b.is_leaf)
Now b is leaf as it does not require gradient (because a does not need a gradient it doesn't have to be backpropagated through this branch). Manipulating tensors with requires_grad=False creates tensors which do not require_grad otherwise it would be wasteful and non-sensical to turn it on.
b.requires_grad = True
print(b.is_leaf)
Now this one returns True as well. Once again, docs wording might not be the best here (as I've stated before), but (my additions in bold):
For Tensors that have requires_grad which is True (our case now)
they will be leaf Tensors if they were created by the user (debatable about creation here as you have modified the existing one, indeed). This means that they are not the result of an operation and so grad_fn is None (this one IMO clarifies the previous point)
About clarification- as this tensor is not a result of any mathematical operation, you simply said you want this b tensor to require_grad.
IMO it is a user created tensor as it was placed (created) on graph for the first time (before there was no need for that as it didn't require gradient).
And it does have it's requires_grad set to True, you did it explicitly here.
Comment 3 & 4
Everything with requires_grad=True is on the graph
Yes, but something with requires_grad=False can be on a graph as well if it is a leaf. Actually every PyTorch operation is created and added dynamically onto computational graph, here we use simplification: it's on graph if it takes part in backpropagation. For example neural network parameters are leafs yet they are on graph as the last part during backpropagation and they have their gradients (as they are optimized they need to be in graph in order to backprop through it).
Everything not on the graph is a leaf
Yes, essentially
everything on the graph that is not the product of operation on graph
tensors is a leaf
Yes, if you add some tensor to it (e.g. created by torch.randn or such) is a leaf
every leaf on the graph and non-leaf where I set retain_grad=True
manually will get .grad attribute populated.
Yes, if it is part of backpropagation, which is almost always the case in our "mental graph" case (I think at least). Unless it already has requires_grad=True; in this case it will be populated with gradient. Basically, except for creation, you shouldn't tinker with setting requires_grad=True as it is prone to fail (as you saw) and will definitely raise some eyebrows for other people reading your code.
every non-leaf on the graph has a grad_fn associated with it
Yes, that follows as some operation had to create it (and if it was created by some operation and this operation is differentiable, grad_fn is registered to be used during backward() call).
| https://stackoverflow.com/questions/64272718/ |
Segmentation Fault when exporting to onnx a quantized Pytorch model | I am trying to export a model to the onnx format. The architecture is complicated so I won't share it here, but basically, I have the network weights in a .pth file. I'm able to load them, create the network and perform inference with it.
It's important to note that I have adapted the code to be able to quantize the network. I have added quantize and dequantize operators as well as some torch.nn.quantized.FloatFunctional() operators.
However, whenever I try to export it with
torch.onnx.export(torch_model, # model being run
input_example, # model input
model_name, # where to save the model
export_params=True, # store the trained parameter
opset_version=11, # the ONNX version to export
# the model to
do_constant_folding=True, # whether to execute constant
# folding for optimization
)
I get Segmentation fault (core dumped)
I am working on Ubuntu 20, with the following packages installed :
torch==1.6.0
torchvision==0.7.0
onnx==1.7.0
onnxruntime==1.4.0
Note that, according to some prints I have left in the code, the inference part of the export completes. The segmentation fault happens afterward.
Does anyone see any reason why this may happen ?
[Edit]: I can export my network when it is not adapted for quantized operations. Therefore, the problem is not a broken installation but rather a problem with some quantized operators during the ONNX export.
Well, it turns out that ONNX does not support quantized models (but does not warn you in any way when running; it just throws out a segfault). It does not seem to be on the agenda yet, so a solution can be to use TensorRT.
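If you want to confirm that it is the quantization that trips the exporter, a quick hedged check for quantization-related submodules (torch_model is the model from the question):
has_quant = any('quant' in type(m).__module__ for m in torch_model.modules())
print(has_quant)  # True for a model with quantize/dequantize/FloatFunctional parts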
| https://stackoverflow.com/questions/64277224/ |
Trying to download the CIFAR10 dataset using 'torchvision.datasets' on my computer | i am trying to download the CIFAR10 dataset on my computer using the last two lines of the following code:
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./Users/Sohrab/Downloads', train=True,
download=True, transform=transform)
I execute it in Google Colab, but after that, when I look in my Downloads folder on my computer, I do not see the CIFAR10 dataset.
Is it actually stored on my computer, or does Colab store it somewhere else (in a cloud architecture, for instance)?
Thank you in advance.
Colab stores the data on the running instance's disk, so if the instance resets you have to redownload it. You could also save it in your Google Drive, but that's some trouble and IMO not worth it for CIFAR10.
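If you do want the dataset to persist across sessions, a minimal sketch of the Drive route (the target path is an assumption):
from google.colab import drive
drive.mount('/content/drive')
trainset = torchvision.datasets.CIFAR10(root='/content/drive/MyDrive/cifar10',
                                        train=True, download=True,
                                        transform=transform)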
| https://stackoverflow.com/questions/64280013/ |
What is the upsampling method called 'area' used for? | The PyTorch function torch.nn.functional.interpolate contains several modes for upsampling, such as: nearest, linear, bilinear, bicubic, trilinear, area.
What is the area upsampling modes used for?
| As jodag said, it is resizing using adaptive average pooling. While the answer at the link aims to explain what adaptive average pooling is, I find the explanation a bit vague.
TL;DR: the area mode of torch.nn.functional.interpolate is probably one of the most intuitive ways to downsample an image.
You can think of it as applying an averaging Low-Pass Filter(LPF) to the original image and then sampling. Applying an LPF before sampling is to prevent potential aliasing in the downsampled image. Aliasing can result in Moiré patterns in the downscaled image.
It is probably called "area" because it (roughly) preserves the area ratio between the input and output shapes when averaging the input pixels. More specifically, every pixel in the output image will be the average of a respective region in the input image where the 1/area of this region will be roughly the ratio between output image's area and input image's area.
Furthermore, the interpolate function with mode = 'area' calls the source function adaptive_avg_pool2d (implemented in C++), which assigns each pixel in the output tensor the average of all pixel intensities within a computed region of the input. That region is computed per pixel and can vary in size for different pixels. The way it is computed is by multiplying the output pixel's height and width by the ratio between the input and output (in that order) height and width (respectively), and then taking once the floor (for the region's starting index) and once the ceil (for the region's ending index) of the resulting value.
Here's an in-depth analysis of what happens in nn.AdaptiveAvgPool2d:
First of all, as stated there you can find the source code for adaptive average pooling (in C++) here: source
Taking a look at the function where the magic happens (or at least the magic on CPU for a single frame), static void adaptive_avg_pool2d_single_out_frame, we have 5 nested loops, running over channel dimension, then width, then height and within the body of the 3rd loop the magic happens:
First compute the region within the input image which is used to calculate the value of the current pixel (recall we had width and height loop to run over all pixels in the output).
How is this done?
Using a simple computation of start and end indices for height and width as follows: floor((input_height/output_height) * current_output_pixel_height) for the start and ceil((input_height/output_height) * (current_output_pixel_height+1)) and similarly for the width.
Then, all that is done is to simply average the intensities of all pixels in that region and current channel and place the result in the current output pixel.
I wrote a simple Python snippet that does the same thing, in the same fashion (loops, naive) and produces equivalent results. It takes tensor a and uses adaptive average pool to resize a to shape output_shape in 2 ways - once using the built-in nn.AdaptiveAvgPool2d and once with my translation into Python of the source function in C++: static void adaptive_avg_pool2d_single_out_frame. Built-in function's result is saved into b and my translation is saved into b_hat. You can see that the results are equivalent (you can further play with the spatial shapes and validate this):
import torch
from math import floor, ceil
from torch import nn
a = torch.randn(1, 3, 15, 17)
out_shape = (10, 11)
b = nn.AdaptiveAvgPool2d(out_shape)(a)
b_hat = torch.zeros(b.shape)
for d in range(a.shape[1]):
for w in range(b_hat.shape[3]):
for h in range(b_hat.shape[2]):
startW = floor(w * a.shape[3] / out_shape[1])
endW = ceil((w + 1) * a.shape[3] / out_shape[1])
startH = floor(h * a.shape[2] / out_shape[0])
endH = ceil((h + 1) * a.shape[2] / out_shape[0])
b_hat[0, d, h, w] = torch.mean(a[0, d, startH: endH, startW: endW])
'''
Prints Mean Squared Error = 0 (or a very small number, due to precision error)
as both outputs are the same, proof of output equivalence:
'''
print(nn.MSELoss()(b_hat, b))
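As a final sanity check, reusing a, b and out_shape from above, interpolate with mode='area' produces the same tensor:
import torch.nn.functional as F
c = F.interpolate(a, size=out_shape, mode='area')
print(torch.allclose(b, c))  # True - 'area' is exactly adaptive average pooling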
| https://stackoverflow.com/questions/64284755/ |
PyTorch network with customized layer works fine on CPU but get cudaErrorIllegalAddress when moved to GPU | I'm trying to implement my own version of the Graph-Attention-network
The customized GAT layer is as following
class GATLayer(nn.Module):
def __init__(self, input_dim: int, output_dim: int, adj: torch.tensor):
super().__init__()
self.W = nn.Parameter(torch.zeros(size=(output_dim, input_dim)))
self.a = nn.Parameter(torch.zeros(size=(2 * output_dim,)))
self.adj = adj
self.n_points = adj.shape[0]
#print(f"input dim:{input_dim}")
def forward(self, h: torch.Tensor):
B, T, N, F = h.size()
hh = functional.linear(h, self.W)
output = torch.zeros_like(hh)
for i in range(self.n_points):
# print(i)
hhj = hh[:, :, self.adj[i], :]
hhi = torch.cat([hh[:, :, i:i + 1, :]] * hhj.size(2), 2)
hhij = torch.cat([hhi, hhj], 3)
e = torch.mm(hhij.reshape(B * T * hhj.size(2), -1), self.a.reshape(self.a.size(0), 1)).reshape(B, T, -1)
alpha = functional.softmax(e, dim=2)
output[:, :, i, :] = torch.sum(hhj * torch.cat([torch.unsqueeze(alpha, 3)] * hhj.size(3), 3), dim=2)
return output
And the whole network is defined as:
class AQIP(nn.Module):
def __init__(self, adj: torch.tensor, seq_len: int, with_aqi: bool = True):
super().__init__()
self.hid_size = 128
self.seq_len = seq_len
self.gat_layers = [
GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj),
GATLayer(input_dim=128, output_dim=128, adj=adj),
]
self.rnns = [
nn.LSTM(input_size=128, hidden_size=128, num_layers=4, bias=True, batch_first=True),
]
self.linear = nn.Linear(in_features=128 * 4, out_features=1, bias=True)
def forward(self, x: torch.Tensor, site_idx: int):
h = torch.zeros(size=(4, x.size(0), 128))
c = torch.zeros(size=(4, x.size(0), 128))
for gat in self.gat_layers:
x = gat(x)
for rnn in self.rnns:
x[:, :, site_idx, :], (h, c) = rnn(x[:, :, site_idx, :], (h, c))
h = h.permute(1, 0, 2)
h = h.reshape(h.size(0), -1)
return self.linear(h).squeeze()
When I independently test the customized GAT layer with the following code, it turned out that the GAT layer worked fine even on GPU
model = GATLayer(3, 1024, torch.tensor(np.array([[1, 0, 1], [0, 0, 1], [1, 0, 1]], dtype='bool')))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
print(model(torch.randn(5, 5, 3, 3)).shape)
Which outputs torch.Size([5, 5, 3, 1024])
When I test the whole network with CPU and the following code, it worked fine as well
#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
adj = torch.tensor(np.array([[1, 0, 0], [0, 1, 1], [1, 1, 1]], dtype="bool"))
exp = torch.randn(3, 8, 3, 17)
gpus = [0]
model = AQIP(adj, seq_len=8)
#model = model.to(device, non_blocking=True)
print(model(exp, 1))
Which outputs tensor([-0.0320, -0.0320, -0.0320], grad_fn=<SqueezeBackward0>)
But as soon as I try to move the model to the GPU and uncomment the device and .to lines, I get the following error, with a traceback to some Formatter class that is irrelevant to my code:
RuntimeError: copy_if failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
When using CUDA_LAUNCH_BLOCKING=1 to run the code, I get:
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
Which did not help me to locate error at all
I have also run official examples on the same machine and moved them to the GPU; they all worked fine. So I guess it's not an incompatibility between the CUDA, cuDNN, or GPU driver versions. But I cannot locate the problem in my code either. PLEASE HELP! I'd much appreciate it if you could get me out of this.
After numerous endeavors, I finally found the problem. It turns out that if you put layers into a plain Python list like
self.gat_layers = [
GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj).cuda(),
GATLayer(input_dim=128, output_dim=128, adj=adj).cuda(),
]
then PyTorch won't automatically register those layers as submodules, so when .to(device) is called their parameters won't be transferred to the GPU. One solution is to declare the layers one by one as attributes.
A better solution is to use nn.ModuleList to contain all the layers you want (the same applies to self.rnns); once the layers are registered this way, the explicit .cuda() calls become redundant, because .to(device) moves them. So the code could be changed to
self.gat_layers = nn.ModuleList([
GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj).cuda(),
GATLayer(input_dim=128, output_dim=128, adj=adj).cuda(),
])
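You can see the registration difference with a small standalone check (a minimal sketch):
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self, use_modulelist):
        super().__init__()
        layers = [nn.Linear(4, 4), nn.Linear(4, 4)]
        self.layers = nn.ModuleList(layers) if use_modulelist else layers

print(len(list(Demo(False).parameters())))  # 0 - a plain list is invisible to PyTorch
print(len(list(Demo(True).parameters())))   # 4 - two weights + two biases registered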
| https://stackoverflow.com/questions/64289716/ |
The bounding box's position and size are incorrect, how to improve its accuracy?
I'm trying to classify an object into 4 classes,
so I have used COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml.
I have applied 4 kinds of augmentation transforms, and after training I get about 0.1 total loss.
But for some reason the accuracy of the bbox is not great on some test-set images; the bbox is drawn either larger or smaller than it should be, or doesn't cover the whole object.
Moreover, sometimes the predictor draws a few bboxes, assuming there are several different objects although there is only a single object.
Are there any suggestions how to improve its accuracy?
Are there any good practice approaches how to resolve this issue?
Any suggestion or reference material will be helpful.
| I would suggest the following:
Ensure that your training set has the object you want to detect in all sizes: in this way, the network learns that the size of the object can vary and is less prone to overfitting (the detector could otherwise assume your object is always big, for example).
Add data. Rather than applying all types of augmentations, try adding much more data. The phenomenon of detecting different objects although there is only one object leads me to believe that your network does not generalize well. Personally I would opt for at least 500 annotations per class.
The biggest step towards improvement will be achieved by means of (2).
Once you have a decent baseline, you could also experiment with augmentations.
| https://stackoverflow.com/questions/64291573/ |
pytorch view tensor and reduce one dimension | So I have a 4d tensor with shape [4,1,128,678] and I would like to view/reshape it as [4,678,128].
I have to do this for multiple tensors where the last shape value 678 is not always known and could be different, so [4,1,128,575] should also go to [4,575,128].
Any idea on what is the optimal operation to transform the tensor? view/reshape? and how?
Thanks
| You could also use (less to write and IMO cleaner):
# x.shape == (4, 1, 128, 678)
x.squeeze().permute(0, 2, 1)
If you were to use view you would lose dimension information (but maybe that is what you want), in this case it would be:
x.squeeze().view(4, -1, 128)
permute reorders the tensor's dimensions (changing the element order you observe), while view only reinterprets the same underlying memory with a new shape, without restructuring it. You can see the difference between those two operations in this StackOverflow answer.
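A small example of that difference (a minimal sketch):
import torch
x = torch.arange(6).reshape(1, 2, 3)
print(x.permute(0, 2, 1))  # tensor([[[0, 3], [1, 4], [2, 5]]]) - dimensions reordered
print(x.view(1, 3, 2))     # tensor([[[0, 1], [2, 3], [4, 5]]]) - same memory, new shape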
| https://stackoverflow.com/questions/64295110/ |
Pytorch tensor get the index of the element with specific values? | I have two tensores, tensor a and tensor b.
I want to get all indexes of values in tensor b.
For example.
a = torch.Tensor([1,2,2,3,4,4,4,5])
b = torch.Tensor([1,2,4])
I want the index of 1, 2, 4 in tensor a. I can do this by the following code.
a = torch.Tensor([1,2,2,3,4,4,4,5])
b = torch.Tensor([1,2,4])
mask = torch.zeros(a.shape).type(torch.bool)
print(mask)
for e in b:
mask = mask + (a == e)
print(mask)
How can I do it without for?
| Update:
As @zaydh kindly pointed out in the comments, since PyTorch 1.10, isin() and isinf()(and many other numpy equivalents) are available as well, thus you can simply do:
torch.isin(a, b)
which would give you :
Out[4]: tensor([ True, True, True, False, True, True, True, False])
Old answer:
Is this what you want? :
np.in1d(a.numpy(), b.numpy())
will result in :
array([ True, True, True, False, True, True, True, False])
| https://stackoverflow.com/questions/64300830/ |
How to check if an object is a certain PyTorch optimizer? | I want to check the class of objects like this:
from torch.optim import Adam, SGD, AdamW
adam_range = (0.8, 1.0)
adamw_range = (0.6, 0.7)
sgd_range = (0.0, 0.5)
targets = []
for cfg in configs:
if isinstance(cfg["optimizer"], Adam):
sample = np.random.uniform(low=adam_range[0], high=adam_range[1], size=1)
elif isinstance(cfg["optimizer"], AdamW):
sample = np.random.uniform(low=adamw_range[0], high=adamw_range[1], size=1)
elif isinstance(cfg["optimizer"], SGD):
sample = np.random.uniform(low=sgd_range[0], high=sgd_range[1], size=1)
where configs is a list of dicts and cfg["optimizer"] are uninitialized references of from torch.optim import Adam, SGD, AdamW. However, none of the if/elif statements evaluate to True.
How do I properly check the type of these references? Is there a better way than checking cfg["optimizer"].__name__? The latter seems inelegant.
Thank you.
| Your cfg["optimizer"] is not an instance of any optimizer, but the type itself.
Therefore, you should test it like this:
for cfg in configs:
if cfg["optimizer"] is Adam:
sample = np.random.uniform(low=adam_range[0], high=adam_range[1], size=1)
elif cfg["optimizer"] is AdamW:
sample = np.random.uniform(low=adamw_range[0], high=adamw_range[1], size=1)
elif cfg["optimizer"] is SGD:
sample = np.random.uniform(low=sgd_range[0], high=sgd_range[1], size=1)
To emphasize the difference between an instance of Adam and type Adam, try the following:
opt = Adam(model.parameters(), lr=0.1) # make an _instance_ of Adam
isinstance(opt, Adam) # True - opt is an instance of Adam optimizer
isinstance(Adam, Adam) # False - Adam is a type, not an instance of Adam
isinstance(Adam, type) # True - "Adam" is a type, not an instance of Adam
type(opt) is Adam # True - the type of opt is Adam
type(opt) == Adam # True
I find this page to summarize this difference quite well.
| https://stackoverflow.com/questions/64301566/ |
PyTorch Conv2D returns non-zero output for an input tensor of zeros? | If you input an array consisting only of zeros to a Conv2D layer, the output should also consist only of zeros. In TensorFlow, this is the case. But, in PyTorch, it isn't. Here is some very simple sample Python code to demonstrate this. Why does PyTorch output non-zero numbers in this situation?
import torch
import numpy as np
image = np.zeros((3,3,3), dtype=np.float32)
batch = np.asarray([image])
a = torch.nn.Conv2d(3,3,1)
b = a(torch.tensor(batch).permute(0,3,1,2))
print(b.permute(0,2,3,1))
Unlike TensorFlow (which zero-initializes biases by default), PyTorch initializes the bias with non-zero values drawn from a uniform distribution (see the source code):
def reset_parameters(self) -> None:
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
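So with an all-zero input the output is just the broadcast bias. Disabling the bias reproduces the all-zero result you expected:
import torch
a = torch.nn.Conv2d(3, 3, 1, bias=False)  # or zero the bias manually
b = a(torch.zeros(1, 3, 3, 3))
print(b.abs().sum())  # tensor(0., grad_fn=...) - all zeros without the bias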
| https://stackoverflow.com/questions/64309762/ |
How to define the loss function using the output of intermediate layers? | class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.encoder = nn.Linear(300, 100)
self.dense1 = nn.Sequential(nn.Linear(100, 10),nn.ReLU())
self.dense2 = nn.Sequential(nn.Linear(10, 5),nn.ReLU())
self.dense3 = nn.Sequential(nn.Linear(5, 1))
def forward(self, x):
x = self.encoder(x)
x = self.dense1(x)
x = self.dense2(x)
x = self.dense3(x)
return x
I am working on a regression problem, and I need to use the output of the dense2 layer to calculate the loss.
output of dense2 layer is 5 dimensional (5x1).
I am using PyTorch.
Dataset: Suppose i am using 300 features and i need to predict some score(a floating value).
Input: 300 Features
Output: Some Floating Value
In general, your nn.Module can return as many elements as you like. Moreover, you don't have to use them anywhere; there is no mechanism that checks that. PyTorch's philosophy is to build the computational graph on the fly, during the forward pass.
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.encoder = nn.Linear(300, 100)
self.dense1 = nn.Sequential(nn.Linear(100, 10),nn.ReLU())
self.dense2 = nn.Sequential(nn.Linear(10, 5),nn.ReLU())
self.dense3 = nn.Sequential(nn.Linear(5, 1))
def forward(self, x):
enc_output = self.encoder(x)
dense1_output = self.dense1(enc_output)
dense2_output = self.dense2(dense1_output)
dense3_output = self.dense3(dense2_output)
return dense3_output, dense2_output
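A hedged usage sketch: the auxiliary term on the dense2 output is a placeholder assumption, plug in whatever criterion your task actually needs:
import torch
import torch.nn.functional as F

model = Model()
x = torch.randn(8, 300)
target = torch.randn(8, 1)
aux_target = torch.randn(8, 5)  # hypothetical target for the 5-dim dense2 output

pred, hidden = model(x)
loss = F.mse_loss(pred, target) + 0.1 * F.mse_loss(hidden, aux_target)
loss.backward()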
| https://stackoverflow.com/questions/64316857/ |
pytorch multilabel classification network not training | I'm trying a simple multi label classification example but the network does not seem to be training correctly as the loss is stagnant.
I've used multilabel_soft_margin_loss as the pytorch docs suggest, but there isn't much else to go on..can't find any proper examples in the docs.
Can anyone peer into this and point out whats wrong with it? Fully working example below (also question on prediction below)
Fully working example code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from sklearn.datasets import make_multilabel_classification
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
import xgboost as xgb
from sklearn.metrics import accuracy_score
num_classes = 3
X, y = make_multilabel_classification(n_samples=1000,n_classes=num_classes)
X_tensor, y_tensor = torch.tensor(X), torch.tensor(y)
print("X Shape :{}".format(X_tensor.shape))
print("y Shape :{}".format(y_tensor.shape))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(X.shape[1], 300)
self.fc2 = nn.Linear(300, 10)
self.fc3 = nn.Linear(10, num_classes)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
device = torch.device("cpu")
lr = 1
batch_size = 128
gamma = 0.9
epochs = 100
args = {'log_interval': 10, 'dry_run':False}
kwargs = {'batch_size': batch_size}
kwargs.update({'num_workers': 1,
'pin_memory': True,
'shuffle': True},
)
model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=0.1)
scheduler = StepLR(optimizer, step_size=1, gamma=gamma)
# data loader
my_dataset = TensorDataset(X_tensor,y_tensor) # create tensor dataset
train_dataset, test_dataset, = train_test_split(
my_dataset, test_size=0.2, random_state=42)
train_loader = DataLoader(train_dataset,**kwargs)
test_loader = DataLoader(test_dataset,**kwargs)
## Train step ##
for epoch in range(1, epochs + 1):
model.train() # set model to train
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data.float())
loss = F.multilabel_soft_margin_loss(output,target)
loss.backward()
optimizer.step()
if batch_idx % args['log_interval'] == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
if args['dry_run']:
break
scheduler.step()
Training Loss Progress
Train Epoch: 1 [0/800 (0%)] Loss: 0.694400
Train Epoch: 2 [0/800 (0%)] Loss: 0.697095
Train Epoch: 3 [0/800 (0%)] Loss: 0.705593
Train Epoch: 4 [0/800 (0%)] Loss: 0.651981
Train Epoch: 5 [0/800 (0%)] Loss: 0.704895
Train Epoch: 6 [0/800 (0%)] Loss: 0.650302
Train Epoch: 7 [0/800 (0%)] Loss: 0.658809
Train Epoch: 8 [0/800 (0%)] Loss: 0.904834
Train Epoch: 9 [0/800 (0%)] Loss: 0.655516
Train Epoch: 10 [0/800 (0%)] Loss: 0.662808
Train Epoch: 11 [0/800 (0%)] Loss: 0.664752
Train Epoch: 12 [0/800 (0%)] Loss: 0.656390
Train Epoch: 13 [0/800 (0%)] Loss: 0.664982
Train Epoch: 14 [0/800 (0%)] Loss: 0.664430
Train Epoch: 15 [0/800 (0%)] Loss: 0.664603 # stagnates
On top of that, how would I obtain predictions for this? It's not the same as taking the argmax anymore, as it's a multi-label problem, right? (Example output of the network below)
Output
tensor([[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354],
[ 0.2711, 0.1754, -0.3354]]
Thanks!
|
On top of that, how would I obtain predictions for this?
If it's a multilabel task and you are outputting logits (as you are) then simply do:
output = model(data.float())
labels = output > 0
point out whats wrong with it?
It is hard and opinionated, what I would do in order:
validate your data. Your neural network's response is the same for every input (if your example output is real). Maybe you are passing the same single sample (though that seems unlikely as it's sklearn-created data).
start simple; no LR scheduler, no weight decay, simple neural network and optimizer only (Adam can stay). Use weight decay if your model is overfitting, it clearly isn't right now.
fix your learning rate; it is one of the most important hyperparameters. 1 is probably too high, start with something like 3e-4 or 1e-3.
try to overfit (loss ~0.0) on a small number of samples (say 32). If you can't, your neural network probably doesn't have enough capacity or there is an error in your code (I didn't spot it at a quick glance, besides what I've mentioned above). You should verify that input and output shapes are correct and check returned values manually (it seems the network returns the same logits for each sample?).
if you are sure there is no error, increase network capacity. Add a new hidden layer or two (there is only one) and overfit on a single batch. If the model can, move on to more data.
I've used multilabel_soft_margin_loss as the pytorch docs suggest,
It is the same thing as using torch.nn.BCEWithLogitsLoss which I think is more common, but that's an addendum.
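Putting the concrete fixes together, a minimal sketch (the exact learning rate is an assumption; start around 1e-3 and tune):
model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # drop weight_decay, lower lr
criterion = nn.BCEWithLogitsLoss()                   # equivalent, more common loss

for data, target in train_loader:
    optimizer.zero_grad()
    output = model(data.float().to(device))
    loss = criterion(output, target.float().to(device))
    loss.backward()
    optimizer.step()

labels = torch.sigmoid(output) > 0.5  # per-label predictions, same as output > 0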
| https://stackoverflow.com/questions/64319663/ |
Fixing error img should be PIL Image. Got | I tried to create Custom dataset but when show the some images it had the error. Here is my Dataset class and tranforms:
transforms = transforms.Compose([transforms.Resize(224,224)])
class MyDataset(Dataset):
def __init__(self, path, label, transform=None):
self.path = glob.glob(os.path.join(path, '*.jpg'))
self.transform = transform
self.label = label
def __getitem__(self, index):
img = io.imread(self.path[index])
img = torch.tensor(img)
labels = torch.tensor(int(self.label))
if self.transform:
img = self.transform(img)
return (img,labels)
def __len__(self):
return len(self.path)
And here is the line that errors:
images, labels = next(iter(train_loader))
transforms.Resize requires a PIL.Image instance as input, while your img is a torch.Tensor.
This will solve your issue (see comments in source code):
import torchvision
from PIL import Image
# In your transform you should cast PIL Image to tensor
# When no transforms on PIL Image are needed anymore
transforms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
class MyDataset(Dataset):
def __init__(self, path, label, transform=None):
self.path = glob.glob(os.path.join(path, "*.jpg"))
self.transform = transform
self.label = label
def __getitem__(self, index):
img = Image.open(self.path[index])
labels = torch.tensor(int(self.label))
if self.transform is not None:
# Now you have PIL.Image instance for transforms
img = self.transform(img)
return (img, labels)
def __len__(self):
return len(self.path)
| https://stackoverflow.com/questions/64320780/ |
Docker full with URL path | Good day all,
Does anyone know if it's possible to pull a single container from GitHub? I have this link https://github.com/aws/sagemaker-pytorch-training-toolkit and I would like to pull the container in this link https://github.com/aws/sagemaker-pytorch-training-toolkit/tree/master/src/sagemaker_pytorch_container.
I did try docker build -t https://github.com/abc/sagemaker-pytorch-training-toolkit.git to build an image from just one file, but there's an __init__.py file which I'm not sure is necessary.
Thanks
| You are on a wrong path.
GitHub does not store Docker images, so there is no way to pull one from there.
AWS Sagemaker provides pre-built images, you just need to select the one you want to use when creating an instance. see https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html
If you need a Docker image with PyTorch, just run docker pull pytorch/pytorch
| https://stackoverflow.com/questions/64330589/ |
How to use the func like torch.nn.functional.conv2d() in mxnet? | I want to do some convolution calculation with input data and a kernel.
In torch, I can write a func:
import torch
def torch_conv_func(x, num_groups):
batch_size, num_channels, height, width = x.size()
conv_kernel = torch.ones(num_channels, num_channels, 1, 1)
return torch.nn.functional.conv2d(x, conv_kernel)
It works well, and now I need to rebuild it in MXNet, so I write this:
from mxnet import nd
from mxnet.gluon import nn
def mxnet_conv_func(x, num_groups):
batch_size, num_channels, height, width = x.shape
conv_kernel = nd.ones((num_channels, num_channels, 1, 1))
return nd.Convolution(x, conv_kernel)
And I got the error
mxnet.base.MXNetError: Required parameter kernel of Shape(tuple) is not presented, in operator Convolution(name="")
How to fix it?
| You're missing some extra arguments to mxnet.nd.Convolution. You can do it like this:
from mxnet import nd
def mxnet_convolve(x):
B, C, H, W = x.shape
weight = nd.ones((C, C, 1, 1))
return nd.Convolution(x, weight, no_bias=True, kernel=(1,1), num_filter=C)
x = nd.ones((16, 3, 32, 32))
mxnet_convolve(x)
Since you're not using a bias, you need to set no_bias to True. Also, mxnet requires you to specify the kernel dimensions explicitly with the kernel and num_filter arguments.
| https://stackoverflow.com/questions/64331511/ |
How to solve "RuntimeError: CUDA error: invalid device ordinal"? | I'm trying to run this code. I don't know what is wrong with it, but this code is not running. and I don't know how to solve this problem.
import cv2
from facial_emotion_recognition import EmotionRecognition
emotion_detector = EmotionRecognition(device='gpu', gpu_id=1)
camera = cv2.VideoCapture(0)
while True:
image = camera.read()[1]
image = emotion_detector.recognise_emotion(image, return_type='BGR')
cv2.imshow('Camera', image)
key = cv2.waitKey(1)
if key == 27:
break
camera.release()
cv2.destroyAllWindows()
but I'm getting this error:
Traceback (most recent call last):
File "/home/fahim/Documents/Python_projects/Python tutorials/pantech AI Master/Computer_Vision/Day 8 Face emotion recognition/emotion.py", line 4, in <module>
emotion_detector = EmotionRecognition(device='gpu', gpu_id=1)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/facial_emotion_recognition/facial_emotion_recognition.py", line 25, in __init__
self.network = NetworkV2(in_c=1, nl=32, out_f=7).to(self.device)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 607, in to
return self._apply(convert)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 354, in _apply
module._apply(fn)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 354, in _apply
module._apply(fn)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 376, in _apply
param_applied = fn(param)
File "/home/fahim/anaconda3/envs/Computer_Vision/lib/python3.7/site-packages/torch/nn/modules/module.py", line 605, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: invalid device ordinal
Process finished with exit code 1
This is my the configuration of my computer:
GPU: NVIDIA GeForce MX130
CPU: Intel i5-10210U (8) @ 4.200GHz
Help me to solve this please.
| Try changing:
emotion_detector = EmotionRecognition(device='gpu', gpu_id=1)
To:
emotion_detector = EmotionRecognition(device='gpu', gpu_id=0)
gpu_id is only effective when more than one GPU is detected. You only seem to have one GPU, so it throws an error: gpu_id=1 asks for the second GPU (indices count from 0), which doesn't exist on your machine.
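You can confirm how many devices PyTorch actually sees:
import torch
print(torch.cuda.device_count())  # 1 here, so only gpu_id=0 (i.e. cuda:0) is valid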
| https://stackoverflow.com/questions/64334033/ |
How can I get Class label from Mosaic augmentation in Object Detection Dataloader? | I'm trying to train an object detection model for a multi-class problem. In my training, I am using the Mosaic augmentation, Paper, for this task.
In my training mechanism, I'm a bit stuck on how to properly retrieve the class labels of each category, as the augmentation mechanism randomly picks a sub-portion of each sample. However, below is a result of the mosaic augmentation, with the relevant bounding boxes, that we've achieved so far.
Data Set
I've created a dummy data set. The df.head():
It has 4 classes in total, and df.object.value_counts():
human 23
car 13
cat 5
dog 3
Data Loader and Mosaic Augmentation
The data loader is defined as follows. However, the mosaic augmentation should be defined inside but for now, I'll create a separate code snippet for better demonstration:
IMG_SIZE = 2000
class DatasetRetriever(Dataset):
def __init__(self, main_df, image_ids, transforms=None, test=False):
super().__init__()
self.image_ids = image_ids
self.main_df = main_df
self.transforms = transforms
self.size_limit = 1
self.test = test
def __getitem__(self, index: int):
image_id = self.image_ids[index]
image, boxes, labels = self.load_mosaic_image_and_boxes(index)
# labels = torch.tensor(labels, dtype=torch.int64) # for multi-class
labels = torch.ones((boxes.shape[0],), dtype=torch.int64) # for single-class
target = {}
target['boxes'] = boxes
target['cls'] = labels
target['image_id'] = torch.tensor([index])
if self.transforms:
for i in range(10):
sample = self.transforms(**{
'image' : image,
'bboxes': target['boxes'],
'labels': target['cls']
})
assert len(sample['bboxes']) == target['cls'].shape[0], 'not equal!'
if len(sample['bboxes']) > 0:
# image
image = sample['image']
# box
target['boxes'] = torch.tensor(sample['bboxes'])
target['boxes'][:,[0,1,2,3]] = target['boxes'][:,[1,0,3,2]]
# label
target['cls'] = torch.stack(sample['labels'])
break
return image, target
def __len__(self) -> int:
return self.image_ids.shape[0]
Basic Transform
def get_transforms():
return A.Compose(
[
A.Resize(height=IMG_SIZE, width=IMG_SIZE, p=1.0),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
Mosaic Augmentation
Note, it should be defined inside the data loader. The main issue is that, while iterating over all 4 samples to create the augmentation, the image is cropped and the bounding boxes are offset as follows:
mosaic_image[y1a:y2a, x1a:x2a] = image[y1b:y2b, x1b:x2b]
offset_x = x1a - x1b
offset_y = y1a - y1b
boxes[:, 0] += offset_x
boxes[:, 1] += offset_y
boxes[:, 2] += offset_x
boxes[:, 3] += offset_y
In this way, how would I select the relevant class labels for those selected bounding boxes? Please see the full code below:
def load_mosaic_image_and_boxes(self, index, s=3000,
minfrac=0.25, maxfrac=0.75):
self.mosaic_size = s
xc, yc = np.random.randint(s * minfrac, s * maxfrac, (2,))
# random other 3 sample
indices = [index] + random.sample(range(len(self.image_ids)), 3)
mosaic_image = np.zeros((s, s, 3), dtype=np.float32)
final_boxes = [] # box for the sub-region
final_labels = [] # relevant class labels
for i, index in enumerate(indices):
image, boxes, labels = self.load_image_and_boxes(index)
if i == 0: # top left
x1a, y1a, x2a, y2a = 0, 0, xc, yc
x1b, y1b, x2b, y2b = s - xc, s - yc, s, s # from bottom right
elif i == 1: # top right
x1a, y1a, x2a, y2a = xc, 0, s , yc
x1b, y1b, x2b, y2b = 0, s - yc, s - xc, s # from bottom left
elif i == 2: # bottom left
x1a, y1a, x2a, y2a = 0, yc, xc, s
x1b, y1b, x2b, y2b = s - xc, 0, s, s-yc # from top right
elif i == 3: # bottom right
x1a, y1a, x2a, y2a = xc, yc, s, s
x1b, y1b, x2b, y2b = 0, 0, s-xc, s-yc # from top left
# calculate and apply box offsets due to replacement
offset_x = x1a - x1b
offset_y = y1a - y1b
boxes[:, 0] += offset_x
boxes[:, 1] += offset_y
boxes[:, 2] += offset_x
boxes[:, 3] += offset_y
# cut image, save boxes
mosaic_image[y1a:y2a, x1a:x2a] = image[y1b:y2b, x1b:x2b]
final_boxes.append(boxes)
'''
ATTENTION:
Need some mechanism to get relevant class labels
'''
final_labels.append(labels)
# collect boxes
final_boxes = np.vstack(final_boxes)
final_labels = np.hstack(final_labels)
# clip boxes to the image area
final_boxes[:, 0:] = np.clip(final_boxes[:, 0:], 0, s).astype(np.int32)
w = (final_boxes[:,2] - final_boxes[:,0])
h = (final_boxes[:,3] - final_boxes[:,1])
# discard boxes where w or h <10
final_boxes = final_boxes[(w>=self.size_limit) & (h>=self.size_limit)]
return mosaic_image, final_boxes, final_labels
I parsed the bounding box and class label information at the same time.
Below is the output we achieved. You can use this as a starter to try it with your own data set.
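The key point is that any filtering applied to final_boxes must also be applied to final_labels. A minimal sketch against the question's load_mosaic_image_and_boxes (replacing its final discard step):
keep = (w >= self.size_limit) & (h >= self.size_limit)
final_boxes = final_boxes[keep]
final_labels = final_labels[keep]  # labels stay aligned with the surviving boxes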
| https://stackoverflow.com/questions/64335735/ |
How can I make this PyTorch tensor (B, C, H, W) tiling & blending code simpler and more efficient? | So, I wrote the code below many months ago, and it's worked pretty well. Though I am struggling on how I can simplify it and make it more efficient.
The functions below split an image tensor (B, C, H, W) into equal sized tiles (B, C, H, W) and then you can do stuff individually to the tiles in order to save memory. Then when rebuilding the tensor from the tiles, it uses masks to ensure that the tiles are seamlessly blended back together. The 'special masks' in the masking function handle when tiles in the right most column or tiles in the bottom row can't use the same overlap as the other tiles. This means that the right edge tiles and the bottom tiles may sometimes have almost none of their content visible. This is done to ensure that the tiles are always the exact specified size, regardless of the original image/tensor's size (important for visualization/DeepDream, neural style transfer, etc...). The adjacent row/column to the edge row/column also has special masks as well for where they overlap with the edge row/column.
There are 8 possible masks for every tile, and 4 of those masks can be used at once. The 4 possible masks are left, right, top, and bottom, with a special version for each mask.
# Improved version of: https://github.com/ProGamerGov/neural-dream/blob/master/neural_dream/dream_tile.py
import torch
# Apply blend masks to tiles
def mask_tile(tile, overlap, side='bottom'):
c, h, w = tile.size(1), tile.size(2), tile.size(3)
top_overlap, bottom_overlap, right_overlap, left_overlap = overlap[0], overlap[1], overlap[2], overlap[3]
base_mask = torch.ones_like(tile)
if 'left' in side and 'left-special' not in side:
lin_mask_left = torch.linspace(0,1,left_overlap, device=tile.device).repeat(h,1).repeat(c,1,1).unsqueeze(0)
base_mask[:,:,:,:left_overlap] = base_mask[:,:,:,:left_overlap] * lin_mask_left
if 'right' in side and 'right-special' not in side:
lin_mask_right = torch.linspace(1,0,right_overlap, device=tile.device).repeat(h,1).repeat(c,1,1).unsqueeze(0)
base_mask[:,:,:,w-right_overlap:] = base_mask[:,:,:,w-right_overlap:] * lin_mask_right
if 'top' in side and 'top-special' not in side:
lin_mask_top = torch.linspace(0,1,top_overlap, device=tile.device).repeat(w,1).rot90(3).repeat(c,1,1).unsqueeze(0)
base_mask[:,:,:top_overlap,:] = base_mask[:,:,:top_overlap,:] * lin_mask_top
if 'bottom' in side and 'bottom-special' not in side:
lin_mask_bottom = torch.linspace(1,0,bottom_overlap, device=tile.device).repeat(w,1).rot90(3).repeat(c,1,1).unsqueeze(0)
base_mask[:,:,h-bottom_overlap:,:] = base_mask[:,:,h-bottom_overlap:,:] * lin_mask_bottom
if 'left-special' in side:
lin_mask_left = torch.linspace(0,1,left_overlap, device=tile.device)
zeros_mask = torch.zeros(w-(left_overlap*2), device=tile.device)
ones_mask = torch.ones(left_overlap, device=tile.device)
lin_mask_left = torch.cat([zeros_mask, lin_mask_left, ones_mask], 0).repeat(h,1).repeat(c,1,1).unsqueeze(0)
base_mask = base_mask * lin_mask_left
if 'right-special' in side:
lin_mask_right = torch.linspace(1,0,right_overlap, device=tile.device)
ones_mask = torch.ones(w-right_overlap, device=tile.device)
lin_mask_right = torch.cat([ones_mask, lin_mask_right], 0).repeat(h,1).repeat(c,1,1).unsqueeze(0)
base_mask = base_mask * lin_mask_right
if 'top-special' in side:
lin_mask_top = torch.linspace(0,1,top_overlap, device=tile.device)
zeros_mask = torch.zeros(h-(top_overlap*2), device=tile.device)
ones_mask = torch.ones(top_overlap, device=tile.device)
lin_mask_top = torch.cat([zeros_mask, lin_mask_top, ones_mask], 0).repeat(w,1).rot90(3).repeat(c,1,1).unsqueeze(0)
base_mask = base_mask * lin_mask_top
if 'bottom-special' in side:
lin_mask_bottom = torch.linspace(1,0,bottom_overlap, device=tile.device)
ones_mask = torch.ones(h-bottom_overlap, device=tile.device)
lin_mask_bottom = torch.cat([ones_mask, lin_mask_bottom], 0).repeat(w,1).rot90(3).repeat(c,1,1).unsqueeze(0)
base_mask = base_mask * lin_mask_bottom
# Apply mask to tile and return masked tile
return tile * base_mask
def add_tiles(tiles, base_img, tile_coords, tile_size, overlap):
# Check for any tiles that need different overlap values
r, c = len(tile_coords[0]), len(tile_coords[1])
f_ovlp = (tile_coords[0][r-1] - tile_coords[0][r-2], tile_coords[1][c-1] - tile_coords[1][c-2])
h, w = tiles[0].size(2), tiles[0].size(3)
t=0
column, row, = 0, 0
for y in tile_coords[0]:
for x in tile_coords[1]:
mask_sides=''
c_overlap = overlap.copy()
if row == 0:
if row == len(tile_coords[0]) - 2:
mask_sides += 'bottom-special'
c_overlap[1] = f_ovlp[0] # Change bottom overlap
else:
mask_sides += 'bottom'
elif row > 0 and row < len(tile_coords[0]) -2:
mask_sides += 'bottom,top'
elif row == len(tile_coords[0]) - 2:
if f_ovlp[0] > 0:
mask_sides += 'bottom-special,top'
c_overlap[1] = f_ovlp[0] # Change bottom overlap
elif f_ovlp[0] <= 0:
mask_sides += 'bottom,top'
elif row == len(tile_coords[0]) -1:
if f_ovlp[0] > 0:
mask_sides += 'top-special'
c_overlap[0] = f_ovlp[0] # Change top overlap
elif f_ovlp[0] <= 0:
mask_sides += 'top'
if column == 0:
if column == len(tile_coords[1]) -2:
mask_sides += ',right-special'
c_overlap[2] = f_ovlp[1] # Change right overlap
else:
mask_sides += ',right'
elif column > 0 and column < len(tile_coords[1]) -2:
mask_sides += ',right,left'
elif column == len(tile_coords[1]) -2:
if f_ovlp[1] > 0:
mask_sides += ',right-special,left'
c_overlap[2] = f_ovlp[1] # Change right overlap
elif f_ovlp[1] <= 0:
mask_sides += ',right,left'
elif column == len(tile_coords[1]) -1:
if f_ovlp[1] > 0:
mask_sides += ',left-special'
c_overlap[3] = f_ovlp[1] # Change left overlap
elif f_ovlp[1] <= 0:
mask_sides += ',left'
tile = mask_tile(tiles[t], c_overlap, side=mask_sides)
base_img[:, :, y:y+tile_size[0], x:x+tile_size[1]] = base_img[:, :, y:y+tile_size[0], x:x+tile_size[1]] + tile
t+=1
column+=1
row+=1
column=0
return base_img
# Calculate the coordinates for tiles
def get_tile_coords(d, tile_dim, overlap=0):
move = int(tile_dim * (1-overlap))
c, tile_start, coords = 1, 0, [0]
while tile_start + tile_dim < d:
tile_start = move * c
if tile_start + tile_dim >= d:
coords.append(d - tile_dim)
else:
coords.append(tile_start)
c += 1
return coords
# Calculates info required for tiling
def tile_setup(tile_size, overlap_percent, base_size):
if type(tile_size) is not tuple and type(tile_size) is not list:
tile_size = (tile_size, tile_size)
if type(overlap_percent) is not tuple and type(overlap_percent) is not list:
overlap_percent = (overlap_percent, overlap_percent)
x_coords = get_tile_coords(base_size[1], tile_size[1], overlap_percent[1])
y_coords = get_tile_coords(base_size[0], tile_size[0], overlap_percent[0])
y_ovlp, x_ovlp = int(tile_size[0] * overlap_percent[0]), int(tile_size[1] * overlap_percent[1])
return (y_coords, x_coords), tile_size, [y_ovlp, y_ovlp, x_ovlp, x_ovlp]
# Split tensor into tiles
def tile_image(img, tile_size, overlap_percent, info_only=False):
tile_coords, tile_size, _ = tile_setup(tile_size, overlap_percent, (img.size(2), img.size(3)))
# Cut out tiles
tile_list = []
for y in tile_coords[0]:
for x in tile_coords[1]:
tile = img[:, :, y:y + tile_size[0], x:x + tile_size[1]]
tile_list.append(tile)
return tile_list
# Put tiles back into the original tensor
def rebuild_image(tiles, image_size, tile_size, overlap_percent):
base_img = torch.zeros(image_size, device=tiles[0].device)
tile_coords, tile_size, overlap = tile_setup(tile_size, overlap_percent, (base_img.size(2), base_img.size(3)))
return add_tiles(tiles, base_img, tile_coords, tile_size, overlap)
The above code can be tested with the code below:
import torchvision.transforms as transforms
from PIL import Image
import random
# Load image
def preprocess_simple(image_name, image_size):
Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
image = Image.open(image_name).convert('RGB')
return Loader(image).unsqueeze(0)
# Save image
def deprocess_simple(output_tensor, output_name):
output_tensor.clamp_(0, 1)
Image2PIL = transforms.ToPILImage()
image = Image2PIL(output_tensor.squeeze(0))
image.save(output_name)
test_input = preprocess_simple('tubingen.jpg', (1024,1024))
tile_size=260
overlap_percent=0.5
img_tiles = tile_image(test_input, tile_size=tile_size, overlap_percent=overlap_percent)
random.shuffle(img_tiles) # Comment this out to not randomize tile positions
output_tensor = rebuild_image(img_tiles, test_input.size(), tile_size=tile_size, overlap_percent=overlap_percent)
deprocess_simple(output_tensor, 'tiled_image.jpg')
I've included an example of what it does below (top is the original image, and the bottom is when I place the tiles back randomly to show off the blending system):
| I was able to remove all the bugs and simplify the code here: https://github.com/ProGamerGov/dream-creator/blob/master/utils/tile_utils.py
The special masks were really only needed for 2 situations, and there were bugs in rebuild_tensor that I had to fix. Overlap percentages should be equal to or less than 50%.
| https://stackoverflow.com/questions/64339360/ |
Normalizing images passed to torch.transforms.Compose function | How to find the values to pass to the transforms.Normalize function in PyTorch? Also, where in my code, should I exactly do the transforms.Normalize?
Since normalizing the dataset is a pretty well-known task, I was hoping there would be some sort of script for doing it automatically. At least I couldn't find one in the PyTorch forum.
transformed_dataset = MothLandmarksDataset(csv_file='moth_gt.csv',
root_dir='.',
transform=transforms.Compose([
Rescale(256),
RandomCrop(224),
transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ],
std = [ 0.229, 0.224, 0.225 ]),
ToTensor()
]))
for i in range(len(transformed_dataset)):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['landmarks'].size())
if i == 3:
break
I know these current values don't pertain to my dataset and pertain to ImageNet but using them I actually get an error:
TypeError Traceback (most recent call last)
<ipython-input-81-eb8dc46e0284> in <module>
10
11 for i in range(len(transformed_dataset)):
---> 12 sample = transformed_dataset[i]
13
14 print(i, sample['image'].size(), sample['landmarks'].size())
<ipython-input-48-9d04158922fb> in __getitem__(self, idx)
30
31 if self.transform:
---> 32 sample = self.transform(sample)
33
34 return sample
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
59 def __call__(self, img):
60 for t in self.transforms:
---> 61 img = t(img)
62 return img
63
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, tensor)
210 Tensor: Normalized Tensor image.
211 """
--> 212 return F.normalize(tensor, self.mean, self.std, self.inplace)
213
214 def __repr__(self):
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py in normalize(tensor, mean, std, inplace)
278 """
279 if not torch.is_tensor(tensor):
--> 280 raise TypeError('tensor should be a torch tensor. Got {}.'.format(type(tensor)))
281
282 if tensor.ndimension() != 3:
TypeError: tensor should be a torch tensor. Got <class 'dict'>.
So basically three questions:
How can I find the similar values as in ImageNet mean and std for my own custom dataset?
How to pass these values and where? I assume I should do it in transforms.Compose method but I might be wrong.
I assume I should apply Normalize to my entire dataset not just the training set, am I right?
Update:
Trying the provided solution here didn't work for me: https://discuss.pytorch.org/t/about-normalization-using-pre-trained-vgg16-networks/23560/6?u=mona_jalal
mean = 0.
std = 0.
nb_samples = 0.
for data in dataloader:
print(type(data))
batch_samples = data.size(0)
data.shape(0)
data = data.view(batch_samples, data.size(1), -1)
mean += data.mean(2).sum(0)
std += data.std(2).sum(0)
nb_samples += batch_samples
mean /= nb_samples
std /= nb_samples
error is:
<class 'dict'>
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-51-e8ba3c8718bb> in <module>
5 for data in dataloader:
6 print(type(data))
----> 7 batch_samples = data.size(0)
8
9 data.shape(0)
AttributeError: 'dict' object has no attribute 'size'
this is print(data) result:
{'image': tensor([[[[0.2961, 0.2941, 0.2941, ..., 0.2460, 0.2456, 0.2431],
[0.2953, 0.2977, 0.2980, ..., 0.2442, 0.2431, 0.2431],
[0.2941, 0.2941, 0.2980, ..., 0.2471, 0.2471, 0.2448],
...,
[0.3216, 0.3216, 0.3216, ..., 0.2482, 0.2471, 0.2471],
[0.3216, 0.3241, 0.3253, ..., 0.2471, 0.2471, 0.2450],
[0.3216, 0.3216, 0.3216, ..., 0.2471, 0.2452, 0.2431]],
[[0.2961, 0.2941, 0.2941, ..., 0.2460, 0.2456, 0.2431],
[0.2953, 0.2977, 0.2980, ..., 0.2442, 0.2431, 0.2431],
[0.2941, 0.2941, 0.2980, ..., 0.2471, 0.2471, 0.2448],
...,
[0.3216, 0.3216, 0.3216, ..., 0.2482, 0.2471, 0.2471],
[0.3216, 0.3241, 0.3253, ..., 0.2471, 0.2471, 0.2450],
[0.3216, 0.3216, 0.3216, ..., 0.2471, 0.2452, 0.2431]],
[[0.2961, 0.2941, 0.2941, ..., 0.2460, 0.2456, 0.2431],
[0.2953, 0.2977, 0.2980, ..., 0.2442, 0.2431, 0.2431],
[0.2941, 0.2941, 0.2980, ..., 0.2471, 0.2471, 0.2448],
...,
[0.3216, 0.3216, 0.3216, ..., 0.2482, 0.2471, 0.2471],
[0.3216, 0.3241, 0.3253, ..., 0.2471, 0.2471, 0.2450],
[0.3216, 0.3216, 0.3216, ..., 0.2471, 0.2452, 0.2431]]],
[[[0.3059, 0.3093, 0.3140, ..., 0.3373, 0.3363, 0.3345],
[0.3059, 0.3093, 0.3165, ..., 0.3412, 0.3389, 0.3373],
[0.3098, 0.3131, 0.3176, ..., 0.3450, 0.3412, 0.3412],
...,
[0.2931, 0.2966, 0.2931, ..., 0.2549, 0.2539, 0.2510],
[0.2902, 0.2902, 0.2902, ..., 0.2510, 0.2510, 0.2502],
[0.2864, 0.2900, 0.2863, ..., 0.2510, 0.2510, 0.2510]],
[[0.3059, 0.3093, 0.3140, ..., 0.3373, 0.3363, 0.3345],
[0.3059, 0.3093, 0.3165, ..., 0.3412, 0.3389, 0.3373],
[0.3098, 0.3131, 0.3176, ..., 0.3450, 0.3412, 0.3412],
...,
[0.2931, 0.2966, 0.2931, ..., 0.2549, 0.2539, 0.2510],
[0.2902, 0.2902, 0.2902, ..., 0.2510, 0.2510, 0.2502],
[0.2864, 0.2900, 0.2863, ..., 0.2510, 0.2510, 0.2510]],
[[0.3059, 0.3093, 0.3140, ..., 0.3373, 0.3363, 0.3345],
[0.3059, 0.3093, 0.3165, ..., 0.3412, 0.3389, 0.3373],
[0.3098, 0.3131, 0.3176, ..., 0.3450, 0.3412, 0.3412],
...,
[0.2931, 0.2966, 0.2931, ..., 0.2549, 0.2539, 0.2510],
[0.2902, 0.2902, 0.2902, ..., 0.2510, 0.2510, 0.2502],
[0.2864, 0.2900, 0.2863, ..., 0.2510, 0.2510, 0.2510]]],
[[[0.2979, 0.2980, 0.3015, ..., 0.2825, 0.2784, 0.2784],
[0.2980, 0.2980, 0.2980, ..., 0.2830, 0.2764, 0.2795],
[0.2980, 0.2980, 0.3012, ..., 0.2827, 0.2814, 0.2797],
...,
[0.3282, 0.3293, 0.3294, ..., 0.2238, 0.2235, 0.2235],
[0.3255, 0.3255, 0.3255, ..., 0.2240, 0.2235, 0.2229],
[0.3225, 0.3255, 0.3255, ..., 0.2216, 0.2235, 0.2223]],
[[0.2979, 0.2980, 0.3015, ..., 0.2825, 0.2784, 0.2784],
[0.2980, 0.2980, 0.2980, ..., 0.2830, 0.2764, 0.2795],
[0.2980, 0.2980, 0.3012, ..., 0.2827, 0.2814, 0.2797],
...,
[0.3282, 0.3293, 0.3294, ..., 0.2238, 0.2235, 0.2235],
[0.3255, 0.3255, 0.3255, ..., 0.2240, 0.2235, 0.2229],
[0.3225, 0.3255, 0.3255, ..., 0.2216, 0.2235, 0.2223]],
[[0.2979, 0.2980, 0.3015, ..., 0.2825, 0.2784, 0.2784],
[0.2980, 0.2980, 0.2980, ..., 0.2830, 0.2764, 0.2795],
[0.2980, 0.2980, 0.3012, ..., 0.2827, 0.2814, 0.2797],
...,
[0.3282, 0.3293, 0.3294, ..., 0.2238, 0.2235, 0.2235],
[0.3255, 0.3255, 0.3255, ..., 0.2240, 0.2235, 0.2229],
[0.3225, 0.3255, 0.3255, ..., 0.2216, 0.2235, 0.2223]]]],
dtype=torch.float64), 'landmarks': tensor([[[160.2964, 98.7339],
[223.0788, 72.5067],
[ 82.4163, 70.3733],
[152.3213, 137.7867]],
[[198.3194, 74.4341],
[273.7188, 118.7733],
[117.7113, 80.8000],
[182.0750, 107.2533]],
[[137.4789, 92.8523],
[174.9463, 40.3467],
[ 57.3013, 59.1200],
[129.3375, 131.6533]]], dtype=torch.float64)}
dataloader = DataLoader(transformed_dataset, batch_size=3,
shuffle=True, num_workers=4)
and
transformed_dataset = MothLandmarksDataset(csv_file='moth_gt.csv',
root_dir='.',
transform=transforms.Compose(
[
Rescale(256),
RandomCrop(224),
ToTensor()#,
##transforms.Normalize(mean = [ 0.485, 0.456, 0.406 ],
## std = [ 0.229, 0.224, 0.225 ])
]
)
)
and
class MothLandmarksDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.landmarks_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.landmarks_frame.iloc[idx, 1:]
landmarks = np.array([landmarks])
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
| Source code errors
How to pass these values and where? I assume I should do it in
transforms.Compose method but I might be wrong.
In MothLandmarksDataset it is no wonder it is not working, as you are trying to pass a dict (sample) to torchvision.transforms, which requires either a torch.Tensor or a PIL.Image as input. Here, to be exact:
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
You could pass sample["image"] into it, although you shouldn't. Applying this operation only to sample["image"] would break its relation to landmarks. What you should be after is something like the albumentations library (see here), which can transform the image and landmarks in the same way and preserve their relation.
Also there is no Rescale transform in torchvision, maybe you meant Resize?
Mean and variance for normalization
The provided code is fine, but you have to unpack your data into torch.Tensors like this:
mean = 0.0
std = 0.0
nb_samples = 0.0
for data in dataloader:
images, landmarks = data["image"], data["landmarks"]
batch_samples = images.size(0)
images_data = images.view(batch_samples, images.size(1), -1)
mean += images_data.mean(2).sum(0)
std += images_data.std(2).sum(0)
nb_samples += batch_samples
mean /= nb_samples
std /= nb_samples
How to pass these values and where? I assume I should do it in
transforms.Compose method but I might be wrong.
Those values should be passed to torchvision.transforms.Normalize, applied only to sample["image"], not to sample["landmarks"].
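For example, inside __getitem__, a minimal sketch (assuming the image is already a (C, H, W) tensor at that point, and mean/std are the per-channel values computed above):
normalize = torchvision.transforms.Normalize(mean=mean.tolist(), std=std.tolist())
sample["image"] = normalize(sample["image"])  # leave sample["landmarks"] untouched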
I assume I should apply Normalize to my entire dataset not just the
training set, am I right?
You should calculate the normalization values across the training dataset and apply those calculated values to the validation and test sets as well.
| https://stackoverflow.com/questions/64345289/ |
5.51 GiB already allocated; 417.00 MiB free; 5.53 GiB reserved in total by PyTorch CUDA out of memory | I am not sure why running this cell throws CUDA out of memory error and how I could fix it? Everytime I have to do a kill -9 PID from the list of $ nvidi-smi jupyter notebook from which this cell is being run. And still after a fresh start I have the same problem.
#torch.autograd.set_detect_anomaly(True)
network = Network()
network.cuda()
criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.0001)
loss_min = np.inf
num_epochs = 10
start_time = time.time()
for epoch in range(1,num_epochs+1):
loss_train = 0
loss_test = 0
running_loss = 0
network.train()
print('size of train loader is: ', len(train_loader))
for step in range(1,len(train_loader)+1):
##images, landmarks = next(iter(train_loader))
##print(type(images))
batch = next(iter(train_loader))
images, landmarks = batch['image'], batch['landmarks']
images = images.permute(0,3,1,2)
images = images.cuda()
#RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 600, 800, 3] to have 3 channels, but got 600 channels instead
landmarks = landmarks.view(landmarks.size(0),-1).cuda()
print('images shape: ', images.shape)
print('landmarks shape: ', landmarks.shape)
##images = torchvision.transforms.Normalize(images)
##landmarks = torchvision.transforms.Normalize(landmarks)
predictions = network(images)
# clear all the gradients before calculating them
optimizer.zero_grad()
# find the loss for the current step
loss_train_step = criterion(predictions.float(), landmarks.float())
print("type(loss_train_step) is: ", type(loss_train_step))
print("loss_train_step.dtype is: ",loss_train_step.dtype)
##loss_train_step = loss_train_step.to(torch.float32)
# calculate the gradients
loss_train_step.backward()
# update the parameters
optimizer.step()
loss_train += loss_train_step.item()
running_loss = loss_train/step
print_overwrite(step, len(train_loader), running_loss, 'train')
network.eval()
with torch.no_grad():
for step in range(1,len(test_loader)+1):
batch = next(iter(train_loader))
images, landmarks = batch['image'], batch['landmarks']
images = images.cuda()
landmarks = landmarks.view(landmarks.size(0),-1).cuda()
predictions = network(images)
# find the loss for the current step
loss_test_step = criterion(predictions, landmarks)
loss_test += loss_test_step.item()
running_loss = loss_test/step
print_overwrite(step, len(test_loader), running_loss, 'Testing')
loss_train /= len(train_loader)
loss_test /= len(test_loader)
print('\n--------------------------------------------------')
print('Epoch: {} Train Loss: {:.4f} Test Loss: {:.4f}'.format(epoch, loss_train, loss_test))
print('--------------------------------------------------')
if loss_test < loss_min:
loss_min = loss_test
torch.save(network.state_dict(), '../moth_landmarks.pth')
print("\nMinimum Test Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs))
print('Model Saved\n')
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))
The complete log is:
size of train loader is: 12
images shape: torch.Size([64, 3, 600, 800])
landmarks shape: torch.Size([64, 8])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-efa8f1a4056e> in <module>
44 ##landmarks = torchvision.transforms.Normalize(landmarks)
45
---> 46 predictions = network(images)
47
48 # clear all the gradients before calculating them
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
<ipython-input-11-46116d2a7101> in forward(self, x)
10 def forward(self, x):
11 x = x.float()
---> 12 out = self.model(x)
13 return out
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/torchvision/models/resnet.py in forward(self, x)
218
219 def forward(self, x):
--> 220 return self._forward_impl(x)
221
222
~/anaconda3/lib/python3.7/site-packages/torchvision/models/resnet.py in _forward_impl(self, x)
206 x = self.maxpool(x)
207
--> 208 x = self.layer1(x)
209 x = self.layer2(x)
210 x = self.layer3(x)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/torchvision/models/resnet.py in forward(self, x)
57 identity = x
58
---> 59 out = self.conv1(x)
60 out = self.bn1(out)
61 out = self.relu(out)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
417
418 def forward(self, input: Tensor) -> Tensor:
--> 419 return self._conv_forward(input, self.weight)
420
421 class Conv3d(_ConvNd):
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
414 _pair(0), self.dilation, self.groups)
415 return F.conv2d(input, weight, self.bias, self.stride,
--> 416 self.padding, self.dilation, self.groups)
417
418 def forward(self, input: Tensor) -> Tensor:
RuntimeError: CUDA out of memory. Tried to allocate 470.00 MiB (GPU 0; 7.80 GiB total capacity; 5.51 GiB already allocated; 417.00 MiB free; 5.53 GiB reserved in total by PyTorch)
Here's $ nvidia-smi output right after running this cell:
$ nvidia-smi
Tue Oct 13 23:14:01 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N/A |
| N/A 47C P8 13W / N/A | 7609MiB / 7982MiB | 5% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1424 G /usr/lib/xorg/Xorg 733MiB |
| 0 N/A N/A 1767 G /usr/bin/gnome-shell 426MiB |
| 0 N/A N/A 6420 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 6949 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 8888 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 10610 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 14943 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 16181 C ...mona/anaconda3/bin/python 6429MiB |
+-----------------------------------------------------------------------------+
I have an NVIDIA GeForce RTX 2070 GPU.
I also swapped these two lines, with no luck; still the same error:
images = images.permute(0,3,1,2)
images = images.cuda()
I also checked nvidia-smi right after running all the cells above this one, and none of them is causing this CUDA out of memory error.
| Changing the batch_size from 64 to 2 here solved the problem:
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=2, shuffle=True, num_workers=4)
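If a batch size of 2 hurts convergence, one common workaround (my suggestion, not part of the original fix) is gradient accumulation: keep the batch small but only step the optimizer every k batches, for an effective batch of 2 * k. A sketch using the names from the question:
accumulation_steps = 32  # effective batch size = 2 * 32 = 64
optimizer.zero_grad()
for step, batch in enumerate(train_loader, 1):
    images, landmarks = batch['image'], batch['landmarks']
    images = images.permute(0, 3, 1, 2).cuda()
    landmarks = landmarks.view(landmarks.size(0), -1).cuda()
    loss = criterion(network(images).float(), landmarks.float())
    (loss / accumulation_steps).backward()  # scale so accumulated gradients average out
    if step % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()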
*Thanks a lot to Sepehr Janghorbani for helping me fix this issue.
| https://stackoverflow.com/questions/64346112/ |
How can I add zeroes to the end of a PyTorch tensor based on a modulo operation? | I have a tensor with size: torch.Size([1, 305760])
Since 305760 is not divisible by 400, I want to change the size to: [1, 306000] filling the remaining spaces with 0. How can I do this with PyTorch?
| a = torch.randn(1, 305760)
N = a.shape[1]
M = 400
pad = (M - N % M) % M  # zeros needed to reach the next multiple of M (0 if already divisible)
b = torch.zeros(a.shape[0], pad)
a = torch.cat((a, b), 1)  # a.shape is now torch.Size([1, 306000])
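A shorter alternative (my addition, not from the original answer) is torch.nn.functional.pad, which pads the last dimension when given a (left, right) tuple:
import torch
import torch.nn.functional as F

a = torch.randn(1, 305760)
M = 400
pad = (M - a.shape[1] % M) % M
a = F.pad(a, (0, pad))  # zero-pad only the right side of the last dimension
print(a.shape)  # torch.Size([1, 306000])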
| https://stackoverflow.com/questions/64354309/ |
retain_graph problem with GRU in Pytorch 1.6 | I am aware that, when calling loss.backward(), we need to specify retain_graph=True if there are multiple networks and multiple loss functions to optimize each network separately. But even with (or without) specifying this parameter, I am getting errors. The following is an MWE to reproduce the issue (on PyTorch 1.6).
import torch
from torch import nn
from torch import optim
torch.autograd.set_detect_anomaly(True)
class GRU1(nn.Module):
def __init__(self):
super(GRU1, self).__init__()
self.brnn = nn.GRU(input_size=2, bidirectional=True, num_layers=1, hidden_size=100)
def forward(self, x):
return self.brnn(x)
class GRU2(nn.Module):
def __init__(self):
super(GRU2, self).__init__()
self.brnn = nn.GRU(input_size=200, bidirectional=True, num_layers=1, hidden_size=1)
def forward(self, x):
return self.brnn(x)
gru1 = GRU1()
gru2 = GRU2()
gru1_opt = optim.Adam(gru1.parameters())
gru2_opt = optim.Adam(gru2.parameters())
criterion = nn.MSELoss()
for i in range(100):
gru1_opt.zero_grad()
gru2_opt.zero_grad()
vector = torch.randn((15, 100, 2))
gru1_output, _ = gru1(vector) # (15, 100, 200)
loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
loss_gru1.backward(retain_graph=True)
gru1_opt.step()
gru1_output, _ = gru1(vector) # (15, 100, 200)
gru2_output, _ = gru2(gru1_output) # (15, 100, 2)
loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
loss_gru2.backward(retain_graph=True)
gru2_opt.step()
print(f"GRU1 loss: {loss_gru1.item()}, GRU2 loss: {loss_gru2.item()}")
With retain_graph set to True I get the error
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 300]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The error without the parameter is
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
which is expected.
Please point out what needs to be changed in the above code for it to begin training. Any help is appreciated.
| In such a case, one can detach the computation graph to exclude the parameters that don't need to be optimized. In this case, the computation graph should be detached after the second forward pass with gru1 i.e.
....
gru1_opt.step()
gru1_output, _ = gru1(vector)
gru1_output = gru1_output.detach()
....
This way, you won't "try to backward through the graph a second time" as the error mentioned.
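With the detach in place, both retain_graph=True flags can be dropped as well, since neither backward pass reuses a graph anymore. A sketch of the reworked loop body (mine, based on the answer's fix):
loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
loss_gru1.backward()  # retain_graph no longer needed
gru1_opt.step()

gru1_output, _ = gru1(vector)
gru1_output = gru1_output.detach()  # gradients from loss_gru2 stop here
gru2_output, _ = gru2(gru1_output)
loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
loss_gru2.backward()
gru2_opt.step()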
| https://stackoverflow.com/questions/64355112/ |
How can I chunk a PyTorch tensor into a specified bucket size with overlap? | Specifically, I have a tensor of shape: torch.Size([1, 16])
I want to bucket this into 7 buckets (of 4 each). Example:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
should become:
[[1, 2, 3, 4],
[3, 4, 5, 6],
[5, 6, 7, 8],
[7, 8, 9, 10],
[9, 10, 11, 12],
[11, 12, 13, 14],
[13, 14, 15, 16],
]
How can I achieve this with PyTorch?
| Looks like an unfold. Note that your tensor has shape (1, 16), so squeeze out the leading dimension first (or unfold along dim 1):
t.squeeze(0).unfold(0, 4, 2)
Output:
tensor([[ 1., 2., 3., 4.],
[ 3., 4., 5., 6.],
[ 5., 6., 7., 8.],
[ 7., 8., 9., 10.],
[ 9., 10., 11., 12.],
[11., 12., 13., 14.],
[13., 14., 15., 16.]])
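In general, t.unfold(dim, size, step) produces (n - size) // step + 1 windows of length size along dim. A quick self-contained sketch (variable names are mine):
import torch

t = torch.arange(1, 17).float()    # shape (16,)
size, step = 4, 2
windows = t.unfold(0, size, step)  # (16 - 4) // 2 + 1 = 7 windows
print(windows.shape)               # torch.Size([7, 4])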
| https://stackoverflow.com/questions/64355180/ |
Pytorch NN error: Expected input batch_size (64) to match target batch_size (30) | I'm currently training a neural network to classify food groups of food images, resulting in 5 output classes. However, whenever I begin training the network, I get this error:
ValueError: Expected input batch_size (64) to match target batch_size (30).
Here's my neural network definition and training code. I'd really appreciate help; I'm relatively new to PyTorch and can't figure out exactly what the problem is in my code. Thanks!
#Define the Network Architechture
model = nn.Sequential(nn.Linear(7500, 4950),
nn.ReLU(),
nn.Linear(4950, 1000),
nn.ReLU(),
nn.Linear(1000, 250),
nn.ReLU(),
nn.Linear(250, 5),
nn.LogSoftmax(dim = 1))
#Define loss
criterion = nn.NLLLoss()
#Initial forward pass
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
print(images.shape)
logits = model(images)
print(logits.size)
loss = criterion(logits, labels)
print(loss)
#Define Optimizer
optimizer = optim.SGD(model.parameters(), lr = 0.01)
Training the Network:
epochs = 10
for e in range(epochs):
running_loss = 0
for image, labels in trainloader:
#Flatten Images
images = images.view(images.shape[0], -1)
#Set gradients to 0
optimizer.zero_grad()
#Output
output = model(images)
loss = criterion(output, labels) #Where the error occurs
loss.backward()
#Gradient Descent Step
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
| Not 100% sure but I think that the error is in this line:
nn.Linear(7500, 4950)
Double-check that 7500 really matches your flattened input size (channels x height x width). nn.Linear needs a fixed number of input features, so if your flattened images aren't exactly 7500 values you'll get shape errors here.
By the way, PyTorch has a flatten function. Use nn.Flatten instead of images.view() so you don't make shape errors and waste time unnecessarily.
Another error, and most likely the actual cause of the batch-size mismatch: you keep mixing up images and image in the for loop. You unpack for image, labels but then flatten and forward images, which is the stale 64-sample batch left over from your initial forward pass, while labels comes from the current batch (30 samples on the last, partial one). Reusing the same variable names like this is really bad practice because it will confuse anyone reading your code.
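A minimal corrected loop might look like this (a sketch; the names follow the question's code):
for images, labels in trainloader:
    # Flatten the *current* batch, not a stale variable from an earlier cell
    images = images.view(images.shape[0], -1)
    optimizer.zero_grad()
    output = model(images)
    loss = criterion(output, labels)
    loss.backward()
    optimizer.step()
    running_loss += loss.item()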
Also, could you give more info on your data? Like is it greyscale, image_size, etc.
| https://stackoverflow.com/questions/64359963/ |
How to implement Batchnorm2d in Pytorch myself? | I'm trying to implement Batchnorm2d() layer with:
class BatchNorm2d(nn.Module):
def __init__(self, num_features):
super(BatchNorm2d, self).__init__()
self.num_features = num_features
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.eps = 1e-5
self.momentum = 0.1
self.first_run = True
def forward(self, input):
# input: [batch_size, num_feature_map, height, width]
device = input.device
if self.training:
mean = torch.mean(input, dim=0, keepdim=True).to(device) # [1, num_feature, height, width]
var = torch.var(input, dim=0, unbiased=False, keepdim=True).to(device) # [1, num_feature, height, width]
if self.first_run:
self.weight = Parameter(torch.randn(input.shape, dtype=torch.float32, device=device), requires_grad=True)
self.bias = Parameter(torch.randn(input.shape, dtype=torch.float32, device=device), requires_grad=True)
self.register_buffer('running_mean', torch.zeros(input.shape).to(input.device))
self.register_buffer('running_var', torch.ones(input.shape).to(input.device))
self.first_run = False
self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
bn_init = (input - mean) / torch.sqrt(var + self.eps)
else:
bn_init = (input - self.running_mean) / torch.sqrt(self.running_var + self.eps)
return self.weight * bn_init + self.bias
But after training & testing I found that the results using my layer are not comparable with the results using nn.BatchNorm2d(). There must be something wrong with it, and I guess the problem relates to initializing the parameters in forward()? I did that because I don't know how to know the shape of the input in __init__(); maybe there is a better way. I don't know how to fix it, please help. Thanks!!
| Got the answer from HERE!
So the shape of weight (and bias) should be (1, num_features, 1, 1), not (1, num_features, width, height).
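For reference, a minimal per-channel sketch of such a layer (my own illustration, not from the linked answer), with affine parameters of shape (1, num_features, 1, 1) and statistics taken over dims (0, 2, 3):
import torch
import torch.nn as nn

class MyBatchNorm2d(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps, self.momentum = eps, momentum
        # per-channel affine parameters, broadcast over N, H, W
        self.weight = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_features, 1, 1))
        self.register_buffer('running_mean', torch.zeros(1, num_features, 1, 1))
        self.register_buffer('running_var', torch.ones(1, num_features, 1, 1))

    def forward(self, x):  # x: (N, C, H, W)
        if self.training:
            mean = x.mean(dim=(0, 2, 3), keepdim=True)
            var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
            with torch.no_grad():  # running stats must stay out of the autograd graph
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.weight * x_hat + self.bias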
| https://stackoverflow.com/questions/64364320/ |
Finding means and stds of a bunch of torch.Tensors (that are converted from ndarray images) | to_tensor = transforms.ToTensor()
img = to_tensor(train_dataset[0]['image'])
img
This converts my image values to between 0 and 1, which is expected. It also converts img, which is an ndarray, to a torch.Tensor.
Previously, without using to_tensor (which I need now), the following code snippet worked (not sure if this is the best way to find the means and stds of the train set); however, it no longer works. How can I make it work?
image_arr = []
for i in range(len(train_dataset)):
image_arr.append(to_tensor(train_dataset[i]['image']))
print(np.mean(image_arr, axis=(0, 1, 2)))
print(np.std(image_arr, axis=(0, 1, 2)))
The error is:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-147-0e007c030629> in <module>
4 image_arr.append(to_tensor(train_dataset[i]['image']))
5
----> 6 print(np.mean(image_arr, axis=(0, 1, 2)))
7 print(np.std(image_arr, axis=(0, 1, 2)))
<__array_function__ internals> in mean(*args, **kwargs)
~/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims)
3333
3334 return _methods._mean(a, axis=axis, dtype=dtype,
-> 3335 out=out, **kwargs)
3336
3337
~/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims)
133
134 def _mean(a, axis=None, dtype=None, out=None, keepdims=False):
--> 135 arr = asanyarray(a)
136
137 is_float16_result = False
~/anaconda3/lib/python3.7/site-packages/numpy/core/_asarray.py in asanyarray(a, dtype, order)
136
137 """
--> 138 return array(a, dtype, copy=False, order=order, subok=True)
139
140
ValueError: only one element tensors can be converted to Python scalars
| Here is a working example:
import torch
from torchvision import transforms
train_dataset = torch.rand(100, 32, 32, 3)
image_arr = []
to_tensor = transforms.ToTensor()
for i in range(len(train_dataset)):
# to tensor will give you a tensor which is emulated here by reading the tensor at i
image_arr.append(train_dataset[i])
print(torch.mean(torch.stack(image_arr, dim=0), dim=(0, 1, 2)))
print(torch.std(torch.stack(image_arr, dim=0), dim=(0, 1, 2)))
What did I do?
I used torch.stack to concatenate the image array into a single torch tensor, and torch.mean and torch.std to compute the stats. I would not recommend converting back to numpy just to evaluate the stats, as it can lead to an unnecessary GPU-to-CPU transfer.
More information on which dimension is the channel:
The above example assumes the last dimension is the channel and the images are 32x32x3 with a batch size of 100. This is usually the case when the image is loaded using PIL (pillow) or numpy: images are then loaded as HWC (height, width, channel). This also seems to be the layout in the question, judging by the code example.
If the image tensor is CHW format, then you should use
print(torch.mean(torch.stack(image_arr, dim=0), dim=(0, 2, 3)))
print(torch.std(torch.stack(image_arr, dim=0), dim=(0, 2, 3)))
Torch tensors are usually CHW format as Conv layers expect CHW format. This is done automatically when the toTensor transform is applied to an image (PIL image). For complete rules see documentation of toTensor here.
| https://stackoverflow.com/questions/64365215/ |
GRU loss decreased to 0.9 but no further, PyTorch | This is the code that I am using for experimenting with a GRU.
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import *
class N(nn.Module):
def __init__(self):
super().__init__()
self.embed = nn.Embedding(5,2)
self.layers = 4
self.gru = nn.GRU(2, 512, self.layers, batch_first=True)
self.bat = nn.BatchNorm1d(4)
self.bat1 = nn.BatchNorm1d(4)
self.bat2 = nn.BatchNorm1d(4)
self.fc = nn.Linear(512,100)
self.fc1 = nn.Linear(100,100)
self.fc2 = nn.Linear(100,5)
self.s = nn.Softmax(dim=-1)
def forward(self,x):
h0 = torch.zeros(self.layers, x.size(0), 512).requires_grad_()
x = self.embed(x)
x,hn = self.gru(x,h0)
x = self.bat(x)
x = self.fc(x)
x = nn.functional.relu(x)
x = self.bat1(x)
x = self.fc1(x)
x = nn.functional.relu(x)
x = self.bat2(x)
x = self.fc2(x)
softmaxed = self.s(x)
return softmaxed
inp = torch.tensor([[4,3,2,1],[2,3,4,1],[4,1,2,3],[1,2,3,4]])
out = torch.tensor([[3,2,1,4],[3,2,4,1],[1,2,3,4],[2,3,4,1]])
k = 0
n = N()
opt = torch.optim.Adam(n.parameters(),lr=0.0001)
while k<10000:
print(inp.shape)
o = n(inp)
o = o.view(-1, o.size(-1))
out = out.view(-1)
loss = nn.functional.cross_entropy(o.view(-1,o.size(-1)),out.view(-1)-1)
acc = ((torch.argmax(o, dim=1) == (out -1)).sum().item() / out.size(0))
if k==10000:
print(torch.argmax(o, dim=1))
print(out-1)
exit()
print(loss,acc)
loss.backward()
opt.step()
opt.zero_grad()
k+=1
print(o[0])
Shortened output:
torch.Size([4, 4])
tensor(0.9593, grad_fn=<NllLossBackward>) 0.9375
torch.Size([4, 4])
tensor(0.9593, grad_fn=<NllLossBackward>) 0.9375
tensor([4.8500e-01, 9.7813e-06, 5.1498e-01, 6.2428e-06, 7.5929e-06],
grad_fn=<SelectBackward>)
The loss is 0.9593 and the accuracy reached 0.9375. For this simple input data, the GRU loss is this big. What is the reason? Is there anything wrong in this code? I used cross_entropy as the loss function and Adam as the optimizer. The learning rate is 0.001. I tried multiple learning rates, but all gave the same final result. I added batch normalization; it sped up the training, but gave the same loss and accuracy. Why does the loss not decrease to 0.2 or so?
| I think it's because you are using the cross entropy loss function, which in PyTorch combines log-softmax and negative log likelihood. Since your model already performs a softmax before returning the output, you end up applying log-softmax on top of a softmax. Because softmax outputs lie in [0, 1], this bounds the loss from below: in the best case (a one-hot softmax output over 5 classes) the loss can only fall to log(4 + e) - 1, roughly 0.905, which is essentially the 0.9593 you're seeing. Try removing the final softmax from your model.
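In other words, return the raw logits from forward and let cross_entropy do the normalization. A minimal illustration (mine, not from the answer):
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5)              # raw model outputs, no softmax applied
target = torch.tensor([0, 1, 2, 3])
loss = F.cross_entropy(logits, target)  # log-softmax + NLL happen inside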
PyTorch documentation for cross entropy loss: https://pytorch.org/docs/stable/nn.functional.html#cross-entropy
| https://stackoverflow.com/questions/64365333/ |
calculating the mean and std on an array of torch tensors | I am trying to calculate the mean and std for an array of torch tensors. My dataset has 720 training images, and each of these images has 4 landmarks, with X and Y representing a 2D point on the image.
to_tensor = transforms.ToTensor()
landmarks_arr = []
for i in range(len(train_dataset)):
landmarks_arr.append(to_tensor(train_dataset[i]['landmarks']))
mean = torch.mean(torch.stack(landmarks_arr, dim=0))#, dim=(0, 2, 3))
std = torch.std(torch.stack(landmarks_arr, dim=0)) #, dim=(0, 2, 3))
print(mean.shape)
print("mean is {} and std is {}".format(mean, std))
Result:
torch.Size([])
mean is nan and std is nan
There are a couple of problems above:
1. Why is to_tensor not converting the values to between 0 and 1?
2. How do I calculate the mean correctly?
3. Should I divide by 255, and where?
I have:
len(landmarks_arr)
720
and
landmarks_arr[0].shape
torch.Size([1, 4, 2])
and
landmarks_arr[0]
tensor([[[502.2869, 240.4949],
[688.0000, 293.0000],
[346.0000, 317.0000],
[560.8283, 322.6830]]], dtype=torch.float64)
|
1. From the pytorch docs of ToTensor():
Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a
torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the PIL Image
belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the
numpy.ndarray has dtype = np.uint8
In the other cases, tensors are returned without scaling.
Since your Landmark values are not a PIL image, and not within [0, 255], no scaling is applied.
2. Your calculation appears correct. It seems that you might have some NaN values within your data.
You can try something like
for i in range(len(train_dataset)):
landmarks = to_tensor(train_dataset[i]['landmarks'])
landmarks[landmarks != landmarks] = 0 # this will set all nan to zero
landmarks_arr.append(landmarks)
within your loop. Or assert for nan within the loop to find the culprit(s):
for i in range(len(train_dataset)):
landmarks = to_tensor(train_dataset[i]['landmarks'])
assert(not torch.isnan(landmarks).any()), f'nan encountered in sample {i}' # will trigger if a landmark contains nan
landmarks_arr.append(landmarks)
3. No, see 1). You could, however, divide by the max coordinates of the landmarks to constrain them to [0, 1] if you so desire.
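If you do want that scaling, a sketch (assuming image width W and height H, which the answer does not specify):
W, H = 800, 600  # hypothetical image size
landmarks = landmarks / torch.tensor([W, H])  # scales x by the width, y by the height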
| https://stackoverflow.com/questions/64366121/ |
Increase of GPU memory usage during training | I was training the network on usual MNIST dataset, and encountered the next problem:
when i start to add valid_metrics to a loss_list and accuracy_list, amount of GPU memory that is being used starts increasing with every 1 or 2 epochs.
This is the code of train_loop:
def train_model(model: torch.nn.Module,
train_dataset: torch.utils.data.Dataset,
valid_dataset: torch.utils.data.Dataset,
loss_function: torch.nn.Module = torch.nn.CrossEntropyLoss(),
optimizer_class: Type[torch.optim.Optimizer] = torch.optim,
optimizer_params: Dict = {},
initial_lr = 0.01,
lr_scheduler_class: Any = torch.optim.lr_scheduler.ReduceLROnPlateau,
lr_scheduler_params: Dict = {},
batch_size = 64,
max_epochs = 1000,
early_stopping_patience = 20):
optimizer = torch.optim.Adam(model.parameters(), lr=initial_lr, **optimizer_params)
lr_scheduler = lr_scheduler_class(optimizer, **lr_scheduler_params)
train_loader = torch.utils.data.DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=batch_size)
best_valid_loss = None
best_epoch = None
loss_list = list()
accuracy_list = list()
for epoch in range(max_epochs):
print(f'Epoch {epoch}')
start = timer()
train_single_epoch(model, optimizer, loss_function, train_loader)
valid_metrics = validate_single_epoch(model, loss_function, valid_loader)
loss_list.append(valid_metrics['loss'])
accuracy_list.append(valid_metrics['accuracy'])
print('time:', timer() - start)
print(f'Validation metrics: \n{valid_metrics}')
lr_scheduler.step(valid_metrics['loss'])
if best_valid_loss is None or best_valid_loss > valid_metrics['loss']:
print(f'Best model yet, saving')
best_valid_loss = valid_metrics['loss']
best_epoch = epoch
torch.save(model, './best_model.pth')
if epoch - best_epoch > early_stopping_patience:
print('Early stopping triggered')
return loss_list, accuracy_list
and the code of validate_single_epoch:
def validate_single_epoch(model: torch.nn.Module,
loss_function: torch.nn.Module,
data_loader: torch.utils.data.DataLoader):
loss_total = 0
accuracy_total = 0
for data in data_loader:
X, y = data
X, y = X.view(-1, 784), y.to(device)
X = X.to(device)
output = model(X)
loss = loss_function(output, y)
loss_total += loss
y_pred = output.argmax(dim = 1, keepdim=True).to(device)
accuracy_total += y_pred.eq(y.view_as(y_pred)).sum().item()
loss_avg = loss_total / len(data_loader.dataset)
accuracy_avg = 100.0 * accuracy_total / len(data_loader.dataset)
return {'loss' : loss_avg, 'accuracy' : accuracy_avg}
I use a GeForce MX250 as the GPU.
| The problem is likely that gradients are being computed in the validation loop, and loss_total += loss then keeps every batch's computation graph alive. To solve that, perhaps the easiest way is to wrap the validation call in a no_grad context:
with torch.no_grad():
valid_metrics = validate_single_epoch(model, loss_function, valid_loader)
If you prefer, you can also decorate the validate_single_epoch(...) with @torch.no_grad():
@torch.no_grad()
def validate_single_epoch(...):
# ...
Not related to your problem, but note that you're running the model in training mode during validation, which may not be what you want. Perhaps there is a missing call to model.eval() in the validation function.
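A related detail (my addition, not part of the original answer): even with gradients enabled, accumulating a plain Python number instead of the loss tensor avoids chaining the graphs together:
loss_total += loss.item()  # a float: no autograd graph attached, nothing kept alive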
| https://stackoverflow.com/questions/64369385/ |
Pytorch couldn't build multi scaled kernel nested model | I'm trying to create a modified MNIST model which takes input 1x28x28 MNIST tensor images, and it kind of branches into different models with different sized kernels, and accumulates at the end, so as to give a multi-scale-kerneled response in the spatial domain of the images. I'm worried about the model, since, I'm unable to construct it.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data
from torchvision import datasets, transforms
import torch.nn.functional as F
import timeit
import unittest
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)
# check availability of GPU and set the device accordingly
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# define a transforms for preparing the dataset
transform = transforms.Compose([
transforms.ToTensor(), # convert the image to a pytorch tensor
transforms.Normalize((0.1307,), (0.3081,)) # normalise the images with mean and std of the dataset
])
# Load the MNIST training, test datasets using `torchvision.datasets.MNIST` using the transform defined above
train_dataset = datasets.MNIST('./data',train=True,transform=transform,download=True)
test_dataset = datasets.MNIST('./data',train=False,transform=transform,download=True)
# create dataloaders for training and test datasets
# use a batch size of 32 and set shuffle=True for the training set
train_dataloader = Data.DataLoader(dataset=train_dataset, batch_size=32, shuffle=True)
test_dataloader = Data.DataLoader(dataset=test_dataset, batch_size=32, shuffle=True)
# My Net
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# define a conv layer with output channels as 16, kernel size of 3 and stride of 1
self.conv11 = nn.Conv2d(1, 16, 3, 1) # Input = 1x28x28 Output = 16x26x26
self.conv12 = nn.Conv2d(1, 16, 5, 1) # Input = 1x28x28 Output = 16x24x24
self.conv13 = nn.Conv2d(1, 16, 7, 1) # Input = 1x28x28 Output = 16x22x22
# define a conv layer with output channels as 32, kernel size of 3 and stride of 1
self.conv21 = nn.Conv2d(16, 32, 3, 1) # Input = 16x26x26 Output = 32x24x24
self.conv22 = nn.Conv2d(16, 32, 5, 1) # Input = 16x24x24 Output = 32x20x20
self.conv23 = nn.Conv2d(16, 32, 7, 1) # Input = 16x22x22 Output = 32x16x16
# define a conv layer with output channels as 64, kernel size of 3 and stride of 1
self.conv31 = nn.Conv2d(32, 64, 3, 1) # Input = 32x24x24 Output = 64x22x22
self.conv32 = nn.Conv2d(32, 64, 5, 1) # Input = 32x20x20 Output = 64x16x16
self.conv33 = nn.Conv2d(32, 64, 7, 1) # Input = 32x16x16 Output = 64x10x10
# define a max pooling layer with kernel size 2
self.maxpool = nn.MaxPool2d(2), # Output = 64x11x11
# define dropout layer with a probability of 0.25
self.dropout1 = nn.Dropout(0.25)
# define dropout layer with a probability of 0.5
self.dropout2 = nn.Dropout(0.5)
# define a linear(dense) layer with 128 output features
self.fc11 = nn.Linear(64*11*11, 128)
self.fc12 = nn.Linear(64*8*8, 128) # after maxpooling 2x2
self.fc13 = nn.Linear(64*5*5, 128)
# define a linear(dense) layer with output features corresponding to the number of classes in the dataset
self.fc21 = nn.Linear(128, 10)
self.fc22 = nn.Linear(128, 10)
self.fc23 = nn.Linear(128, 10)
self.fc33 = nn.Linear(30,10)
def forward(self, x1):
# Use the layers defined above in a sequential way (folow the same as the layer definitions above) and
# write the forward pass, after each of conv1, conv2, conv3 and fc1 use a relu activation.
x = F.relu(self.conv11(x1))
x = F.relu(self.conv21(x))
x = F.relu(self.maxpool(self.conv31(x)))
#x = torch.flatten(x, 1)
x = x.view(-1,64*11*11)
x = self.dropout1(x)
x = F.relu(self.fc11(x))
x = self.dropout2(x)
x = self.fc21(x)
y = F.relu(self.conv12(x1))
y = F.relu(self.conv22(y))
y = F.relu(self.maxpool(self.conv32(y)))
#x = torch.flatten(x, 1)
y = y.view(-1,64*8*8)
y = self.dropout1(y)
y = F.relu(self.fc12(y))
y = self.dropout2(y)
y = self.fc22(y)
z = F.relu(self.conv13(x1))
z = F.relu(self.conv23(z))
z = F.relu(self.maxpool(self.conv33(z)))
#x = torch.flatten(x, 1)
z = z.view(-1,64*5*5)
z = self.dropout1(z)
z = F.relu(self.fc13(z))
z = self.dropout2(z)
z = self.fc23(z)
out = self.fc33(torch.cat((x, y, z), 0))
output = F.log_softmax(out, dim=1)
return output
import unittest
class TestImplementations(unittest.TestCase):
# Dataloading tests
def test_dataset(self):
self.dataset_classes = ['0 - zero',
'1 - one',
'2 - two',
'3 - three',
'4 - four',
'5 - five',
'6 - six',
'7 - seven',
'8 - eight',
'9 - nine']
self.assertTrue(train_dataset.classes == self.dataset_classes)
self.assertTrue(train_dataset.train == True)
def test_dataloader(self):
self.assertTrue(train_dataloader.batch_size == 32)
self.assertTrue(test_dataloader.batch_size == 32)
def test_total_parameters(self):
model = Net().to(device)
#self.assertTrue(sum(p.numel() for p in model.parameters()) == 1015946)
suite = unittest.TestLoader().loadTestsFromModule(TestImplementations())
unittest.TextTestRunner().run(suite)
def train(model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# send the image, target to the device
data, target = data.to(device), target.to(device)
# flush out the gradients stored in optimizer
optimizer.zero_grad()
# pass the image to the model and assign the output to variable named output
output = model(data)
# calculate the loss (use nll_loss in pytorch)
loss = F.nll_loss(output, target)
# do a backward pass
loss.backward()
# update the weights
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
# send the image, target to the device
data, target = data.to(device), target.to(device)
# pass the image to the model and assign the output to variable named output
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
model = Net().to(device)
## Define Adam Optimiser with a learning rate of 0.01
optimizer = torch.optim.Adam(model.parameters(),lr=0.01)
start = timeit.default_timer()
for epoch in range(1, 11):
train(model, device, train_dataloader, optimizer, epoch)
test(model, device, test_dataloader)
stop = timeit.default_timer()
print('Total time taken: {} seconds'.format(int(stop - start)) )
Here is my full code. I couldn't understand what could possibly go wrong...
It is giving
<ipython-input-72-194680537dcc> in forward(self, x1)
46 x = F.relu(self.conv11(x1))
47 x = F.relu(self.conv21(x))
---> 48 x = F.relu(self.maxpool(self.conv31(x)))
49 #x = torch.flatten(x, 1)
50 x = x.view(-1,64*11*11)
TypeError: 'tuple' object is not callable
Error.
P.S.: Pytorch Noob here.
| You have mistakenly placed a comma at the end of the line where you define self.maxpool: self.maxpool = nn.MaxPool2d(2), # Output = 64x11x11. See it?
This comma makes self.maxpool a tuple instead of a torch.nn.modules.pooling.MaxPool2d. Drop the comma at the end and this error is fixed.
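A quick demonstration of the pitfall (my own snippet):
import torch.nn as nn

m = nn.MaxPool2d(2),  # trailing comma: m is the tuple (MaxPool2d(...),)
print(type(m))        # <class 'tuple'>
m = nn.MaxPool2d(2)   # without the comma, m is a callable module
print(type(m))        # <class 'torch.nn.modules.pooling.MaxPool2d'>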
| https://stackoverflow.com/questions/64373287/ |
CIFAR10 dataloader sampler split | I am trying to split the training data of CIFAR10 so that the last 5000 images of the training set are used for validation. My code:
size = len(CIFAR10_training)
dataset_indices = list(range(size))
val_index = int(np.floor(0.9 * size))
train_idx, val_idx = dataset_indices[:val_index], dataset_indices[val_index:]
train_sampler = SubsetRandomSampler(train_idx)
val_sampler = SubsetRandomSampler(val_idx)
train_dataloader = torch.utils.data.DataLoader(CIFAR10_training,
batch_size=config['batch_size'],
shuffle=False, sampler = train_sampler)
valid_dataloader = torch.utils.data.DataLoader(CIFAR10_training,
batch_size=config['batch_size'],
shuffle=False, sampler = val_sampler)
print(len(train_dataloader.dataset), len(valid_dataloader.dataset))
But the last print statement prints 50000 and 10000. Should it not be 45000 and 5000?
When I print train_idx and val_idx, they show the right values ([0:44999], [45000:49999]).
Is there anything wrong with my code?
| I cannot replicate your results: when I execute your code, the print statement outputs the same number twice, namely the number of elements in CIFAR10_training. So I guess you made a mistake when copying your code, and valid_dataloader is actually given CIFAR10_test (or something like that) as a parameter. In the following, I'm going to assume that this is the case, and that with the code as posted your print outputs (50000, 50000), which is the size of the training part of PyTorch's CIFAR10 dataset.
Then it is completely expected, and no, it should not output (45000, 5000). You are asking for the lengths of train_dataloader.dataset and valid_dataloader.dataset, i.e. the lengths of the underlying datasets. For both your loaders (as posted), this dataset is CIFAR10_training, so you will get the size of that dataset (i.e. 50000) both times.
Using len(train_dataloader) won't help either, because that would yield the number of batches in your loader (approximately 45000/batch_size).
If you need to know the size of your splits, then you have to compute the length of your samplers:
print(len(train_dataloader.sampler), len(valid_dataloader.sampler))
Besides this, your code is fine, you are correctly splitting your data.
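As an aside (not part of the original answer): torch.utils.data.random_split produces two Dataset objects whose lengths are the split sizes, so len(loader.dataset) then behaves the way you expected. Note that it splits randomly rather than taking the last 5000 images:
from torch.utils.data import DataLoader, random_split

train_set, val_set = random_split(CIFAR10_training, [45000, 5000])
train_dataloader = DataLoader(train_set, batch_size=config['batch_size'], shuffle=True)
valid_dataloader = DataLoader(val_set, batch_size=config['batch_size'])
print(len(train_dataloader.dataset), len(valid_dataloader.dataset))  # 45000 5000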
| https://stackoverflow.com/questions/64379339/ |
ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8]) | I am trying to normalize my targets (landmarks) here, where each image has 4 landmarks with x and y values for each landmark (keypoint). The batch size here is 8.
network = Network()
network.cuda()
criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.0001)
loss_min = np.inf
num_epochs = 1
start_time = time.time()
for epoch in range(1,num_epochs+1):
loss_train = 0
loss_test = 0
running_loss = 0
network.train()
print('size of train loader is: ', len(train_loader))
for step in range(1,len(train_loader)+1):
batch = next(iter(train_loader))
images, landmarks = batch['image'], batch['landmarks']
#RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 600, 800, 3] to have 3 channels, but got 600 channels instead
#using permute below to fix the above error
images = images.permute(0,3,1,2)
images = images.cuda()
landmarks = landmarks.view(landmarks.size(0),-1).cuda()
norm_image = transforms.Normalize([0.3809, 0.3810, 0.3810], [0.1127, 0.1129, 0.1130])
for image in images:
image = image.float()
##image = to_tensor(image) #TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>
image = norm_image(image)
norm_landmarks = transforms.Normalize(0.4949, 0.2165)
landmarks = norm_landmarks(landmarks)
##landmarks = torchvision.transforms.Normalize(landmarks) #Do I need to normalize the target?
predictions = network(images)
# clear all the gradients before calculating them
optimizer.zero_grad()
print('predictions are: ', predictions.float())
print('landmarks are: ', landmarks.float())
# find the loss for the current step
loss_train_step = criterion(predictions.float(), landmarks.float())
loss_train_step = loss_train_step.to(torch.float32)
print("loss_train_step before backward: ", loss_train_step)
# calculate the gradients
loss_train_step.backward()
# update the parameters
optimizer.step()
print("loss_train_step after backward: ", loss_train_step)
loss_train += loss_train_step.item()
print("loss_train: ", loss_train)
running_loss = loss_train/step
print('step: ', step)
print('running loss: ', running_loss)
print_overwrite(step, len(train_loader), running_loss, 'train')
network.eval()
with torch.no_grad():
for step in range(1,len(test_loader)+1):
batch = next(iter(train_loader))
images, landmarks = batch['image'], batch['landmarks']
images = images.permute(0,3,1,2)
images = images.cuda()
landmarks = landmarks.view(landmarks.size(0),-1).cuda()
predictions = network(images)
# find the loss for the current step
loss_test_step = criterion(predictions, landmarks)
loss_test += loss_test_step.item()
running_loss = loss_test/step
print_overwrite(step, len(test_loader), running_loss, 'Validation')
loss_train /= len(train_loader)
loss_test /= len(test_loader)
print('\n--------------------------------------------------')
print('Epoch: {} Train Loss: {:.4f} Valid Loss: {:.4f}'.format(epoch, loss_train, loss_test))
print('--------------------------------------------------')
if loss_test < loss_min:
loss_min = loss_test
torch.save(network.state_dict(), '../moth_landmarks.pth')
print("\nMinimum Valid Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs))
print('Model Saved\n')
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))
But I get this error:
size of train loader is: 90
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-3e4770ad7109> in <module>
40
41 norm_landmarks = transforms.Normalize(0.4949, 0.2165)
---> 42 landmarks = norm_landmarks(landmarks)
43 ##landmarks = torchvision.transforms.Normalize(landmarks) #Do I need to normalize the target?
44
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, tensor)
210 Tensor: Normalized Tensor image.
211 """
--> 212 return F.normalize(tensor, self.mean, self.std, self.inplace)
213
214 def __repr__(self):
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py in normalize(tensor, mean, std, inplace)
282 if tensor.ndimension() != 3:
283 raise ValueError('Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = '
--> 284 '{}.'.format(tensor.size()))
285
286 if not inplace:
ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8]).
How should I normalize my landmarks?
| norm_landmarks = transforms.Normalize(0.4949, 0.2165)
landmarks = landmarks.unsqueeze_(0)
landmarks = norm_landmarks(landmarks)
Adding
landmarks = landmarks.unsqueeze_(0)
Fixed the problem.
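Why this works (my explanation, not part of the original answer): transforms.Normalize expects a (C, H, W) tensor, so unsqueezing the (8, 8) batch to (1, 8, 8) makes it look like a one-channel 8x8 image with a single (mean, std) pair. Normalizing by hand gives the same result without reshaping:
landmarks = (landmarks - 0.4949) / 0.2165  # same effect, no unsqueeze needed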
| https://stackoverflow.com/questions/64381906/ |