ValueError: expected 2D or 3D input (got 1D input) PyTorch
class VAE(torch.nn.Module): def __init__(self, input_size, hidden_sizes, batch_size): super(VAE, self).__init__() self.input_size = input_size self.hidden_sizes = hidden_sizes self.batch_size = batch_size self.fc = torch.nn.Linear(input_size, hidden_sizes[0]) self.BN = torch.nn.BatchNorm1d(hidden_sizes[0]) self.fc1 = torch.nn.Linear(hidden_sizes[0], hidden_sizes[1]) self.BN1 = torch.nn.BatchNorm1d(hidden_sizes[1]) self.fc2 = torch.nn.Linear(hidden_sizes[1], hidden_sizes[2]) self.BN2 = torch.nn.BatchNorm1d(hidden_sizes[2]) self.fc3_mu = torch.nn.Linear(hidden_sizes[2], hidden_sizes[3]) self.fc3_sig = torch.nn.Linear(hidden_sizes[2], hidden_sizes[3]) self.fc4 = torch.nn.Linear(hidden_sizes[3], hidden_sizes[2]) self.BN4 = torch.nn.BatchNorm1d(hidden_sizes[2]) self.fc5 = torch.nn.Linear(hidden_sizes[2], hidden_sizes[1]) self.BN5 = torch.nn.BatchNorm1d(hidden_sizes[1]) self.fc6 = torch.nn.Linear(hidden_sizes[1], hidden_sizes[0]) self.BN6 = torch.nn.BatchNorm1d(hidden_sizes[0]) self.fc7 = torch.nn.Linear(hidden_sizes[0], input_size) def sample_z(self, x_size, mu, log_var): eps = torch.randn(x_size, self.hidden_sizes[-1]) return(mu + torch.exp(log_var/2) * eps) def forward(self, x): ########### # Encoder # ########### out1 = self.fc(x) out1 = nn.relu(self.BN(out1)) out2 = self.fc1(out1) out2 = nn.relu(self.BN1(out2)) out3 = self.fc2(out2) out3 = nn.relu(self.BN2(out3)) mu = self.fc3_mu(out3) sig = nn.softplus(self.fc3_sig(out3)) ########### # Decoder # ########### # sample from the distro sample = self.sample_z(x.size(0), mu, sig) out4 = self.fc4(sample) out4 = nn.relu(self.BN4(out4)) out5 = self.fc5(out4) out5 = nn.relu(self.BN5(out5)) out6 = self.fc6(out5) out6 = nn.relu(self.BN6(out6)) out7 = nn.sigmoid(self.fc7(out6)) return(out7, mu, sig) vae = VAE(input_size, hidden_sizes, batch_size) vae.eval() x_sample, z_mu, z_var = vae(X) The error is: File "VAE_LongTensor.py", line 200, in <module> x_sample, z_mu, z_var = vae(X) ValueError: expected 2D or 3D input (got 1D input)
When you build an nn.Module in PyTorch for processing 1D signals, PyTorch actually expects the input to be 2D: the first dimension is the "mini-batch" dimension. Thus you need to add a singleton dimension to your X: x_sample, z_mu, z_var = vae(X[None, ...])
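For illustration, a tiny sketch of the same idea (the feature size 784 is a made-up example):

import torch

x = torch.randn(784)   # a single 1D sample; 784 is a hypothetical size
x = x.unsqueeze(0)     # same as x[None, ...]; shape becomes [1, 784]
print(x.shape)         # torch.Size([1, 784])

Note that the model also contains BatchNorm1d layers, which in training mode need more than one sample per batch; the vae.eval() call in the question sidesteps that for a single sample.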
https://stackoverflow.com/questions/53500511/
Pytorch RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]
In a simple CNN that classifies 5 objects, I get a size mis-match error: "RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]" in the convolutional layer . my model.py file: import torch.nn as nn import torch.nn.functional as F class FNet(nn.Module): def __init__(self,device): # make your convolutional neural network here # use regularization # batch normalization super(FNet, self).__init__() num_classes = 5 self.conv1 = nn.Conv2d(3, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 5) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features if __name__ == "__main__": net = FNet() Complete Error: Traceback (most recent call last): File "main.py", line 98, in <module> train_model('../Data/fruits/', save=True, destination_path='/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Data/fruits') File "main.py", line 66, in train_model outputs = model(images) File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Code/model.py", line 28, in forward x = F.relu(self.fc1(x)) File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/functional.py", line 1024, in linear return torch.addmm(bias, input, weight.t()) RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120] at /opt/conda/conda-bld/pytorch-cpu_1532576596369/work/aten/src/TH/generic/THTensorMath.cpp:2070
If you have an nn.Linear layer in your net, you cannot decide "on-the-fly" what the input size for this layer will be. In your net you compute num_flat_features for every x and expect your self.fc1 to handle whatever size of x you feed the net. However, self.fc1 has a fixed-size weight matrix of size 400x120 (it expects an input of dimension 16*5*5=400 and outputs a 120-dim feature). In your case the size of x translated to a 7744-dim feature vector that self.fc1 simply cannot handle.
If you do want your network to be able to handle any size of x, you can have a parameter-free interpolation layer resizing all x to the right size before self.fc1:

x = F.max_pool2d(F.relu(self.conv2(x)), 2)          # output of conv layers
x = F.interpolate(x, size=(5, 5), mode='bilinear')  # resize to the size expected by the linear unit
x = x.view(x.size(0), 5 * 5 * 16)
x = F.relu(self.fc1(x))
# you can go on from here...

See torch.nn.functional.interpolate for more information.
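As a side note not in the original answer, adaptive pooling is another parameter-free way to map any spatial size to a fixed one; a small hedged sketch:

import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 22, 22)        # e.g. a conv feature map of arbitrary spatial size
x = F.adaptive_avg_pool2d(x, (5, 5))  # always yields a 5x5 spatial size
x = x.view(x.size(0), 16 * 5 * 5)     # [1, 400], matching what fc1 expects
print(x.shape)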
https://stackoverflow.com/questions/53500838/
PyTorch - parameters not changing
In an effort to learn how pytorch works, I am trying to do maximum likelihood estimation of some of the parameters in a multivariate normal distribution. However it does not seem to work for any of the covariance related parameters. So my question is: why does this code not work? import torch def make_covariance_matrix(sigma, rho): return torch.tensor([[sigma[0]**2, rho * torch.prod(sigma)], [rho * torch.prod(sigma), sigma[1]**2]]) mu_true = torch.randn(2) rho_true = torch.rand(1) sigma_true = torch.exp(torch.rand(2)) cov_true = make_covariance_matrix(sigma_true, rho_true) dist_true = torch.distributions.MultivariateNormal(mu_true, cov_true) samples = dist_true.sample((1_000,)) mu = torch.zeros(2, requires_grad=True) log_sigma = torch.zeros(2, requires_grad=True) atanh_rho = torch.zeros(1, requires_grad=True) lbfgs = torch.optim.LBFGS([mu, log_sigma, atanh_rho]) def closure(): lbfgs.zero_grad() sigma = torch.exp(log_sigma) rho = torch.tanh(atanh_rho) cov = make_covariance_matrix(sigma, rho) dist = torch.distributions.MultivariateNormal(mu, cov) loss = -torch.mean(dist.log_prob(samples)) loss.backward() return loss lbfgs.step(closure) print("mu: {}, mu_hat: {}".format(mu_true, mu)) print("sigma: {}, sigma_hat: {}".format(sigma_true, torch.exp(log_sigma))) print("rho: {}, rho_hat: {}".format(rho_true, torch.tanh(atanh_rho))) output: mu: tensor([0.4168, 0.1580]), mu_hat: tensor([0.4127, 0.1454], requires_grad=True) sigma: tensor([1.1917, 1.7290]), sigma_hat: tensor([1., 1.], grad_fn=<ExpBackward>) rho: tensor([0.3589]), rho_hat: tensor([0.], grad_fn=<TanhBackward>) >>> torch.__version__ '1.0.0.dev20181127' In other words, why have the estimates of log_sigma and atanh_rho not moved from their initial value?
The way you create your covariance matrix is not backprop-able:

def make_covariance_matrix(sigma, rho):
    return torch.tensor([[sigma[0]**2, rho * torch.prod(sigma)],
                         [rho * torch.prod(sigma), sigma[1]**2]])

When creating a new tensor from (multiple) tensors, only the values of your input tensors will be kept. All additional information from the input tensors is stripped away, thus all graph-connection to your parameters is cut from this point, and therefore backpropagation cannot get through.
Here is a short example to illustrate this:

import torch

param1 = torch.rand(1, requires_grad=True)
param2 = torch.rand(1, requires_grad=True)

tensor_from_params = torch.tensor([param1, param2])

print('Original parameter 1:')
print(param1, param1.requires_grad)
print('Original parameter 2:')
print(param2, param2.requires_grad)
print('New tensor from params:')
print(tensor_from_params, tensor_from_params.requires_grad)

Output:

Original parameter 1:
tensor([ 0.8913]) True
Original parameter 2:
tensor([ 0.4785]) True
New tensor from params:
tensor([ 0.8913,  0.4785]) False

As you can see, the tensor created from the parameters param1 and param2 does not keep track of the gradients of param1 and param2.
So instead you can use this code that keeps the graph connection and is backprop-able:

def make_covariance_matrix(sigma, rho):
    conv = torch.cat([(sigma[0]**2).view(-1), rho * torch.prod(sigma),
                      rho * torch.prod(sigma), (sigma[1]**2).view(-1)])
    return conv.view(2, 2)

The values are concatenated to a flat tensor using torch.cat. Then they are brought into the right shape using view(). This results in the same matrix output as in your function, but it keeps the connection to your parameters log_sigma and atanh_rho.
Here is an output before and after the step with the changed make_covariance_matrix. As you can see, now you can optimize your parameters and the values do change:

Before:
mu: tensor([ 0.1191,  0.7215]), mu_hat: tensor([ 0.,  0.])
sigma: tensor([ 1.4222,  1.0949]), sigma_hat: tensor([ 1.,  1.])
rho: tensor([ 0.2558]), rho_hat: tensor([ 0.])

After:
mu: tensor([ 0.1191,  0.7215]), mu_hat: tensor([ 0.0712,  0.7781])
sigma: tensor([ 1.4222,  1.0949]), sigma_hat: tensor([ 1.4410,  1.0807])
rho: tensor([ 0.2558]), rho_hat: tensor([ 0.2235])

Hope this helps!
https://stackoverflow.com/questions/53503234/
Row-wise Element Indexing in PyTorch for C++
I am using the C++ frontend for PyTorch and am struggling with a relatively basic indexing problem. I have an 8 by 6 Tensor such as the one below: [ Variable[CUDAFloatType]{8,6} ] 0 1 2 3 4 5 0 1.7107e-14 4.0448e-17 4.9708e-06 1.1664e-08 9.9999e-01 2.1857e-20 1 1.8288e-14 5.9356e-17 5.3042e-06 1.2369e-08 9.9999e-01 2.4799e-20 2 2.6828e-04 9.0390e-18 1.7517e-02 1.0529e-03 9.8116e-01 6.7854e-26 3 5.7521e-10 3.1037e-11 1.5021e-03 1.2304e-06 9.9850e-01 1.4888e-17 4 1.7811e-13 1.8383e-15 1.6733e-05 3.8466e-08 9.9998e-01 5.2815e-20 5 9.6191e-06 2.6217e-23 3.1345e-02 2.3024e-04 9.6842e-01 2.9435e-34 6 2.2653e-04 8.4642e-18 1.6085e-02 9.7405e-04 9.8271e-01 6.3059e-26 7 3.8951e-14 2.9903e-16 8.3518e-06 1.7974e-08 9.9999e-01 3.6993e-20 I have another Tensor with just 8 elements in it such as: [ Variable[CUDALongType]{8} ] 0 3 4 4 4 4 4 4 I would like to index the rows of my first tensor using the second to produce: 0 0 1.7107e-14 1 1.2369e-08 2 9.8116e-01 3 9.9850e-01 4 9.9998e-01 5 9.6842e-01 6 9.8271e-01 7 9.9999e-01 I have tried a few different approaches including index_select but it seems to produce an output that has the same dimensions as the input (8x6). In Python I think I could index with Python's built-in indexing as discussed here: https://github.com/pytorch/pytorch/issues/1080 Unfortunately, in C++ I can only index a Tensor with a scalar (zero-dimensional Tensor) so I don't think that approach works for me here. How can I achieve my desired result without resorting to loops?
It turns out you can do this in a couple of different ways, one with gather and one with index. From the PyTorch discussions where I asked the same question:

Using torch::gather:

auto x = torch::randn({8, 6});
int64_t idx_data[8] = { 0, 3, 4, 4, 4, 4, 4, 4 };
auto idx = x.type().toScalarType(torch::kLong).tensorFromBlob(idx_data, 8);
auto result = x.gather(1, idx.unsqueeze(1));

Using the C++-specific torch::index:

auto x = torch::randn({8, 6});
int64_t idx_data[8] = { 0, 3, 4, 4, 4, 4, 4, 4 };
auto idx = x.type().toScalarType(torch::kLong).tensorFromBlob(idx_data, 8);
auto rows = torch::arange(0, x.size(0), torch::kLong);
auto result = x.index({rows, idx});
https://stackoverflow.com/questions/53507039/
How to make a customised dataset in PyTorch for images and their masks?
I have two dataset folder of tif images, one is a folder called BMMCdata, and the other one is the mask of BMMCdata images called BMMCmasks(the name of images are corresponds). I am trying to make a customised dataset and also split the data randomly to train and test. at the moment I am getting an error self.filenames.append(fn) AttributeError: 'CustomDataset' object has no attribute 'filenames' Any comment will be appreciated a lot. import torch from torch.utils.data.dataset import Dataset # For custom data-sets from torchvision import transforms from PIL import Image import os.path as osp import glob folder_data = "/Users/parto/PycharmProjects/U-net/BMMCdata/data" class CustomDataset(Dataset): def __init__(self, root): self.filename = folder_data self.root = root self.to_tensor = transforms.ToTensor() filenames = glob.glob(osp.join(folder_data, '*.tif')) for fn in filenames: self.filenames.append(fn) self.len = len(self.filenames) print(fn) def __getitem__(self, index): image = Image.open(self.filenames[index]) return self.transform(image) def __len__(self): return self.len custom_img = CustomDataset(folder_data) # total images in set print(custom_img.len) train_len = int(0.6*custom_img.len) test_len = custom_img.len - train_len train_set, test_set = CustomDataset.random_split(custom_img, lengths=[train_len, test_len]) # check lens of subset len(train_set), len(test_set) train_set = CustomDataset(folder_data) train_set = torch.utils.data.TensorDataset(train_set, train=True, batch_size=4) train_loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True, num_workers=1) print(train_set) print(train_loader) test_set = torch.utils.data.DataLoader(Dataset, batch_size=4, sampler= train_sampler) test_loader = torch.utils.data.DataLoader(Dataset, batch_size=4)
Answer given by @ptrblck in the PyTorch community; thank you.

# get all the image and mask paths and the number of images
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCdata\\data\\*.tif")
folder_mask = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCmasks\\masks\\*.tif")

# split these paths using a certain percentage
len_data = len(folder_data)
print(len_data)
train_size = 0.6

train_image_paths = folder_data[:int(len_data*train_size)]
test_image_paths = folder_data[int(len_data*train_size):]

train_mask_paths = folder_mask[:int(len_data*train_size)]
test_mask_paths = folder_mask[int(len_data*train_size):]

class CustomDataset(Dataset):
    def __init__(self, image_paths, target_paths, train=True):  # initial logic happens here, like transforms
        self.image_paths = image_paths
        self.target_paths = target_paths
        self.transforms = transforms.ToTensor()

    def __getitem__(self, index):
        image = Image.open(self.image_paths[index])
        mask = Image.open(self.target_paths[index])
        t_image = self.transforms(image)
        return t_image, mask

    def __len__(self):  # return the count of samples we have
        return len(self.image_paths)

train_dataset = CustomDataset(train_image_paths, train_mask_paths, train=True)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=1)

test_dataset = CustomDataset(test_image_paths, test_mask_paths, train=False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=4, shuffle=False, num_workers=1)
https://stackoverflow.com/questions/53530751/
How do I split the training dataset into training, validation and test datasets?
I have a custom dataset of images and their targets. I have created a training dataset in PyTorch. I want to split it into 3 parts: training, validation and test. How do I do it?
Once you have the "master" dataset you can use data.Subset to split it. Here's an example for a random split:

import torch
from torch.utils import data
import random

master = data.Dataset( ... )  # your "master" dataset
n = len(master)               # how many total elements you have
n_test = int(n * .05)         # number of test/val elements
n_train = n - 2 * n_test

idx = list(range(n))  # indices to all elements
random.shuffle(idx)   # in-place shuffle the indices to facilitate random splitting
train_idx = idx[:n_train]
val_idx = idx[n_train:(n_train + n_test)]
test_idx = idx[(n_train + n_test):]

train_set = data.Subset(master, train_idx)
val_set = data.Subset(master, val_idx)
test_set = data.Subset(master, test_idx)

This can also be achieved using data.random_split (here the validation split has the same size as the test split, so n_test is passed twice):

train_set, val_set, test_set = data.random_split(master, (n_train, n_test, n_test))
https://stackoverflow.com/questions/53532352/
Save and load a checkpoint in PyTorch
I make a model and save the configuration as:

def checkpoint(state, ep, filename='./Risultati/checkpoint.pth'):
    if ep == (n_epoch-1):
        print('Saving state...')
        torch.save(state, filename)

checkpoint({'state_dict': rnn.state_dict()}, ep)

and then I want to load this configuration:

state_dict = torch.load('./Risultati/checkpoint.pth')
rnn.state_dict(state_dict)

When I try, this is the error:

Traceback (most recent call last):
  File "train.py", line 288, in <module>
    rnn.state_dict(state_dict)
  File "/home/marco/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 593, in state_dict
    destination._metadata[prefix[:-1]] = dict(version=self._version)
AttributeError: 'dict' object has no attribute '_metadata'

Where did I go wrong? Thanks in advance.
You need to load the state dict stored in the dictionary you loaded: rnn.load_state_dict(state_dict['state_dict']). Look at the load_state_dict method for more information.
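For reference, the complete save/load cycle then looks like this (a minimal sketch reusing rnn and the path from the question):

import torch

# save (during training, as in the question)
torch.save({'state_dict': rnn.state_dict()}, './Risultati/checkpoint.pth')

# load (in the other script)
checkpoint = torch.load('./Risultati/checkpoint.pth')
rnn.load_state_dict(checkpoint['state_dict'])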
https://stackoverflow.com/questions/53538138/
What is the backward process of max operation in deep learning?
I know that the backward process of deep learning follows the gradient descent algorithm. However, there is no gradient concept for the max operation. How do deep learning frameworks like TensorFlow and PyTorch deal with the backward pass of the 'max' operation, such as max-pooling?
You have to think of what the max operator actually does. That is: it returns, or let's better say it propagates, the maximum. And that's exactly what it does here: it takes two or more tensors and propagates forward (only) the maximum.
It is often helpful to take a look at a short example:

t1 = torch.rand(10, requires_grad=True)
t2 = torch.rand(10, requires_grad=True)

s1 = torch.sum(t1)
s2 = torch.sum(t2)
print('sum t1:', s1, 'sum t2:', s2)

m = torch.max(s1, s2)
print('max:', m, 'requires_grad:', m.requires_grad)

m.backward()
print('t1 gradients:', t1.grad)
print('t2 gradients:', t2.grad)

This code creates two random tensors, sums them up, and puts them through a max function. Then backward() is called upon the result.
Let's take a look at the two possible outcomes:

Outcome 1 - sum of t1 is larger:

sum t1: tensor(5.6345) sum t2: tensor(4.3965)
max: tensor(5.6345) requires_grad: True
t1 gradients: tensor([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
t2 gradients: tensor([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

Outcome 2 - sum of t2 is larger:

sum t1: tensor(3.3263) sum t2: tensor(4.0517)
max: tensor(4.0517) requires_grad: True
t1 gradients: tensor([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
t2 gradients: tensor([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])

As you would expect, in the case where s1 represents the maximum, gradients will be calculated for t1. Similarly, when s2 is the maximum, gradients will be calculated for t2.
Just like in the forward step, backpropagation propagates backwards through the maximum.
One thing worth mentioning is that the other tensors, which do not represent the maximum, are still part of the graph. Only their gradients are set to zero. If they weren't part of the graph you would get None as gradient, instead of a zero vector.
You can check what happens if you use Python's built-in max instead of torch.max:

t1 = torch.rand(10, requires_grad=True)
t2 = torch.rand(10, requires_grad=True)

s1 = torch.sum(t1)
s2 = torch.sum(t2)
print('sum t1:', s1, 'sum t2:', s2)

m = max(s1, s2)
print('max:', m, 'requires_grad:', m.requires_grad)

m.backward()
print('t1 gradients:', t1.grad)
print('t2 gradients:', t2.grad)

Output:

sum t1: tensor(4.7661) sum t2: tensor(4.4166)
max: tensor(4.7661) requires_grad: True
t1 gradients: tensor([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
t2 gradients: None
https://stackoverflow.com/questions/53539348/
PyTorch: normalize an image dataset
I want to normalize a custom dataset of images. For that I need to compute the mean and standard deviation by iterating over the dataset. How can I compute these statistics for my entire dataset before creating the final, normalized dataset?
Well, let's take an image (here called test.jpg) as an example.
The first thing you need to do is decide which library you want to use: Pillow or OpenCV. In this example I'll use Pillow:

from PIL import Image
import numpy as np

img = Image.open("test.jpg")
pix = np.asarray(img.convert("RGB"))  # Open the image as RGB

Rchan = pix[:,:,0]  # Red color channel
Gchan = pix[:,:,1]  # Green color channel
Bchan = pix[:,:,2]  # Blue color channel

Rchan_mean = Rchan.mean()
Gchan_mean = Gchan.mean()
Bchan_mean = Bchan.mean()

Rchan_var = Rchan.var()
Gchan_var = Gchan.var()
Bchan_var = Bchan.var()

And the results are:

Red Channel Mean: 134.80585625
Red Channel Variance: 3211.35843945
Green Channel Mean: 81.0884125
Green Channel Variance: 1672.63200823
Blue Channel Mean: 68.1831375
Blue Channel Variance: 1166.20433566

Hope it helps for your needs.
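The question also asks about iterating over the whole dataset, so here is a hedged sketch of accumulating per-channel mean and standard deviation across a PyTorch dataset (it assumes dataset yields equally sized, ToTensor()-converted (image, target) pairs):

import torch
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=64)  # `dataset` yields (image, target) pairs
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0
for images, _ in loader:
    b, c, h, w = images.shape
    n_pixels += b * h * w
    channel_sum += images.sum(dim=[0, 2, 3])          # per-channel running sum
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)  # plug these into transforms.Normalize(mean, std)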
https://stackoverflow.com/questions/53542974/
How to mask weights in PyTorch weight parameters?
I am attempting to mask (force to zero) specific weight values in PyTorch. The weights I am trying to mask are defined like so in def __init__:

class LSTM_MASK(nn.Module):
    def __init__(self, options, inp_dim):
        super(LSTM_MASK, self).__init__()
        ....
        self.wfx = nn.Linear(input_dim, curernt_output, bias=add_bias)

The mask is also defined in def __init__ as:

self.mask_use = torch.Tensor(curernt_output, input_dim)

The mask is a constant and .requires_grad_() is False for the mask parameter. Now in the def forward part of the class I attempt to do an element-wise multiplication of the weight parameter and the mask before the linear operation is completed:

def forward(self, x):
    ....
    self.wfx.weight = self.wfx.weight * self.mask_use
    wfx_out = self.wfx(x)

I get an error message:

self.wfx.weight = self.wfx.weight * self.mask_use
  File "/home/xyz/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 537, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)

But when I check both parameters with .type(), both of them come up as torch.cuda.FloatTensor. I am not sure why there is an error here.
The element-wise operation always returns a FloatTensor, and it is not possible to assign normal tensors as the weight of layers.
There are two possible options to deal with it. You can assign it to the data attribute of your weight; there it is possible to assign normal tensors.
Alternatively, you can convert your result to an nn.Parameter itself; then you can assign it to wfx.weight.
Here is an example which shows both ways:

import torch
import torch.nn as nn

wfx = nn.Linear(10, 10)
mask_use = torch.rand(10, 10)

#wfx.weight = wfx.weight * mask_use  # your example - this raises an error

# Option 1: write directly to data
wfx.weight.data = wfx.weight * mask_use

# Option 2: convert result to nn.Parameter and write to weight
wfx.weight = nn.Parameter(wfx.weight * mask_use)

Disclaimer: When using an = (assignment) on the weights you are replacing the weights tensor of your parameter. This may have unwanted effects on the graph and/or the optimization step.
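A further option, offered as a sketch in case you want the masking applied on every forward pass without replacing the parameter at all, is to keep wfx.weight untouched and apply the mask functionally:

import torch
import torch.nn as nn
import torch.nn.functional as F

wfx = nn.Linear(10, 10)
mask_use = torch.rand(10, 10)
x = torch.rand(2, 10)

# the weight stays an nn.Parameter; the mask is applied on the fly
wfx_out = F.linear(x, wfx.weight * mask_use, wfx.bias)

With a binary mask, the gradients for the masked-out entries are zero, and the optimizer still sees the original nn.Parameter.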
https://stackoverflow.com/questions/53544901/
Plot the derivative of a function with PyTorch?
I have this code:

import torch
import matplotlib.pyplot as plt

x = torch.linspace(-10, 10, 10, requires_grad=True)
y = torch.sum(x**2)
y.backward()
plt.plot(x.detach().numpy(), y.detach().numpy(), label='function')
plt.legend()

But I got this error:

ValueError: x and y must have same first dimension, but have shapes (10,) and (1,)
I think the main problem is that your dimensions do not match. Why do you want to use torch.sum?
This should work for you:

# %matplotlib inline  # added this line only for Jupyter notebook
import torch
import matplotlib.pyplot as plt

x = torch.linspace(-10, 10, 10, requires_grad=True)
y = x**2  # removed the sum to stay with the same dimensions
y.backward(torch.ones_like(x))  # y isn't a scalar anymore, so hand over unit upstream gradients

# your function
plt.plot(x.detach().numpy(), y.detach().numpy(), label='x**2')
# gradients
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label='grad')
plt.legend()

You get a nicer picture though with more steps; I also changed the interval a bit to torch.linspace(-2.5, 2.5, 50, requires_grad=True).

Edit regarding the comment: This version plots the gradients with torch.sum included:

# %matplotlib inline  # added this line only for Jupyter notebook
import torch
import matplotlib.pyplot as plt

x = torch.linspace(-10, 10, 10, requires_grad=True)
y = torch.sum(x**2)
y.backward()
print(x.grad)
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label='grad')
plt.legend()

Output:

tensor([-20.0000, -15.5556, -11.1111, -6.6667, -2.2222, 2.2222, 6.6667, 11.1111, 15.5556, 20.0000])

The resulting plot shows the straight line of the gradient, 2x.
https://stackoverflow.com/questions/53546141/
PyTorch: loading two images from a DataLoader
I'm trying to make a GAN which takes a low-res image and tries to create a hi-res image from it. To do this, I need to use a DataLoader which has both the hi-res and low-res training images stored in it.

data_transform = transforms.Compose([transforms.Resize(imageSize),
                                     transforms.Grayscale(num_output_channels=1),
                                     transforms.ToTensor()])

dataset_hi = "./hi-res-train"
dataset_lo = "./low-res-train"

img_data_hi = dset.ImageFolder(root=dataset_hi, transform=data_transform)
img_data_lo = dset.ImageFolder(root=dataset_lo, transform=data_transform)

dataloader_hi = torch.utils.data.DataLoader(img_data_hi, batch_size=batchSize, shuffle=True, num_workers=2)
dataloader_lo = torch.utils.data.DataLoader(img_data_lo, batch_size=batchSize, shuffle=True, num_workers=2)

I've tried using two separate data loaders (shown above), but when they are shuffled, I can't enumerate through them both because the hi-res and low-res images are not matched up. How can I make it so I can enumerate and shuffle both with PyTorch?
Assuming you have similar names for hi and low resolution images (say img01_hi and img01_low), one option is to create a custom Dataset that returns both images by overriding the __getitem__ method, as in the sketch below. As both images are returned in one call, you can make sure they match by appending _hi and _low to the filename. You may need to create a "cue" text file containing a list of all your image file names to make sure you are processing each image file only once.
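Here is a minimal sketch of such a Dataset; the flat-folder, identical-filename layout is an assumption, so adapt it to your _hi/_low naming and folder structure:

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class PairedImageDataset(Dataset):
    """Yields (low_res, hi_res) pairs; assumes both folders hold identically named files."""
    def __init__(self, lo_dir, hi_dir, transform=None):
        self.lo_dir = lo_dir
        self.hi_dir = hi_dir
        self.filenames = sorted(os.listdir(lo_dir))
        self.transform = transform

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, index):
        name = self.filenames[index]
        lo = Image.open(os.path.join(self.lo_dir, name))
        hi = Image.open(os.path.join(self.hi_dir, name))
        if self.transform is not None:
            lo, hi = self.transform(lo), self.transform(hi)
        return lo, hi

# one loader shuffles both resolutions together, keeping the pairs aligned
paired = PairedImageDataset("./low-res-train", "./hi-res-train",
                            transform=transforms.ToTensor())
loader = DataLoader(paired, batch_size=4, shuffle=True)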
https://stackoverflow.com/questions/53549717/
What transformation do I need to do in order to run dataset through neural network?
I'm new to deep learning and Pytorch, but I hope someone can help me out with this. My dataset contains images from different sizes. I'm trying to create a simple neural network that can classify images. However, I'm getting mismatch errors. Neural network class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3) self.conv2 = nn.Conv2d(32, 32, 3) self.fc1 = nn.Linear(32 * 3 * 3, 200) self.fc2 = nn.Linear(200, 120) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.fc1(x)) x = self.fc2(x) return x net = Net() My first convolution layer has 1 input channel, because I transform the images to grayscale images. 32 output channels was an arbitrary decision. The final fully-connected layer has 120 output channels, because there are 120 different classes. Determine transformations and assign training set and validation set transform = transforms.Compose( [transforms.Grayscale(1), transforms.RandomCrop((32,32)), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) data_dir = 'dataset' full_dataset = datasets.ImageFolder(os.path.join(data_dir, 'train'), transform = transform) train_size = int(0.8 * len(full_dataset)) val_size = len(full_dataset) - train_size trainset, valset = torch.utils.data.random_split(full_dataset, [train_size, val_size]) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) valloader = torch.utils.data.DataLoader(valset, batch_size=4, shuffle=False, num_workers=2) classes = full_dataset.classes I transform the images to grayscale, because they are gray anyway. I crop the images to 32, because the images have different sizes and I figured that they must all be the same size when putting it through the neural network. Everything is working fine so far. Train neural network for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') When running this last piece of code, I get the following error: size mismatch, m1: [3584 x 28], m2: [288 x 200] at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/TH/generic/THTensorMath.cpp:2070 when the following line is being executed: outputs = net(inputs) My code is a variation of the code provided in this Pytorch tutorial. Can someone tell me what I'm doing wrong? 
UPDATE I updated the neural network class to this: class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 5x5 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() But now I get an error at loss = criterion(outputs, labels): Assertion cur_target >= 0 && cur_target < n_classes' failed. at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/THNN/generic/ClassNLLCriterion.c:93
In your first configuration, you have configured self.fc1 incorrectly. You need the input to be of dimensions 32 * 28 * 28 instead of 32 * 3 * 3 as your images are 32 * 32 and kernel and stride are 3 and 1 respectively. See this video for a simpler explanation. Try adjusting your second configuration yourself now, if you can't, comment below.
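Putting the answer's numbers into code, a hedged sketch of the corrected first configuration could look like this (no pooling, matching the original forward: two 3x3 convolutions shrink a 32x32 input to 30x30 and then 28x28):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)        # 32x32 -> 30x30
        self.conv2 = nn.Conv2d(32, 32, 3)       # 30x30 -> 28x28
        self.fc1 = nn.Linear(32 * 28 * 28, 200)
        self.fc2 = nn.Linear(200, 120)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.view(x.size(0), -1)               # flatten, keeping the batch dimension
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
print(net(torch.randn(4, 1, 32, 32)).shape)     # torch.Size([4, 120])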
https://stackoverflow.com/questions/53559671/
How to convert a pytorch tensor of ints to a tensor of booleans?
I would like to cast a tensor of ints to a tensor of booleans. Specifically I would like to be able to have a function which transforms tensor([0,10,0,16]) to tensor([0,1,0,1]) This is trivial in Tensorflow by just using tf.cast(x,tf.bool). I want the cast to change all ints greater than 0 to a 1 and all ints equal to 0 to a 0. This is the equivalent of !! in most languages. Since pytorch does not seem to have a dedicated boolean type to cast to, what is the best approach here? Edit: I am looking for a vectorized solution opposed to looping through each element.
What you're looking for is to generate a boolean mask for the given integer tensor. For this, you can simply check for the condition: "whether the values in the tensor are greater than 0" using simple comparison operator (>) or using torch.gt(), which would then give us the desired result. # input tensor In [76]: t Out[76]: tensor([ 0, 10, 0, 16]) # generate the needed boolean mask In [78]: t > 0 Out[78]: tensor([0, 1, 0, 1], dtype=torch.uint8) # sanity check In [93]: mask = t > 0 In [94]: mask.type() Out[94]: 'torch.ByteTensor' Note: In PyTorch version 1.4+, the above operation would return 'torch.BoolTensor' In [9]: t > 0 Out[9]: tensor([False, True, False, True]) # alternatively, use `torch.gt()` API In [11]: torch.gt(t, 0) Out[11]: tensor([False, True, False, True]) If you indeed want single bits (either 0s or 1s), cast it using: In [14]: (t > 0).type(torch.uint8) Out[14]: tensor([0, 1, 0, 1], dtype=torch.uint8) # alternatively, use `torch.gt()` API In [15]: torch.gt(t, 0).int() Out[15]: tensor([0, 1, 0, 1], dtype=torch.int32) The reason for this change has been discussed in this feature-request issue: issues/4764 - Introduce torch.BoolTensor ... TL;DR: Simple one liner t.bool().int()
https://stackoverflow.com/questions/53562417/
pytorch, How can i make same size of tensor model(x) and answer(x)?
I'm try to make a simple linear model to predict parameters of formula. y = 3*x1 + x2 - 2*x3 Unfortunately, there are some problem when i try to compute loss. def answer(x): return 3 * x[:,0] + x[:,1] - 2 * x[:,2] def loss_f(x): y = answer(x) y_hat = model(x) loss = ((y - y_hat).pow(2)).sum() / x.size(0) return loss When i set batch_size = 3, the size of each result is different x = torch.randn(3,3) answer(x) tensor([ 2.0201, -3.8354, 2.0059]) model(x) tensor([[ 0.2085], [-0.0670], [-1.3635]], grad_fn=<ThAddmmBackward>) answer(x.data).size() torch.Size([3]) model(x.data).size() torch.Size([3, 1]) I think the broadcast applied automatically. loss = ((y - y_hat).pow(2)).sum() / x.size(0) How can i make same size of two tensors? Thanks This is my code import torch import torch.nn as nn import torch.optim as optim class model(nn.Module): def __init__(self, input_size, output_size): super(model, self).__init__() self.linear = nn.Linear(input_size, output_size) def forward(self, x): y = self.linear(x) return y model = model(3,1) optimizer = optim.SGD(model.parameters(), lr = 0.001, momentum=0.1) print('Parameters : ') for p in model.parameters(): print(p) print('') print('Optimizer : ') print(optimizer) def generate_data(batch_size): x = torch.randn(batch_size, 3) return x def answer(x): return 3 * x[:,0] + x[:,1] - 2 * x[:,2] def loss_f(x): y = answer(x) y_hat = model(x) loss = ((y - y_hat).pow(2)).sum() / x.size(0) return loss x = torch.randn(3,3) print(x) x = torch.FloatTensor(x) batch_size = 3 epoch_n = 1000 iter_n = 100 for epoch in range(epoch_n): avg_loss = 0 for i in range(iter_n): x = torch.randn(batch_size, 3) optimizer.zero_grad() loss = loss_f(x.data) loss.backward() optimizer.step() avg_loss += loss avg_loss = avg_loss / iter_n x_valid = torch.FloatTensor([[1,2,3]]) y_valid = answer(x_valid) model.eval() y_hat = model(x_valid) model.train() print(avg_loss, y_valid.data[0], y_hat.data[0]) if avg_loss < 0.001: break
You can use Tensor.view https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view So something like answer(x.data).view(-1, 1) should do the trick.
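To see why the shapes matter, here is a tiny illustration of the silent broadcasting bug and two equivalent fixes:

import torch

y = torch.randn(3)         # answer(x): shape [3]
y_hat = torch.randn(3, 1)  # model(x):  shape [3, 1]

# (y - y_hat) broadcasts [3] against [3, 1] into a [3, 3] matrix, silently inflating the loss
loss = ((y.view(-1, 1) - y_hat).pow(2)).mean()  # fix 1: make both [3, 1]
loss = ((y - y_hat.view(-1)).pow(2)).mean()     # fix 2: make both [3]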
https://stackoverflow.com/questions/53569050/
Get single random example from PyTorch DataLoader
How do I get a single random example from a PyTorch DataLoader? If my DataLoader gives minibatches of multiple images and labels, how do I get a single random image and label? Note that I don't want a single image and label per minibatch; I want a total of one example.
If you want to choose specific images from your Trainloader/Testloader, you should check out the Subset function from master. Here's an example of how to use it:

testset = ImageFolderWithPaths(root="path/to/your/Image_Data/Test/", transform=transform)
subset_indices = [0]  # select your indices here as a list
subset = torch.utils.data.Subset(testset, subset_indices)
testloader_subset = torch.utils.data.DataLoader(subset, batch_size=1, num_workers=0, shuffle=False)

This way you can use exactly one image and label. However, you can of course use more than just one index in your subset_indices.
If you want to use a specific image from your DataFolder, you can use dataset.samples and build a dictionary to get the index of the image you want to use.
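If you literally want one random example rather than a fixed index, a minimal sketch (assuming a map-style dataset such as testset above) is:

import random

idx = random.randrange(len(testset))  # one random index into the dataset
image, label = testset[idx]           # a single example, no batch dimension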
https://stackoverflow.com/questions/53570732/
PyTorch CrossEntropyLoss error in a simple NN example
H1, I am try to make NN model that satisfy simple formula. y = X1^2 + X2^2 But when i use CrossEntropyLoss for loss function, i get two different error message. First, when i set code like this x = torch.randn(batch_size, 2) y_hat = model(x) y = answer(x).long() optimizer.zero_grad() loss = loss_func(y_hat, y) loss.backward() optimizer.step() i get this message RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at c:\programdata\miniconda3\conda-bld\pytorch_1533090623466\work\aten\src\thnn\generic/Cl assNLLCriterion.c:93 Second, I change code like this x = torch.randn(batch_size, 2) y_hat = model(x) y = answer(x).long().view(batch_size,1,1) optimizer.zero_grad() loss = loss_func(y_hat, y) loss.backward() optimizer.step() then i get message like RuntimeError: multi-target not supported at c:\programdata\miniconda3\conda-bld\pytorch_1533090623466\work\aten\src\thnn\generic/ClassNLLCriterion.c:21 How can i solve this problem? Thanks.(sorry for my English) This is my code import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F def answer(x): y = x[:,0].pow(2) + x[:,1].pow(2) return y class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.linear1 = nn.Linear(input_size, 10) self.linear2 = nn.Linear(10, 1) def forward(self, x): y = F.relu(self.linear1(x)) y = F.relu(self.linear2(y)) return y model = Model(2,1) print(model, '\n') loss_func = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr = 0.001) batch_size = 3 epoch_n = 100 iter_n = 100 for epoch in range(epoch_n): loss_avg = 0 for i in range(iter_n): x = torch.randn(batch_size, 2) y_hat = model(x) y = answer(x).long().view(batch_size,1,1) optimizer.zero_grad() loss = loss_func(y_hat, y) loss.backward() optimizer.step() loss_avg += loss loss_avg = loss_avg / iter_n if epoch % 10 == 0: print(loss_avg) if loss_avg < 0.001: break Can i make those dataset using dataloader in pytorch? Thanks for your help.
You are using the wrong loss function. CrossEntropyLoss is generally used for classification problems, whereas your problem is one of regression. So you should use losses which are meant for regression-like tasks, such as Mean Squared Error Loss or L1 Loss. Take a look at this, this, this and this.
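A hedged sketch of the change, reusing model, answer and batch_size from the question:

import torch
import torch.nn as nn

loss_func = nn.MSELoss()

x = torch.randn(batch_size, 2)
y = answer(x)              # keep the targets as floats; no .long() cast needed
y_hat = model(x).view(-1)  # flatten [batch, 1] -> [batch] so the shapes match
loss = loss_func(y_hat, y)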
https://stackoverflow.com/questions/53571621/
Most efficient way to use a large data set for PyTorch?
Perhaps this question has been asked before, but I'm having trouble finding relevant info for my situation.
I'm using PyTorch to create a CNN for regression with image data. I don't have a formal, academic programming background, so many of my approaches are ad-hoc and just terribly inefficient. Many times I can go back through my code and clean things up later, because the inefficiency is not so drastic that performance is significantly affected. However, in this case, my method for using the image data takes a long time, uses a lot of memory, and is repeated every time I want to test a change in the model.
What I've done is essentially load the image data into numpy arrays, save those arrays in an .npy file, and then, when I want to use said data for the model, import all of the data in that file. I don't think the data set is really THAT large, as it is comprised of 5000 three-color-channel images of size 64x64. Yet my memory usage shoots up to 70%-80% (out of 16gb) when it is being loaded, and it takes 20-30 seconds to load in every time.
My guess is that I'm being dumb about the way I'm loading it in, but frankly I'm not sure what the standard is. Should I, in some way, put the image data somewhere before I need it, or should the data be loaded directly from the image files? And in either case, what is the best, most efficient way to do that, independent of file structure?
I would really appreciate any help on this.
Here is a concrete example to demonstrate what I meant. This assumes that you've already dumped the images into an hdf5 file (train_images.hdf5) using h5py.

import h5py

hf = h5py.File('train_images.hdf5', 'r')
group_key = list(hf.keys())[0]
ds = hf[group_key]

# load only one example
x = ds[0]

# load a subset, slice (n examples)
arr = ds[:n]

# should load the whole dataset into memory.
# this should be avoided
arr = ds[:]

In simple terms, ds can now be used as an iterator which gives images on the fly (i.e. it doesn't load anything into memory). This should make the whole run time blazing fast.

for idx, img in enumerate(ds):
    # do something with `img`
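For the dumping step this assumes, a minimal hedged sketch could be (the filename images.npy and the key 'images' are placeholders; the array shape mirrors the question):

import h5py
import numpy as np

images = np.load('images.npy')  # hypothetical: the (5000, 64, 64, 3) array from the question
with h5py.File('train_images.hdf5', 'w') as hf:
    hf.create_dataset('images', data=images)  # stored under the placeholder key 'images'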
https://stackoverflow.com/questions/53576113/
How to get two scalars on the same chart with tensorboardX?
The docs seem to indicate that add_custom_scalars_multilinechart does it, but it is not working. I have something like this:

from tensorboardX import SummaryWriter

writer = SummaryWriter(comment='test')
writer.add_custom_scalars_multilinechart(['loss/train', 'loss/test'], title='losses')

for blahblah:
    ...
    writer.add_scalar('loss/train', loss.item(), epoch)
    writer.add_scalar('loss/test', loss_test.item(), epoch)
Plot two scalars on the same chart with tensorboardX:

from tensorboardX import SummaryWriter

Create two SummaryWriters, one per scalar:

writer_train = SummaryWriter('runs/train_0')
writer_test = SummaryWriter('runs/test_0')

Add the scalars to their respective SummaryWriter; they must have the same tag, e.g. "LOSS":

for data in loop:
    writer_train.add_scalar('LOSS', loss.data.item(), idx)
    writer_test.add_scalar('LOSS', loss_test.data.item(), idx)

For working code, please visit github: Examples with tensorboardX (see more_plots_one_chat.py). Tutorial: TensorboardX
https://stackoverflow.com/questions/53581904/
How to connect the input to the output directly using single fully connected layer in PyTorch?
I am new to deep learning and cnn and trying to get familiar with that field using CIFAR10 tutorial code from PyTorch website. So, in that code I was playing with removing/adding layers to better understand the effect of them and I tried to connect the input(which is the initial data with the batch of 4 images) to the output directly with using only single fully connected layer. I know that does not make much sense, but I do it only for the sake of experiment. So, when I tried to do it, I faced with some errors, which are as follows: First, here is the code snippet: ######################################################################## # 2. Define a Convolution Neural Network # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # Copy the neural network from the Neural Networks section before and modify it to # take 3-channel images (instead of 1-channel images as it was defined). import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() #self.conv1 = nn.Conv2d(3, 6, 5) #self.pool = nn.MaxPool2d(2, 2) #self.conv2 = nn.Conv2d(6, 16, 5) #self.fc1 = nn.Linear(16 * 5 * 5, 120) #self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(768 * 4 * 4, 10) def forward(self, x): #x = self.pool(F.relu(self.conv1(x))) #x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 768 * 4 * 4) #x = F.relu(self.fc1(x)) #x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ####################################################################### # 3. Define a Loss function and optimizer # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # Let's use a Classification Cross-Entropy loss and SGD with momentum. import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ######################################################################## # 4. Train the network # ^^^^^^^^^^^^^^^^^^^^ # # This is when things start to get interesting. # We simply have to loop over our data iterator, and feed the inputs to the # network and optimize. 
for epoch in range(4): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) print(len(outputs)) print(len(labels)) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') So, when I run the code, I get the following error: Traceback (most recent call last): File "C:\Users\Andrey\Desktop\Machine_learning_Danila\Homework 3\cifar10_tutorial1.py", line 180, in <module> loss = criterion(outputs, labels) File "C:\Program Files\Python36\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "C:\Program Files\Python36\lib\site-packages\torch\nn\modules\loss.py", line 862, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "C:\Program Files\Python36\lib\site-packages\torch\nn\functional.py", line 1550, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "C:\Program Files\Python36\lib\site-packages\torch\nn\functional.py", line 1405, in nll_loss .format(input.size(0), target.size(0))) ValueError: Expected input batch_size (1) to match target batch_size (4). I was trying to check the length of x and it turns out, that it is 4 initially but it becomes 1 after the line x = x.view(-1, 768 * 4 * 4) I think my numbers are correct, but it seems like I am having only 1 tensor instead of 4 as I supposed to have, and I feel like that is what causes that error. I am wondering, why is that and what is the best way to fix that? Also, what would be the best optimal number for output dimension output in nn.Linear(Fully connected Layer) in this case?
There are two obvious errors in your modified code (from the official ones from PyTorch webpage). First, torch.nn.Linear(in_features, out_features) is the correct syntax. But, you're passing 768 * 4 * 4 as in_features. This is 4 times the actual number of neurons (pixels) in one CIFAR10 image (32*32*3 = 3072). The second bug is related to the first one. When you prepare your inputs tensor, # forward + backward + optimize; # `inputs` should be a tensor of shape [batch_size, input_shape] outputs = net(inputs) you should pass it as a tensor of shape [batch_size, input_size], which according to your requirement is [4, 3072] since you want to use a batch size of 4. This is where you should provide the batch dimension; Not in nn.Linear which is what you're currently doing and that is causing the error. Finally, you should also fix the line in forward method. Change the below line x = x.view(-1, 768 * 4 * 4) to x = x.view(-1, 32*32*3) Fixing these bugs should fix your errors. Having said that I'm unsure whether this would actually work, in a conceptual sense. Because this is a simple linear transformation (i.e. an affine transformation without any non-linearity). The data points (which correspond to images in CIFAR10) would most probably be not linearly separable, in this 3072 dimensional space (manifold). Hence, the accuracy would be drastically poor. Thus it is advisable to add at least a hidden layer with non-linearity such as ReLU.
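As a hedged illustration of that last suggestion, a minimal variant with one hidden non-linear layer might look like this (the hidden width of 512 is an arbitrary choice, not from the original answer):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(32 * 32 * 3, 512)  # 3072 CIFAR10 pixels -> hidden layer
        self.fc2 = nn.Linear(512, 10)           # 10 classes

    def forward(self, x):
        x = x.view(x.size(0), -1)               # [batch, 3072], batch dimension preserved
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
print(net(torch.randn(4, 3, 32, 32)).shape)     # torch.Size([4, 10])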
https://stackoverflow.com/questions/53586245/
Pytorch - TypeError: 'torch.Size' object cannot be interpreted as an integer
Hi I am training a PyTorch model and occurred this error: ----> 5 for i, data in enumerate(trainloader, 0): TypeError: 'torch.Size' object cannot be interpreted as an integer Not sure what this error means. You can find my code here : model.train() for epoch in range(10): running_loss = 0 for i, data in enumerate(trainloader, 0): inputs, labels = data optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() if i % 2000 == 0: print (loss.item()) running_loss += loss.item() if i % 1000 == 0: print ('[%d, %5d] loss: %.3f' % (epoch, i, running_loss/ 1000)) running_loss = 0 torch.save(model, 'FeatureNet.pkl') Update This is the codeblock for DataLoader. I am using a customized dataloader and datasets, which x are pictures with size (1025, 16) and y are one-hot encoded vectors for classification. x_train.shape = (1100, 1025, 16) y_train.shape = (1100, 10) clean_dir = '/home/tk/Documents/clean/' mix_dir = '/home/tk/Documents/mix/' clean_label_dir = '/home/tk/Documents/clean_labels/' mix_label_dir = '/home/tk/Documents/mix_labels/' class MSourceDataSet(Dataset): def __init__(self, clean_dir, mix_dir, clean_label_dir, mix_label_dir): with open(clean_dir + 'clean0.json') as f: clean0 = torch.Tensor(json.load(f)) with open(mix_dir + 'mix0.json') as f: mix0 = torch.Tensor(json.load(f)) with open(clean_label_dir + 'clean_label0.json') as f: clean_label0 = torch.Tensor(json.load(f)) with open(mix_label_dir + 'mix_label0.json') as f: mix_label0 = torch.Tensor(json.load(f)) self.spec = torch.cat([clean0, mix0], 0) self.label = torch.cat([clean_label0, mix_label0], 0) def __len__(self): return self.spec.shape def __getitem__(self, index): spec = self.spec[index] label = self.label[index] return spec, label getitem a, b = trainset.__getitem__(1000) print (a.shape) print (b.shape) a.shape = torch.Size([1025, 16]); b.shape = torch.Size([10]) Error message --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-9-3bd71e5c00e1> in <module>() 3 running_loss = 0 4 ----> 5 for i, data in enumerate(trainloader, 0): 6 7 inputs, labels = data ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 311 def __next__(self): 312 if self.num_workers == 0: # same-process loading --> 313 indices = next(self.sample_iter) # may raise StopIteration 314 batch = self.collate_fn([self.dataset[i] for i in indices]) 315 if self.pin_memory: ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/sampler.py in __iter__(self) 136 def __iter__(self): 137 batch = [] --> 138 for idx in self.sampler: 139 batch.append(idx) 140 if len(batch) == self.batch_size: ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/sampler.py in __iter__(self) 32 33 def __iter__(self): ---> 34 return iter(range(len(self.data_source))) 35 36 def __len__(self): TypeError: 'torch.Size' object cannot be interpreted as an integer
Your problem is the __len__ function. You cannot use the shape as return value. Here is an example for illustration: import torch class Foo: def __init__(self, data): self.data = data def __len__(self): return self.data.shape myFoo = Foo(data=torch.rand(10, 20)) print(len(myFoo)) Will raise exactly the same error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-285-e97aace2f622> in <module> 7 8 myFoo = Foo(data=torch.rand(10, 20)) ----> 9 print(len(myFoo)) TypeError: 'torch.Size' object cannot be interpreted as an integer Since shape represents a torch.Size tuple: print(myFoo.data.shape) Output: torch.Size([10, 20]) So you have to decide which dimension you want to hand over to __len__, for example the first dimension: import torch class Foo: def __init__(self, data): self.data = data def __len__(self): return self.data.shape[0] # choosing first dimension for len myFoo = Foo(data=torch.rand(10, 20)) print(len(myFoo)) # prints 10 Works fine and returns 10. Of course you can also choose any other dimension of your input, but you have to choose one. So in your code of your MSourceDataSet you have to change your __len__ function to for example: def __len__(self): return self.spec.shape[0] # as said of course you can also choose other dimensions This should solve your problem.
https://stackoverflow.com/questions/53588623/
Train a SqueezeNet model on the MNIST dataset in PyTorch
I want to train the SqueezeNet 1.1 model using the MNIST dataset instead of ImageNet. Can I have the same model as torchvision.models.squeezenet? Thanks!
TorchVision provides only an ImageNet-pretrained model for the SqueezeNet architecture. However, you can train your own model on the MNIST dataset by taking only the model architecture (not the pretrained weights) from torchvision.models.

In [10]: import torchvision as tv

# get the model architecture only; ignore the `pretrained` flag
In [11]: squeezenet11 = tv.models.squeezenet1_1()

In [12]: squeezenet11.training
Out[12]: True

Now, you can use this architecture to train a model on MNIST data, which should not take too long.
One modification to keep in mind is to update the number of classes, which is 10 for MNIST instead of 1000. Specifically, the 1000 should be changed to 10, and the kernel and stride accordingly:

(classifier): Sequential(
  (0): Dropout(p=0.5)
  (1): Conv2d(512, 1000, kernel_size=(1, 1), stride=(1, 1))
  (2): ReLU(inplace)
  (3): AvgPool2d(kernel_size=13, stride=1, padding=0)
)

Here's the relevant explanation: finetuning_torchvision_models-squeezenet
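As a hedged, untested sketch of those modifications (the 1-channel first conv is one way to handle grayscale MNIST; alternatively repeat each image to 3 channels and keep the stem unchanged):

import torch.nn as nn
import torchvision as tv

model = tv.models.squeezenet1_1()  # architecture only, no pretrained weights

# MNIST is 1-channel; replace the 3-channel input conv
model.features[0] = nn.Conv2d(1, 64, kernel_size=3, stride=2)

# 10 classes instead of 1000
model.classifier[1] = nn.Conv2d(512, 10, kernel_size=1)
model.num_classes = 10  # used by SqueezeNet's forward to reshape the output

Since the classifier ends in a fixed 13x13 average pool here, you would also resize the MNIST images up to 224x224 (e.g. transforms.Resize(224)) so the spatial dimensions work out.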
https://stackoverflow.com/questions/53593363/
Distributed PyTorch code halts on multiple nodes when using MPI backend
I am trying to run Pytorch code on three nodes using openMPI but the code just halts without any errors or output. Eventually my purpose is to distribute a Pytorch graph on these nodes. Three of my nodes are connected in same LAN and have SSH access to each other without password and have similar specifications: Ubuntu 18.04 Cuda 10.0 OpenMPI built and installed from source PyTorch built and installed from source The code shown below works on single node - multiple processes, as: > mpirun -np 3 -H 192.168.100.101:3 python3 run.py With following output: INIT 0 of 3 Init env:// INIT 1 of 3 Init env:// INIT 2 of 3 Init env:// RUN 0 of 3 with tensor([0., 0., 0.]) RUN 1 of 3 with tensor([0., 0., 0.]) RUN 2 of 3 with tensor([0., 0., 0.]) Rank 1 has data tensor(1.) Rank 0 has data tensor(1.) Rank 2 has data tensor(1.) But when I placed the code on three nodes and run following command on each node separately, it does nothing: > mpirun -np 3 -H 192.168.100.101:1,192.168.100.102:1,192.168.100.103:1 python3 run.py Please give some idea about any modifications in code or configurations for MPI to run given Pytorch code on multiple nodes? #!/usr/bin/env python import os import torch import torch.distributed as dist from torch.multiprocessing import Process def run(rank, size): tensor = torch.zeros(size) print(f"RUN {rank} of {size} with {tensor}") # incrementing the old tensor tensor += 1 # sending tensor to next rank if rank == size-1: dist.send(tensor=tensor, dst=0) else: dist.send(tensor=tensor, dst=rank+1) # receiving tensor from previous rank if rank == 0: dist.recv(tensor=tensor, src=size-1) else: dist.recv(tensor=tensor, src=rank-1) print('Rank ', rank, ' has data ', tensor[0]) def init_processes(rank, size, fn, backend, init): print(f"INIT {rank} of {size} Init {init}") dist.init_process_group(backend, init, rank=rank, world_size=size) fn(rank, size) if __name__ == "__main__": os.environ['MASTER_ADDR'] = '192.168.100.101' os.environ['BACKEND'] = 'mpi' os.environ['INIT_METHOD'] = 'env://' world_size = int(os.environ['OMPI_COMM_WORLD_SIZE']) world_rank = int(os.environ['OMPI_COMM_WORLD_RANK']) init_processes(world_rank, world_size, run, os.environ['BACKEND'], os.environ['INIT_METHOD']) N.B. NCCL is not an option for me due to arm64-based hardware.
Apologies for replying late to this, but I could solve the issue by adding the --mca btl_tcp_if_include eth1 flag to the mpirun command. The reason for the halt was that Open MPI, by default, tries to locate and communicate with other nodes over the local loopback network interface, e.g. lo. We have to explicitly specify which interface(s) should be included (or excluded) to locate the other nodes.
I hope it saves someone's day :)
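With that flag, the multi-node command from the question becomes the following (eth1 is the interface name on my nodes; check yours with ip addr, and note that --mca btl_tcp_if_exclude lo is an alternative way to rule out just the loopback):

> mpirun -np 3 --mca btl_tcp_if_include eth1 -H 192.168.100.101:1,192.168.100.102:1,192.168.100.103:1 python3 run.py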
https://stackoverflow.com/questions/53596010/
Size mismatch for fc.bias and fc.weight in PyTorch
I used the transfer learning approach to train a model and saved the best-detected weights. In another script, I tried to use the saved weights for prediction. But I am getting errors as follows. I have used ResNet for finetuning the network and have 4 classes. RuntimeError: Error(s) in loading state_dict for ResNet: size mismatch for fc.bias: copying a param of torch.Size([1000]) from checkpoint, where the shape is torch.Size([4]) in current model. size mismatch for fc.weight: copying a param of torch.Size([1000, 512]) from checkpoint, where the shape is torch.Size([4, 512]) in current model. I am using the following code for prediction of output: checkpoint = torch.load("./models/custom_model13.model") model = resnet18(pretrained=True) model.load_state_dict(checkpoint) model.eval() def predict_image(image_path): transformation = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) image_tensor = transformation(image).float() image_tensor = image_tensor.unsqueeze_(0) if torch.cuda.is_available(): image_tensor.cuda() input = Variable(image_tensor) output = model(input) index = output.data.numpy().argmax() return index if __name__ == "main": imagefile = "image.png" imagepath = os.path.join(os.getcwd(),imagefile) prediction = predict_image(imagepath) print("Predicted Class: ",prediction) And the following code to train and save the model: Data_dir = 'Dataset' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print (device) def save_models(epochs, model): torch.save(model.state_dict(), "custom_model{}.model".format(epochs)) print("Checkpoint Saved") def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': scheduler.step() model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'train' and epoch_acc > best_acc: save_models(epoch,model) best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model def visualize_model(model, num_images=6): was_training = model.training model.eval() images_so_far = 0 fig = plt.figure() with torch.no_grad(): for i, (inputs, labels) in enumerate(dataloaders['val']): inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) _, preds = torch.max(outputs, 1) for j in range(inputs.size()[0]): images_so_far += 1 ax = plt.subplot(num_images//2, 2, images_so_far) ax.axis('off') ax.set_title('predicted: {}'.format(class_names[preds[j]])) imshow(inputs.cpu().data[j]) if images_so_far == num_images: model.train(mode=was_training) return model.train(mode=was_training) model_ft = models.resnet18(pretrained=True) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 4) model_ft = model_ft.to(device) criterion = nn.CrossEntropyLoss() optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9) exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
Cause: You trained a model derived from resnet18 in this way: model_ft = models.resnet18(pretrained=True) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 4) That is, you changed the last nn.Linear layer to output a 4-dim prediction instead of the default 1000. When you try to load the model for prediction, your code is: model = resnet18(pretrained=True) model.load_state_dict(checkpoint) You did not apply the same change of the last nn.Linear layer to model, therefore the checkpoint you are trying to load does not fit. Fix: (1) Apply the same change before loading the checkpoint: model = resnet18(pretrained=True) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs, 4) # make the change model.load_state_dict(checkpoint) # load (2) Even better, use the num_classes argument to construct a resnet with the desired number of outputs to begin with. Note that pretrained=True is pointless here, and would itself fail with the same size mismatch (the pretrained weights have 1000 output classes), since you immediately overwrite all weights with your own checkpoint: model = resnet18(num_classes=4) model.load_state_dict(checkpoint) # load
https://stackoverflow.com/questions/53612835/
Copy GpuMat to CUDA Tensor
I am trying to run model inference in C++. I successfully traced the model in Python with torch.jit.trace. I am able to load the model in C++ using torch::jit::load(). I was able to perform inference on both CPU and GPU, however the starting point was always the torch::from_blob method, which seems to create a CPU-side tensor. For efficiency, I would like to cast/copy cv::cuda::GpuMat directly to a CUDA Tensor. I have been digging through pytorch tests and docs in search of a convenient example, but was unable to find one. Question: How to create a CUDA Tensor from cv::cuda::GpuMat?
Here is an example (note that it assumes the GpuMat holds float data, e.g. CV_32FC3 — hence the sizeof(float) in the step computation): //define the deleter ... void deleter(void* arg) {}; //your convert function cuda::GpuMat gImage; //build or load your image here ... std::vector<int64_t> sizes = {1, static_cast<int64_t>(gImage.channels()), static_cast<int64_t>(gImage.rows), static_cast<int64_t>(gImage.cols)}; long long step = gImage.step / sizeof(float); std::vector<int64_t> strides = {1, 1, step, static_cast<int64_t>(gImage.channels())}; auto tensor_image = torch::from_blob(gImage.data, sizes, strides, deleter, torch::kCUDA); std::cout << "output tensor image : " << tensor_image << std::endl;
https://stackoverflow.com/questions/53615833/
Auto-encoder for vector encodings
Here is an autoencoder I wrote to encode two vectors: [1,2,3] & [1,2,3] The vectors are created in: features = torch.tensor(np.array([ [1,2,3],[1,2,3] ])) This works, as per the code: %reset -f epochs = 1000 from pylab import plt plt.style.use('seaborn') import torch.utils.data as data_utils import torch import torchvision import torch.nn as nn from torch.autograd import Variable cuda = torch.cuda.is_available() FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor import numpy as np import pandas as pd import datetime as dt features = torch.tensor(np.array([ [1,2,3],[1,2,3] ])) print(features) batch = 2 data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=True) encoder = nn.Sequential(nn.Linear(3,batch), nn.Sigmoid()) decoder = nn.Sequential(nn.Linear(batch,3), nn.Sigmoid()) autoencoder = nn.Sequential(encoder, decoder) optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001) encoded_images = [] for i in range(epochs): for j, (images, _) in enumerate(data_loader): # images = images.view(images.size(0), -1) images = Variable(images).type(FloatTensor) optimizer.zero_grad() reconstructions = autoencoder(images) loss = torch.dist(images, reconstructions) loss.backward() optimizer.step() encoded_images.append(encoder(images)) But when I add a new vector: features = torch.tensor(np.array([ [1,2,3],[1,2,3],[1,2,3] ])) I receive the error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-223-3ca45519e975> in <module> 32 encoded_images = [] 33 for i in range(epochs): ---> 34 for j, (images, _) in enumerate(data_loader): 35 # images = images.view(images.size(0), -1) 36 images = Variable(images).type(FloatTensor) ValueError: not enough values to unpack (expected 2, got 1) Have I set up my data loader correctly?
Your Dataset (inside the DataLoader) returns only an image per item, without a label. When you iterate, expecting each item to be (image, _), you are trying to unpack a feature without a label into image and _, and this is not possible. This is why you get the "not enough values to unpack" error.
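For example, a minimal fix (keeping the dataset as-is) is to stop unpacking and treat each batch as the features themselves:

for i in range(epochs):
    for j, images in enumerate(data_loader):  # each batch is just a tensor of features
        images = Variable(images).type(FloatTensor)
        # ... rest of the loop unchanged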
https://stackoverflow.com/questions/53621440/
PyTorch optimizer.step() function doesn't update weights
The code can be seen below. The problem is that the optimizer.step() part doesn't work. I'm printing model.parameters() before and after the training, and the weights don't change. I'm trying to make a perceptron that can solve the AND-problem. I've been successful in doing this with my own tiny library, where I've implemented a perceptron with the two functions predict() and train(). Just to clarify, I've just started learning deep learning using PyTorch, so it's probably a very newbie problem. I've tried searching for a solution, but without luck. I've also compared my code with other code that works, but I don't know what I'm doing wrong. import torch from torch import nn, optim from random import randint class NeuralNet(nn.Module): def __init__(self): super(NeuralNet, self).__init__() self.layer1 = nn.Linear(2, 1) def forward(self, input): out = input out = self.layer1(out) out = torch.sign(out) out = torch.clamp(out, 0, 1) # 0=false, 1=true return out data = torch.Tensor([[0, 0], [0, 1], [1, 0], [1, 1]]) target = torch.Tensor([0, 0, 0, 1]) model = NeuralNet() epochs = 1000 lr = 0.01 print(list(model.parameters())) print() # Print parameters before training loss_func = nn.L1Loss() optimizer = optim.Rprop(model.parameters(), lr) for epoch in range(epochs + 1): optimizer.zero_grad() rand_int = randint(0, len(data) - 1) x = data[rand_int] y = target[rand_int] pred = model(x) loss = loss_func(pred, y) loss.backward() optimizer.step() # Print parameters again # But they haven't changed print(list(model.parameters()))
Welcome to stackoverflow! The issue here is you are trying to perform back-propagation through a non-differentiable function. Non-differentiable means that no gradients can flow back through them, implying that all trainable weights applied before them will not be updated by your optimizer. Such functions are easy to spot; they are discrete, sharp operations that resemble 'if' statements. In your case it is the sign() function. Unfortunately, PyTorch does not do any hand-holding in this regard and will not point you to the issue. What you could do to alleviate the issue would be to transform the range of your output to [-1,1] and apply a Tanh() non-linearity instead of the sign() and clamp() operators.
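For example, a minimal differentiable variant of your forward pass could look like this (a sketch — note the 0/1 targets are rescaled to -1/1 to match the tanh output range):

def forward(self, input):
    out = self.layer1(input)
    return torch.tanh(out)  # smooth, differentiable stand-in for sign() + clamp()

target = torch.Tensor([-1, -1, -1, 1])  # AND targets rescaled from {0, 1} to {-1, 1}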
https://stackoverflow.com/questions/53622076/
How do I display a single image in PyTorch?
How do I display a PyTorch Tensor of shape (3, 224, 224) representing a 224x224 RGB image? Using plt.imshow(image) gives the error: TypeError: Invalid dimensions for image data
Given a Tensor representing the image, use .permute() to put the channels as the last dimension: plt.imshow( tensor_image.permute(1, 2, 0) ) Note: permute does not copy or allocate memory, and from_numpy() doesn't either.
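If the tensor lives on the GPU or has requires_grad=True, detach it and move it to the CPU first, since matplotlib needs a plain CPU array (a small sketch):

plt.imshow( tensor_image.detach().cpu().permute(1, 2, 0) )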
https://stackoverflow.com/questions/53623472/
Updating pre-trained Deep Learning model with respect to new data points
Considering the example of image classification on ImageNet, how do I update a pre-trained model using new data points? I have loaded the pre-trained model. I have a new data point that is quite different from the distribution of the original data on which the model was previously trained, so I would like to update/fine-tune the model with the help of the new data point. How do I go about doing it? Can anyone help me out? I am using pytorch 0.4.0 for implementation, running on a GPU Tesla K40C.
If you don't want to change the output of the classifier (i.e. the number of classes), then you can simply continue training the model with new example images, assuming that they are reshaped to the same shape that the pretrained model accepts. On the other hand, if you want to change the number of classes in a pre-trained model, then you can replace the last fully connected layer with a new one and train only this specific layer on new samples. Here's a sample code for this case from PyTorch's autograd mechanics notes: model = torchvision.models.resnet18(pretrained=True) for param in model.parameters(): param.requires_grad = False # Replace the last fully-connected layer # Parameters of newly constructed modules have requires_grad=True by default model.fc = nn.Linear(512, 100) # Optimize only the classifier optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
https://stackoverflow.com/questions/53624766/
Differences between `torch.Tensor` and `torch.cuda.Tensor`
We can allocate a tensor on the GPU using torch.Tensor([1., 2.], device='cuda'). Are there any differences between that approach and torch.cuda.Tensor([1., 2.]), except that we can pass a specific CUDA device to the former one? Or in other words, in which scenario is torch.cuda.Tensor() necessary?
So generally both torch.Tensor and torch.cuda.Tensor are equivalent. You can do everything you like with them both. The key difference is just that torch.Tensor occupies CPU memory while torch.cuda.Tensor occupies GPU memory. Of course operations on a CPU Tensor are computed with the CPU while operations for the GPU / CUDA Tensor are computed on the GPU. The reason you need these two tensor types is that the underlying hardware interface is completely different. Apart from the fact that it doesn't make sense computationally, you will get an error as soon as you try to do computations between torch.Tensor and torch.cuda.Tensor: import torch # device will be 'cuda' if a GPU is available device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # creating a CPU tensor cpu_tensor = torch.rand(10) # moving same tensor to GPU gpu_tensor = cpu_tensor.to(device) print(cpu_tensor, cpu_tensor.dtype, type(cpu_tensor), cpu_tensor.type()) print(gpu_tensor, gpu_tensor.dtype, type(gpu_tensor), gpu_tensor.type()) print(cpu_tensor*gpu_tensor) Output: tensor([0.8571, 0.9171, 0.6626, 0.8086, 0.6440, 0.3682, 0.9920, 0.4298, 0.0172, 0.1619]) torch.float32 <class 'torch.Tensor'> torch.FloatTensor tensor([0.8571, 0.9171, 0.6626, 0.8086, 0.6440, 0.3682, 0.9920, 0.4298, 0.0172, 0.1619], device='cuda:0') torch.float32 <class 'torch.Tensor'> torch.cuda.FloatTensor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-15-ac794171c178> in <module>() 12 print(gpu_tensor, gpu_tensor.dtype, type(gpu_tensor), gpu_tensor.type()) 13 ---> 14 print(cpu_tensor*gpu_tensor) RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'other' As the underlying hardware interface is completely different, CPU Tensors are only compatible with CPU Tensors and, vice versa, GPU Tensors are only compatible with GPU Tensors. Edit: As you can see here, a tensor which is moved to the GPU is actually a tensor of type: torch.cuda.*Tensor i.e. torch.cuda.FloatTensor. So cpu_tensor.to(device) or torch.Tensor([1., 2.], device='cuda') will actually return a tensor of type torch.cuda.FloatTensor. In which scenario is torch.cuda.Tensor() necessary? When you want to use GPU acceleration (which is much faster in most cases) for your program, you need to use torch.cuda.Tensor, but you have to make sure that ALL tensors you are using are CUDA Tensors; mixing is not possible here.
https://stackoverflow.com/questions/53628940/
Why does this error pop up while working with Deep Q learning?
I have been working with Deep Q Learning on a Windows 10 machine. I have version 0.4.1 of PyTorch with an NVIDIA graphics card. def select_action(self, state): probs = F.softmax(self.model(Variable(state, volatile = True))*7) action = probs.multinomial() return action.data[0,0] From this section of the code, I keep getting this error: TypeError: multinomial() missing 1 required positional arguments: "num_samples" If any other information is needed, it will be provided very quickly.
Based on the documentation, you didn't specify num_samples for the multinomial function, i.e. how many samples to draw from your multinomial distribution. torch.multinomial(input, num_samples, replacement=False, out=None) Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input. Change the code as below: def select_action(self, state): probs = F.softmax(self.model(Variable(state, volatile = True))*7) action = probs.multinomial(1) # 1 is the number of samples to draw return action.data[0,0]
https://stackoverflow.com/questions/53639839/
Unexpected increase in validation error in MNIST Pytorch
I'm a bit new to the whole field and thus decided to work on the MNIST dataset. I pretty much adapted the whole code from https://github.com/pytorch/examples/blob/master/mnist/main.py, with only one significant change: Data Loading. I didn't want to use the pre-loaded dataset within Torchvision. So I used MNIST in CSV. I loaded the data from CSV file by inheriting from Dataset and making a new dataloader. Here's the relevant code: mean = 33.318421449829934 sd = 78.56749081851163 # mean = 0.1307 # sd = 0.3081 import numpy as np from torch.utils.data import Dataset, DataLoader class dataset(Dataset): def __init__(self, csv, transform=None): data = pd.read_csv(csv, header=None) self.X = np.array(data.iloc[:, 1:]).reshape(-1, 28, 28, 1).astype('float32') self.Y = np.array(data.iloc[:, 0]) del data self.transform = transform def __len__(self): return len(self.X) def __getitem__(self, idx): item = self.X[idx] label = self.Y[idx] if self.transform: item = self.transform(item) return (item, label) import torchvision.transforms as transforms trainData = dataset('mnist_train.csv', transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((mean,), (sd,)) ])) testData = dataset('mnist_test.csv', transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((mean,), (sd,)) ])) train_loader = DataLoader(dataset=trainData, batch_size=10, shuffle=True, ) test_loader = DataLoader(dataset=testData, batch_size=10, shuffle=True, ) However this code gives me the absolutely weird training error graph that you see in the picture, and a final validation error of 11% because it classifies everything as a '7'. I managed to track the problem down to how I normalize the data and if I use the values given in the example code (0.1307, and 0.3081) for transforms.Normalize, along with reading the data as type 'uint8' it works perfectly. Note that there is very minimal difference in the data which is provided in these two cases. Normalizing by 0.1307 and 0.3081 on values from 0 to 1 has the same effect as normalizing by 33.31 and 78.56 on values from 0 to 255. The values are even mostly the same (A black pixel corresponds to -0.4241 in the first case and -0.4242 in the second). If you would like to see a IPython Notebook where this problem is seen clearly, please check out https://colab.research.google.com/drive/1W1qx7IADpnn5e5w97IcxVvmZAaMK9vL3 I am unable to understand what has caused such a huge difference in behaviour within these two slightly different ways of loading data. Any help would be massively appreciated.
Long story short: you need to change item = self.X[idx] to item = self.X[idx].copy(). Long story long: T.ToTensor() runs torch.from_numpy, which returns a tensor which aliases the memory of your numpy array dataset.X. And T.Normalize() works inplace, so each time the sample is drawn it has mean subtracted and is divided by std, leading to degradation of your dataset. Edit: regarding why it works in the original MNIST loader, the rabbit hole is even deeper. The key line in MNIST is that the image is transformed into a PIL.Image instance. The operation claims to only copy in case the buffer is not contiguous (it is in our case), but under the hood it checks whether it's strided instead (which it is), and thus copies it. So by luck, the default torchvision pipeline involves a copy and thus in-place operation of T.Normalize() does not corrupt the in-memory self.data of our MNIST instance.
https://stackoverflow.com/questions/53652015/
Why was the method of a class called without mentioning the method?
I am currently going through this pytorch tutorial but I think this problem is one regarding Python classes in general: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py Specifically, a class called Net was created and we created an object called net=Net(). In the Net class there is a method forward(self,X). However, later forward was used just by writing net(X). Shouldn't it be net.forward(X)? import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 5x5 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() print(net) input = torch.randn(1, 1, 32, 32) out = net(input) print(out)
If you check the source code of nn.Module you will see that it implements __call__, which makes its instances (and instances of its subclasses) callable. def __call__(self, *input, **kwargs): for hook in self._forward_pre_hooks.values(): hook(self, input) if torch.jit._tracing: result = self._slow_forward(*input, **kwargs) else: result = self.forward(*input, **kwargs) for hook in self._forward_hooks.values(): hook_result = hook(self, input, result) if hook_result is not None: raise RuntimeError( "forward hooks should never return any values, but '{}'" "didn't return None".format(hook)) if len(self._backward_hooks) > 0: var = result while not isinstance(var, torch.Tensor): if isinstance(var, dict): var = next((v for v in var.values() if isinstance(v, torch.Tensor))) else: var = var[0] grad_fn = var.grad_fn if grad_fn is not None: for hook in self._backward_hooks.values(): wrapper = functools.partial(hook, self) functools.update_wrapper(wrapper, hook) grad_fn.register_hook(wrapper) return result That's why net = Net() input = torch.randn(1, 1, 32, 32) out = net(input) Is a totally valid code. net(input) executes net.__call__(input).
https://stackoverflow.com/questions/53653532/
VSCode bug with PyTorch DataLoader?
The following code example works in Python, but fails in VSCode in Linux (but not VSCode in Windows). I am wondering if there is something wrong with my code, or if there is something wrong with VSCode under Linux? #Test of PyTorch DataLoader and Visual Studio Code from torch.utils.data import Dataset, DataLoader class SimpleData(Dataset): """Very simple dataset""" def __init__(self): self.data = range(20) def __len__(self): return len(self.data) def __getitem__(self, idx): return self.data[idx] if __name__ == '__main__': #Initialize DataLoader with above Dataset: dataloader = DataLoader(SimpleData(), batch_size=4, num_workers=1) print('Using DataLoader to show data in batches: ') for i, sample_batch in enumerate(dataloader): #This fails in VSCode in Linux print('batch ', i, ':', sample_batch) print("--- Done ---") The expected output is: Using DataLoader to show data in batches: batch 0 : tensor([0, 1, 2, 3]) batch 1 : tensor([4, 5, 6, 7]) batch 2 : tensor([ 8, 9, 10, 11]) batch 3 : tensor([12, 13, 14, 15]) batch 4 : tensor([16, 17, 18, 19]) --- Done --- But with VSCode in Linux it hangs after printing the first line.
Have you tried with num_workers=0? Maybe VS Code is not able to spawn new worker processes properly on Linux.
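For example (no worker subprocesses at all, so all loading happens in the main process):

dataloader = DataLoader(SimpleData(), batch_size=4, num_workers=0)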
https://stackoverflow.com/questions/53660465/
Implementing a many-to-many regression task
Sorry if I don't present my problem clearly; English is not my first language Problem Short description: I want to train a model which maps an input x (with shape [n_sample, timestamp, feature]) to an output y (with the exact same shape). It's like mapping between 2 spaces Longer version: I have 2 float ndarrays of shape [n_sample, timestamp, feature], representing the MFCC features of n_sample audio files. These 2 ndarrays are 2 speakers' speech of the same corpus, which was aligned by DTW. Let's name these 2 arrays x and y. I want to train a model which predicts y[k] given x[k]. It's like mapping from space x to space y, and the output must be the exact same shape as the input What I've tried It's a time-series problem so I decided to use an RNN approach. Here is my code in PyTorch (I put comments along the code. I removed the calculation of average loss for simplicity). Note that I've tried many options for the learning rate; the behavior is still the same Class define class Net(nn.Module): def __init__(self, in_size, hidden_size, out_size, nb_lstm_layers): super().__init__() self.in_size = in_size self.hidden_size = hidden_size self.out_size = out_size self.nb_lstm_layers = nb_lstm_layers # self.fc1 = nn.Linear() self.lstm = nn.LSTM(input_size=self.in_size, hidden_size=self.hidden_size, num_layers=self.nb_lstm_layers, batch_first=True, bias=True) # self.fc = nn.Linear(self.hidden_size, self.out_size) self.fc1 = nn.Linear(self.hidden_size, 128) self.fc2 = nn.Linear(128, 128) self.fc3 = nn.Linear(128, self.out_size) def forward(self, x, h_state): out, h_state = self.lstm(x, h_state) output_fc = [] for frame in out: output_fc.append(self.fc3(torch.tanh(self.fc1(frame)))) # I added fully connected layer to each frame, to make an output with same shape as input return torch.stack(output_fc), h_state def hidden_init(self): if use_cuda: h_state = torch.stack([torch.zeros(nb_lstm_layers, batch_size, 20) for _ in range(2)]).cuda() else: h_state = torch.stack([torch.zeros(nb_lstm_layers, batch_size, 20) for _ in range(2)]) return h_state Training step: net = Net(20, 20, 20, nb_lstm_layers) optimizer = optim.Adam(net.parameters(), lr=0.0001, weight_decay=0.0001) criterion = nn.MSELoss() for epoch in range(nb_epoch): count = 0 loss_sum = 0 batch_x = None for i in (range(len(data))): # data is my entire data, which contains the A and B I specified above. temp_x = torch.tensor(data[i][0]) temp_y = torch.tensor(data[i][1]) for ii in range(0, data[i][0].shape[0] - nb_frame_in_batch*2 + 1): # Create batches batch_x, batch_y = get_batches(temp_x, temp_y, ii, batch_size, nb_frame_in_batch) # this will return 2 tensor of shape (batch_size, nb_frame_in_batch, 20), # with `batch_size` is the number of sample each time I feed to the net, # nb_frame_in_batch is the number of frame in each sample optimizer.zero_grad() h_state = net.hidden_init() prediction, h_state = net(batch_x.float(), h_state) loss = criterion(prediction.float(), batch_y.float()) h_state = (h_state[0].detach(), h_state[1].detach()) loss.backward() optimizer.step() The problem is, the loss seems not to decrease but fluctuates a lot, without a clear trend Please help me. Any suggestion will be greatly appreciated. If somebody could inspect my code and provide some comments, that would be very kind. Thanks in advance!
It seems the network is learning nothing from your data, hence the loss fluctuation (since the weights then depend only on their random initialization). There are some things you can try: Try to normalize the data (this suggestion is quite broad, but I can't give you more details since I don't have your data — normalizing it to a specific range like [0, 1], or to a given mean and std value, is worth trying; see the sketch below) One very typical pitfall of LSTM in pytorch is its input shape, which is quite different from other types of neural networks. By default you must feed your network a tensor with shape (seq_len, batch, input_size) — unless you construct the LSTM with batch_first=True, as you do, in which case it expects (batch, seq_len, input_size). You should go here, LSTM section, for better details One more thing: try to tune your hyperparameters. LSTM is harder to train compared to FC or CNN (in my experience). Tell me if you have improvement. Debugging a neural network is always hard and full of potential coding mistakes
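As an illustration of the normalization point, a minimal per-feature standardization sketch (assuming x is a float tensor of shape (n_sample, timestamp, feature); the statistics are computed over all samples and timestamps):

flat = x.view(-1, x.size(-1))      # collapse the sample and time dimensions
mean = flat.mean(dim=0)            # per-feature mean, shape (feature,)
std = flat.std(dim=0)              # per-feature std, shape (feature,)
x = (x - mean) / (std + 1e-8)      # broadcasts over the last dimension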
https://stackoverflow.com/questions/53667213/
in-place shuffle torch.Tensor in the order of a numpy.ndarray
I want to change the order of the elements of a torch.Tensor from the default order to one given by a numpy.ndarray. In other words, I want to shuffle it so that the order of its elements is specified by a numpy array; the important thing about this problem is that I don't want any third object to be created (because of memory limits). Is there something like the code below in Python 2.7? torch_tensor.shuffle(order)
Edit: This should be an in-place version: import torch import numpy as np t = torch.rand(10) print('Original Tensor:', t) order = np.array(range(10)) np.random.shuffle(order) print('Order:', order) # in-place changing of values t[np.array(range(10))] = t[order] print('New Tensor:', t) Output: Original Tensor: tensor([ 0.3380, 0.3450, 0.2253, 0.0279, 0.3945, 0.6055, 0.1489, 0.7676, 0.4213, 0.2683]) Order: [7 1 3 6 2 9 0 5 4 8] New Tensor: tensor([ 0.7676, 0.3450, 0.0279, 0.1489, 0.2253, 0.2683, 0.3380, 0.6055, 0.3945, 0.4213]) I hope this is roughly what you were looking for!
https://stackoverflow.com/questions/53673575/
Can pytorch's autograd handle torch.cat?
I'm trying to implement a simple neural network that is supposed to learn a grayscale image. The input consists of the 2d indices of a pixel, the output should be the value of that pixel. The net is constructed as follows: Each neuron is connected to the input (i.e. the indices of the pixel) as well as to the output of each of the previous neurons. The output is just the output of the last neuron in this sequence. This kind of network has been very successful in learning images, as demonstrated e.g. here. The Problem: In my implementation the loss function stays between 0.2 and 0.4 depending on the number of neurons, the learning rate and the number of iterations used, which is pretty bad. Also, if you compare the output to what we've trained it for, it just looks like noise. But this is the first time I used torch.cat within the network, so I'm not sure whether this is the culprit. Can anyone see what I'm doing wrong? from typing import List import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.nn import Linear class My_Net(nn.Module): lin: List[Linear] def __init__(self): super(My_Net, self).__init__() self.num_neurons = 10 self.lin = nn.ModuleList([nn.Linear(k+2, 1) for k in range(self.num_neurons)]) def forward(self, x): v = x recent = torch.Tensor(0) for k in range(self.num_neurons): recent = F.relu(self.lin[k](v)) v = torch.cat([v, recent], dim=1) return recent def num_flat_features(self, x): size = x.size()[1:] num = 1 for i in size: num *= i return num my_net = My_Net() print(my_net) #define a small 3x3 image that the net is supposed to learn my_image = [[1.0, 1.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]] #represents a T-shape my_image_flat = [] #output of the net is the value of a pixel my_image_indices = [] #input to the net are the 2d indices of a pixel for i in range(len(my_image)): for j in range(len(my_image[i])): my_image_flat.append(my_image[i][j]) my_image_indices.append([i, j]) #optimization loop for i in range(100): inp = torch.Tensor(my_image_indices) out = my_net(inp) target = torch.Tensor(my_image_flat) criterion = nn.MSELoss() loss = criterion(out.view(-1), target) print(loss) my_net.zero_grad() loss.backward() optimizer = optim.SGD(my_net.parameters(), lr=0.001) optimizer.step() print("output of current image") print([[my_net(torch.Tensor([[i,j]])).item() for i in range(3)] for j in range(3)]) print("output of original image") print(my_image)
Yes, torch.cat is backprop-able, so you can use it without problems for this. The problem here is that you define a new optimizer at every iteration. Instead you should define it once, after you define your model. With this change the code works fine and the loss decreases continuously. I also added a print-out every 5000 iterations to show the progress. from typing import List import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.nn import Linear class My_Net(nn.Module): lin: List[Linear] def __init__(self): super(My_Net, self).__init__() self.num_neurons = 10 self.lin = nn.ModuleList([nn.Linear(k+2, 1) for k in range(self.num_neurons)]) def forward(self, x): v = x recent = torch.Tensor(0) for k in range(self.num_neurons): recent = F.relu(self.lin[k](v)) v = torch.cat([v, recent], dim=1) return recent def num_flat_features(self, x): size = x.size()[1:] num = 1 for i in size: num *= i return num my_net = My_Net() print(my_net) optimizer = optim.SGD(my_net.parameters(), lr=0.001) #define a small 3x3 image that the net is supposed to learn my_image = [[1.0, 1.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]] #represents a T-shape my_image_flat = [] #output of the net is the value of a pixel my_image_indices = [] #input to the net are the 2d indices of a pixel for i in range(len(my_image)): for j in range(len(my_image[i])): my_image_flat.append(my_image[i][j]) my_image_indices.append([i, j]) #optimization loop for i in range(50000): inp = torch.Tensor(my_image_indices) out = my_net(inp) target = torch.Tensor(my_image_flat) criterion = nn.MSELoss() loss = criterion(out.view(-1), target) if i % 5000 == 0: print('Iteration:', i, 'Loss:', loss) my_net.zero_grad() loss.backward() optimizer.step() print('Iteration:', i, 'Loss:', loss) print("output of current image") print([[my_net(torch.Tensor([[i,j]])).item() for i in range(3)] for j in range(3)]) print("output of original image") print(my_image) Loss output: Iteration: 0 Loss: tensor(0.4070) Iteration: 5000 Loss: tensor(0.1315) Iteration: 10000 Loss: tensor(1.00000e-02 * 8.8275) Iteration: 15000 Loss: tensor(1.00000e-02 * 5.6190) Iteration: 20000 Loss: tensor(1.00000e-02 * 3.2540) Iteration: 25000 Loss: tensor(1.00000e-02 * 1.3628) Iteration: 30000 Loss: tensor(1.00000e-03 * 4.4690) Iteration: 35000 Loss: tensor(1.00000e-03 * 1.3582) Iteration: 40000 Loss: tensor(1.00000e-04 * 3.4776) Iteration: 45000 Loss: tensor(1.00000e-05 * 7.9518) Iteration: 49999 Loss: tensor(1.00000e-05 * 1.7160) So the loss goes down to 0.000017 in this case. I have to admit that your error surface is really ragged. Depending on the initial weights it may also converge to a minimum of 0.17, 0.10, etc. The local minimum where it converges can be very different. So you might try initializing your weights within a smaller range. Btw. here is the output without moving the optimizer definition: Iteration: 0 Loss: tensor(0.5574) Iteration: 5000 Loss: tensor(0.5556) Iteration: 10000 Loss: tensor(0.5556) Iteration: 15000 Loss: tensor(0.5556) Iteration: 20000 Loss: tensor(0.5556) Iteration: 25000 Loss: tensor(0.5556) Iteration: 30000 Loss: tensor(0.5556) Iteration: 35000 Loss: tensor(0.5556) Iteration: 40000 Loss: tensor(0.5556) Iteration: 45000 Loss: tensor(0.5556)
https://stackoverflow.com/questions/53683116/
Appending a recurrent layer to PyTorch LSTM model with different hidden size
I'm developing a BI-LSTM model for sequence analysis using PyTorch, for which I am using torch.nn.LSTM. With that module, you can have several layers by just passing the parameter num_layers as the number of layers (e.g., num_layers=2). However, all of them will have the same hidden_size, which is only partially fine for me: I want all of them to have the same hidden_size, but the last layer to have a different size. Basic example follows: rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2) inp = torch.randn(5, 3, 10) h0 = torch.randn(2, 3, 20) c0 = torch.randn(2, 3, 20) output, (hn, cn) = rnn(inp, (h0, c0)) The output dim is (5, 3, 20) One solution (but unfavorable to me) is implementing an extra model that outputs the dimension I need and takes its input from the first model, e.g.: rnn_two = nn.LSTM(input_size=20, hidden_size=2) output2, _ = rnn_two(output) However, I do not want to do this because I parallelize the model using DataParallel, so I need it all to be one package. I was hoping to find something similar to keras, e.g.: rnn.add(LSTM, hidden_size=2) I have checked the LSTM source code but couldn't find what I need. Any suggestions?
If I'm not mistaken, this can be done like this: import torch import torch.nn as nn import torch.nn.functional as F class RnnWith2HiddenSizesModel(nn.Module): def __init__(self): super(RnnWith2HiddenSizesModel, self).__init__() self.rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2) self.rnn_two = nn.LSTM(input_size=20, hidden_size=2) def forward(self, inp, hc): output, _ = self.rnn(inp, hc) output2, _ = self.rnn_two(output) return output2 inp = torch.randn(5, 3, 10) h0 = torch.randn(2, 3, 20) c0 = torch.randn(2, 3, 20) rnn = RnnWith2HiddenSizesModel() output = rnn(inp, (h0, c0)) # use the instance created above tensor([[[-0.0305, 0.0327], [-0.1534, -0.1193], [-0.1393, 0.0474]], [[-0.0370, 0.0519], [-0.2081, -0.0693], [-0.1809, 0.0826]], [[-0.0561, 0.0731], [-0.2307, -0.0229], [-0.1780, 0.0901]], [[-0.0612, 0.0891], [-0.2378, 0.0164], [-0.1760, 0.0929]], [[-0.0660, 0.1023], [-0.2176, 0.0508], [-0.1611, 0.1053]]], grad_fn=<CatBackward>)
https://stackoverflow.com/questions/53686052/
can't find the inplace operation: one of the variables needed for gradient computation has been modified by an inplace operation
I am trying to compute a loss on the jacobian of the network (i.e. to perform double backprop), and I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation I can't find the inplace operation in my code, so I don't know which line to fix. The error occurs in the last line: loss3.backward() inputs_reg = Variable(data, requires_grad=True) output_reg = self.model.forward(inputs_reg) num_classes = output.size()[1] jacobian_list = [] grad_output = torch.zeros(*output_reg.size()) if inputs_reg.is_cuda: grad_output = grad_output.cuda() jacobian_list = jacobian.cuda() for i in range(10): zero_gradients(inputs_reg) grad_output.zero_() grad_output[:, i] = 1 jacobian_list.append(torch.autograd.grad(outputs=output_reg, inputs=inputs_reg, grad_outputs=grad_output, only_inputs=True, retain_graph=True, create_graph=True)[0]) jacobian = torch.stack(jacobian_list, dim=0) loss3 = jacobian.norm() loss3.backward()
grad_output.zero_() is in-place, and so is the assignment grad_output[:, i] = 1. In-place means "modify a tensor instead of returning a new one, which has the modifications applied". An example solution which is not in-place is torch.where. An example use to zero out the 1st column import torch t = torch.randn(3, 3) ixs = torch.arange(3, dtype=torch.int64) zeroed = torch.where(ixs[None, :] == 1, torch.tensor(0.), t) zeroed tensor([[-0.6616, 0.0000, 0.7329], [ 0.8961, 0.0000, -0.1978], [ 0.0798, 0.0000, -1.2041]]) t tensor([[-0.6616, -1.6422, 0.7329], [ 0.8961, -0.9623, -0.1978], [ 0.0798, -0.7733, -1.2041]]) Notice how t retains the values it had before and zeroed has the values you want.
https://stackoverflow.com/questions/53691156/
PyTorch - loading images without sub folders
First of all I wanted to say that I am new to PyTorch, so I apologize in advance if the level of my questions is not that high. I was wondering if you can help me with something (I actually have 2 questions). The story behind them: I am working on image classification. My test data is divided into subfolders based on its labels, and I am loading it via DataLoader. My questions: 1) Is it true that if you have trained your model with a specific batch size, testing it with other sizes will affect the accuracy? 2) Is there a way to load and use the model with test data located in a single folder (without subfolders)? DataLoader needs subfolders as far as I know. Thank you in advance!
It depends on whether you're using operations that depend on other items in the batch. If you're using things like batch normalization it may, but in general, if your network processes batch items separately, it doesn't. As for your second question: if you check the documentation of torch.utils.data.Dataset, you will see that a dataset essentially only requires the __len__ and __getitem__ methods, where the former says how many items the dataset contains and the latter gets the ith item - be it an image and a label, an image and its segmentation mask, or other things. There is nothing stopping you from writing a custom Dataset. I suggest you take a look at the source code of DatasetFolder and modify it according to your needs.
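For illustration, a minimal sketch of such a custom Dataset for a flat folder of images (the class name and ordering by filename are my own choices; adapt the transform to match your training pipeline):

import os
from PIL import Image
from torch.utils.data import Dataset

class FlatFolderDataset(Dataset):
    def __init__(self, folder, transform=None):
        # one image per file, no subfolders and therefore no labels
        self.paths = [os.path.join(folder, f) for f in sorted(os.listdir(folder))]
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img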
https://stackoverflow.com/questions/53693431/
torch.utils.data.dataloader outputs TypeError: 'module' object is not callable
So I'm trying to learn PyTorch, and I got this code from a tutorial. It's just there to import an MNIST dataset, but it outputs "TypeError: 'module' object is not callable" In the tutorial "dataloader" was written as "Dataloader", but when I run it like that it outputs "AttributeError: module 'torch.utils.data' has no attribute 'Dataloader'" The data downloaded into a folder called mnist, but I don't know if it is complete import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optom from torchvision import datasets, transforms from torch.autograd import Variable kwargs={} train=torch.utils.data.dataloader(datasets.MNIST("mnist",train=True,download=True,transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.1307),(0.3081,) )] ) ),batch_size=64, shuffle=True, **kwargs)
It's neither dataloader nor Dataloader but DataLoader :) Side-note: if you're new to PyTorch, consider using the newest version 1.0. torch.autograd.Variable is deprecated as of PyTorch 0.4.1 (I believe) so you're either using an older version of PyTorch or an outdated tutorial.
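For example, the corrected call from your snippet would be: train = torch.utils.data.DataLoader(datasets.MNIST("mnist", train=True, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])), batch_size=64, shuffle=True, **kwargs)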
https://stackoverflow.com/questions/53693999/
Why we need image.to('CUDA') when we have model.to('CUDA')
I'm taking a course on PyTorch, and I'm wondering why we need to separately tell the tensors coming out of torch.utils.data.DataLoader what device to run on. If the model is already on CUDA, why doesn't it automatically move the inputs accordingly? This pattern seems funny to me: model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) Is there a use case where I'd like to have the model running on the GPU but my inputs on the CPU, or vice versa?
When you call model.to(device) (assuming device is a GPU) your model parameters will be moved to your GPU. Regarding your comment: they are moved from CPU memory to GPU memory then. By default newly created tensors are created on CPU, if not specified otherwise. So this applies also for your inputs and labels. The problem here is that all operands of an operation need to be on the same device! If you leave out the to and use CPU tensors as input you will get an error message. Here is a short example for illustration: import torch # device will be 'cuda' if a GPU is available device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # creating a CPU tensor cpu_tensor = torch.rand(10) # moving same tensor to GPU gpu_tensor = cpu_tensor.to(device) print(cpu_tensor, cpu_tensor.dtype, type(cpu_tensor), cpu_tensor.type()) print(gpu_tensor, gpu_tensor.dtype, type(gpu_tensor), gpu_tensor.type()) print(cpu_tensor*gpu_tensor) Output: tensor([0.8571, 0.9171, 0.6626, 0.8086, 0.6440, 0.3682, 0.9920, 0.4298, 0.0172, 0.1619]) torch.float32 <class 'torch.Tensor'> torch.FloatTensor tensor([0.8571, 0.9171, 0.6626, 0.8086, 0.6440, 0.3682, 0.9920, 0.4298, 0.0172, 0.1619], device='cuda:0') torch.float32 <class 'torch.Tensor'> torch.cuda.FloatTensor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-15-ac794171c178> in <module>() 12 print(gpu_tensor, gpu_tensor.dtype, type(gpu_tensor), gpu_tensor.type()) 13 ---> 14 print(cpu_tensor*gpu_tensor) RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'other'
https://stackoverflow.com/questions/53695105/
Issues importing pytorch with conda
My system is: x86_64 DISTRIB_ID=LinuxMint DISTRIB_RELEASE=17.1 DISTRIB_CODENAME=rebecca DISTRIB_DESCRIPTION="Linux Mint 17.1 Rebecca" NAME="Ubuntu" VERSION="14.04.5 LTS, Trusty Tahr" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 14.04.5 LTS" VERSION_ID="14.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" cat: /etc/upstream-release: Is a directory I have recently installed PyTorch since it is required for the use of the Python causal discovery toolbox, and I have installed it using Anaconda without the use of CUDA, like so: conda install pytorch-cpu torchvision-cpu -c pytorch This installation was successful, but when I try to run import torch in IDLE, it is not able to find the module. I realize that this question may be redundant, but I have searched through similar issues without finding anything that worked, and I am short on time. Also, I am unfamiliar with Anaconda, so I am not sure how to get packages installed through it to work with the rest of Python. Thank you in advance for any help.
If you would like to use PyTorch, install it in your local environment using conda create -n pytorch_env python=3 source activate pytorch_env conda install pytorch-cpu torchvision -c pytorch Then go to the Python shell and import it using the command import torch
https://stackoverflow.com/questions/53697522/
pytorch equivalent tf.gather
I'm having some trouble porting some code over from tensorflow to pytorch. So I have a matrix with dimensions 10x30 representing 10 examples each with 30 features. Then I have another matrix with dimensions 10x5 containing the indices of the 5 closest examples for each example in the first matrix. I want to 'gather', using the indices contained in the second matrix, the 5 closest examples for each example in the first matrix, leaving me with a 3d tensor of shape 10x5x30. In tensorflow this is done with tf.gather(matrix1, matrix2). Does anyone know how I could do this in pytorch?
How about this? matrix1 = torch.randn(10, 30) matrix2 = torch.randint(high=10, size=(10, 5)) gathered = matrix1[matrix2] It uses the trick of indexing with an array of integers; gathered will have shape (10, 5, 30), as desired.
https://stackoverflow.com/questions/53697596/
How can I compute the tensor in Pytorch efficiently?
I have a tensor x and x.shape=(batch_size,10), now I want to take x[i][0] = x[i][0]*x[i][1]*...*x[i][9] for i in range(batch_size) Here is my code: for i in range(batch_size): for k in range(1, 10): x[i][0] = x[i][0] * x[i][k] But when I implement this in forward() and call loss.backward(), the speed of back-propagation is very slow. Why is it slow and is there any way to implement it efficiently?
It is slow because your two Python for loops issue thousands of tiny tensor operations, each of which adds a node to the autograd graph that backward() then has to traverse. You can use .prod instead. See: https://pytorch.org/docs/stable/torch.html#torch.prod In your case, x = torch.prod(x, dim=1) or x = x.prod(dim=1) should work (note that this returns a tensor of shape (batch_size,) holding the row products, rather than writing them into x[i][0]).
https://stackoverflow.com/questions/53699675/
How to slice Torch images as numpy images
I am working on a problem in which I have the coordinates to slice the image, like the X coordinate, Y coordinate, height, and width of the region to crop. So if I have a torch image obtained using img = Variable(img.cuda()), how can I slice this image to get that specific area of the image [y:y+height, x:x+width]? Thanks
If I understand your question correctly, then you can do it just the same way as in numpy. Here is a short example: import torch t = torch.rand(5, 5) # original matrix print(t) h = 2 w = 2 x = 1 y = 1 # cropped out matrix (rows are indexed by y, columns by x) print(t[y:y+h, x:x+w]) Output: tensor([[ 0.5402, 0.4106, 0.9904, 0.9556, 0.2217], [ 0.4533, 0.6300, 0.5352, 0.2710, 0.4307], [ 0.6389, 0.5660, 0.1582, 0.5701, 0.1614], [ 0.1717, 0.4071, 0.4960, 0.2127, 0.5587], [ 0.9529, 0.2865, 0.6667, 0.7401, 0.3372]]) tensor([[ 0.6300, 0.5352], [ 0.5660, 0.1582]]) As you can see, a 2x2 matrix is cropped out of t. For a 3-channel torch image in CHW layout, keep the channel dimension: img[:, y:y+h, x:x+w].
https://stackoverflow.com/questions/53706452/
PyTorch - applying attention efficiently
I have built an RNN language model with attention, and I am creating a context vector for every element of the input by attending over all the previous hidden states (in one direction only). The most straightforward solution, in my opinion, is to use a for-loop over the RNN output, such that each context vector is computed one after another. import torch import torch.nn as nn import torch.nn.functional as F class RNN_LM(nn.Module): def __init__(self, hidden_size, vocab_size, embedding_dim=None, droprate=0.5): super().__init__() if not embedding_dim: embedding_dim = hidden_size self.embedding_matrix = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_size, batch_first=False) self.attn = nn.Linear(hidden_size, hidden_size) self.vocab_dist = nn.Linear(hidden_size, vocab_size) self.dropout = nn.Dropout(droprate) def forward(self, x): x = self.dropout(self.embedding_matrix(x.view(-1, 1))) x, states = self.lstm(x) #print(x.size()) x = x.squeeze() content_vectors = [x[0].view(1, -1)] # for-loop over hidden states and attention for i in range(1, x.size(0)): prev_states = x[:i] current_state = x[i].view(1, -1) attn_prod = torch.mm(self.attn(current_state), prev_states.t()) attn_weights = F.softmax(attn_prod, dim=1) context = torch.mm(attn_weights, prev_states) content_vectors.append(context) return self.vocab_dist(self.dropout(torch.cat(content_vectors))) Note: The forward method here is only used for training. However, this solution is not very efficient, as the code does not parallelize well when computing each context vector sequentially. But since the context vectors do not depend on each other, I wonder if there is a non-sequential way of calculating them. So: is there a way to compute the context vectors without a for-loop, so that more of the computation can be parallelized?
Ok, for clarity: I assume we only really care about vectorizing the for loop. What is the shape of x? Assuming x is 2-dimensional, I have the following code, where v1 executes your loop and v2 is a vectorized version: import torch import torch.nn.functional as F torch.manual_seed(0) x = torch.randn(3, 6) def v1(): for i in range(1, x.size(0)): prev = x[:i] curr = x[i].view(1, -1) prod = torch.mm(curr, prev.t()) attn = prod # same shape context = torch.mm(attn, prev) print(context) def v2(): # we're going to unroll the loop by vectorizing over the new, # 0-th dimension of `x`. We repeat it as many times as there # are iterations in the for loop repeated = x.unsqueeze(0).repeat(x.size(0), 1, 1) # we're looking to build a `prevs` tensor such that # prevs[i, x, y] == prev[x, y] at i-th iteration of the loop in v1, # up to 0-padding necessary to make them all the same size. # We need to build a higher-dimensional equivalent of torch.triu xs = torch.arange(x.size(0)).reshape(1, -1, 1) zs = torch.arange(x.size(0)).reshape(-1, 1, 1) prevs = torch.where(zs < xs, torch.tensor(0.), repeated) # this is an equivalent of the above iteration starting at 1 prevs = prevs[:-1] currs = x[1:] # a batched matrix multiplication prod = torch.matmul(currs, prevs.transpose(1, 2)) attn = prod # same shape context = torch.matmul(attn, prevs) # equivalent of a higher dimensional torch.diagonal contexts = torch.einsum('iij->ij', (context)) print(contexts) print(x) print('\n------ v1 -------\n') v1() print('\n------ v2 -------\n') v2() which vectorizes your loop, with some caveats. First, I assume x is 2-dimensional. Secondly, I skip taking the softmax, claiming it doesn't change the size of the input and thus doesn't affect vectorization. That's true, but unfortunately softmax of a 0-padded vector v is not equal to a 0-padded softmax of unpadded v. This can be fixed with renormalization though. Please let me know if my assumptions are correct and whether this is a good enough starting point for your work.
https://stackoverflow.com/questions/53706462/
pytorch vgg model test on one image
I've trained a VGG model; this is how I transformed the test data test_transform_2= transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()]) test_data = datasets.ImageFolder(test_dir, transform=test_transform_2) The model has finished training; now I want to test it on a single image from scipy import misc test_image = misc.imread('flower_data/valid/1/image_06739.jpg') vgg16(torch.from_numpy(test_image)) Error --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-60-b83587325fea> in <module> ----> 1 vgg16(torch.from_numpy(test_image)) c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 475 result = self._slow_forward(*input, **kwargs) 476 else: --> 477 result = self.forward(*input, **kwargs) 478 for hook in self._forward_hooks.values(): 479 hook_result = hook(self, input, result) c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torchvision\models\vgg.py in forward(self, x) 40 41 def forward(self, x): ---> 42 x = self.features(x) 43 x = x.view(x.size(0), -1) 44 x = self.classifier(x) c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 475 result = self._slow_forward(*input, **kwargs) 476 else: --> 477 result = self.forward(*input, **kwargs) 478 for hook in self._forward_hooks.values(): 479 hook_result = hook(self, input, result) c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\container.py in forward(self, input) 89 def forward(self, input): 90 for module in self._modules.values(): ---> 91 input = module(input) 92 return input 93 c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 475 result = self._slow_forward(*input, **kwargs) 476 else: --> 477 result = self.forward(*input, **kwargs) 478 for hook in self._forward_hooks.values(): 479 hook_result = hook(self, input, result) c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\conv.py in forward(self, input) 299 def forward(self, input): 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 303 RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got input of size [628, 500, 3] instead I can tell I need to reshape the input, however I don't know how to, given that it seems to expect the input in the form of a batch.
Your image is [h, w, 3] where 3 means the RGB channels, and pytorch expects [b, 3, h, w] where b is the batch size. So you can reshape it by calling reshaped = img.permute(2, 0, 1).unsqueeze(0). I think there is also a utility function for that somewhere, but I can't find it right now. So in your case tensor = torch.from_numpy(test_image) reshaped = tensor.permute(2, 0, 1).unsqueeze(0) your_result = vgg16(reshaped)
https://stackoverflow.com/questions/53710313/
Can I input a Byte Tensor to my RNN/LSTM model?
I am developing an RNN/LSTM model for which I want to encode the sequence as a ByteTensor to save memory, as I am limited to very tight memory. However, when I do so, the model returns the following error: Expected object of scalar type Byte but got scalar type Float for argument #2 'mat2' So, there seems to be something else that needs to be a Byte tensor as well, but I do not know what it is, since the console only shows an error at the line: output = model(predictor)
It means that inside the model there are float tensors which are being used to operate on your byte tensor (most likely operands in matrix multiplications, additions, etc). I believe you can technically cast them to byte by executing model.type(torch.uint8), but your approach will sooner or later fail anyway - since integers are discrete there is no way to use them in the gradient calculations necessary for backpropagation. uint8 values can be used in deep learning to improve performance and memory footprint of inference in a network which is already trained, but this is an advanced technique. For this task your best bet are the regular float32s. If your GPU supports it, you could also use float16 aka half, though it introduces additional complexity and I wouldn't suggest it for beginners.
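If you do eventually try the float16 route, the conversion itself is simple — cast both the model and its inputs (a sketch; whether this is numerically stable for your model is something you would have to verify):

model = model.half()          # casts the float parameters to float16
predictor = predictor.half()  # the inputs must match
output = model(predictor)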
https://stackoverflow.com/questions/53711360/
Will pytorch perform correctly with python calculation code in the net?
Take the fake code below as an example: class(): def forward(input): x = some_torch_layers(input) x = some_torch_layers(x) ... x = sum(x) # or numpy or other operations x = some_torch_layers(x) return x Will the pytorch net operate well? In particular, will the sum(x) behave correctly in the backward pass?
TL;DR No. In order for PyTorch to "perform well" it needs to propagate gradients through the net. PyTorch doesn't (and can't) know how to differentiate arbitrary numpy code; it can only propagate gradients through PyTorch tensor operations. In your example the gradients will stop at the numpy sum, so only the top-most torch layers will be trained (the layers between the numpy operation and the criterion). The other layers (between the input and the numpy operation) will have zero gradient and therefore their parameters will remain fixed throughout training.
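For example, keeping the reduction inside autograd restores gradient flow through the whole net (a sketch — dim=1 is an assumed reduction axis; pick whichever axis your Python/numpy sum was collapsing):

x = torch.sum(x, dim=1)  # a tensor op, so gradients propagate through it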
https://stackoverflow.com/questions/53721444/
PRelu is not supported with mmdnn?
I converted my Caffe model to IR successfully; an error happened when I tried to convert the IR to PyTorch: Pytorch Emitter has not supported operator [PRelu] How should I deal with that, please?
MMdnn supports LeakyRelu, but not PRelu. Check the link below for the pytorch_emitter.py implementation from MMdnn. pytorch_emitter.py If you check the implementation, you will find all the supported operations, and it doesn't include PRelu.
https://stackoverflow.com/questions/53725125/
torch in-place operations to save memory (softmax)
Some operations in torch are executed in-place. Shorthand operators like += for example. Is it possible to get in-place execution for other operations, such as softmax? I'm currently working on language processing. The model produces a long sequence of probability distributions over a large vocabulary. This final output tensor is responsible for about 60% of the allocated memory, which is a huge problem, since I need to calculate a softmax over it and that doubles the required memory. Here is an example of the problem. I am not interested in the tensor t, only in its softmax: import numpy as np import torch import torch.nn.functional as F t = torch.tensor(np.zeros((30000,30000))).cuda() #allocates 6.71 GB of GPU softmax = F.softmax(t, 1) #out of memory error del t #too late, program crashed Even the following doesn't work: F.softmax(torch.tensor(np.zeros((30000,30000))).cuda(), 1)
I have created an in-place version of softmax: import numpy as np import torch import torch.nn.functional as F # in-place version t = torch.tensor(np.ones((100,200))) torch.exp(t, out=t) summed = torch.sum(t, dim=1, keepdim=True) t /= summed # original version t2 = torch.tensor(np.ones((100,200))) softmax = F.softmax(t2, 1) assert torch.allclose(t, softmax) To answer my question: If you want in-place functions, you have to create them yourself by plugging together low-level operations: many functions such as torch.exp can be given an optional out parameter. assignments t[idx] = something are in-place shorthand operators /=, *=, +=, -= are in-place This requires careful debugging and can be non-intuitive: t = t / summed #not in-place t /= summed #in-place I've read that in-place operations can produce problems with gradients. I'll do some more testing with this code.
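One caveat worth adding to the recipe above: exponentiating raw logits can overflow in float32. A numerically safer sketch of the same in-place idea shifts each row by its maximum first (only a small per-row temporary is allocated):

import torch

t = torch.randn(100, 200) * 50            # deliberately large logits
t -= t.max(dim=1, keepdim=True)[0]        # in-place shift; exp now stays finite
torch.exp(t, out=t)                       # in-place exponentiation
t /= torch.sum(t, dim=1, keepdim=True)    # in-place normalisation

assert torch.allclose(t.sum(dim=1), torch.ones(100))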
https://stackoverflow.com/questions/53732209/
PyTorch - Incorrect labeling using torchvision.datasets.ImageFolder
I have structured my dataset in the following way: dataset/train/0/456.jpg dataset/train/1/456456.jpg dataset/train/2/456.jpg dataset/train/... dataset/val/0/878.jpg dataset/val/1/234.jpg dataset/val/2/34554.jpg dataset/val/... So I used torchvision.datasets.ImageFolder to import my dataset to PyTorch. However, it seems like it is not giving the right label to the right image. I've added my code below: data_transforms = { 'train': transforms.Compose( [transforms.Resize((176,176)), transforms.RandomRotation((0,360)), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.CenterCrop(128), transforms.Grayscale(), transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) ]), 'val': transforms.Compose( [transforms.Resize((128,128)), transforms.Grayscale(), transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) ]), } data_dir = 'dataset' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") I found out that the labels are wrong using the following function: def imshow(img): img = img / 2 + 0.5 npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() dataiter = iter(dataloaders['val']) images, labels = dataiter.next() imshow(torchvision.utils.make_grid(images)) print(labels) Using the shown images and the labels, I manually checked whether they are correct. Unfortunately, the labels do not correspond to the images. Can someone tell me what I'm doing wrong?
Someone helped me out with this. ImageFolder creates its own internal labels. By printing image_datasets['train'].class_to_idx you can see what label is paired to what internal label. Using this dictionary, you can trace back the original label.
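A short sketch of that inspection (folder names hypothetical). Note that ImageFolder sorts class folders as strings, so with more than nine numeric folders '10' comes before '2', which is a common source of apparently mislabelled images:

from torchvision import datasets

ds = datasets.ImageFolder('dataset/train')
print(ds.class_to_idx)   # e.g. {'0': 0, '1': 1, '10': 2, '2': 3, ...}

# reverse mapping: trace a predicted index back to the original folder name
idx_to_class = {v: k for k, v in ds.class_to_idx.items()}
print(idx_to_class[2])   # '10' in the example above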
https://stackoverflow.com/questions/53732300/
Setting dimensions of layers in a convolutional neural network
Say I have 3x100x100 images in batches of 4 as input and I'm trying to make my first convolutional neural network with PyTorch. I'm really not sure if I'm getting convolutional neural networks right, because when I train my input through the following arrangement I run into the error: Expected input batch_size (1) to match target batch_size (4). Here is my reasoning about the forward pass: If I were to pass the input through: nn.Conv2d(3, 6, 5) I would get 6 layers of maps each with dimensions (100-5+1). Then if I were to pass it through: nn.MaxPool2d(2, 2) I would get 6 layers of maps each with dimensions (96/2) Then if I were to pass it through: nn.Conv2d(6, 16, 5) I would get 16 layers of maps each with dimensions (48-5+1) Then if I were to pass it through: self.fc1 = nn.Linear(44*44*16, 120) I would get 120 neurons Then if I were to pass it through: self.fc2 = nn.Linear(120, 84) I would get 84 neurons Then if I were to pass it through: self.fc3 = nn.Linear(84, 3) I would get 3 outputs, which would be perfect because I have 3 classes of labels. But as I said before, this leads to an error, which is really surprising because this all makes sense to me. Full neural network code: import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(44*44*16, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 3) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 *44*44) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() net.to(device)
Your understanding is correct and very detailed. However, you have used two pooling layers (see the relevant code below). So the output after the second conv-pool step will be 16 maps of dimension 44/2 = 22. x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) To fix this, either don't pool the second time, or change the dimension of the fully-connected layer to 22*22*16. To fix by not pooling, modify your forward function as below. def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = F.relu(self.conv2(x)) x = x.view(-1, 16 *44*44) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x To fix by changing the dimension of the fully-connected layer, change the declaration of the network as below (the last layer should output 3 to match your 3 classes, and remember to also change the flattening to x.view(-1, 16*22*22) in forward accordingly). def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(22*22*16, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 3)
https://stackoverflow.com/questions/53735130/
How to profile layer-by-layer in PyTorch?
I have tried to profile DenseNet layer-by-layer in PyTorch, like the caffe-time tool. First trial: using autograd.profiler as below ... model = models.__dict__['densenet121'](pretrained=True) model.to(device) with torch.autograd.profiler.profile(use_cuda=True) as prof: model.eval() print(prof) ... But no results are shown except for this message: <unfinished torch.autograd.profile> Ultimately, I want to profile network architectures (e.g. DenseNet) to check where bottlenecks happen. Could anyone help with this?
To run the profiler you have to perform some operations: you have to feed a tensor into your model. Change your code as follows. import torch import torchvision.models as models model = models.densenet121(pretrained=True) x = torch.randn((1, 3, 224, 224), requires_grad=True) with torch.autograd.profiler.profile(use_cuda=True) as prof: model(x) print(prof) This is a sample of the output I got: ----------------------------------- --------------- --------------- --------------- --------------- --------------- Name CPU time CUDA time Calls CPU total CUDA total ----------------------------------- --------------- --------------- --------------- --------------- --------------- conv2d 9976.544us 9972.736us 1 9976.544us 9972.736us convolution 9958.778us 9958.400us 1 9958.778us 9958.400us _convolution 9946.712us 9947.136us 1 9946.712us 9947.136us contiguous 6.692us 6.976us 1 6.692us 6.976us empty 11.927us 12.032us 1 11.927us 12.032us mkldnn_convolution 9880.452us 9889.792us 1 9880.452us 9889.792us batch_norm 1214.791us 1213.440us 1 1214.791us 1213.440us native_batch_norm 1190.496us 1193.056us 1 1190.496us 1193.056us threshold_ 158.258us 159.584us 1 158.258us 159.584us max_pool2d_with_indices 28837.682us 28836.834us 1 28837.682us 28836.834us max_pool2d_with_indices_forward 28813.804us 28822.530us 1 28813.804us 28822.530us batch_norm 1780.373us 1778.690us 1 1780.373us 1778.690us native_batch_norm 1756.774us 1759.327us 1 1756.774us 1759.327us threshold_ 64.665us 66.368us 1 64.665us 66.368us conv2d 6103.544us 6102.142us 1 6103.544us 6102.142us convolution 6089.946us 6089.600us 1 6089.946us 6089.600us _convolution 6076.506us 6076.416us 1 6076.506us 6076.416us contiguous 7.306us 7.938us 1 7.306us 7.938us empty 9.037us 8.194us 1 9.037us 8.194us mkldnn_convolution 6015.653us 6021.408us 1 6015.653us 6021.408us batch_norm 700.129us 699.394us 1 700.129us 699.394us There are many rows below this. I used a (1, 3, 224, 224) tensor as DenseNet only accepts 224x224 images. For other networks, change the tensor size accordingly.
https://stackoverflow.com/questions/53736966/
KeyError: "filename 'storages' not found"
I'm trying to load pre-trained model weights using this line : state_dict = torch.load('models/seq_to_txt_state_7.tar') and I'm getting: KeyError Traceback (most recent call last) <ipython-input-30-3f7b5be8fc72> in <module>() ----> 1 state_dict = torch.load('models/seq_to_txt_state_7.tar') /home/arash/venvs/marzieh_env/local/lib/python2.7/site-packages/torch/serialization.pyc in load(f, map_location, pickle_module) 365 f = open(f, 'rb') 366 try: --> 367 return _load(f, map_location, pickle_module) 368 finally: 369 if new_fd: /home/arash/venvs/marzieh_env/local/lib/python2.7/site-packages/torch/serialization.pyc in _load(f, map_location, pickle_module) 521 # only if offset is zero we can attempt the legacy tar file loader 522 try: --> 523 return legacy_load(f) 524 except tarfile.TarError: 525 # if not a tarfile, reset file offset and proceed /home/arash/venvs/marzieh_env/local/lib/python2.7/site-packages/torch/serialization.pyc in legacy_load(f) 448 mkdtemp() as tmpdir: 449 --> 450 tar.extract('storages', path=tmpdir) 451 with open(os.path.join(tmpdir, 'storages'), 'rb', 0) as f: 452 num_storages = pickle_module.load(f) /usr/lib/python2.7/tarfile.pyc in extract(self, member, path) 2107 2108 if isinstance(member, basestring): -> 2109 tarinfo = self.getmember(member) 2110 else: 2111 tarinfo = member /usr/lib/python2.7/tarfile.pyc in getmember(self, name) 1827 tarinfo = self._getmember(name) 1828 if tarinfo is None: -> 1829 raise KeyError("filename %r not found" % name) 1830 return tarinfo 1831 KeyError: "filename 'storages' not found" I'm using python 2.7 on Ubuntu 18. In addition the model is saved using this function in first place: def save_state(enc, dec, enc_optim, dec_optim, dec_idx_to_word, dec_word_to_idx, epoch): state = {'enc':enc.state_dict(), 'dec':dec.state_dict(), 'enc_optim':enc_optim.state_dict(), 'dec_optim':dec_optim.state_dict(), 'dec_idx_to_word':dec_idx_to_word, 'dec_word_to_idx':dec_word_to_idx} torch.save(state, epoch_to_save_path(epoch))
@reportgunner is right. The model file was corrupted. End of the message!
https://stackoverflow.com/questions/53743498/
Are there any computational efficiency differences between nn.functional() Vs nn.sequential() in PyTorch
The following is a Feed-forward network using the nn.functional() module in PyTorch import torch.nn as nn import torch.nn.functional as F class newNetwork(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 128) self.fc2 = nn.Linear(128, 64) self.fc3 = nn.Linear(64,10) def forward(self,x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.softmax(self.fc3(x)) return x model = newNetwork() model The following is the same Feed-forward using the nn.Sequential() module to essentially build the same thing. What is the difference between the two and when would I use one instead of the other? input_size = 784 hidden_sizes = [128, 64] output_size = 10 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) print(model)
There is no difference between the two. The latter is arguably more concise and easier to write, and the reason for "object" versions of pure (i.e. non-stateful) functions like ReLU and Sigmoid is to allow their use in constructs like nn.Sequential.
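To make the equivalence concrete, here is a small sketch: the two styles from the question, with the weights copied across, produce identical outputs (dim=1 is passed to softmax explicitly to keep the comparison well-defined):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FuncNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.softmax(self.fc3(x), dim=1)

func_net = FuncNet()
seq_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                        nn.Linear(128, 64), nn.ReLU(),
                        nn.Linear(64, 10), nn.Softmax(dim=1))

# copy the functional net's weights into the sequential layers
seq_net[0].load_state_dict(func_net.fc1.state_dict())
seq_net[2].load_state_dict(func_net.fc2.state_dict())
seq_net[4].load_state_dict(func_net.fc3.state_dict())

x = torch.randn(5, 784)
assert torch.allclose(func_net(x), seq_net(x))   # identical outputs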
https://stackoverflow.com/questions/53745454/
pytorch runs in anaconda prompt but not in python idle
I know this question might be stupid, but I couldn't find any help on the internet. Recently I installed Anaconda on my computer; it runs Windows 10 x64. Then I used the Anaconda prompt to download and install PyTorch for Python 3.6: conda install pytorch torchvision cuda100 -c pytorch After the installation I verified in Anaconda's prompt that PyTorch is installed: >>> python >>> import torch >>> torch.cuda.is_available() True I also checked conda list and indeed PyTorch is installed on my machine. However, I write Python code in the Python 3.6.7 IDLE, not in the Anaconda prompt, so whenever I try to import PyTorch I get the message: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import pytorch ModuleNotFoundError: No module named 'pytorch' For some reason, the Anaconda prompt recognizes PyTorch, but IDLE does not. Is there any solution for this? Is there any way to import the PyTorch module in IDLE? Thanks in advance.
It seems like the python used by IDLE is not the one from Anaconda. In Python it's very common to have multiple environments, and you always need to be aware of which environment is activated. To see which environment is in use, you can run the following in both the Anaconda prompt and IDLE: >>> import sys >>> print(sys.executable) If they print different paths, you need to first figure out how to stay in a specific environment in IDLE.
https://stackoverflow.com/questions/53752179/
how to calculate cross entropy in 3d image pytorch?
See the figure here. The left tensor is (2, 480, 640) and holds softmax values; the right tensor is (2, 480, 640) and holds one-hot encoded values. How do I get the cross entropy loss over all elements?
Exactly the same way as with any other image. Use binary_cross_entropy(left, right). Note that both have to be of torch.float32 dtype, so you may need to first convert right using right.to(torch.float32). If your left tensor contains logits instead of probabilities, it is better to call binary_cross_entropy_with_logits(left, right) than to call binary_cross_entropy(torch.sigmoid(left), right).
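A minimal sketch of the call, with random stand-ins for the two (2, 480, 640) tensors from the question:

import torch
import torch.nn.functional as F

left = torch.rand(2, 480, 640)                                 # probabilities in [0, 1]
right = torch.randint(0, 2, (2, 480, 640)).to(torch.float32)   # one-hot style targets

loss = F.binary_cross_entropy(left, right)   # scalar, averaged over every element
print(loss)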
https://stackoverflow.com/questions/53759952/
What does it mean if a deeper conv layer converges first?
I am training a 3-layer convnet to classify images - a very standard problem, I know. I first tried 3 convolutional layers with ReLU, and got this: weights from layer 1 with ReLU - looks like edge detection weights from layer 3 with ReLU - looks like feature detection The first layer (16 filters) is learning edges, as expected, and the third layer (64 filters) is learning features, as expected. Then, I just wanted to try a different non-linearity, so I tried SELU instead. Oddly, the third layer now seems to be learning edges, and the first layer seems not to converge at all? What does it mean for a third layer to learn edges? Does it mean I need more layers? I don't see why the first layer would fail to learn edges. weights from layer 1 with SELU - looks unconverged? weights from layer 3 with SELU - looks like edge detection? I don't think the architecture is super important, but I have a 180x180 black-and-white image, and the filters are all 10 x 10 with stride 2 (16 filters for layer 1, 32 for layer 2, 64 for layer 3).
First off, you're confusing terminology. The notion of convergence applies to an optimization algorithm and whether it arrives at some fixed location in the parameter space or not. If not, it may keep going forever, either improving at an infinitesimally slow rate and never arriving at an optimum, oscillating around it, or straight up diverging due to numerical precision/gradient explosion issues. In other words, you can speak of your network optimization having converged, but not of particular filters. You do that by inspecting the training loss plot, not your kernels. A feature, in deep learning parlance, is a general notion for, well, features - that is, any pattern of interest in the data. So edges would definitely be considered features as well. Perhaps you meant texture when mentioning features? With that covered, you're unfortunately too optimistic about the state of theory regarding neural networks. Interpretation of convolution kernels is very difficult and a great research problem. Nobody can responsibly make a general statement about which course of action you should take given the kernels you observe - there are way too many variables, from the dataset, through network architecture, to hyperparameters like learning rate. From my own experience, networks with all their kernels looking like this "noise" above can achieve very good results, at least on segmentation tasks with which I am working. If you are beginning with deep learning (which it looks like you are), I would suggest you also look at feature maps, that is, inspect the tensors of intermediate values during the forward propagation of your network - you will see how they react to different parts of your picture and it may lead you to more insight. You need to remember that aside from the first layer, the further kernels look at already transformed representations of the image, so inspecting them without relation to the input feature maps will not tell you much. A more advanced technique for understanding your kernels is deep visualization. That being said, I encourage you to keep doing these kinds of experiments and visualizations, since they will help you develop experience and an intuitive sense for what kernels may look like, how they interact and what is to be expected and what is not.
https://stackoverflow.com/questions/53768085/
inputting numpy array images into pytorch neural net
I have a numpy array representation of an image and I want to turn it into a tensor so I can feed it through my pytorch neural network. I understand that the neural networks take in transformed tensors which are not arranged in [100,100,3] but [3,100,100], that the pixels are rescaled, and that the images must be in batches. So I did the following: import cv2 my_img = cv2.imread('testset/img0.png') my_img.shape #returns [100,100,3], a 3 channel image with 100x100 resolution my_img = np.transpose(my_img,(2,0,1)) my_img.shape #returns [3,100,100] #convert the numpy array to tensor my_img_tensor = torch.from_numpy(my_img) #rescale to be [0,1] like the data it was trained on by default my_img_tensor *= (1/255) #turn the tensor into a batch of size 1 my_img_tensor = my_img_tensor.unsqueeze(0) #send image to gpu my_img_tensor.to(device) #put forward through my neural network. net(my_img_tensor) However this returns the error: RuntimeError: _thnn_conv2d_forward is not implemented for type torch.ByteTensor
The problem is that the input you give to your network is of type ByteTensor, while only float operations are implemented for conv-like operations. Try the following: my_img_tensor = my_img_tensor.type('torch.DoubleTensor') # for converting to double tensor Source: PyTorch Discussion Forum Thanks to AlbanD
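Alternatively, a sketch with stand-in shapes: most conv nets are float32, so converting with .float() and rescaling in one go is usually preferable to double precision.

import numpy as np
import torch

my_img = np.random.randint(0, 256, (3, 100, 100), dtype=np.uint8)   # stand-in image
my_img_tensor = torch.from_numpy(my_img).float() / 255.0            # float32 in [0, 1]
my_img_tensor = my_img_tensor.unsqueeze(0)                          # add the batch dimension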
https://stackoverflow.com/questions/53768796/
How do I install PyTorch v1.0.0+ on Google Colab?
PyTorch v1.0.0 stable was released on 8 December 2018 after being announced 7 months earlier. I want to get a version optimised for the hardware that my IPython kernel is running on. How do I get this version on Google Colab?
Try the following code snippet (it works equally for the runtime with or without GPU): !pip install -q torch==1.0.0 torchvision To check the version: import torch print(torch.__version__) Here you have version 1.0.0 UPDATE !pip install torch works fine now, as the most stable version is 1.0.0
https://stackoverflow.com/questions/53775508/
How are the pytorch dimensions for linear layers calculated?
In the PyTorch tutorial, the constructed network is Net( (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1)) (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1)) (fc1): Linear(in_features=400, out_features=120, bias=True) (fc2): Linear(in_features=120, out_features=84, bias=True) (fc3): Linear(in_features=84, out_features=10, bias=True) ) and is used to process images with dimensions 1x32x32. They mention that the network cannot be used with images of a different size. The two convolutional layers seem to allow for an arbitrary number of features, so the linear layers seem to be related to getting the 32x32 down to the 10 final features. I do not really understand how the numbers 120 and 84 are chosen there and why the result matches the input dimensions. And when I try to construct a similar network, I actually get a problem with the dimension of the data. When I for example use a simpler network: Net( (conv1): Conv2d(3, 8, kernel_size=(5, 5), stride=(1, 1)) (conv2): Conv2d(8, 16, kernel_size=(5, 5), stride=(1, 1)) (fc1): Linear(in_features=400, out_features=3, bias=True) ) for an input of the size 3x1200x800, I get the error message: RuntimeError: size mismatch, m1: [1 x 936144], m2: [400 x 3] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:940 Where does the number 936144 come from and how do I need to design the network, such that the dimensions are matching?
The key step is between the last convolution and the first Linear block. Conv2d outputs a tensor of shape [batch_size, n_features_conv, height, width] whereas Linear expects [batch_size, n_features_lin]. To make the two align you need to "stack" the 3 dimensions [n_features_conv, height, width] into one [n_features_lin]. It follows that n_features_lin == n_features_conv * height * width. In the original code this "stacking" is achieved by x = x.view(-1, self.num_flat_features(x)) and if you inspect num_flat_features it just computes this n_features_conv * height * width product. In other words, your first Linear must have num_flat_features(x) input features, where x is the tensor retrieved from the preceding convolution. But we need to calculate this value ahead of time, so that we can initialize the network in the first place... The calculation follows from inspecting the operations one by one. input is 32x32 we do a 5x5 convolution without padding, so we lose 2 pixels at each side, we drop down to 28x28 we do maxpooling with receptive field of 2x2, we cut each dimension by half, down to 14x14 we do another 5x5 convolution without padding, we drop down to 10x10 we do another maxpooling, we drop down to 5x5 and this 5x5 is why in the tutorial you see self.fc1 = nn.Linear(16 * 5 * 5, 120). It's n_features_conv * height * width, when starting from a 32x32 image. If you want to have a different input size, you have to redo the above calculation and adjust your first Linear layer accordingly. For the further operations, it's just a chain of matrix multiplications (that's what Linear does). So the only rule is that the n_features_out of the previous Linear matches the n_features_in of the next one. Values 120 and 84 are entirely arbitrary, though they were probably chosen by the author such that the resulting network performs well.
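As a practical shortcut to redoing that arithmetic by hand, here is a sketch: push a dummy tensor of the intended input size through the conv stack once and read off the flattened size. The two-conv stack below is an assumption mirroring the printed modules from the question; extend it with whatever pooling the real forward actually applies.

import torch
import torch.nn as nn

convs = nn.Sequential(nn.Conv2d(3, 8, 5), nn.Conv2d(8, 16, 5))

with torch.no_grad():
    dummy = torch.zeros(1, 3, 1200, 800)   # intended input size, batch of 1
    n_features = convs(dummy).numel()      # flattened feature count per sample

fc1 = nn.Linear(n_features, 3)             # first Linear sized to match
print(n_features)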
https://stackoverflow.com/questions/53784998/
saving and loading pytorch neural nets
So I created a neural net and I would like to save it and load it whenever I want. Specifically, I want to take pictures and do real-time processing. I am using the neural net created here I read that the standard way is to create the net, then use torch.save(net,'mynet') to save it and then load it with torch.load('mynet'). However, if I open a new python3 terminal and use: >>import torch >>torch.load('mynet') It gives me the error: File "<stdin>", line 1, in <module> File "/home/tim/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 367, in load return _load(f, map_location, pickle_module) File "/home/tim/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 538, in _load result = unpickler.load() AttributeError: Can't get attribute 'Net' on <module '__main__' (built-in)> I think this is from not having the Net class defined. Adding import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 15, 3) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(15, 15, 5) self.conv3 = nn.Conv2d(15, 10, 3) self.fc1 = nn.Linear(10*4*4, 100) self.fc2 = nn.Linear(100, 24) self.fc3 = nn.Linear(24, 4) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) x = x.view(-1, 10*4*4) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x is what you need to do, but why do we need to define the neural net class? What if I load a neural net with a different architecture from the one I specify in the class? Will the architecture defined in the class get overwritten? Surely the object I'm loading has all the architecture and class information encapsulated in it? Update: Actually it doesn't even work when I define the Net class.
Please refer to the docs on serialization semantics, which first describes the suggested approach and then the one you used as "serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors." In other words, you need to save/load net.state_dict(), not the net itself.
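A minimal sketch of that recommended pattern, reusing the Net class from the question (the file name is hypothetical):

import torch

# saving: persist only the parameters
torch.save(net.state_dict(), 'mynet.pth')

# loading: the Net class definition must be available in this script
net = Net()
net.load_state_dict(torch.load('mynet.pth'))
net.eval()   # switch to inference mode for the real-time processing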
https://stackoverflow.com/questions/53794900/
Pytorch PPO implementation is not learning
This PPO implementation has a bug somewhere and I can't figure out what's wrong. The network returns a normal distribution and a value estimate from the critic. The last layer of the actor provides four F.tanhed action values, which are used as mean value for the distribution. nn.Parameter(torch.zeros(action_dim)) is the standard deviation. The trajectories for 20 parallel agents are added to the same memory. Episode length is 1000 and memory.sample() returns a np.random.permutation of the 20k memory entries as tensors with batches of size 64. Before stacking the batch tensors, the values are stored as (1,-1) tensors in collection.deques. The returned tensors are detach()ed. environment brain_name = envs.brain_names[0] env_info = envs.reset(train_mode=True)[brain_name] env_info = envs.step(actions.cpu().detach().numpy())[brain_name] next_states = env_info.vector_observations rewards = env_info.rewards dones = env_info.local_done update step def clipped_surrogate_update(policy, memory, num_epochs=10, clip_param=0.2, gradient_clip=5, beta=0.001, value_loss_coeff=0.5): advantages_batch, states_batch, log_probs_old_batch, returns_batch, actions_batch = memory.sample() advantages_batch = (advantages_batch - advantages_batch.mean()) / advantages_batch.std() for _ in range(num_epochs): for i in range(len(advantages_batch)): advantages_sample = advantages_batch[i] states_sample = states_batch[i] log_probs_old_sample = log_probs_old_batch[i] returns_sample = returns_batch[i] actions_sample = actions_batch[i] dist, values = policy(states_sample) log_probs_new = dist.log_prob(actions_sample.to(device)).sum(-1).unsqueeze(-1) entropy = dist.entropy().sum(-1).unsqueeze(-1).mean() ratio = (log_probs_new - log_probs_old_sample).exp() clipped_ratio = torch.clamp(ratio, 1-clip_param, 1+clip_param) clipped_surrogate_loss = -torch.min(ratio*advantages_sample, clipped_ratio*advantages_sample).mean() value_function_loss = (returns_sample - values).pow(2).mean() Loss = clipped_surrogate_loss - beta * entropy + value_loss_coeff * value_function_loss optimizer_policy.zero_grad() Loss.backward() torch.nn.utils.clip_grad_norm_(policy.parameters(), gradient_clip) optimizer_policy.step() del Loss data sampling def collect_trajectories(envs, env_info, policy, memory, tmax=200, nrand=0, gae_tau = 0.95, discount = 0.995): next_episode = False states = env_info.vector_observations n_agents = len(env_info.agents) state_list=[] reward_list=[] prob_list=[] action_list=[] value_list=[] if nrand > 0: # perform nrand random steps for _ in range(nrand): actions = np.random.randn(num_agents, action_size) actions = np.clip(actions, -1, 1) env_info = envs.step(actions)[brain_name] states = env_info.vector_observations for t in range(tmax): states = torch.FloatTensor(states).to(device) dist, values = policy(states) actions = dist.sample() probs = dist.log_prob(actions).sum(-1).unsqueeze(-1) env_info = envs.step(actions.cpu().detach().numpy())[brain_name] next_states = env_info.vector_observations rewards = env_info.rewards dones = env_info.local_done state_list.append(states) reward_list.append(rewards) prob_list.append(probs) action_list.append(actions) value_list.append(values) states = next_states if np.any(dones): next_episode = True break _, next_value = policy(torch.FloatTensor(states).to(device)) reward_arr = np.array(reward_list) undiscounted_rewards = np.sum(reward_arr, axis=0) state_arr = torch.stack(state_list) prob_arr = torch.stack(prob_list) action_arr = torch.stack(action_list) value_arr = torch.stack(value_list) 
reward_arr = torch.FloatTensor(reward_arr[:, :, np.newaxis]) advantage_list = [] return_list = [] returns = next_value.detach() advantages = torch.FloatTensor(np.zeros((n_agents, 1))) for i in reversed(range(state_arr.shape[0])): returns = reward_arr[i] + discount * returns td_error = reward_arr[i] + discount * next_value - value_arr[i] advantages = advantages * gae_tau * discount + td_error next_value = value_arr[i] advantage_list.append(advantages.detach()) return_list.append(returns.detach()) advantage_arr = torch.stack(advantage_list) return_arr = torch.stack(return_list) for i in range(state_arr.shape[0]): memory.add({'advantages': advantage_arr[i], 'states': state_arr[i], 'log_probs_old': prob_arr[i], 'returns': return_arr[i], 'actions': action_arr[i]}) return undiscounted_rewards, next_episode
In the Generalized Advantage Estimation loop, advantages and returns are appended in reversed order; they have to be inserted at the front instead: advantage_list.insert(0, advantages.detach()) return_list.insert(0, returns.detach())
https://stackoverflow.com/questions/53802453/
Siamese Neural Network in Pytorch
How can I implement a siamese neural network in PyTorch? What is a siamese neural network? A siamese neural network consists of two identical neural networks, each taking one input. Identical means that the two neural networks have the exact same architecture and share the same weights.
Implementing siamese neural networks in PyTorch is as simple as calling the network function twice on different inputs. mynet = torch.nn.Sequential( nn.Linear(10, 512), nn.ReLU(), nn.Linear(512, 2)) ... output1 = mynet(input1) output2 = mynet(input2) ... loss.backward() When invoking loss.backward(), PyTorch will automatically sum the gradients coming from the two invocations of mynet. You can find a full-fledged example here.
https://stackoverflow.com/questions/53803889/
Unexpected result of convolution operation
Here is code I wrote to perform a single convolution and output the shape. Using formula from http://cs231n.github.io/convolutional-networks/ to calculate output size : You can convince yourself that the correct formula for calculating how many neurons “fit” is given by (W−F+2P)/S+1 The formula for computing the output size has been implemented below as def output_size(w , f , stride , padding) : return (((w - f) + (2 * padding)) / stride) + 1 The issue is output_size computes a size of 2690.5 which differs to the result of the convolution which is 1350 : %reset -f import torch import torch.nn.functional as F import numpy as np from PIL import Image import torch.nn as nn import torchvision import torchvision.transforms as transforms from pylab import plt plt.style.use('seaborn') %matplotlib inline width = 60 height = 30 kernel_size_param = 5 stride_param = 2 padding_param = 2 img = Image.new('RGB', (width, height), color = 'red') in_channels = 3 out_channels = 3 class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size_param, stride=stride_param, padding=padding_param)) def forward(self, x): out = self.layer1(x) return out # w : input volume size # f : receptive field size of the Conv Layer neurons # output_size computes spatial size of output volume - spatial dimensions are (width, height) def output_size(w , f , stride , padding) : return (((w - f) + (2 * padding)) / stride) + 1 w = width * height * in_channels f = kernel_size_param * kernel_size_param print('output size :' , output_size(w , f , stride_param , padding_param)) model = ConvNet() criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=.001) img_a = np.array(img) img_pt = torch.tensor(img_a).float() result = model(img_pt.view(3, width , height).unsqueeze_(0)) an = result.view(30 , 15 , out_channels).data.numpy() # print(result.shape) # print(an.shape) # print(np.amin(an.flatten('F'))) print(30 * 15 * out_channels) Have I implemented output_size correctly ? How to amend this model so the result of Conv2d has same shape as result of output_size ?
The problem is that your input image is not square, so you should apply the formula to the width and the height of the input image separately. Also, you should not use nb_channels in the formula, because the number of output channels is defined explicitly. And you should use f = kernel_size, not f = kernel_size * kernel_size, as described in the formula. w = width h = height f = kernel_size_param output_w = int(output_size(w , f , stride_param , padding_param)) output_h = int(output_size(h , f , stride_param , padding_param)) print("Output_size", [out_channels, output_w, output_h]) #--> [3, 30, 15] And then the output size: print("Output size", result.shape) #--> [1, 3, 30, 15] Formula source: http://cs231n.github.io/convolutional-networks/
https://stackoverflow.com/questions/53807071/
IndexError when iterating my dataset using Dataloader in PyTorch
I iterated my dataset using Dataloader in PyTorch 0.2 like these: dataloader = torch.utils.data.DataLoader(...) data_iter = iter(dataloader) data = data_iter.next() but IndexError was raised. Traceback (most recent call last): File "main.py", line 193, in <module> data_target = data_target_iter.next() File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 201, in __next__ return self._process_next_batch(batch) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch raise batch.exc_type(batch.exc_msg) IndexError: Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 40, in <listcomp> samples = collate_fn([dataset[i] for i in batch_indices]) File "/home/asr4/zhuminxian/adversarial/code/dataset/data_loader.py", line 33, in __getitem__ return self.X_train[idx], self.y_train[idx] IndexError: index 4196 is out of bounds for axis 0 with size 4135 I am wondering why the index was out of bounds. Is it the bug of Pytorch? I tried to run my code again, the same error raised, but at different iteration and with different out-of-bound index.
My guess is that your data.Dataset.__len__ was not overloaded properly and in fact len(dataloader.dataset) returns a number larger than len(self.X_train). Check your implementation of the underlying dataset in '/home/asr4/zhuminxian/adversarial/code/dataset/data_loader.py'.
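For reference, a sketch of what a consistent pair of overloads looks like (the field names are assumptions); the DataLoader samples indices from range(len(dataset)), so __len__ must report the true sample count:

import torch.utils.data as data

class MyDataset(data.Dataset):
    def __init__(self, X_train, y_train):
        assert len(X_train) == len(y_train)
        self.X_train = X_train
        self.y_train = y_train

    def __len__(self):
        return len(self.X_train)   # must match the real number of samples

    def __getitem__(self, idx):
        return self.X_train[idx], self.y_train[idx]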
https://stackoverflow.com/questions/53810497/
vgg probability doesn't add up to 1, pytorch
I've trained a vgg16 model to predict 102 classes of flowers. It works however now that I'm trying to understand one of it's predictions I feel it's not acting normally. model layout # Imports here import os import numpy as np import torch import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import json from pprint import pprint from scipy import misc %matplotlib inline data_dir = 'flower_data' train_dir = data_dir + '/train' test_dir = data_dir + '/valid' json_data=open('cat_to_name.json').read() main_classes = json.loads(json_data) main_classes = {int(k):v for k,v in classes.items()} train_transform_2 = transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transform_2= transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()]) # TODO: Load the datasets with ImageFolder train_data = datasets.ImageFolder(train_dir, transform=train_transform_2) test_data = datasets.ImageFolder(test_dir, transform=test_transform_2) # define dataloader parameters batch_size = 20 num_workers=0 # TODO: Using the image datasets and the trainforms, define the dataloaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers, shuffle=True) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers, shuffle=True) vgg16 = models.vgg16(pretrained=True) # Freeze training for all "features" layers for param in vgg16.features.parameters(): param.requires_grad = False import torch.nn as nn n_inputs = vgg16.classifier[6].in_features # add last linear layer (n_inputs -> 102 flower classes) # new layers automatically have requires_grad = True last_layer = nn.Linear(n_inputs, len(classes)) vgg16.classifier[6] = last_layer import torch.optim as optim # specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer (stochastic gradient descent) and learning rate = 0.001 optimizer = optim.SGD(vgg16.classifier.parameters(), lr=0.001) pre_trained_model=torch.load("model.pt") new=list(pre_trained_model.items()) my_model_kvpair=vgg16.state_dict() count=0 for key,value in my_model_kvpair.items(): layer_name, weights = new[count] my_model_kvpair[key] = weights count+=1 # number of epochs to train the model n_epochs = 6 # initialize tracker for minimum validation loss valid_loss_min = np.Inf # set initial "min" to infinity for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### # model by default is set to train vgg16.train() for batch_i, (data, target) in enumerate(train_loader): # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = vgg16(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item() if batch_i % 20 == 19: # print training loss every specified number of mini-batches print('Epoch %d, Batch %d loss: %.16f' % (epoch, batch_i + 1, train_loss / 20)) train_loss = 0.0 ###################### # validate the model # ###################### vgg16.eval() # prep model for evaluation for data, target in 
test_loader: # forward pass: compute predicted outputs by passing inputs to the model output = vgg16(data) # calculate the loss loss = criterion(output, target) # update running validation loss valid_loss += loss.item() # print training/validation statistics # calculate average loss over an epoch train_loss = train_loss/len(train_loader.dataset) valid_loss = valid_loss/len(test_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch+1, train_loss, valid_loss )) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(vgg16.state_dict(), 'model.pt') valid_loss_min = valid_loss testing on a single image tensor = torch.from_numpy(test_image) reshaped = tensor.permute(2, 0, 1).unsqueeze(0) floatified = reshaped.to(torch.float32) / 255 vgg16(floatified) >>> tensor([[ 2.5686, -1.1964, -0.0872, -1.7010, -1.6669, -1.0638, 0.4515, 0.1124, 0.0166, 0.3156, 1.1699, 1.5374, 1.8720, 2.5184, 2.9046, -0.8241, -1.1949, -0.5700, 0.8692, -1.0485, 0.0390, -1.3783, -3.4632, -0.0143, 1.0986, 0.2667, -1.1127, -0.8515, 0.7759, -0.7528, 1.6366, -0.1170, -0.4983, -2.6970, 0.7545, 0.0188, 0.1094, 0.5002, 0.8838, -0.0006, -1.7993, -1.3706, 0.4964, -0.3251, -1.7313, 1.8731, 2.4963, 1.1713, -1.5726, 1.5476, 3.9576, 0.7388, 0.0228, 0.3947, -1.7237, -1.8350, -2.0297, 1.4088, -1.3469, 1.6128, -1.0851, 2.0257, 0.5881, 0.7498, 0.0738, 2.0592, 1.8034, -0.5468, 1.9512, 0.4534, 0.7746, -1.0465, -0.7254, 0.3333, -1.6506, -0.4242, 1.9529, -0.4542, 0.2396, -1.6804, -2.7987, -0.6367, -0.3599, 1.0102, 2.6319, 0.8305, -1.4333, 3.3043, -0.4021, -0.4877, 0.9125, 0.0607, -1.0326, 1.3186, -2.5861, 0.1211, -2.3177, -1.5040, 1.0416, 1.4008, 1.4225, -2.7291]], grad_fn=<ThAddmmBackward>) sum([ 2.5686, -1.1964, -0.0872, -1.7010, -1.6669, -1.0638, 0.4515, 0.1124, 0.0166, 0.3156, 1.1699, 1.5374, 1.8720, 2.5184, 2.9046, -0.8241, -1.1949, -0.5700, 0.8692, -1.0485, 0.0390, -1.3783, -3.4632, -0.0143, 1.0986, 0.2667, -1.1127, -0.8515, 0.7759, -0.7528, 1.6366, -0.1170, -0.4983, -2.6970, 0.7545, 0.0188, 0.1094, 0.5002, 0.8838, -0.0006, -1.7993, -1.3706, 0.4964, -0.3251, -1.7313, 1.8731, 2.4963, 1.1713, -1.5726, 1.5476, 3.9576, 0.7388, 0.0228, 0.3947, -1.7237, -1.8350, -2.0297, 1.4088, -1.3469, 1.6128, -1.0851, 2.0257, 0.5881, 0.7498, 0.0738, 2.0592, 1.8034, -0.5468, 1.9512, 0.4534, 0.7746, -1.0465, -0.7254, 0.3333, -1.6506, -0.4242, 1.9529, -0.4542, 0.2396, -1.6804, -2.7987, -0.6367, -0.3599, 1.0102, 2.6319, 0.8305, -1.4333, 3.3043, -0.4021, -0.4877, 0.9125, 0.0607, -1.0326, 1.3186, -2.5861, 0.1211, -2.3177, -1.5040, 1.0416, 1.4008, 1.4225, -2.7291]) >>> 5.325799999999998 given this as how I test it on a single image (and the model as usual is trained and tested on batches it returns a prediction matrix that doesn't seem to be normalized or add up to 1. Is this normal?
Yes, official network implementations in PyTorch don't apply softmax to the last linear layer. Check the code for VGG. You can use nn.Softmax to achieve what you want: m = nn.Softmax(dim=1) out = vgg16(floatified) out = m(out) You can also use nn.functional.softmax: out = nn.functional.softmax(vgg16(floatified), dim=1)
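As a quick sanity check (this reuses vgg16, floatified and the nn import from the question), the softmaxed scores should now behave like a probability distribution over the 102 classes:

probs = nn.functional.softmax(vgg16(floatified), dim=1)
print(probs.sum(dim=1))      # tensor([1.0000], ...) -- rows sum to one
print(probs.argmax(dim=1))   # index of the most likely class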
https://stackoverflow.com/questions/53813636/
Copying data from one tensor to another using bit masking
import numpy as np import torch a = torch.zeros(5) b = torch.tensor(tuple((0,1,0,1,0)),dtype=torch.uint8) c= torch.tensor([7.,9.]) print(a[b].size()) a[b]=c print(a) torch.Size([2]) tensor([0., 7., 0., 9., 0.]) I am struggling to understand how this works. I initially thought the above code was using fancy indexing, but I realised that values from the c tensor are getting copied to the positions where b is marked 1. Also, if I don't specify the dtype of b as uint8, the above code does not work. Can someone please explain the mechanism of the above code?
Indexing with arrays works the same as in numpy and most other vectorized math packages I am aware of. There are two cases: When b is of type uint8 (think boolean, pytorch doesn't distinguish bool from uint8), a[b] is a 1-d array containing the subset of values of a (a[i]) for which the corresponding entry in b (b[i]) was nonzero. These values are aliased to the original a, so if you modify them, their corresponding locations will change as well. The alternative type you can use for indexing is an array of int64, in which case a[b] creates an array of shape (*b.shape, *a.shape[1:]). Its structure is as if each element of b (b[i]) was replaced by a[b[i]]. In other words, you create a new array by specifying from which indexes of a the data should be fetched. Again, the values are aliased to the original a, so if you modify a[b] the values of a[b[i]], for each i, will change. An example use case is shown in this question. These two modes are explained for numpy in integer array indexing and boolean array indexing, where for the latter you have to keep in mind that pytorch uses uint8 in place of bool. Also, if your goal is to copy data from one tensor to another, you have to keep in mind that an operation like a[ixs] = b[ixs] is an in-place operation (a is modified in place), which may not play well with autograd. If you want to do out-of-place masking, use torch.where. An example use case is shown in this answer.
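A small sketch of the out-of-place alternative mentioned at the end: torch.where builds a new tensor instead of mutating a, which plays well with autograd. Note that unlike the two-element c in the question, both branches must broadcast to the mask's shape:

import torch

a = torch.zeros(5)
mask = torch.tensor((0, 1, 0, 1, 0), dtype=torch.uint8)
b = torch.tensor([7., 9., 7., 9., 7.])   # same shape as a

c = torch.where(mask, b, a)   # take b where mask is set, a elsewhere
print(c)                      # tensor([0., 7., 0., 9., 0.])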
https://stackoverflow.com/questions/53814772/
How to assign a new value to a pytorch Variable without breaking backpropagation?
I have a pytorch variable that is used as a trainable input for a model. At some point I need to manually reassign all values in this variable. How can I do that without breaking the connections with the loss function? Suppose the current values are [1.2, 3.2, 43.2] and I simply want them to become [1,2,3]. Edit At the time I asked this question, I hadn't realized that PyTorch doesn't have a static graph as Tensorflow or Keras do. In PyTorch, the training loop is made manually and you need to call everything in each training step. (There isn't the notion of placeholder + static graph for later feeding data). Consequently, we can't "break the graph", since we will use the new variable to perform all the further computations again. I was worried about a problem that happens in Keras, not in PyTorch.
You can use the data attribute of tensors to modify the values, since modifications on data do not affect the graph. So the graph will still be intact and modifications of the data attribute itself have no influence on the graph. (Operations and changes on data are not tracked by autograd and thus not present in the graph.) Since you haven't given an example, this example is based on your comment statement: 'Suppose I want to change the weights of a layer.' I used normal tensors here, but this works the same for the weight.data and bias.data attributes of a layer. Here is a short example: import torch import torch.nn.functional as F # Test 1, random vector with CE w1 = torch.rand(1, 3, requires_grad=True) loss = F.cross_entropy(w1, torch.tensor([1])) loss.backward() print('w1.data', w1) print('w1.grad', w1.grad) print() # Test 2, replacing values of w2 with w1, before CE # to make sure that everything is exactly like in Test 1 after replacing the values w2 = torch.zeros(1, 3, requires_grad=True) w2.data = w1.data loss = F.cross_entropy(w2, torch.tensor([1])) loss.backward() print('w2.data', w2) print('w2.grad', w2.grad) print() # Test 3, replace data after computation w3 = torch.rand(1, 3, requires_grad=True) loss = F.cross_entropy(w3, torch.tensor([1])) # setting values # the graph of the previous computation is still intact as you can see in the print-outs below w3.data = w1.data loss.backward() # data were replaced with values from w1 print('w3.data', w3) # gradient still shows results from computation with w3 print('w3.grad', w3.grad) Output: w1.data tensor([[ 0.9367, 0.6669, 0.3106]]) w1.grad tensor([[ 0.4351, -0.6678, 0.2326]]) w2.data tensor([[ 0.9367, 0.6669, 0.3106]]) w2.grad tensor([[ 0.4351, -0.6678, 0.2326]]) w3.data tensor([[ 0.9367, 0.6669, 0.3106]]) w3.grad tensor([[ 0.3179, -0.7114, 0.3935]]) The most interesting part here is w3. At the time backward is called the values are replaced by the values of w1. But the gradients are calculated based on the CE-function with the values of the original w3. The replaced values have no effect on the graph. So the graph connection is not broken, and replacing had no influence on the graph. I hope this is what you were looking for!
https://stackoverflow.com/questions/53819383/
Error when converting PyTorch model to TorchScript
I'm trying to follow the PyTorch guide to load models in C++. The following sample code works: import torch import torchvision # An instance of your model. model = torchvision.models.resnet18() # An example input you would normally provide to your model's forward() method. example = torch.rand(1, 3, 224, 224) # Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing. traced_script_module = torch.jit.trace(model, example) However, when trying other networks, such as squeezenet (or alexnet), my code fails: sq = torchvision.models.squeezenet1_0(pretrained=True) traced_script_module = torch.jit.trace(sq, example) >> traced_script_module = torch.jit.trace(sq, example) /home/fabio/.local/lib/python3.6/site-packages/torch/jit/__init__.py:642: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error: Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 785] (3.1476082801818848 vs. 3.945478677749634) and 999 other locations (100.00%) _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
I just figured out that models loaded from torchvision.models are in train mode by default. AlexNet and SqueezeNet both have Dropout layers, making the inference nondeterministic if in train mode. Simply changing to eval mode fixed the issue: sq = torchvision.models.squeezenet1_0(pretrained=True) sq.eval() traced_script_module = torch.jit.trace(sq, example)
https://stackoverflow.com/questions/53820175/
Pycharm/Pytorch - 'tensor' is not callable
When creating a pytorch (1.0) tensor : import torch W = torch.tensor(([1.0])) Pycharm (2018.3.1) gives me the following warning : 'tensor' is not callable less... (Ctrl+F1) Inspection info: This inspection highlights attempts to call objects which are not callable, like, for example, tuples My code works fine (tensor() is callable) but I'd like to understand and get rid of this warning.
This has been a known issue to them. The moderator replied with: We will fix this in the next release. It's being tracked at https://github.com/pytorch/pytorch/issues/7318 However, the reported issue was on PyTorch v0.4.1
https://stackoverflow.com/questions/53826221/
RuntimeError: dimension specified as 0 but tensor has no dimensions
I was trying to implement a simple NN using the MNIST datasets and I keep getting this error: import matplotlib.pyplot as plt import torch from torchvision import models from torchvision import datasets, transforms from torch import nn, optim import torch.nn.functional as F import helper transform = transforms.ToTensor() train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) train_loader = torch.utils.data.DataLoader(train_data, batch_size = 20, shuffle=True) test_loader = torch.utils.data.DataLoader(test_data, batch_size = 20, shuffle=True) class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.fc1 = nn.Linear(784,10) def forward(self,x): x = x.view(-1,784) x = F.relu(self.fc1(x)) x = F.log_softmax(x, dim = 1) return x model = Net() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr = 0.003) epochs = 20 model.train() for epoch in range(epochs): train_loss = 0 for image, lables in train_data: optimizer.zero_grad() output = model(image) loss = criterion(output, lables) loss.backwards() optimizer.step() train_loss += loss.item()*image.size(0) train_loss = train_loss/len(train_data.dataset) print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch+1, train_loss)) Here is the error: RuntimeError: dimension specified as 0 but tensor has no dimensions
The issue you're hitting directly is that NLL loss expects a labels (you're spelling it lables btw) tensor of at least 1 dimension and it's getting a 0-dimensional tensor (aka a scalar). If you see this kind of messages, it's good to just print(output.shape, labels.shape) for easier inspection. The source of this error is that you, probably by mistake, run for image, labels in train_data instead of for image, labels in train_loader. The consequence is that your data is not batched - batching the scalars coming out of dataset would create the missing dimension NLLLoss complains about. Once we fix this, we proceed to fix backwards -> backward and finally len(train_data.dataset) -> len(train_data). Then the loop works (if it's a reasonable net etc, I did not test). As a side remark, you can combine NLLLoss and log_softmax by using CrossEntropyLoss, which has the benefit of extra numerical stability.
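Putting the three fixes together (iterate the loader, backward not backwards, len of the dataset), a sketch of the corrected loop reusing the question's variables:

for epoch in range(epochs):
    train_loss = 0
    for image, labels in train_loader:         # the loader, not the dataset
        optimizer.zero_grad()
        output = model(image)
        loss = criterion(output, labels)
        loss.backward()                        # backward, not backwards
        optimizer.step()
        train_loss += loss.item() * image.size(0)
    train_loss = train_loss / len(train_data)  # len of the dataset object
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch + 1, train_loss))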
https://stackoverflow.com/questions/53841576/
Pytorch - inference all images and back-propagate batch by batch
I have a special use case that I have to separate inference and back-propagation: I have to inference all images and slice outputs into batches followed by back-propagating batches by batches. I don't need to update my network's weights. I modified snippets of cifar10_tutorial into the following to simulate my problem: j is a variable to represent the index which returns by my own logic and I want the gradient of some variables. for epoch in range(2): # loop over the dataset multiple times for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data inputs.requires_grad = True # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) for j in range(4): # j is given by external logic in my own case loss = criterion(outputs[j, :].unsqueeze(0), labels[j].unsqueeze(0)) loss.backward() print(inputs.grad.data[j, :]) # what I really want I got the following errors: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. My questions are: According to my understanding, the problem arises because the first back-propagate backwards the whole outputs and outputs[1,:].unsqueeze(0) was released so second back-propagate failed. Am I right? In my case, if I set retain_graph=True, will the code run slower and slower according to this post? Is there better way to achieve my goal?
Yes you are correct. When you already back-propagated through outputs the first time (first iteration), the buffers will be freed and it will fail the following time (next iteration of your loop), because the data necessary for this computation has already been removed. Yes, the graph grows bigger and bigger, so it could be slower depending on GPU (or CPU) usage and your network. I had used this once and it was much slower; however, this depends a lot on your network architecture. But certainly you will need more memory with retain_graph=True than without. Depending on your outputs and labels shape you should be able to calculate the loss for all your outputs and labels at once: criterion(outputs, labels) You can then skip the j-loop, which would also make your code faster. Maybe you need to reshape (resp. view) your data, but this should work fine. If you for some reason cannot do that, you can manually sum up the loss on a tensor and call backward after the loop. This should work fine too, but is slower than the solution above. So then your code would look like this: # init loss tensor loss = torch.tensor(0.0) # move to GPU if you're using one for j in range(4): # summing up your loss for every j loss += criterion(outputs[j, :].unsqueeze(0), labels[j].unsqueeze(0)) # ... # calling backward on the summed loss - getting gradients loss.backward() # as you call backward now only once on the outputs # you shouldn't get any error and you don't have to use retain_graph=True Edit: Accumulating the losses and calling backward later is completely equivalent. Here is a small example with and without accumulating the losses. First, creating some data: # w in this case will represent a very simple model # I leave out the CE and just use w to map the output to a scalar value w = torch.nn.Linear(4, 1) data = [torch.rand(1, 4) for j in range(4)] data looks like: [tensor([[0.4593, 0.3410, 0.1009, 0.9787]]), tensor([[0.1128, 0.0678, 0.9341, 0.3584]]), tensor([[0.7076, 0.9282, 0.0573, 0.6657]]), tensor([[0.0960, 0.1055, 0.6877, 0.0406]])] Let's first do it like you're doing it, calling backward for every iteration j separately: # code for directly applying backward # zero the weights layer w w.zero_grad() for j, inp in enumerate(data): # activate grad flag inp.requires_grad = True # remove / zero previous gradients for inputs inp.grad = None # apply model (only consists of one layer in our case) loss = w(inp) # calling backward on every output separately loss.backward() # print out grad print('Input:', inp) print('Grad:', inp.grad) print() print('w.weight.grad:', w.weight.grad) Here is the print-out with every input and the respective gradient, and the gradients for the model resp.
layer w in our simplified case: Input: tensor([[0.4593, 0.3410, 0.1009, 0.9787]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.1128, 0.0678, 0.9341, 0.3584]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.7076, 0.9282, 0.0573, 0.6657]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.0960, 0.1055, 0.6877, 0.0406]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) w.weight.grad: tensor([[1.3757, 1.4424, 1.7801, 2.0434]]) Now instead of calling backward once for every iteration j we accumulate the values and call backward on the sum and compare the results: # init tensor for accumulation loss = torch.tensor(0.0) # zero layer gradients w.zero_grad() for j, inp in enumerate(data): # activate grad flag inp.requires_grad = True # remove / zero previous gradients for inputs inp.grad = None # apply model (only consists of one layer in our case) # accumulating values instead of calling backward loss += w(inp).squeeze() # calling backward on the sum loss.backward() # printing out gradients for j, inp in enumerate(data): print('Input:', inp) print('Grad:', inp.grad) print() print('w.grad:', w.weight.grad) Let's take a look at the results: Input: tensor([[0.4593, 0.3410, 0.1009, 0.9787]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.1128, 0.0678, 0.9341, 0.3584]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.7076, 0.9282, 0.0573, 0.6657]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) Input: tensor([[0.0960, 0.1055, 0.6877, 0.0406]], requires_grad=True) Grad: tensor([[-0.0999, 0.2665, -0.1506, 0.4214]]) w.grad: tensor([[1.3757, 1.4424, 1.7801, 2.0434]]) When comparing the results we can see that both are the same. This is a very simple example, but nevertheless we can see that calling backward() on every single tensor, and summing up the tensors and then calling backward(), are equivalent in terms of the resulting gradients for both inputs and weights. When you use CE for all j's at once as described in point 3, you can use the flag reduction='sum' to achieve the same behaviour as summing up the CE values manually; the default is 'mean', which probably leads to slightly different results.
https://stackoverflow.com/questions/53843711/
How to load and use a pretrained PyTorch InceptionV3 model to classify an image
I have the same problem as How can I load and use a PyTorch (.pth.tar) model, which has no accepted answer, and I couldn't figure out how to follow the advice given there. I'm new to PyTorch. I am trying to load the pretrained PyTorch model referenced here: https://github.com/macaodha/inat_comp_2018 I'm pretty sure I am missing some glue.

# load the model
import torch
model = torch.load("iNat_2018_InceptionV3.pth.tar", map_location='cpu')

# try to get it to classify an image
imsize = 256
loader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()])

def image_loader(image_name):
    """load image, returns cuda tensor"""
    image = Image.open(image_name)
    image = loader(image).float()
    image = Variable(image, requires_grad=True)
    image = image.unsqueeze(0)
    return image.cpu()  # assumes that you're using CPU

image = image_loader("test-image.jpg")

Produces the error:

in ()
----> 1 model.predict(image)
AttributeError: 'dict' object has no attribute 'predict'
Problem

Your model isn't actually a model. When it is saved, it contains not only the parameters, but also other information about the model, in a form somewhat similar to a dict. Therefore, torch.load("iNat_2018_InceptionV3.pth.tar") simply returns a dict, which of course does not have an attribute called predict.

model = torch.load("iNat_2018_InceptionV3.pth.tar", map_location='cpu')
type(model)
# dict

Solution

What you need to do first, in this case and in general, is to instantiate your desired model class, as per the official guide "Load models".

# First try
from torchvision.models import Inception3
v3 = Inception3()
v3.load_state_dict(model['state_dict'])  # model that was imported in your code.

However, directly loading model['state_dict'] will raise some errors about mismatching shapes of Inception3's parameters. It is important to know what was changed in the Inception3 after its instantiation. Luckily, you can find that in the original author's train_inat.py.

# What the author has done
model = inception_v3(pretrained=True)
model.fc = nn.Linear(2048, args.num_classes)  # where args.num_classes = 8142
model.aux_logits = False

Now that we know what to change, let's make some modifications to our first try.

# Second try
from torchvision.models import Inception3
v3 = Inception3()
v3.fc = nn.Linear(2048, 8142)
v3.aux_logits = False
v3.load_state_dict(model['state_dict'])  # model that was imported in your code.

And there you go, the model is loaded successfully!
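To round this off, a minimal sketch of actually classifying an image with the loaded model. The 299x299 input size is what Inception3 expects; the normalization constants below are the standard ImageNet ones and are an assumption here, so check the repo's evaluation code for the exact preprocessing:

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(299),               # Inception3 works on 299x299 inputs
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

v3.eval()                                  # disable dropout, use BN running stats
image = preprocess(Image.open("test-image.jpg")).unsqueeze(0)  # add batch dim
with torch.no_grad():
    logits = v3(image)
pred = logits.argmax(dim=1)                # index of the most likely of the 8142 classes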
https://stackoverflow.com/questions/53844826/
How do you change the dimension of your input pictures in pytorch?
I made a convolutional neural network and I want it to take input pictures and output pictures, but when I turn the pictures into tensors they have the wrong dimension: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [20, 3, 5, 5], but got 3-dimensional input of size [900, 1440, 3] instead. How do I change the dimension of the pictures? Why does it need to be changed? And how do I make the output a picture? I tried to use

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

to normalize the img, but it didn't change the dimension. Here is my neural net:

def __init__(self):
    super(Net, self).__init__()
    torch.nn.Module.dump_patches = True
    self.conv1 = nn.Conv2d(3, 20, 5)
    self.pool = nn.MaxPool2d(2, 2)
    self.conv2 = nn.Conv2d(20, 16, 5)
    self.fc1 = nn.Linear(16*5*5, 120)
    self.fc2 = nn.Linear(120, 84)
    self.fc3 = nn.Linear(84, 16*5*5)

def forward(self, x):
    x = self.pool(F.relu(self.conv1(x)))
    x = self.pool(F.relu(self.conv2(x)))
    x = x.view(-1, 16 * 5 )
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
    x = self.fc3(x)
    return x

Here I get the image and put it into a list:

for i in range(4):
    l.append(ImageGrab.grab())

And here is the code that turns the img into a tensor:

k = torch.from_numpy(np.asarray(l[1], dtype="int32"))
In summary, according to the comments you and I posted:

The error is due to the fact that torch.nn only supports mini-batches. The input should be in the form (batch_size, channels, height, width). You seem to be missing the batch dimension. You can add .unsqueeze(0) to add a fake batch dimension in the first position.

In addition to the above, you'll also have to rearrange the dimensions of your image from [HxWxC] to [CxHxW]. This is done by the ToTensor() transformation in PyTorch.

For the size mismatch problem of your input image, you could use a transformation like this:

transform = transforms.Compose(
    [transforms.Resize((32, 32)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
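As a quick illustration of both points, here is a sketch with a blank PIL image standing in for ImageGrab.grab():

import torch
from PIL import Image
from torchvision import transforms

img = Image.new('RGB', (1440, 900))   # (width, height), like a screen grab
t = transforms.ToTensor()(img)        # rearranges HxWxC -> CxHxW, floats in [0, 1]
print(t.shape)                        # torch.Size([3, 900, 1440])

batch = t.unsqueeze(0)                # add the fake batch dimension
print(batch.shape)                    # torch.Size([1, 3, 900, 1440])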
https://stackoverflow.com/questions/53852355/
Summing contiguous non-zero tensor values
I am trying to find the sum of contiguous non-zero tensor values, as shown below. Let's say I have a tensor A = [1.3, 0.0, 0.6, 0.7, 0.8]. I want to 1) sum up the contiguous non-zero values of the tensor to output [1.3, 0.0, 2.1] and then choose the maximum, which is 2.1, and 2) find the indices that were used to sum these values. In this case they are 2, 3, 4.
My approach is somewhat different from @Anwarvic's. I try to do it in one pass. See the function below. It moves through the array and keeps track of the max it has seen so far and the current running sum. The current sum is reset to 0 when we hit a zero; otherwise the value is accumulated.

def find_contiguous_max_sum(t):
    max_, cur = 0, 0
    max_indices, cur_indices = [], []
    for idx, i in enumerate(t):
        if i == 0.0:
            cur = 0
            cur_indices = []
        else:
            cur = cur + i
            cur_indices.append(idx)
        if max_ < cur:
            max_ = cur
            max_indices = cur_indices
    return max_, max_indices

# usage
find_contiguous_max_sum(A)
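A quick check with the tensor from the question (the exact printed value may differ in the last digits due to floating point):

import torch

A = torch.tensor([1.3, 0.0, 0.6, 0.7, 0.8])
best, indices = find_contiguous_max_sum(A)
print(best)     # tensor(2.1000), i.e. 0.6 + 0.7 + 0.8
print(indices)  # [2, 3, 4]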
https://stackoverflow.com/questions/53875065/
How do I modify this PyTorch convolutional neural network to accept a 64 x 64 image and properly output predictions?
I took this convolutional neural network (CNN) from here. It accepts 32 x 32 images and defaults to 10 classes. However, I have 64 x 64 images with 500 classes. When I pass in 64 x 64 images (batch size held constant at 32), I get the following error. ValueError: Expected input batch_size (128) to match target batch_size (32). The stack trace starts at the line loss = loss_fn(outputs, labels). The outputs.shape is [128, 500] and the labels.shape is [32]. The code is listed here for completeness. class Unit(nn.Module): def __init__(self,in_channels,out_channels): super(Unit,self).__init__() self.conv = nn.Conv2d(in_channels=in_channels,kernel_size=3,out_channels=out_channels,stride=1,padding=1) self.bn = nn.BatchNorm2d(num_features=out_channels) self.relu = nn.ReLU() def forward(self,input): output = self.conv(input) output = self.bn(output) output = self.relu(output) return output class SimpleNet(nn.Module): def __init__(self,num_classes=10): super(SimpleNet,self).__init__() self.unit1 = Unit(in_channels=3,out_channels=32) self.unit2 = Unit(in_channels=32, out_channels=32) self.unit3 = Unit(in_channels=32, out_channels=32) self.pool1 = nn.MaxPool2d(kernel_size=2) self.unit4 = Unit(in_channels=32, out_channels=64) self.unit5 = Unit(in_channels=64, out_channels=64) self.unit6 = Unit(in_channels=64, out_channels=64) self.unit7 = Unit(in_channels=64, out_channels=64) self.pool2 = nn.MaxPool2d(kernel_size=2) self.unit8 = Unit(in_channels=64, out_channels=128) self.unit9 = Unit(in_channels=128, out_channels=128) self.unit10 = Unit(in_channels=128, out_channels=128) self.unit11 = Unit(in_channels=128, out_channels=128) self.pool3 = nn.MaxPool2d(kernel_size=2) self.unit12 = Unit(in_channels=128, out_channels=128) self.unit13 = Unit(in_channels=128, out_channels=128) self.unit14 = Unit(in_channels=128, out_channels=128) self.avgpool = nn.AvgPool2d(kernel_size=4) self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4, self.unit5, self.unit6 ,self.unit7, self.pool2, self.unit8, self.unit9, self.unit10, self.unit11, self.pool3, self.unit12, self.unit13, self.unit14, self.avgpool) self.fc = nn.Linear(in_features=128,out_features=num_classes) def forward(self, input): output = self.net(input) output = output.view(-1,128) output = self.fc(output) return output Any ideas on how to modify this CNN to accept and properly return outputs?
The problem is an incompatible reshape (view) at the end.

You're using a sort of "flattening" at the end, which is different from a "global pooling". Both are valid for CNNs, but only the global poolings are compatible with any image size.

The flattened net (your case)

In your case, with a flatten, you need to keep track of all image dimensions in order to know how to reshape at the end.

So:

Enter with 64x64
Pool1 to 32x32
Pool2 to 16x16
Pool3 to 8x8
AvgPool to 2x2

Then, at the end you've got a shape of (batch, 128, 2, 2). That is four times as many elements as with a 32x32 input, which would end at (batch, 128, 1, 1). Your final reshape should therefore be output = output.view(-1, 128*2*2). This is a different net with a different classification layer, though, because now in_features=512.

The global pooling net

On the other hand, you could use the same model, same layers and same weights for any image size >= 32 if you replace the last pooling with a global pooling:

def flatChannels(x):
    size = x.size()
    return x.view(size[0], size[1], size[2]*size[3])

def globalAvgPool2D(x):
    return flatChannels(x).mean(dim=-1)

def globalMaxPool2D(x):
    # max over a dim returns (values, indices); keep the values
    return flatChannels(x).max(dim=-1)[0]

The ending of the model:

# removed the pool from here to put it in forward
self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4,
                         self.unit5, self.unit6, self.unit7, self.pool2, self.unit8,
                         self.unit9, self.unit10, self.unit11, self.pool3,
                         self.unit12, self.unit13, self.unit14)

self.fc = nn.Linear(in_features=128, out_features=num_classes)

def forward(self, input):
    output = self.net(input)
    output = globalAvgPool2D(output)  # or globalMaxPool2D
    output = self.fc(output)
    return output
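As a side note, PyTorch also ships a layer that does the global average pooling for you, nn.AdaptiveAvgPool2d, which pools any spatial size down to a fixed output size; a small sketch:

import torch
import torch.nn as nn

gap = nn.AdaptiveAvgPool2d(1)          # global average pool to 1x1
x = torch.randn(32, 128, 8, 8)         # works for any spatial size
out = gap(x).view(x.size(0), -1)       # (32, 128), ready for the final Linear
print(out.shape)                       # torch.Size([32, 128])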
https://stackoverflow.com/questions/53875372/
PyTorch - How to deactivate dropout in evaluation mode
This is the model I defined. It is a simple LSTM with 2 fully connected layers.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class mylstm(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, linear_dim):
        super(mylstm, self).__init__()
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTMCell(input_dim, self.hidden_dim)
        self.linear1 = nn.Linear(hidden_dim, linear_dim)
        self.linear2 = nn.Linear(linear_dim, output_dim)

    def forward(self, input):
        out, _ = self.lstm(input)
        out = nn.Dropout(p=0.3)(out)
        out = self.linear1(out)
        out = nn.Dropout(p=0.3)(out)
        out = self.linear2(out)
        return out

x_train and x_val are float dataframes with shape (4478, 30), while y_train and y_val are float dataframes with shape (4478, 10).

x_train.head()
Out[271]:
        0       1       2       3  ...      26      27      28      29
0  1.6110  1.6100  1.6293  1.6370  ...  1.6870  1.6925  1.6950  1.6905
1  1.6100  1.6293  1.6370  1.6530  ...  1.6925  1.6950  1.6905  1.6960
2  1.6293  1.6370  1.6530  1.6537  ...  1.6950  1.6905  1.6960  1.6930
3  1.6370  1.6530  1.6537  1.6620  ...  1.6905  1.6960  1.6930  1.6955
4  1.6530  1.6537  1.6620  1.6568  ...  1.6960  1.6930  1.6955  1.7040
[5 rows x 30 columns]

x_train.shape
Out[272]: (4478, 30)

I define the variables and do one backpropagation step; the validation loss comes out as 1.4941:

model = mylstm(30, 10, 200, 100).double()
from torch import optim
optimizer = optim.RMSprop(model.parameters(), lr=0.001, alpha=0.9)
criterion = nn.L1Loss()
input_ = torch.autograd.Variable(torch.from_numpy(np.array(x_train)))
target = torch.autograd.Variable(torch.from_numpy(np.array(y_train)))
input2_ = torch.autograd.Variable(torch.from_numpy(np.array(x_val)))
target2 = torch.autograd.Variable(torch.from_numpy(np.array(y_val)))
optimizer.zero_grad()
output = model(input_)
loss = criterion(output, target)
loss.backward()
optimizer.step()
moniter = criterion(model(input2_), target2)

moniter
Out[274]: tensor(1.4941, dtype=torch.float64, grad_fn=<L1LossBackward>)

But when I call the forward function again, I get a different number due to the randomness of dropout:

moniter = criterion(model(input2_), target2)
moniter
Out[275]: tensor(1.4943, dtype=torch.float64, grad_fn=<L1LossBackward>)

What should I do to eliminate all the dropout in the prediction phase?
I tried eval():

moniter = criterion(model.eval()(input2_), target2)
moniter
Out[282]: tensor(1.4942, dtype=torch.float64, grad_fn=<L1LossBackward>)

moniter = criterion(model.eval()(input2_), target2)
moniter
Out[283]: tensor(1.4945, dtype=torch.float64, grad_fn=<L1LossBackward>)

And I tried passing an additional parameter p to control dropout:

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class mylstm(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, linear_dim, p):
        super(mylstm, self).__init__()
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTMCell(input_dim, self.hidden_dim)
        self.linear1 = nn.Linear(hidden_dim, linear_dim)
        self.linear2 = nn.Linear(linear_dim, output_dim)

    def forward(self, input, p):
        out, _ = self.lstm(input)
        out = nn.Dropout(p=p)(out)
        out = self.linear1(out)
        out = nn.Dropout(p=p)(out)
        out = self.linear2(out)
        return out

model = mylstm(30, 10, 200, 100, 0.3).double()
output = model(input_)
loss = criterion(output, target)
loss.backward()
optimizer.step()
moniter = criterion(model(input2_, 0), target2)

Traceback (most recent call last):
  File "<ipython-input-286-e49b6fac918b>", line 1, in <module>
    output = model(input_)
  File "D:\Users\shan xu\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'p'

But neither of them worked.
You have to define your nn.Dropout layer in your __init__ and assign it to your model so that it responds to calling eval().

So changing your model like this should work for you:

class mylstm(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, linear_dim, p):
        super(mylstm, self).__init__()
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTMCell(input_dim, self.hidden_dim)
        self.linear1 = nn.Linear(hidden_dim, linear_dim)
        self.linear2 = nn.Linear(linear_dim, output_dim)
        # define dropout layer in __init__
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, input):
        out, _ = self.lstm(input)
        # apply model dropout, responsive to eval()
        out = self.drop_layer(out)
        out = self.linear1(out)
        # apply model dropout, responsive to eval()
        out = self.drop_layer(out)
        out = self.linear2(out)
        return out

If you change it like this, dropout will be inactive as soon as you call eval().

NOTE: If you want to continue training afterwards, you need to call train() on your model to leave evaluation mode.

You can also find a small working example for dropout with eval() for evaluation mode here: nn.Dropout vs. F.dropout pyTorch
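A quick sanity check (a sketch reusing the fixed class above with the question's shapes): in eval mode the dropout layers act as the identity, so two forward passes on the same input now agree exactly.

import torch

model = mylstm(30, 10, 200, 100, 0.3).double()
x = torch.randn(4478, 30, dtype=torch.float64)

model.eval()                       # dropout becomes a no-op
out1 = model(x)
out2 = model(x)
print(torch.equal(out1, out2))     # True - predictions are deterministic now

model.train()                      # re-enable dropout before training again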
https://stackoverflow.com/questions/53879727/
Does anyone know the difference between xavier_normal_ and kaiming_normal_?
As in the title: does anyone know the difference between xavier_normal_ and kaiming_normal_? Is the only difference that Xavier has an extra 'gain' argument compared to Kaiming?
Read the documentation: xavier_normal_ Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a normal distribution. kaiming_normal_ Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a normal distribution. The given equations are completely different. For more details you'll have to read those papers.
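In short, per those docs: xavier_normal_ samples from N(0, std^2) with std = gain * sqrt(2 / (fan_in + fan_out)), while kaiming_normal_ (in its default fan_in mode, with the ReLU-family gain of sqrt(2)) uses std = sqrt(2 / fan_in). A quick empirical sanity check:

import math
import torch

w = torch.empty(512, 256)   # fan_in = 256, fan_out = 512

torch.nn.init.xavier_normal_(w)
print(w.std().item(), math.sqrt(2.0 / (256 + 512)))   # both ~0.051

torch.nn.init.kaiming_normal_(w)
print(w.std().item(), math.sqrt(2.0 / 256))           # both ~0.088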
https://stackoverflow.com/questions/53881908/
Is there a way to see what's going wrong with a training session in Pytorch?
I'm training a triplet convolution neural network in Jupyter. When I execute the cell I just get the * symbol and nothing happens. I'm not asking for help finding a problem with the code. I would just like to know if there is a troubleshooting possibility that might let me see what is happening. There is probably something wrong with my data loader, or data format, or model. I will find it myself if Pytorch or somebody has a method to find a clue here. It is not giving me an error. It is just working ad infinitum on something wrong. I saw a function called 'set_trace()' that could be typed into the block that is supposed to be able to give a clue about the problem. But after putting it in the for loop I get NameError: name 'set_trace' is not defined for batch_idx in range(1): for batch_idx, (data, target) in enumerate(triplet_train_loader): model.train() metrics = [] losses = [] total_loss = 0 data = tuple(d.cuda() for d in data) optimizer.zero_grad() outputs = model(*data) loss_outputs = loss_fn(*outputs) loss = loss_outputs[0] if type(loss_outputs) in (tuple, list) else loss_outputs losses.append(loss.item()) total_loss += loss.item() loss.backward() optimizer.step() set_trace() if batch_idx % log_interval == 0: message = 'Train: [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( batch_idx * len(data[0]), len(triplet_train_loader.dataset), 100. * batch_idx / len(triplet_train_loader), np.mean(losses)) for metric in metrics: message += '\t{}: {}'.format(metric.name(), metric.value()) print(message) losses = [] "Describe expected and actual results" LOVE FOR IT TO TRAIN. IN ACTUAL RESULT WORLD IT DOES NOT TRAIN.
NameError: name 'set_trace' is not defined You mean: import pdb; pdb.set_trace()
https://stackoverflow.com/questions/53890913/
Traceback (most recent call last) in Colab when looping through dataloader in pytorch
I'm working on a project to classify flower images using a pre-trained model vgg19 using pytorch. I'm relying on the model features only and using a custom classifier. However on starting a for-loop to feed images to the model classifier and calculate accuracy through epochs I get an error. I'm not sure what's the problem as the error is a traceback (most recent call last) Below is my notebook. The cell throwing the error is below #training the classifier criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.classifier.parameters(),lr=0.01) steps = 0 running_loss = 0 epochs = 5 print_every = 5 for epoch in range(epochs): for images,labels in train_dataloader: steps += 1 optimizer.zero_grad() logps = model.forward(images) loss = criterion(logps,labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for images, labels in valid_dataloader: logps = model.forward(images) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() #Calculate accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(5,dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() print(f"Epoch {epoch+1}/{epochs}.." f"Train loss: {running_loss/print_every: .3f}.." f"Test loss: {test_loss/len(valid_loader):.3f}.." f"Test accuracy: {accuracy/len(valid_loader):.3f}") running_loss = 0 model.train() The error I get on running the notebook AttributeError Traceback (most recent call last) <ipython-input-11-c218f8f2b72e> in <module>() 8 9 for epoch in range(epochs): ---> 10 for images,labels in train_dataloader: 11 steps += 1 12 optimizer.zero_grad() /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self) 312 if self.num_workers == 0: # same-process loading 313 indices = next(self.sample_iter) # may raise StopIteration --> 314 batch = self.collate_fn([self.dataset[i] for i in indices]) 315 if self.pin_memory: 316 batch = pin_memory_batch(batch) /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in <listcomp>(.0) 312 if self.num_workers == 0: # same-process loading 313 indices = next(self.sample_iter) # may raise StopIteration --> 314 batch = self.collate_fn([self.dataset[i] for i in indices]) 315 if self.pin_memory: 316 batch = pin_memory_batch(batch) /usr/local/lib/python3.6/dist-packages/torchvision/datasets/folder.py in __getitem__(self, index) 99 """ 100 path, target = self.samples[index] --> 101 sample = self.loader(path) 102 if self.transform is not None: 103 sample = self.transform(sample) /usr/local/lib/python3.6/dist-packages/torchvision/datasets/folder.py in default_loader(path) 145 return accimage_loader(path) 146 else: --> 147 return pil_loader(path) 148 149 /usr/local/lib/python3.6/dist-packages/torchvision/datasets/folder.py in pil_loader(path) 127 # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835) 128 with open(path, 'rb') as f: --> 129 img = Image.open(f) 130 return img.convert('RGB') 131 /usr/local/lib/python3.6/dist-packages/PIL/Image.py in open(fp, mode) 2319 return True 2320 -> 2321 2322 def new(mode, size, color=0): 2323 """ /usr/local/lib/python3.6/dist-packages/PIL/Image.py in preinit() 368 369 --> 370 def preinit(): 371 """Explicitly load standard file format drivers.""" 372 /usr/local/lib/python3.6/dist-packages/PIL/PpmImagePlugin.py in <module>() 156 Image.register_save(PpmImageFile.format, _save) 157 --> 158 
Image.register_extensions(PpmImageFile.format, [".pbm", ".pgm", ".ppm"]) AttributeError: module 'PIL.Image' has no attribute 'register_extensions'
The error is caused by the older version of Pillow that comes preinstalled on Colab. You need to upgrade it to a newer version. Use the following code to upgrade Pillow:

!pip uninstall -y Pillow
!pip install Pillow==5.3.0
import PIL.Image

Now simply restart the runtime; the error will be gone.
https://stackoverflow.com/questions/53894496/
Is my custom loss function correct? (Pytorch)
I want to do word recognition using a CNN + classifier, where the input is an image and the output a 10x37 matrix. 10 is the maximum number of characters in a word and 37 is the number of letters in my example. I wrote a custom loss function for this model, but I'm not sure if it's correct, since I can't get above 80% test accuracy. I'm using PyTorch.

class CustomLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.nllloss = nn.NLLLoss()

    def forward(self, output, labels):
        loss = 0
        for i in range(labels.shape[1]):
            loss += self.nllloss(output[:, i, :], labels[:, i])
        loss /= labels.shape[1]
        return loss

Infos:
output.shape = (batch_size, 10, 37)
labels.shape = (batch_size, 10)

Is the loss function correct? And what is my classification problem called (multiple multi-class classification)?
The loss function is correct. The problem was in the file containing my training data. It was not correctly created. In fact, I flipped the dimensions in the images (width and height) so the result from my training set was indecipherable for my CNN. Now that I have solved the problem, I have reached 99.8% test accuracy.
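For reference, the per-position loop in the loss above can also be vectorized into a single NLLLoss call by flattening the position dimension, which is equivalent (a mean of equal-sized per-position means equals the overall mean) and a bit faster; a sketch assuming output holds log-probabilities of shape (batch, 10, 37) and labels is (batch, 10):

import torch.nn as nn

nll = nn.NLLLoss()

def fast_loss(output, labels):
    # (batch, 10, 37) -> (batch*10, 37) and (batch, 10) -> (batch*10,)
    return nll(output.reshape(-1, 37), labels.reshape(-1))

# nn.NLLLoss also accepts (N, C, d) input directly, so this works too:
# nll(output.transpose(1, 2), labels)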
https://stackoverflow.com/questions/53899272/
Pytorch select tensor
I want to know if PyTorch has a slice function (like TensorFlow's). In particular, I want to select the orange-colored rows (shown in an image in the original post).
You can use slicing as in numpy. See below import torch A = torch.rand((3,5,500)) first_three_rows = A[:, :3, :] However to get different slices as you asked in the question, you can do import torch A = torch.rand((3,5,500)) indices = [2,4,5] result = torch.cat([A[idx, :index, :] for idx, index in enumerate(indices)] , dim=0)
https://stackoverflow.com/questions/53899746/
What are Torch Scripts in PyTorch?
I've just found that the PyTorch docs expose something called Torch Scripts. However, I do not know: When should they be used? How should they be used? What are their benefits?
Torch Script is one of two modes of using the PyTorch just-in-time compiler, the other being tracing. The benefits are explained in the linked documentation:

Torch Script is a way to create serializable and optimizable models from PyTorch code. Any code written in Torch Script can be saved from your Python process and loaded in a process where there is no Python dependency.

The above quote is actually true of both scripting and tracing. So:

You gain the ability to serialize your models and later run them outside of Python, via LibTorch, a C++ native module. This allows you to embed your DL models in various production environments like mobile or IoT. There is an official guide on exporting models to C++ here.

PyTorch can compile your jit-able modules rather than running them as an interpreter, allowing for various optimizations and improving performance, both during training and inference. This is equally helpful for development and production.

Regarding Torch Script specifically, in comparison to tracing, it is a subset of Python, specified in detail here, which, when adhered to, can be compiled by PyTorch. It is more laborious to write Torch Script modules instead of tracing regular nn.Module subclasses, but it allows for some extra features over tracing, most notably flow control like if statements or for loops. Tracing treats such flow control as "constant" - in other words, if you have an if model.training clause in your module and trace it with training=True, it will always behave this way, even if you change the training variable to False later on.

To answer your first question: you need to use jit if you want to deploy your models outside Python; otherwise, you should use jit if you want to gain some execution performance, at the price of extra development effort (as not every model can be straightforwardly made compliant with jit). In particular, you should use Torch Script if your code cannot be jitted with tracing alone because it relies on features such as if statements. For maximum ergonomics, you probably want to mix the two on a case-by-case basis.

Finally, for how they should be used, please refer to all the documentation and tutorial links.
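A minimal sketch contrasting the two modes: the scripted function keeps its data-dependent branch, whereas tracing would record only the branch taken for the example input.

import torch

@torch.jit.script
def clipped_scale(x, limit: float):
    # control flow is preserved by scripting
    if x.sum() > limit:
        return x * 0.5
    return x * 2.0

def double(x):
    return x * 2.0

traced = torch.jit.trace(double, torch.randn(3))  # records one fixed graph
print(clipped_scale(torch.ones(3), 1.0))          # branch decided at runtime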
https://stackoverflow.com/questions/53900396/
Convert PyTorch tensor to python list
How do I convert a PyTorch Tensor into a python list? I want to convert a tensor of size [1, 2048, 1, 1] into a list of 2048 elements. My tensor has floating point values. Is there a solution which also works with other data types such as int?
Use Tensor.tolist() e.g: >>> import torch >>> a = torch.randn(2, 2) >>> a.tolist() [[0.012766935862600803, 0.5415473580360413], [-0.08909505605697632, 0.7729271650314331]] >>> a[0,0].tolist() 0.012766935862600803 To remove all dimensions of size 1, use a.squeeze().tolist(). Alternatively, if all but one dimension are of size 1 (or you wish to get a list of every element of the tensor) you may use a.flatten().tolist().
https://stackoverflow.com/questions/53903373/
Problems with LSTM model
I am trying to build an LSTM model in PyTorch and ran into this problem: the loss does not decrease. My task is this: I have sessions with different features. The session length is fixed and equal to 20. My goal is to predict whether the last session step will be skipped or not.

I tried to scale the input features, and I even tried to pass the target into the features (if the provided features are absolutely uninformative, I thought this should at least lead to overfitting and a loss near 0), but my loss curve always stays flat, as in the plot referenced at the top of this post.

print(X.shape) #(82770, 20, 31) where 82770 is the count of sessions, 20 is seq_len, 31 is the count of features
print(y.shape) #(82770, 20)

I also defined a get_batches function. And yes, I know about the problem with the last batch in this generator.

def get_batches(X, y, batch_size):
    '''Create a generator that returns batches of size
       batch_size x seq_length from arr.
    '''
    assert X.shape[0] == y.shape[0]
    assert X.shape[1] == y.shape[1]
    assert len(X.shape) == 3
    assert len(y.shape) == 2

    seq_len = X.shape[1]
    n_batches = X.shape[0]//seq_len

    for batch_number in range(n_batches):
        #print(batch_number*batch_size, )
        batch_x = X[batch_number*batch_size:(batch_number+1)*batch_size, :, :]
        batch_y = y[batch_number*batch_size:(batch_number+1)*batch_size, :]
        if batch_x.shape[0] == batch_size:
            yield batch_x, batch_y
        else:
            print('batch_x shape: {}'.format(batch_x.shape))
            break

Here is my RNN:

class BaseRNN(nn.Module):
    def __init__(self, n_features, hidden_size, n_layers, drop_p=0.3, lr=0.001, last_items=10):
        super(BaseRNN, self).__init__()
        # constants
        self.n_features = n_features
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.drop_p = drop_p
        self.lr = lr
        self.last_items = last_items

        # layers
        self.lstm = nn.LSTM(
            n_features, n_hidden, n_layers,
            dropout=drop_p, batch_first=True
        )
        self.dropout = nn.Dropout(self.drop_p)
        self.linear_layer = nn.Linear(self.hidden_size, 1)
        self.sigm = nn.Sigmoid()

    def forward(self, x, hidden):
        out, hidden = self.lstm(x, hidden)
        batch_size = x.shape[0]
        out = self.dropout(out)
        out = out.contiguous().view(-1, self.hidden_size)
        out = self.linear_layer(out)
        out = self.sigm(out)
        # use only last elements
        out = out.view(batch_size, -1)
        out = out[:, -1]
        return out, hidden

    def init_hidden(self, batch_size):
        # initialize with zeros
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.hidden_size).zero_(),
                  weight.new(self.n_layers, batch_size, self.hidden_size).zero_())
        return hidden

Here is my train function:

def train(net, X, y, n_epochs=10, batch_size=10, clip=5):
    '''
    pass
    '''
    n_features = X.shape[2]
    seq_len = X.shape[1]
    net.train()
    opt = torch.optim.Adam(net.parameters(), lr=net.lr)
    criterion = nn.BCELoss()
    counter = 0
    losses = []

    for e in range(n_epochs):
        h = net.init_hidden(batch_size)
        for x, y in get_batches(X=X, y=y, batch_size=batch_size):
            counter += 1
            h = net.init_hidden(batch_size)
            inputs, targets = torch.from_numpy(x).float(), torch.from_numpy(y.astype(int))
            targets = targets[:, -net.last_items:].float().view(net.last_items*batch_size)
            h = tuple([each.data for each in h])
            net.zero_grad()
            output, h = net(inputs, h)
            loss = criterion(output.view(net.last_items*batch_size), targets)
            losses.append(loss.item())
            loss.backward()
            nn.utils.clip_grad_norm_(net.parameters(), clip)
            opt.step()
    return losses

Run training:

n_hidden = 100
n_layers = 1
n_features = X.shape[2]
net = BaseRNN(n_features, n_hidden, n_layers, lr=0.01, drop_p=0.1, last_items=1)
losses = train(net, X, y, n_epochs=5, batch_size=1000, lr=0.001, clip=5)
plt.plot(losses)

After all these steps the loss plot stays flat. I think I have made a huge error somewhere, because I even put the target variable into the features and there is still no loss reduction. Where am I wrong?

PS: How to generate sample data? I will use real y data and add some noise.

Y = np.array([[0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1],
              [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
              [0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
              [1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
              [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]])
print(Y.shape) #(10, 20)

# add 5 features with random noise
random_noise = np.random.randn(10*20*5).reshape(10, 20, 5)
X = np.concatenate((Y.reshape(10, 20, 1), random_noise), axis=2)
print(X.shape) #(10, 20, 6)
My mistake: I forgot to scale the input features. Now it works fine.
https://stackoverflow.com/questions/53914450/
Pytorch not recognizing directory for dataset
I'm trying to run code for a Deep Convolutional GAN from the official PyTorch site (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) on my Mac. When I try loading the data, I keep getting a "FileNotFound" error. Random Seed: 999 --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-6-1019cc990fb4> in <module>() 78 transforms.CenterCrop(image_size), 79 transforms.ToTensor(), ---> 80 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), 81 ])) 82 # Create the dataloader /anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, transform, target_transform, loader) 176 super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS, 177 transform=transform, --> 178 target_transform=target_transform) 179 self.imgs = self.samples /anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, loader, extensions, transform, target_transform) 73 74 def __init__(self, root, loader, extensions, transform=None, target_transform=None): ---> 75 classes, class_to_idx = find_classes(root) 76 samples = make_dataset(root, class_to_idx, extensions) 77 if len(samples) == 0: /anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in find_classes(dir) 21 22 def find_classes(dir): ---> 23 classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))] 24 classes.sort() 25 class_to_idx = {classes[i]: i for i in range(len(classes))} FileNotFoundError: [Errno 2] No such file or directory: 'Users/user1/Downloads/DCGANs/celeba/' Here is where I tried loading the dataset where dataroot = "Users/user1/Downloads/DCGANs/celeba/" The dataset is a folder (named celeba) with about 200,000 images. dataset = dset.ImageFolder(root=dataroot, transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) I tried this on both Atom and Jupyter Notebook, it didn't make a difference. All help is highly appreciated :)
The recognizable directory structure starts with /. So I assume, you should be replacing dataroot = "Users/user1/Downloads/DCGANs/celeba/" by dataroot = "/Users/user1/Downloads/DCGANs/celeba/"
https://stackoverflow.com/questions/53916510/
TypeError: object of type 'numpy.int64' has no len()
I am making a DataLoader from DataSet in PyTorch. Start from loading the DataFrame with all dtype as an np.float64 result = pd.read_csv('dummy.csv', header=0, dtype=DTYPE_CLEANED_DF) Here is my dataset classes. from torch.utils.data import Dataset, DataLoader class MyDataset(Dataset): def __init__(self, result): headers = list(result) headers.remove('classes') self.x_data = result[headers] self.y_data = result['classes'] self.len = self.x_data.shape[0] def __getitem__(self, index): x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float) y = torch.tensor(self.y_data.iloc[index], dtype=torch.float) return (x, y) def __len__(self): return self.len Prepare the train_loader and test_loader train_size = int(0.5 * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True, num_workers=1) test_loader = DataLoader(dataset=train_dataset) Here is my csv file When I try to iterate over the train_loader. It raises the error for i , (data, target) in enumerate(train_loader): print(i) TypeError Traceback (most recent call last) <ipython-input-32-0b4921c3fe8c> in <module> ----> 1 for i , (data, target) in enumerate(train_loader): 2 print(i) /opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self) 635 self.reorder_dict[idx] = batch 636 continue --> 637 return self._process_next_batch(batch) 638 639 next = __next__ # Python 2 compatibility /opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch) 656 self._put_indices() 657 if isinstance(batch, ExceptionWrapper): --> 658 raise batch.exc_type(batch.exc_msg) 659 return batch 660 TypeError: Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp> samples = collate_fn([dataset[i] for i in batch_indices]) File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 103, in __getitem__ return self.dataset[self.indices[idx]] File "<ipython-input-27-107e03bc3c6a>", line 12, in __getitem__ x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 1478, in __getitem__ return self._getitem_axis(maybe_callable, axis=axis) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2091, in _getitem_axis return self._get_list_axis(key, axis=axis) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2070, in _get_list_axis return self.obj._take(key, axis=axis) File "/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py", line 2789, in _take verify=True) File "/opt/conda/lib/python3.6/site-packages/pandas/core/internals.py", line 4537, in take new_labels = self.axes[axis].take(indexer) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2195, in take return self._shallow_copy(taken) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/range.py", line 267, in _shallow_copy return self._int64index._shallow_copy(values, **kwargs) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/numeric.py", line 68, in _shallow_copy return self._shallow_copy_with_infer(values=values, 
**kwargs) File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 538, in _shallow_copy_with_infer if not len(values) and 'dtype' not in kwargs: TypeError: object of type 'numpy.int64' has no len()

Related issues:
https://github.com/pytorch/pytorch/issues/10165
https://github.com/pytorch/pytorch/pull/9237
https://github.com/pandas-dev/pandas/issues/21946

Question: How can I work around this pandas issue here?
Reference: https://github.com/pytorch/pytorch/issues/9211

Just add .tolist() to the indices line. The split indices otherwise end up as tensor/numpy integer scalars, which pandas' .iloc trips over here, so they need to be plain Python ints.

def random_split(dataset, lengths):
    """
    Randomly split a dataset into non-overlapping new datasets of given lengths.

    Arguments:
        dataset (Dataset): Dataset to be split
        lengths (sequence): lengths of splits to be produced
    """
    if sum(lengths) != len(dataset):
        raise ValueError("Sum of input lengths does not equal the length of the input dataset!")

    indices = randperm(sum(lengths)).tolist()
    return [Subset(dataset, indices[offset - length:offset])
            for offset, length in zip(_accumulate(lengths), lengths)]
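If you'd rather not patch the installed library, a sketch of the same idea from user code: build the permutation yourself as a plain Python list and wrap the dataset in Subset directly.

import torch
from torch.utils.data import Subset

indices = torch.randperm(len(full_dataset)).tolist()  # plain ints, pandas-safe
train_dataset = Subset(full_dataset, indices[:train_size])
test_dataset = Subset(full_dataset, indices[train_size:])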
https://stackoverflow.com/questions/53916594/
How to stack 1-dimensional vectors in pytorch
I am trying to stack 1-dimensional tensors in pytorch but the stack function seems to be interpreting them as 2-d square matrices. Any ideas how to stack 1-d tensors into a new 1-d tensor? Reproducibility: a = torch.randn([2]) b = torch.randn([3]) c = torch.stack([a, b]) # want a (5,) tensor RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2 and 3 in dimension 1 at c:\new-builder_3\win-wheel\pytorch\aten\src\th\generic/THTensorMath.cpp:3616 If I unsqueeze a and b to (2,1) and (3,1) tensors or squeeze them (should have no effect) the error is still present. This seems odd too though because they should both be size 1 in dimension 1 when unsqueezed (and printing their sizes will accurately reflect this), but the error still appears the same, character-for-character. Additionally, stacking in dimension 1 just makes it say "...2 and 3 in dimension 0...". This is all in python 3.5.4, pytorch 0.4.1
You can try cat (official docs) a = torch.randn([2]) b = torch.randn([3]) c = torch.cat([a, b], dim=0)
https://stackoverflow.com/questions/53918549/
Conv 1x1 configuration for feature reduction
I am using a 1x1 convolution in a deep network to reduce a feature map x of shape Bx2CxHxW to BxCxHxW. I have three options:

x -> Conv (1x1) -> BatchNorm -> ReLU. The code would be output = ReLU(BN(Conv(x))). Reference: resnet
x -> BN -> ReLU -> Conv. The code would be output = Conv(ReLU(BN(x))). Reference: densenet
x -> Conv. The code is output = Conv(x)

Which one is most commonly used for feature reduction? And why?
Since you are going to train your net end-to-end, whatever configuration you use, the weights will be trained to accommodate it.

BatchNorm?

I guess the first question you need to ask yourself is: do you want to use BatchNorm? If your net is deep and you are concerned with covariate shift, then you probably should have a BatchNorm, and that rules out option no. 3.

BatchNorm first?

If your x is the output of another conv layer, then there's actually no difference between your first and second alternatives: your net is a cascade of ...-conv-bn-ReLU-conv-BN-ReLU-conv-..., so the triplets of functions conv, bn, relu are only an "artificial" partitioning of the net, and up to the very first and last functions you can split things however you wish. Moreover, since batch norm is a linear operation (scale + bias) it can be "folded" into an adjacent conv layer without changing the net, so you are basically left with conv-relu pairs.

So, there's not really a big difference between the first two options you highlighted.

What else to consider?

Do you really need ReLU when changing the dimension of the features? You can think of the dimensionality reduction as a linear mapping - decomposing the weights mapping to x into a lower-rank matrix that ultimately maps into a c-dimensional space instead of a 2c-dimensional one. If you consider a linear mapping, then you might omit the ReLU altogether.

See the fast RCNN SVD trick for an example.
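To make the folding claim concrete, here is a small sketch: at inference time a BatchNorm following a conv computes y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta, which can be absorbed into the conv's weight and bias.

import torch
import torch.nn as nn

def fold_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / std
    fused.weight.data = conv.weight.data * scale.view(-1, 1, 1, 1)
    b = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (b - bn.running_mean) * scale + bn.bias.data
    return fused

conv, bn = nn.Conv2d(64, 32, 1), nn.BatchNorm2d(32)
conv.eval(); bn.eval()                       # BN must use its running statistics
x = torch.randn(2, 64, 8, 8)
print(torch.allclose(bn(conv(x)), fold_bn(conv, bn)(x), atol=1e-6))  # True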
https://stackoverflow.com/questions/53919836/
What particular change of formula in the target turns a neural network from gradient descent into gradient ascent?
It seemed weird when I faced it in reinforcement learning. The loss is MSE. Everything looks like it should be plain gradient descent, and yet it behaves as gradient ascent. I want to know the trick. I wrote a numpy neural network, and a change in the derivative formula led to gradient ascent instead of descent. What particular change in the derivative (or in the target formula) turns a neural network from gradient descent into gradient ascent? Is it as simple as autograd detecting whether the objective is concave or convex?
If you're doing gradient ascent, it must mean that you are doing a variant of policy gradient reinforcement learning.

Doing gradient ascent is extremely simple. Long story short, you just apply gradient descent, except you put a minus sign in front of the gradient term! In (TF1-style) tensorflow code, a sketch (note that compute_gradients and apply_gradients are methods on the optimizer object):

opt = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = opt.compute_gradients(loss)
# flip the sign of every gradient before applying it
update = opt.apply_gradients([(-g, v) for g, v in grads_and_vars])

The basic gradient descent update is theta <- theta - alpha * dJ/dtheta, where theta is the weights of the model, alpha is the learning rate, and dJ/dtheta is the gradient of the loss function with respect to the weights. We descend upon the gradient because we want to minimize the loss, so the weights naturally get updated in the direction of lowest J (notice the minus sign).

But in policy gradient methods, we want to maximize the returns, and since we are taking the gradient with respect to the reward (intuitively), we want to maximize it. By simply flipping the sign of the update, theta <- theta + alpha * dJ/dtheta, we instead go the other way, i.e. maximize the reward. That flipped rule is exactly the formal gradient ascent update used in policy gradient methods, where the gradient of the policy times Vt essentially plays the role of dJ/dtheta.
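The same sign-flip trick in PyTorch, as a minimal sketch with a toy scalar objective (negate the quantity you want to maximize, then run ordinary descent):

import torch

w = torch.nn.Parameter(torch.zeros(4))
opt = torch.optim.SGD([w], lr=0.1)

reward = (w * torch.tensor([1., 2., 3., 4.])).sum()  # toy objective to maximize
loss = -reward                     # descending on -J is ascending on J
opt.zero_grad()
loss.backward()
opt.step()
print(w)                           # moved in the direction that increases reward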
https://stackoverflow.com/questions/53923388/
Indexing a batched set of images
Suppose I have two index tensors and an image tensor, how can I sample the (x, y) points from the image? img.shape # -> (batch x H x W x 3) x.shape # -> (batch x H x W) y.shape # -> batch x H x W) (H x W being height x width) Basically I want to perform something like a batch "shuffle" of the image pixel intensities.
I am assuming you want output[a, b, c, d] == img[a, x[a, b, c], y[a, b, c], d], where a, b, c, d are variables which iterate over batch, H, W and 3, respectively. You can solve that by applying torch.gather twice. As you can see in documentation it performs a similar indexing operation for a single dimension, so we would first gather on dim 1 with x as the index parameter and again on dim 2 with y. Unfortunately gather does not broadcast, so to deal with the trailing rgb dimension we have to add an extra dimension and manually repeat it. The code looks like this import torch # prepare data as in the example batch, H, W = 2, 4, 5 img = torch.arange(batch * H * W * 3).reshape(batch, H, W, 3) x = torch.randint(0, H, (batch, H, W)) y = torch.randint(0, W, (batch, H, W)) # deal with `torch.gather` not broadcasting x = x.unsqueeze(3).repeat(1, 1, 1, 3) y = y.unsqueeze(3).repeat(1, 1, 1, 3) # do the actual indexing x_shuff = torch.gather(img, dim=1, index=x) output = torch.gather(x_shuff, dim=2, index=y)
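An alternative sketch that avoids the repeat of the RGB dimension altogether: use advanced indexing with a broadcastable batch index. The trailing channel dimension is carried along automatically.

import torch

batch, H, W = 2, 4, 5
img = torch.arange(batch * H * W * 3).reshape(batch, H, W, 3)
x = torch.randint(0, H, (batch, H, W))
y = torch.randint(0, W, (batch, H, W))

b = torch.arange(batch).view(batch, 1, 1)  # broadcasts against x and y
output = img[b, x, y]                      # output[a,i,j,:] == img[a, x[a,i,j], y[a,i,j], :]
print(output.shape)                        # torch.Size([2, 4, 5, 3])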
https://stackoverflow.com/questions/53924868/
pytorch - Where is “conv1d” implemented?
I wanted to see how the conv1d module is implemented: https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv1d. So I looked at functional.py, but still couldn't find the looping and cross-correlation computation. Then I searched GitHub for the keyword 'conv1d' and checked conv.cpp (https://github.com/pytorch/pytorch/blob/eb5d28ecefb9d78d4fff5fac099e70e5eb3fbe2e/torch/csrc/api/src/nn/modules/conv.cpp), but still couldn't locate where the computation is happening.

My question is two-fold.

Where is the source code in which 'conv1d' is implemented?
In general, if I want to check how the modules are implemented, where is the best place to look?

Any pointer to the documentation will be appreciated. Thank you.
It depends on the backend (GPU, CPU, distributed etc.), but in the most interesting case of GPU it's pulled from cuDNN, which is released in binary format, and thus you can't inspect its source code. It's a similar story for CPU MKLDNN. I am not aware of any place where PyTorch would "hand-roll" its own convolution kernels, but I may be wrong. EDIT: indeed, I was wrong, as pointed out in an answer below.

It's difficult without knowing how PyTorch is structured. A lot of code is actually being autogenerated based on various markup files, as explained here. Figuring this out requires a lot of jumping around. For instance, the conv.cpp file you're linking uses torch::conv1d, which is defined here and uses at::convolution, which in turn uses at::_convolution, which dispatches to multiple variants, for instance at::cudnn_convolution. at::cudnn_convolution is, I believe, created here via a markup file and just plugs in directly to the cuDNN implementation (though I cannot pinpoint the exact point in the code where that happens).
https://stackoverflow.com/questions/53927358/
how to flatten input in `nn.Sequential` in Pytorch
how to flatten input inside the nn.Sequential Model = nn.Sequential(x.view(x.shape[0],-1), nn.Linear(784,256), nn.ReLU(), nn.Linear(256,128), nn.ReLU(), nn.Linear(128,64), nn.ReLU(), nn.Linear(64,10), nn.LogSoftmax(dim=1))
You can create a new module/class as below and use it in the sequential as you are using other modules (call Flatten()). class Flatten(torch.nn.Module): def forward(self, x): batch_size = x.shape[0] return x.view(batch_size, -1) Ref: https://discuss.pytorch.org/t/flatten-layer-of-pytorch-build-by-sequential-container/5983 EDIT: Flatten is part of torch now. See https://pytorch.org/docs/stable/nn.html?highlight=flatten#torch.nn.Flatten
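With the Flatten module above, the model from the question can be written entirely inside nn.Sequential; a sketch assuming 28x28 single-channel inputs (hence the 784):

import torch.nn as nn

model = nn.Sequential(
    Flatten(),              # (batch, 1, 28, 28) -> (batch, 784)
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
    nn.LogSoftmax(dim=1),
)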
https://stackoverflow.com/questions/53953460/
How does Pytorch's "Fold" and "Unfold" work?
I've gone through the official doc. I'm having a hard time understanding what this function is used for and how it works. Can someone explain this in layman's terms?
The unfold and fold functions are used to facilitate "sliding window" operations (like convolutions).

Suppose you want to apply a function foo to every 5x5 window in a feature map/image:

from torch.nn import functional as f
windows = f.unfold(x, kernel_size=5)

Now windows has a size of (batch, 5*5*x.size(1), num_windows); you can apply foo on windows:

processed = foo(windows)

Now you need to "fold" processed back to the original size of x:

out = f.fold(processed, x.shape[-2:], kernel_size=5)

You need to take care of padding and kernel_size, which may affect your ability to "fold" back processed to the size of x. Moreover, fold sums over overlapping elements, so you might want to divide the output of fold by the number of windows covering each element.

Please note that torch.unfold performs a different operation than nn.Unfold. See this thread for details.
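A small sketch of the overlap bookkeeping: fold-ing the unfold-ed patches of an all-ones tensor counts how many windows cover each pixel, and dividing by that count recovers the original input.

import torch
from torch.nn import functional as f

x = torch.randn(1, 3, 10, 10)
k = 5

patches = f.unfold(x, kernel_size=k)                  # (1, 3*5*5, 36)
recon = f.fold(patches, x.shape[-2:], kernel_size=k)  # overlaps get summed

ones = torch.ones_like(x)
divisor = f.fold(f.unfold(ones, kernel_size=k), x.shape[-2:], kernel_size=k)
print(torch.allclose(recon / divisor, x))             # True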
https://stackoverflow.com/questions/53972159/
PyTorch Getting Started example not working
I followed this tutorial in the Getting Started section on the PyTorch website: "Deep Learning with PyTorch: A 60 Minute Blitz" and I downloaded the code for "Training a Classifier" on the bottom of the page and I ran it, and it's not working for me. I'm using the CPU version of PyTorch if that makes a difference. I'm new to Python and basically learning it for Pytorch. Here's the error message, Control + K isn't working for me because I think the editing interface is different for the first few posts and Stack Overflow needs to fix it. Or it could just be my browser: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main prepare(preparation_data) File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path run_name="__mp_main__") File "C:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "C:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Anonymous\PycharmProjects\pytorchHelloWorld\train_network.py", line 100, in <module> dataiter = iter(trainloader) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__ return _DataLoaderIter(self) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__ w.start() File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data _check_not_importing_main() File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. 
Traceback (most recent call last): File "C:/Users/Anonymous/PycharmProjects/pytorchHelloWorld/train_network.py", line 100, in <module> dataiter = iter(trainloader) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__ return _DataLoaderIter(self) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__ w.start() File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__ reduction.dump(process_obj, to_child) File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) BrokenPipeError: [Errno 32] Broken pipe
The error is likely due to multiprocessing in DataLoader on Windows, since the tutorial is using num_workers=2. The Python 3 documentation shares some guidelines on this:

Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).

You can either set num_workers=0 or wrap your code within if __name__ == '__main__':

# Safe DataLoader multiprocessing with Windows
if __name__ == '__main__':
    # Code to load the data with num_workers > 1

Check this reply on the PyTorch forum for more details, and this issue on GitHub.
https://stackoverflow.com/questions/53974351/
Using expand_dims in pytorch
I'm trying to tile a length-18 one-hot vector into a 40x40 grid. Looking at the pytorch docs, expand dims seems to be what I need. But I cannot get it to work. Any idea what I'm doing wrong?

one_hot = torch.zeros(18).unsqueeze(0)
one_hot[0,1] = 1.0
one_hot
tensor([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

one_hot.expand(-1,-1,40,40)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
RuntimeError: The expanded size of the tensor (40) must match the existing size (18) at non-singleton dimension 3

I'm expecting a tensor of shape (1, 18, 40, 40).
expand works along singleton dimensions of the input tensor. In your example, you are trying to expand a 1-by-18 tensor along its (non-existent) third and fourth dimensions - this is why you are getting an error. The only singleton dimension (=dimension with size==1) you have is the first dimension. fix one_hot = torch.zeros(1,18,1,1, dtype=torch.float) # create the tensor with all singleton dimensions in place one_hot[0,1,0,0] = 1. one_hot.expand(-1,-1,40,40)
https://stackoverflow.com/questions/53975352/
pytorch - connection between loss.backward() and optimizer.step()
Where is the explicit connection between the optimizer and the loss? How does the optimizer know where to get the gradients of the loss without a call like optimizer.step(loss)?

-More context-

When I minimize the loss, I didn't have to pass the gradients to the optimizer.

loss.backward()  # Back Propagation
optimizer.step() # Gradient Descent
Without delving too deep into the internals of pytorch, I can offer a simplistic answer: Recall that when initializing optimizer you explicitly tell it what parameters (tensors) of the model it should be updating. The gradients are "stored" by the tensors themselves (they have a grad and a requires_grad attributes) once you call backward() on the loss. After computing the gradients for all tensors in the model, calling optimizer.step() makes the optimizer iterate over all parameters (tensors) it is supposed to update and use their internally stored grad to update their values. More info on computational graphs and the additional "grad" information stored in pytorch tensors can be found in this answer. Referencing the parameters by the optimizer can sometimes cause troubles, e.g., when the model is moved to GPU after initializing the optimizer. Make sure you are done setting up your model before constructing the optimizer. See this answer for more details.
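A tiny illustration of that handshake between backward() and step():

import torch

w = torch.nn.Parameter(torch.tensor([1.0]))
opt = torch.optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()   # dloss/dw = 2w
loss.backward()
print(w.grad)           # tensor([2.]) - gradient stored on the parameter itself
opt.step()              # optimizer reads w.grad and updates w in place
print(w.data)           # tensor([0.8000]) = 1.0 - 0.1 * 2.0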
https://stackoverflow.com/questions/53975717/