LSTM model implementation in PyTorch
I was trying to implement a CNN+LSTM model in PyTorch, but I have a problem with the LSTM part (I have never used an LSTM before). Could you write a many-to-one LSTM model class (image link: https://i.ibb.co/SRGWT5j/lstm.png )...
For nn.LSTM in PyTorch, as per the docs (https://pytorch.org/docs/stable/nn.html?highlight=lstm#torch.nn.LSTM), the constructor takes (embedding_size_dimension, hidden_size_dimension, number_of_layers) - currently ignoring the bidirectional parameter; we can also pass an initial hidden_state and cell_state. So we need to pass the model a tensor of shape [max sentence length, batch size, embedding size]. Just a sample model:

    class Model(nn.Module):
        def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
            super(Model, self).__init__()
            self.output_size = output_size
            self.n_layers = n_layers
            self.hidden_dim = hidden_dim

            self.embedding = nn.Embedding(vocab_size, embedding_dim)
            self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=drop_prob)

        def forward(self, sentence):
            batch_size = sentence.size(0)
            sentence = sentence.long()
            embeds = self.embedding(sentence)
            lstm_out, hidden = self.lstm(embeds)
            # lstm_out is of shape [max sentence length, batch size, hidden size],
            # so for simple many-to-one we can just use the output of the last cell of the LSTM
            out = lstm_out[-1, :, :]
            return out

You can refer to this link; it has a really nice explanation of LSTMs in PyTorch, and it also has a sample example of a SentimentNet model: https://blog.floydhub.com/long-short-term-memory-from-zero-to-hero-with-pytorch/
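A quick shape sanity check for the sample model above (the vocabulary size, dimensions, and sequence length below are made-up values for illustration only):

    import torch
    import torch.nn as nn

    model = Model(vocab_size=1000, output_size=4, embedding_dim=50, hidden_dim=64, n_layers=2)
    sentence = torch.randint(0, 1000, (35, 8))  # [max sentence length, batch size]
    out = model(sentence)
    print(out.shape)  # torch.Size([8, 64]): one last-step hidden vector per batch element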
https://stackoverflow.com/questions/56944903/
No matching distribution found for numpy (from torch==1.1.0)
Working on Ubuntu 18.04 in a VM. I have python3 installed at /usr/bin/python3.6 according to whereis python3. When I run pip3 freeze, I see numpy installed, with version 1.15.4, and when I enter the python3 console, I can import numpy as np perfectly fine with no errors.

What I am trying to do now is install PyTorch, specifically torch==1.1.0, without GPU/CUDA. I got the following installation instructions from https://pytorch.org/get-started/locally/:

    pip3 install https://download.pytorch.org/whl/cpu/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
    pip3 install https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl

I downloaded these wheel files and tried to install them from a local directory as such:

    /usr/bin/python3.6 -m pip install ~/Desktop/torch-1.1.0-cp36-cp36m-linux_x86_64.whl

and got the following error:

    No matching distribution found for numpy (from torch==1.1.0)

I don't understand why this is the case, because my numpy library is clearly already installed and working (read above: I am able to import numpy in the python3 console). Also, I had no issues installing PyTorch with GPU on my Windows machine, with torch 1.1.0 and numpy 1.15.4.

Alternatively, I tried to build PyTorch from source by moving into the PyTorch source directory and executing /usr/bin/python3 setup.py install, but it didn't work since I am not using a GPU and CUDA is not installed. I am not able to find instructions on how to build PyTorch from source without a GPU in a Linux environment. Can anyone kindly advise? Thank you!

Some additional information: (1) I tried to install PyTorch with the --no-dependencies flag as suggested by @Jan, and it installed, but I am not sure if Torch will really work perfectly. (2) which pip3 points to /usr/bin/pip3. (3) pip3 show numpy tells me that the location is at /home/name/.local/lib/python3.6/site-packages.
If you are sure that all dependencies are manually installed, you could try the --no-dependencies flag for pip3. Otherwise, I would point you to Anaconda as a Python environment provider; it is pretty convenient.
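For reference, a sketch of the suggested workaround (modern pip spells the flag --no-deps; the wheel path is the one from the question):

    pip3 install --no-deps ~/Desktop/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
    python3 -c "import torch; print(torch.__version__)"   # quick smoke test of the install

Since pip cannot see a numpy that python3 imports fine, it is also worth checking that both tools resolve to the same environment, e.g. by comparing pip3 --version against python3 -m pip --version.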
https://stackoverflow.com/questions/56949918/
PyTorch broadcasting: how does this work?
I'm new to deep learning. I'm studying from Udacity. I came across one of the codes to build up a neural network, where two tensors are being added, specifically the 'bias' tensor with the output of the tensor-multiplication product. It was kind of...

    def activation(x):
        return 1 / (1 + torch.exp(-x))

    inputs = images.view(images.shape[0], -1)
    w1 = torch.randn(784, 256)
    b1 = torch.randn(256)
    h = activation(torch.mm(inputs, w1) + b1)

After flattening the MNIST images, inputs came out as [64, 784]. I'm not getting how the bias tensor (b1) of dimension [256] could be added to the multiplication product of 'inputs' and 'w1', which comes out with dimensions [64, 256].
In simple terms, whenever we use "broadcasting" in a Python library (NumPy or PyTorch), what we are doing is treating our arrays (weight, bias) as dimensionally compatible. In other words, if you are operating with a product of shape [64, 256] and your bias is only [256], then broadcasting will complete the lacking dimension: the bias is effectively repeated along the missing dimension so that the operation can be done successfully. Hope this is helpful.
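A minimal, self-contained sketch of the shapes involved (random data standing in for the MNIST batch):

    import torch

    def activation(x):
        return 1 / (1 + torch.exp(-x))

    inputs = torch.randn(64, 784)   # a flattened batch of 64 images
    w1 = torch.randn(784, 256)
    b1 = torch.randn(256)

    prod = torch.mm(inputs, w1)     # shape [64, 256]
    h = activation(prod + b1)       # b1 ([256]) is broadcast to [64, 256]: one copy per row
    print(h.shape)                  # torch.Size([64, 256])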
https://stackoverflow.com/questions/56955037/
Apply preprocessing to the dataset
I am implementing a paper on image segmentation in pytorch. I am required to do some preprocessing steps, but as I am trying it for the first time, I am unable to incorporate them into the traditional pipeline. The preprocessing steps are:

1) N(w, h) = I(w, h) − G(w, h), where N is the normalized image, I is the original image, and G is the Gaussian-blurred image with kernel size 65x65, mean 0, and standard deviation 10.

2) Normalizing the mean image and dividing each pixel by the average standard deviation.

Following is my code snippet for the above steps:

    def gaussian_blur(img):
        image = cv2.GaussianBlur(img, (65, 65), 10)
        new_image = img - image
        return new_image

    def normalise(img):
        img_normalized = np.empty(img.shape)
        img_std = np.std(img)
        img_mean = np.mean(img)
        img_normalized = (img - img_mean) / img_std
        for i in range(img.shape[1]):
            img_normalized[i] = (img_normalized - np.mean(img_normalized)) / np.std(img_normalized)
        return img_normalized

I am really not sure how to add the above functions to the traditional pytorch data-loader pipeline: should I first load the dataset using ImageFolder and then apply them, or first apply them and then use the ImageFolder method?
This is how I did it. The solution to the first part is to define the required function and then call it in the transforms using the generic transforms.Lambda:

    def gaussian_blur(img):
        image = np.array(img)
        image_blur = cv2.GaussianBlur(image, (65, 65), 10)
        new_image = image - image_blur
        im = Image.fromarray(new_image)
        return im

The solution to the second part is to go through every image, calculate the mean and std deviation, and then finally pass those values to the transforms:

    train_mean = []
    train_std = []

    for i, image in enumerate(train_loader, 0):
        numpy_image = image[0].numpy()
        batch_mean = np.mean(numpy_image, axis=(0, 2, 3))
        batch_std = np.std(numpy_image, axis=(0, 2, 3))
        train_mean.append(batch_mean)
        train_std.append(batch_std)

    train_mean = torch.tensor(np.mean(train_mean, axis=0))
    train_std = torch.tensor(np.mean(train_std, axis=0))

    print('Mean:', train_mean)
    print('Std Dev:', train_std)

The final transform call looks like this:

    data_transforms = transforms.Compose([
        transforms.RandomCrop(512, 512),
        transforms.Lambda(gaussian_blur),
        transforms.RandomRotation([+90, +180]),
        transforms.RandomRotation([+180, +270]),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=train_mean, std=train_std)
    ])
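To the ordering question in the post: the composed transform is handed to ImageFolder when the dataset is created, and it then runs on every PIL image as it is loaded. A minimal sketch (the directory path is a placeholder):

    train_data = datasets.ImageFolder('data/train', transform=data_transforms)
    train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)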
https://stackoverflow.com/questions/56956454/
Can't change the Anchors in Faster RCNN
I'm a newbie in pytorch and I was trying to put some custom anchors on my Faster RCNN network in pytorch. Basically, I'm using a resnet50 backbone, and when I try to put the anchors, I get a mismatch error. This is the code that I have:

    backbone = torchvision.models.detection.backbone_utils.resnet_fpn_backbone('resnet50', True)
    backbone.out_channels = 256

    anchor_generator = AnchorGenerator(sizes=((4, 8, 16, 32, 64, 128),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))

    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                    output_size=7,
                                                    sampling_ratio=2)

    model = FasterRCNN(backbone,
                       num_classes=10,
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler)

The error that I get is the following:

    shape '[1440000, -1]' is invalid for input of size 7674336.
Alright, after some digging into the source code of the PyTorch Faster RCNN, I found how they initialize the anchors:

    anchor_sizes = ((32,), (64,), (128,), (256,), (512,))
    aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
    rpn_anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)

Following the same pattern for my custom anchors, the code is:

    anchor_sizes = ((4,), (8,), (16,), (32,), (64,), (128,))
    aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
    rpn_anchor_generator = AnchorGenerator(anchor_sizes, aspect_ratios)

It works!
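For context, the per-level pattern matters because an FPN backbone emits multiple feature maps, and the RPN pairs one sizes tuple with each map (the original single-tuple version assigned all six sizes to a single map, hence the shape mismatch). A quick way to inspect how many maps the backbone produces (a sketch, reusing the backbone from the question):

    import torch

    feature_maps = backbone(torch.randn(1, 3, 224, 224))  # OrderedDict of FPN levels
    print(list(feature_maps.keys()))                      # one AnchorGenerator sizes tuple per level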
https://stackoverflow.com/questions/56962533/
Cartpole-v0 loss increasing using DQN
Hi, I'm trying to train a DQN to solve gym's Cartpole problem. For some reason the loss looks like this (orange line in the plot attached to the original post). Can y'all take a look at my code and help with this? I've played around with the hyperparameters a decent bit so I don't think they're the issue here.

    class DQN(nn.Module):
        def __init__(self, input_dim, output_dim):
            super(DQN, self).__init__()
            self.linear1 = nn.Linear(input_dim, 16)
            self.linear2 = nn.Linear(16, 32)
            self.linear3 = nn.Linear(32, 32)
            self.linear4 = nn.Linear(32, output_dim)

        def forward(self, x):
            x = F.relu(self.linear1(x))
            x = F.relu(self.linear2(x))
            x = F.relu(self.linear3(x))
            return self.linear4(x)

    final_epsilon = 0.05
    initial_epsilon = 1
    epsilon_decay = 5000
    global steps_done
    steps_done = 0

    def select_action(state):
        global steps_done
        sample = random.random()
        eps_threshold = final_epsilon + (initial_epsilon - final_epsilon) * \
            math.exp(-1. * steps_done / epsilon_decay)
        if sample > eps_threshold:
            with torch.no_grad():
                state = torch.Tensor(state)
                steps_done += 1
                q_calc = model(state)
                node_activated = int(torch.argmax(q_calc))
                return node_activated
        else:
            node_activated = random.randint(0, 1)
            steps_done += 1
            return node_activated

    class ReplayMemory(object):  # Stores [state, reward, action, next_state, done]

        def __init__(self, capacity):
            self.capacity = capacity
            self.memory = [[], [], [], [], []]

        def push(self, data):
            """Saves a transition."""
            for idx, point in enumerate(data):
                #print("Col {} appended {}".format(idx, point))
                self.memory[idx].append(point)

        def sample(self, batch_size):
            rows = random.sample(range(0, len(self.memory[0])), batch_size)
            experiences = [[], [], [], [], []]
            for row in rows:
                for col in range(5):
                    experiences[col].append(self.memory[col][row])
            return experiences

        def __len__(self):
            return len(self.memory[0])

    input_dim, output_dim = 4, 2
    model = DQN(input_dim, output_dim)
    target_net = DQN(input_dim, output_dim)
    target_net.load_state_dict(model.state_dict())
    target_net.eval()

    tau = 2
    discount = 0.99
    learning_rate = 1e-4
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    memory = ReplayMemory(65536)
    BATCH_SIZE = 128

    def optimize_model():
        if len(memory) < BATCH_SIZE:
            return 0
        experiences = memory.sample(BATCH_SIZE)
        state_batch = torch.Tensor(experiences[0])
        action_batch = torch.LongTensor(experiences[1]).unsqueeze(1)
        reward_batch = torch.Tensor(experiences[2])
        next_state_batch = torch.Tensor(experiences[3])
        done_batch = experiences[4]

        pred_q = model(state_batch).gather(1, action_batch)

        next_state_q_vals = torch.zeros(BATCH_SIZE)

        for idx, next_state in enumerate(next_state_batch):
            if done_batch[idx] == True:
                next_state_q_vals[idx] = -1
            else:
                # .max in pytorch returns (values, idx), we only want vals
                next_state_q_vals[idx] = (target_net(next_state_batch[idx]).max(0)[0]).detach()

        better_pred = (reward_batch + next_state_q_vals).unsqueeze(1)

        loss = F.smooth_l1_loss(pred_q, better_pred)
        optimizer.zero_grad()
        loss.backward()
        for param in model.parameters():
            param.grad.data.clamp_(-1, 1)
        optimizer.step()
        return loss

    points = []
    losspoints = []

    #save_state = torch.load("models/DQN_target_11.pth")
    #model.load_state_dict(save_state['state_dict'])
    #optimizer.load_state_dict(save_state['optimizer'])

    env = gym.make('CartPole-v0')
    for i_episode in range(5000):
        observation = env.reset()
        episode_loss = 0
        if episode % tau == 0:
            target_net.load_state_dict(model.state_dict())
        for t in range(1000):
            #env.render()
            state = observation
            action = select_action(observation)
            observation, reward, done, _ = env.step(action)

            if done:
                next_state = [0, 0, 0, 0]
            else:
                next_state = observation

            memory.push([state, action, reward, next_state, done])
            episode_loss = episode_loss + float(optimize_model(i_episode))
            if done:
                points.append((i_episode, t + 1))
                print("Episode {} finished after {} timesteps".format(i_episode, t + 1))
                print("Avg Loss: ", episode_loss / (t + 1))
                losspoints.append((i_episode, episode_loss / (t + 1)))
                if (i_episode % 100 == 0):
                    eps = final_epsilon + (initial_epsilon - final_epsilon) * \
                        math.exp(-1. * steps_done / epsilon_decay)
                    print(eps)
                if ((i_episode + 1) % 5001 == 0):
                    save = {'state_dict': model.state_dict(),
                            'optimizer': optimizer.state_dict()}
                    torch.save(save, "models/DQN_target_" + str(i_episode // 5000) + ".pth")
                break
    env.close()

    x = [coord[0] * 100 for coord in points]
    y = [coord[1] for coord in points]
    x2 = [coord[0] * 100 for coord in losspoints]
    y2 = [coord[1] for coord in losspoints]
    plt.plot(x, y)
    plt.plot(x2, y2)
    plt.show()

I basically followed the tutorial pytorch has, except using the state returned by the env rather than the pixels. I also changed the replay memory because I was having issues there. Other than that, I left everything else pretty much the same.

Edit: I tried overfitting on a small batch; the loss looks like this without updating the target net and like this when updating it (plots in the original post).

Edit 2: This is definitely an issue with the target net; I tried removing it and the loss seemed to not increase exponentially.
Your tau value is too small; such a small target-network update period makes DQN training unstable. You can try using 1000 (OpenAI Baselines' DQN example) or 10000 (DeepMind's Nature paper).

In DeepMind's 2015 Nature paper, it states that:

    The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the targets yj in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network Q' and use Q' for generating the Q-learning targets yj for the following C updates to Q. This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases Q(st,at) often also increases Q(st+1, a) for all a and hence also increases the target yj, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets yj, making divergence or oscillations much more unlikely.

    Human-level control through deep reinforcement learning, Mnih et al., 2015

I've run your code with settings of tau=2, tau=10, tau=100, tau=1000, and tau=10000. An update frequency of tau=100 solves the problem (the agent reaches the maximum of 200 steps). [The original answer attached result plots for tau=2, tau=10, tau=100, tau=1000, and tau=10000.]

Below is the modified version of your code.

    import random
    import math
    import matplotlib.pyplot as plt
    import torch
    from torch import nn
    import torch.nn.functional as F
    import gym

    class DQN(nn.Module):
        def __init__(self, input_dim, output_dim):
            super(DQN, self).__init__()
            self.linear1 = nn.Linear(input_dim, 16)
            self.linear2 = nn.Linear(16, 32)
            self.linear3 = nn.Linear(32, 32)
            self.linear4 = nn.Linear(32, output_dim)

        def forward(self, x):
            x = F.relu(self.linear1(x))
            x = F.relu(self.linear2(x))
            x = F.relu(self.linear3(x))
            return self.linear4(x)

    final_epsilon = 0.05
    initial_epsilon = 1
    epsilon_decay = 5000
    global steps_done
    steps_done = 0

    def select_action(state):
        global steps_done
        sample = random.random()
        eps_threshold = final_epsilon + (initial_epsilon - final_epsilon) * \
            math.exp(-1. * steps_done / epsilon_decay)
        if sample > eps_threshold:
            with torch.no_grad():
                state = torch.Tensor(state)
                steps_done += 1
                q_calc = model(state)
                node_activated = int(torch.argmax(q_calc))
                return node_activated
        else:
            node_activated = random.randint(0, 1)
            steps_done += 1
            return node_activated

    class ReplayMemory(object):  # Stores [state, reward, action, next_state, done]

        def __init__(self, capacity):
            self.capacity = capacity
            self.memory = [[], [], [], [], []]

        def push(self, data):
            """Saves a transition."""
            for idx, point in enumerate(data):
                #print("Col {} appended {}".format(idx, point))
                self.memory[idx].append(point)

        def sample(self, batch_size):
            rows = random.sample(range(0, len(self.memory[0])), batch_size)
            experiences = [[], [], [], [], []]
            for row in rows:
                for col in range(5):
                    experiences[col].append(self.memory[col][row])
            return experiences

        def __len__(self):
            return len(self.memory[0])

    input_dim, output_dim = 4, 2
    model = DQN(input_dim, output_dim)
    target_net = DQN(input_dim, output_dim)
    target_net.load_state_dict(model.state_dict())
    target_net.eval()

    tau = 100
    discount = 0.99
    learning_rate = 1e-4
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    memory = ReplayMemory(65536)
    BATCH_SIZE = 128

    def optimize_model():
        if len(memory) < BATCH_SIZE:
            return 0
        experiences = memory.sample(BATCH_SIZE)
        state_batch = torch.Tensor(experiences[0])
        action_batch = torch.LongTensor(experiences[1]).unsqueeze(1)
        reward_batch = torch.Tensor(experiences[2])
        next_state_batch = torch.Tensor(experiences[3])
        done_batch = experiences[4]

        pred_q = model(state_batch).gather(1, action_batch)

        next_state_q_vals = torch.zeros(BATCH_SIZE)

        for idx, next_state in enumerate(next_state_batch):
            if done_batch[idx] == True:
                next_state_q_vals[idx] = -1
            else:
                # .max in pytorch returns (values, idx), we only want vals
                next_state_q_vals[idx] = (target_net(next_state_batch[idx]).max(0)[0]).detach()

        better_pred = (reward_batch + next_state_q_vals).unsqueeze(1)

        loss = F.smooth_l1_loss(pred_q, better_pred)
        optimizer.zero_grad()
        loss.backward()
        for param in model.parameters():
            param.grad.data.clamp_(-1, 1)
        optimizer.step()
        return loss

    points = []
    losspoints = []

    #save_state = torch.load("models/DQN_target_11.pth")
    #model.load_state_dict(save_state['state_dict'])
    #optimizer.load_state_dict(save_state['optimizer'])

    env = gym.make('CartPole-v0')
    for i_episode in range(5000):
        observation = env.reset()
        episode_loss = 0
        if i_episode % tau == 0:
            target_net.load_state_dict(model.state_dict())
        for t in range(1000):
            #env.render()
            state = observation
            action = select_action(observation)
            observation, reward, done, _ = env.step(action)

            if done:
                next_state = [0, 0, 0, 0]
            else:
                next_state = observation

            memory.push([state, action, reward, next_state, done])
            episode_loss = episode_loss + float(optimize_model())
            if done:
                points.append((i_episode, t + 1))
                print("Episode {} finished after {} timesteps".format(i_episode, t + 1))
                print("Avg Loss: ", episode_loss / (t + 1))
                losspoints.append((i_episode, episode_loss / (t + 1)))
                if (i_episode % 100 == 0):
                    eps = final_epsilon + (initial_epsilon - final_epsilon) * \
                        math.exp(-1. * steps_done / epsilon_decay)
                    print(eps)
                if ((i_episode + 1) % 5001 == 0):
                    save = {'state_dict': model.state_dict(),
                            'optimizer': optimizer.state_dict()}
                    torch.save(save, "models/DQN_target_" + str(i_episode // 5000) + ".pth")
                break
    env.close()

    x = [coord[0] * 100 for coord in points]
    y = [coord[1] for coord in points]
    x2 = [coord[0] * 100 for coord in losspoints]
    y2 = [coord[1] for coord in losspoints]
    plt.plot(x, y)
    plt.plot(x2, y2)
    plt.show()

Here's the result of your plotting code. [Plots for tau=100 and tau=10000 were attached in the original answer.]
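As an aside, the per-element loop in optimize_model can be replaced by a batched target computation; a sketch using the same variable names as the code above (behavior-equivalent, just vectorized):

    done_mask = torch.tensor(done_batch, dtype=torch.bool)
    next_q = target_net(next_state_batch).max(1)[0].detach()  # best target-net Q per sample
    next_q[done_mask] = -1                                    # terminal states keep the -1 value
    better_pred = (reward_batch + next_q).unsqueeze(1)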
https://stackoverflow.com/questions/56964657/
Fastest way to split tensors in groups of two and for each group, randomly assign one to 0 and the other to 1
I have a big tensor with size (128, 64, 1, 1). For every two small tensors of size (1, 1) in that big tensor, I want to randomly assign one to be 1 and the other to be 0. My following code works but it's awfully slow.

    # a is a big tensor of size (128, 64, 1, 1)
    for i in range(a.size()[0]):
        j = 0
        while j < a.size()[1] - 1:
            r = int(torch.randint(0, 2, (1,)))
            a[i][j + r] = 1
            a[i][j + 1 - r] = 0
            j += 2

Is there any way to avoid for loops and do every assignment in parallel? Thank you.
I think this might be faster:

    import torch

    x = torch.randn(128, 32, 2, 1, 1)
    y = x.max(dim=2, keepdim=True)[0]
    z = (y - x) > 0
    r = torch.flatten(z, 1, 2)

The idea is that you split your 64 dimension into 2, pick random values, and choose the minimal one of each pair to be 1 and the other to be 0; then you recombine the 32*2 back into a size of 64.
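A quick check that this trick produces the required pattern (z is boolean, so cast it if you need the same dtype as a):

    r = r.float()                               # 0./1. values, shape (128, 64, 1, 1)
    pairs = r.view(128, 32, 2, 1, 1)
    print(bool((pairs.sum(dim=2) == 1).all()))  # True: exactly one 1 per pair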
https://stackoverflow.com/questions/56966142/
Inplace add with list selectors
I'm facing somewhat inconsistent behaviour of PyTorch depending on whether the index is a list or an integer. Take a look at this snippet of code:

    # First example, integer selector ==> OK
    t = torch.tensor([[0, 1], [1, 0]])
    t[0, 0].add_(10)
    print(t)
    tensor([[10,  1],
            [ 1,  0]])

    # Second example, list selector ==> ???
    t = torch.tensor([[0, 1], [1, 0]])
    t[[0], [0]].add_(10)  # notice the list selector
    print(t)
    tensor([[0, 1],
            [1, 0]])

    # Third example, list selector with inplace add operator ==> OK
    t = torch.tensor([[0, 1], [1, 0]])
    t[[0], [0]] += 10
    print(t)
    tensor([[10,  1],
            [ 1,  0]])

I can't understand why PyTorch was unable to update t in the second example!
See the difference between the two indexings:

    In []: t[0, 0].shape
    Out[]: torch.Size([])

    In []: t[[0], [0]].shape
    Out[]: torch.Size([1])

When you index directly the (0, 0)th element of t, you have a reference to that entry and you can inplace add_ to it. The shape of t[0, 0] is [] - that is, you get a scalar back - the content of the (0, 0) entry.

However, when you use list indices ([0], [0]), you get back a 1-dim tensor of shape [1]. That is, you get a copy of a sub-tensor of t. When you then inplace add_ to that copy, it has no effect on the original t:

    In []: r = t[[0], [0]].add_(10)

    In []: t
    Out[]: tensor([[0, 1],
                   [1, 0]])

    In []: r
    Out[]: tensor([10])

Perhaps you want to look into index_add_() to accomplish your task.

Update

When you assign to t using list indices, you are not creating a copy (that would make no sense). So

    t[[0], [0]] += 10

translates to

    t[[0], [0]] = t[[0], [0]] + 10

That is, on the right-hand side we have a copy of the (0, 0) sub-tensor of t, and we are adding 10 to that sub-tensor, resulting in a shape [1] tensor with value [10]. On the left-hand side we assign this [10] to the (0, 0) sub-tensor of t (not to a copy of it - that would make no sense). Therefore the output of t[[0], [0]] += 10 is

    tensor([[10,  1],
            [ 1,  0]])
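A hedged sketch of the index_add_ suggestion for this concrete case (index_add_ adds along one dimension, so here we take row 0 and add into column 0):

    import torch

    t = torch.tensor([[0, 1], [1, 0]])
    t[0].index_add_(0, torch.tensor([0]), torch.tensor([10]))  # t[0] is a view, so t is updated
    print(t)  # tensor([[10,  1], [ 1,  0]])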
https://stackoverflow.com/questions/56968512/
NotImplementedError: when I try to create a DataLoader object in Pytorch
I created a custom Dataset class that inherits from PyTorch's Dataset class, in order to handle my custom dataset which I already preprocessed. When I try to create a DataLoader object, I get this error:

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn)
        174                 if sampler is None:
        175                     if shuffle:
    --> 176                         sampler = RandomSampler(dataset)
        177                     else:
        178                         sampler = SequentialSampler(dataset)

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py in __init__(self, data_source, replacement, num_samples)
         62                             "since a random permute will be performed.")
         63
    ---> 64         if not isinstance(self.num_samples, int) or self.num_samples <= 0:
         65             raise ValueError("num_samples should be a positive integer "
         66                              "value, but got num_samples={}".format(self.num_samples))

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py in num_samples(self)
         70         # dataset size might change at runtime
         71         if self._num_samples is None:
    ---> 72             return len(self.data_source)
         73         return self._num_samples
         74

    /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in __len__(self)
         18
         19     def __len__(self):
    ---> 20         raise NotImplementedError
         21
         22     def __add__(self, other):

    NotImplementedError:

So, the error message relates to the missing implementation of __len__ in dataset.py, right? But I did implement it, and __getitem__() and __init__() as well. How can I overcome this? Thank you
Make sure the name is correct in your code. It should be __len__.
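A minimal skeleton showing the dunder methods the sampler relies on (the names need double underscores on both sides; the data attribute is just an illustration):

    from torch.utils.data import Dataset

    class CustomDataset(Dataset):
        def __init__(self, data):
            self.data = data

        def __len__(self):            # RandomSampler calls len(dataset) -> this method
            return len(self.data)

        def __getitem__(self, idx):
            return self.data[idx]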
https://stackoverflow.com/questions/56971163/
What does this warning mean in PATE analysis?
Got this warning while doing PATE analysis:

    Warning: May not have used enough values of l. Increase 'moments' variable and run again.

    from syft.frameworks.torch.differential_privacy import pate

    data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1)
    print("Data Independent Epsilon:", data_ind_eps)
    print("Data Dependent Epsilon:", data_dep_eps)

The warning went away after increasing the value of the "moments" parameter of the pate.perform_analysis function, but I want to know why that is so.

    data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, moments=20)
    print("Data Independent Epsilon:", data_ind_eps)
    print("Data Dependent Epsilon:", data_dep_eps)
TL;DR: perform_analysis wants to double-check unusually small epsilon results by using a more granular computation.

The pate.perform_analysis function iterates through the data (technically the privacy loss random variable) and computes various epsilons. It uses the moments parameter to decide how granular this iteration should be. When using the default 8 moments, it will compute 8 epsilons and return the minimum of them, as you can see in the source code.

When this function returns a very small data-dependent epsilon, it could be because (a) the data has a high amount of agreement, or (b) the computation wasn't granular enough and the true epsilon is higher. When only 8 epsilons are computed, it's possible that they happened to be anomalies in the data that paint an overly optimistic picture of the overall epsilon! So when the function sees a surprisingly small epsilon, it warns you that you may want to increase the moments variable to compute more epsilons and make sure you've found the real minimum.

If you still get the same result when you increase your moments parameter, your data probably has a high amount of agreement, so it truly has a small data-dependent epsilon compared to its data-independent epsilon.

Hopefully that makes sense at a high level. If you want more details on the math behind this, you can check out the research paper that inspired the source code.
https://stackoverflow.com/questions/56975953/
Using pypi pretrained models vs PyTorch
I have two setups - one takes approx. 10 minutes to run, the other is still going after an hour:

10 m:

    import pretrainedmodels

    def resnext50_32x4d(pretrained=False):
        pretrained = 'imagenet' if pretrained else None
        model = pretrainedmodels.se_resnext50_32x4d(pretrained=pretrained)
        return nn.Sequential(*list(model.children()))

    learn = cnn_learner(data, resnext50_32x4d, pretrained=True, cut=-2,
                        split_on=lambda m: (m[0][3], m[1]), metrics=[accuracy, error_rate])

Not finishing:

    import torchvision.models as models

    def get_model(pretrained=True, model_name='resnext50_32x4d', **kwargs):
        arch = models.resnext50_32x4d(pretrained, **kwargs)
        return arch

    learn = Learner(data, get_model(), metrics=[accuracy, error_rate])

This is all copied and hacked from other people's code, so there are parts that I do not understand. But the most perplexing part is why one would be so much faster than the other. I would like to use the second option because it's easier for me to understand and I can just swap out the pretrained model to test different ones.
Both architectures are different. I assume you are using pretrained-models.pytorch.

Please notice you are using SE-ResNeXt in your first example and ResNeXt in the second (the standard one from torchvision). The first version uses a different block architecture (Squeeze and Excitation); the research paper describing it is linked in the original answer. I'm not sure about the exact differences between the two architectures and implementations except for the different building block used, but you could print both models and check for the differences.

Finally, there is a nice article (linked in the original answer) summarizing what Squeeze-and-Excitation is. Basically you do GlobalAveragePooling on all channels (in PyTorch it would be torch.nn.AdaptiveAvgPool2d(1) with a flatten afterwards), push it through two linear layers (with a ReLU activation in between) finished by a sigmoid in order to get weights for each channel. Finally you multiply the channels by those weights.

Additionally, you are doing something strange with the modules, transforming them to torch.nn.Sequential. There may be some logic in the forward call of the pretrained network that you are removing by copying the modules; it may play a part as well.
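A minimal sketch of the Squeeze-and-Excitation block as described above (the reduction ratio of 16 is an assumption taken from the SE paper's default, not from the answer):

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                            # excitation: per-channel weights in (0, 1)
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).flatten(1))         # shape (b, c)
            return x * w.view(b, c, 1, 1)                # reweight the channels

    out = SEBlock(64)(torch.randn(2, 64, 8, 8))  # quick shape check: output is (2, 64, 8, 8)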
https://stackoverflow.com/questions/56976791/
Need help regarding Transfer Learning a Faster RCNN ResNet50FPN in PyTorch
I am new to PyTorch. I'm trying to use a pre-trained Faster RCNN torchvision.models.detection.fasterrcnn_resnet50_fpn() for an object detection project. I have created a CustomDataset(Dataset) class to handle the custom dataset.

Here is the custom class implementation:

    class ToTensor(object):
        """Convert ndarrays in sample to Tensors."""

        def __call__(self, sample):
            image, landmarks = sample['image'], sample['meta_data']

            # swap color axis because
            # numpy image: H x W x C
            # torch image: C X H X W
            image = image.transpose((2, 0, 1))
            return {'image': torch.from_numpy(image),
                    'meta_data': landmarks}

    class CustomDataset(Dataset):
        """Custom Landmarks dataset."""

        def __init__(self, data_dir, root_dir, transform=None):
            """
            Args:
                data_dir (string): Directory with all the labels (json).
                root_dir (string): Directory with all the images.
                transform (callable, optional): Optional transform to be applied on a sample.
            """
            self.data_dir = data_dir
            self.root_dir = root_dir
            self.transform = transform

        def __len__(self):
            return len(os.listdir(self.data_dir))

        def __getitem__(self, idx):
            img_name = sorted(os.listdir(self.root_dir))[idx]
            image = io.imread(self.root_dir + '/' + img_name, plugin='matplotlib')

            json_file = sorted(os.listdir(self.data_dir))[idx]
            with open(self.data_dir + '/' + json_file) as f:
                meta_data = json.load(f)

            meta_data = meta_data['annotation']['object']
            sample = {'image': image, 'meta_data': meta_data}
            to_tensor = ToTensor()
            transformed_sample = to_tensor(sample)

            if self.transform:
                sample = self.transform(sample)

            return transformed_sample

Here is the train_model function:

    def train_model(model, criterion, optimizer, lr_scheduler, num_epochs=25):
        since = time.time()

        best_model = model
        best_acc = 0.0

        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch, num_epochs - 1))
            print('-' * 10)

            # Each epoch has a training and validation phase
            for phase in ['train', 'test']:
                if phase == 'train':
                    optimizer = lr_scheduler(optimizer, epoch)
                    model.train()  # Set model to training mode
                else:
                    model.eval()  # Set model to evaluate mode

                running_loss = 0.0
                running_corrects = 0

                for data in dset_loaders[phase]:
                    # get the inputs
                    inputs, labels = data['image'], data['meta_data']
                    inputs = inputs.to(device)

                    # zero the parameter gradients
                    optimizer.zero_grad()

                    # forward
                    outputs = model(inputs, labels)
                    _, preds = torch.max(outputs.data, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                    # statistics
                    running_loss += loss.item()
                    running_corrects += torch.sum(preds == labels).item()

                epoch_loss = running_loss / dset_sizes[phase]
                epoch_acc = running_corrects / dset_sizes[phase]

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

                # deep copy the model
                if phase == 'test' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model = copy.deepcopy(model)

            print()

        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))
        return best_model

While performing

    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)

I am getting "RuntimeError: _thnn_upsample_bilinear2d_forward not supported on CUDAType for Byte".
It appears your datapoints are byte tensors, i.e. of type uint8. Try casting your data into float32:

    # Replace this
    inputs = inputs.to(device)
    # With this
    inputs = inputs.float().to(device)

Note that the torchvision models expect data to be normalized in a specific way. Check here for the procedure, which basically entails using

    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

for normalizing your data.
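Since the custom ToTensor in the question only transposes axes, the image stays uint8 in the 0-255 range. An alternative sketch that both converts and rescales at the dataset level (assuming image is an H x W x C uint8 ndarray, as io.imread returns):

    image = image.transpose((2, 0, 1))                # H x W x C -> C x H x W
    image = torch.from_numpy(image).float() / 255.0   # uint8 0-255 -> float32 0-1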
https://stackoverflow.com/questions/57005162/
How does the standard normal distribution work in practice in NumPy and PyTorch?
I have two points to ask about:

1) I would like to understand what precisely is returned by np.random.randn from NumPy and torch.randn from PyTorch. They both return a tensor with random numbers from a normal distribution with mean 0 and std 1, hence a standard normal distribution. However, it is not the same thing as putting x values into the standard normal distribution function and getting the respective image values y. The values returned by PyTorch and NumPy do not look like that. To me, it seems that both np.random.randn and torch.randn return the x values, not the images y as I calculated below. Is that correct?

    normal = np.array([(1/np.sqrt(2*np.pi))*np.exp(-(1/2)*(i**2)) for i in range(-38, 39)])

Printing the normal variable shows me something like this:

    array([1.10e-314, 2.12e-298, 1.51e-282, 3.94e-267, 3.79e-252, 1.34e-237,
           1.75e-223, 8.36e-210, 1.47e-196, 9.55e-184, 2.28e-171, 2.00e-159,
           6.45e-148, 7.65e-137, 3.34e-126, 5.37e-116, 3.17e-106, 6.90e-097,
           5.52e-088, 1.62e-079, 1.76e-071, 7.00e-064, 1.03e-056, 5.53e-050,
           1.10e-043, 8.00e-038, 2.15e-032, 2.12e-027, 7.69e-023, 1.03e-018,
           5.05e-015, 9.13e-012, 6.08e-009, 1.49e-006, 1.34e-004, 4.43e-003,
           5.40e-002, 2.42e-001, 3.99e-001, 2.42e-001, 5.40e-002, 4.43e-003,
           1.34e-004, 1.49e-006, 6.08e-009, 9.13e-012, 5.05e-015, 1.03e-018,
           7.69e-023, 2.12e-027, 2.15e-032, 8.00e-038, 1.10e-043, 5.53e-050,
           1.03e-056, 7.00e-064, 1.76e-071, 1.62e-079, 5.52e-088, 6.90e-097,
           3.17e-106, 5.37e-116, 3.34e-126, 7.65e-137, 6.45e-148, 2.00e-159,
           2.28e-171, 9.55e-184, 1.47e-196, 8.36e-210, 1.75e-223, 1.34e-237,
           3.79e-252, 3.94e-267, 1.51e-282, 2.12e-298, 1.10e-314])

2) Also, if we ask these libraries for a matrix of values from a standard normal distribution, does it mean that all rows and columns are drawn from the same standard distribution? If I wanted i.i.d. distributions in every row, would I need to call np.random.randn in a for loop for each row and then vstack them?
1) Yes, they give you x and not phi(x), since the formula for phi(x) gives the probability density of sampling a value x. If you want to know the probability of getting values in an interval [a, b], you need to integrate phi(x) between a and b. Intuitively, if you look at the function phi(x), you'll see that you're more likely to get values near zero than, say, values near 1.

An easy way to see it is to look at the histogram of the sampled values:

    import numpy as np
    import matplotlib.pyplot as plt

    samples = np.random.normal(size=[1000])
    plt.hist(samples)

2) They're i.i.d. Just use a 2d size like so:

    samples = np.random.normal(size=[10, 10])
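A quick empirical check tying the two views together: overlaying the density phi(x) on a normalized histogram of the samples shows that the samples are x values distributed according to phi:

    import numpy as np
    import matplotlib.pyplot as plt

    samples = np.random.randn(100000)
    xs = np.linspace(-4, 4, 200)
    phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)

    plt.hist(samples, bins=100, density=True)  # density=True normalizes the histogram area to 1
    plt.plot(xs, phi)                          # the two curves should coincide
    plt.show()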
https://stackoverflow.com/questions/57011855/
how to concatenate embedding layer in pytorch
I am trying to concatenate an embedding layer with other features. It doesn't give me any error, but it doesn't do any training either. Is anything wrong with this model definition? How can I debug this?

Note: The last column (feature) in my X is a feature with word2ix (a single word).

Note: The net works fine without the embedding feature/layer.

(originally posted on the pytorch forum)

    class Net(torch.nn.Module):
        def __init__(self, n_features, h_sizes, num_words, embed_dim, out_size, dropout=None):
            super().__init__()

            self.num_layers = len(h_sizes)  # hidden + input
            self.embedding = torch.nn.Embedding(num_words, embed_dim)
            self.hidden = torch.nn.ModuleList()
            self.bnorm = torch.nn.ModuleList()
            if dropout is not None:
                self.dropout = torch.nn.ModuleList()
            else:
                self.dropout = None
            for k in range(len(h_sizes)):
                if k == 0:
                    self.hidden.append(torch.nn.Linear(n_features, h_sizes[0]))
                    self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[0]))
                    if self.dropout is not None:
                        self.dropout.append(torch.nn.Dropout(p=dropout))
                else:
                    if k == 1:
                        input_dim = h_sizes[0] + embed_dim
                    else:
                        input_dim = h_sizes[k-1]
                    self.hidden.append(torch.nn.Linear(input_dim, h_sizes[k]))
                    self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[k]))
                    if self.dropout is not None:
                        self.dropout.append(torch.nn.Dropout(p=dropout))

            # Output layer
            self.out = torch.nn.Linear(h_sizes[-1], out_size)

        def forward(self, inputs):
            # Feedforward
            for l in range(self.num_layers):
                if l == 0:
                    x = self.hidden[l](inputs[:, :-1])
                    x = self.bnorm[l](x)
                    if self.dropout is not None:
                        x = self.dropout[l](x)
                    embeds = self.embedding(inputs[:, -1])  # .view((1, -1)
                    x = torch.cat((embeds, x), dim=1)
                else:
                    x = self.hidden[l](x)
                    x = self.bnorm[l](x)
                    if self.dropout is not None:
                        x = self.dropout[l](x)
                x = F.relu(x)
            output = self.out(x)
            return output
There were a few issues. The key one was data type: I mixed float features and int indices.

Sample data and training before the fix:

    NUM_TARGETS = 4
    NUM_FEATURES = 3
    NUM_TEXT_FEATURES = 1

    x = np.random.rand(5, NUM_FEATURES)
    y = np.random.rand(5, NUM_TARGETS)

    word_ix = np.arange(5).reshape(-1, 1).astype(int)
    x_train = np.append(x, word_ix, axis=1)

    x_train = torch.from_numpy(x).float().to(device)
    y_train = torch.from_numpy(y).float().to(device)

    h_sizes = [2, 2]
    net = Net(x_train.shape[1], h_sizes=h_sizes, num_words=5, embed_dim=2,
              out_size=y_train.shape[1], dropout=.01)  # define the network
    print(net)  # net architecture
    net = net.float()
    net.to(device)

    optimizer = torch.optim.Adam(net.parameters(), lr=0.0001, weight_decay=.01)
    loss_func = torch.nn.MSELoss()  # this is for regression mean squared loss

    # one training loop
    prediction = net(x_train)  # input x and predict based on x
    loss = loss_func(prediction, y_train)  # must be (1. nn output, 2. target)
    optimizer.zero_grad()  # clear gradients for next train
    loss.backward()  # backpropagation, compute gradients
    optimizer.step()  # apply gradients
    # train_losses.append(loss.detach().to('cpu').numpy())

To resolve this, I separated the word index feature from x and also removed net.float().

The dtype conversions changed to:

    x_train = torch.from_numpy(x).float().to(device)
    y_train = torch.from_numpy(y).float().to(device)

    # NOTE: word index needs to be long
    word_ix = torch.from_numpy(word_ix).to(torch.long).to(device)

and the forward method changed to:

    def forward(self, inputs, word_ix):
        # Feedforward
        for l in range(self.num_layers):
            if l == 0:
                x = self.hidden[l](inputs)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)

                embeds = self.embedding(word_ix)
                # NOTE:
                # embeds has a shape of (batch_size, 1, embed_dim);
                # in order to merge this with x, reshape it to
                # (batch_size, embed_dim)
                embeds = embeds.view(embeds.shape[0], embeds.shape[2])
                x = torch.cat((x, embeds.view(x.shape)), dim=1)
            else:
                x = self.hidden[l](x)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)
            x = F.relu(x)
        output = self.out(x)
        return output
https://stackoverflow.com/questions/57029817/
how to know model's input size in onnx?
The size of the input is not specified in pytorch; the layers are just sized by their kernels to produce the output. Yet the WinMLDashboard shows the width and height of the image input. How is that possible?
Do you mean when you serialize the network from pytorch to onnx? Because when you export from pytorch, you need to define the size of the input as per the documentation:

    dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
    model = torchvision.models.alexnet(pretrained=True).cuda()

    input_names = ["actual_input_1"]
    output_names = ["output1"]

    torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True,
                      input_names=input_names, output_names=output_names)
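To connect this back to the question: the exported ONNX file records the dummy input's shape, and that is what tools like WinMLDashboard read back. A sketch of inspecting it with the onnx package (for the alexnet example above, the recorded dims would be 10, 3, 224, 224):

    import onnx

    m = onnx.load("alexnet.onnx")
    print(m.graph.input[0].type.tensor_type.shape)  # input dims recorded at export time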
https://stackoverflow.com/questions/57032721/
"No route to host" error in torch.distributed.init_process_group
I'm trying to train a simple pytorch model on two GPU servers in parallel. I have compiled pytorch from source. The program gives a "RuntimeError: No route to host" when the process runs on the second server. How do I fix it?

I have tried the anaconda installation and the source installation of pytorch, CUDA, and NCCL. I have copied the following code from https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html:

    import os
    from datetime import datetime
    import argparse
    import torch.multiprocessing as mp
    import torchvision
    import torchvision.transforms as transforms
    import torch
    import torch.nn as nn
    import torch.distributed as dist
    #from apex.parallel import DistributedDataParallel as DDP
    #from apex import amp

    class ConvNet(nn.Module):
        def __init__(self, num_classes=10):
            super(ConvNet, self).__init__()
            self.layer1 = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, stride=2)
            )
            self.layer2 = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, stride=2)
            )
            self.fc = nn.Linear(7*7*32, num_classes)

        def forward(self, x):
            out = self.layer1(x)
            out = self.layer2(out)
            out = out.reshape(out.size(0), -1)
            out = self.fc(out)
            return out

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
        parser.add_argument('-g', '--gpus', default=1, type=int, help='number of gpus per node')
        parser.add_argument('-nr', '--nr', default=0, type=int, help='ranking within the nodes')
        parser.add_argument('--epochs', default=2, type=int, metavar='N',
                            help='number of total epochs to run')
        args = parser.parse_args()
        args.world_size = args.gpus * args.nodes
        os.environ['MASTER_ADDR'] = '192.168.0.238'
        os.environ['MASTER_PORT'] = '8888'
        mp.spawn(train, nprocs=args.gpus, args=(args,))
        #train(0, args)

    def train(gpu, args):
        rank = args.nr * args.gpus + gpu
        dist.init_process_group(
            backend='nccl',
            init_method='env://',
            world_size=args.world_size,
            rank=rank
        )
        model = ConvNet()
        print('gpu:', gpu)
        torch.cuda.set_device(gpu)
        model.cuda(gpu)
        batch_size = 100
        criterion = nn.CrossEntropyLoss().cuda(gpu)
        optimizer = torch.optim.SGD(model.parameters(), 1e-4)
        model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])

        train_dataset = torchvision.datasets.MNIST(root='./data',
                                                   train=True,
                                                   transform=transforms.ToTensor(),
                                                   download=True)
        train_sampler = torch.utils.data.distributed.DistributedSampler(
            train_dataset,
            num_replicas=args.world_size,
            rank=rank
        )
        train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                                   batch_size=batch_size,
                                                   shuffle=False,
                                                   num_workers=0,
                                                   pin_memory=True,
                                                   sampler=train_sampler)

        start = datetime.now()
        total_step = len(train_loader)
        for epoch in range(args.epochs):
            for i, (images, labels) in enumerate(train_loader):
                images = images.cuda(non_blocking=True)
                labels = labels.cuda(non_blocking=True)

                outputs = model(images)
                loss = criterion(outputs, labels)

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                if (i+1) % 100 == 0 and gpu == 0:
                    print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
                        epoch + 1, args.epochs, i + 1, total_step, loss.item()
                    ))
        if gpu == 0:
            print("Training complete in:" + str(datetime.now() - start))

    if __name__ == '__main__':
        main()
TL;DR: This may be caused by the firewall. Follow https://serverfault.com/a/890421 to trust your slave node.

A step-by-step checklist:

1. Install netcat to help us check the network

For Ubuntu:

    apt-get install netcat

2. Check if your master process is available

On node0 (the master node):

    nc -vv localhost <port>

The output

    Connection to localhost <port> port [tcp/tproxy] succeeded!

means your main process is running correctly. Otherwise, check whether the program on your master node is running correctly.

3. Close the firewall

If the master node program is working and the two nodes are supposed to connect well, this may be a firewall issue. For firewalld, consider setting your node as a trusted zone. See https://serverfault.com/a/890421 for more details.
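A sketch of the firewalld commands implied by step 3 (the subnet here is derived from the question's MASTER_ADDR and is an assumption; adjust it to your network):

    firewall-cmd --permanent --zone=trusted --add-source=192.168.0.0/24
    firewall-cmd --reload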
https://stackoverflow.com/questions/57035589/
Save model with updated weights in pytorch
I have a model that I want to train for 5 epochs. Then, I would like to see where the model was wrong and increase the training set accordingly. How can I save the following model with the learnt weights?

    trainer_ = Trainer(network=network,
                       optimizer=optim.Adam(network.parameters(), lr=0.001),
                       loss_function=loss_function,
                       train_loader=train_loader,
                       valid_every=100,
                       print_every=50,
                       save_every=15000,
                       save_path=".",
                       cudaok=is_cuda_available)
    trainer_.run(4, is_cuda_available)

I have tried this:

    path = os.path.join(project_path, 'model.pth')
    torch.save(network.cpu().state_dict(), path)  # saving model

But I don't really think that the network object contains weights. I am very confused here. Can anyone help? Thank you!
network.state_dict() is a dictionary; try this to see your weights:

    for param in network.state_dict():
        print(param, "\n", network.state_dict()[param])
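And to come back later with the learnt weights, the matching load step is (a sketch; Network() stands for whatever constructor builds your architecture):

    network = Network()                        # hypothetical constructor for your model class
    network.load_state_dict(torch.load(path))  # path is the 'model.pth' saved in the question
    network.eval()                             # switch to evaluation mode before inspecting errors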
https://stackoverflow.com/questions/57040269/
Pytorch LSTM vs LSTMCell
What is the difference between LSTM and LSTMCell in Pytorch (currently version 1.1)? It seems that LSTMCell is a special case of LSTM (i.e. with only one layer, unidirectional, no dropout). Then, what's the purpose of having both implementations? Unless I'm missing something, it's trivial to use an LSTM object as an LSTMCell (or alternatively, it's pretty easy to use multiple LSTMCells to create the LSTM object)
Yes, you can emulate one by the other; the reason for having them separate is efficiency.

LSTMCell is a cell that takes as arguments:

- an input of shape batch × input dimension;
- a tuple of LSTM hidden states of shape batch × hidden dimension.

It is a straightforward implementation of the equations.

LSTM is a layer applying an LSTM cell (or multiple LSTM cells) in a "for loop", but the loop is heavily optimized using cuDNN. Its input is:

- a three-dimensional tensor of inputs of shape batch × input length × input dimension;
- optionally, an initial state of the LSTM, i.e., a tuple of hidden states of shape batch × hidden dim (or a tuple of such tuples if the LSTM is bidirectional).

You often might want to use the LSTM cell in a different context than applying it over a sequence, i.e. make an LSTM that operates over a tree-like structure. When you write a decoder in sequence-to-sequence models, you also call the cell in a loop and stop the loop when the end-of-sequence symbol is decoded.
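A minimal sketch of the emulation mentioned above: a single-layer, unidirectional LSTM unrolled by hand with LSTMCell (dimensions are made-up; nn.LSTM's default time-first layout is used):

    import torch
    import torch.nn as nn

    batch, seq_len, in_dim, hid_dim = 4, 7, 10, 16
    x = torch.randn(seq_len, batch, in_dim)

    cell = nn.LSTMCell(in_dim, hid_dim)
    h = torch.zeros(batch, hid_dim)
    c = torch.zeros(batch, hid_dim)
    outputs = []
    for t in range(seq_len):          # the "for loop" that nn.LSTM runs internally (via cuDNN)
        h, c = cell(x[t], (h, c))
        outputs.append(h)
    out = torch.stack(outputs)        # shape [seq_len, batch, hid_dim], like nn.LSTM's output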
https://stackoverflow.com/questions/57048120/
why "RuntimeError CUDA out of memory" in testing?
The same model ran fine for training with batch-size=5. I had reduced the batch size from 80 to 5 during training because of the same error. I am using a GPU with 11GB of memory instead of the Titan X (12GB memory) used by the author in the actual experiment.

However, now in testing, which has only batch-size=1, it is not running. The issue is in the I-frame model's testing phase; the other two models have successfully produced results for testing.

Following is my testing command:

    time python test.py --arch resnet152 --data-name ucf101 --representation iframe --data-root data/ucf101/mpeg4_videos --test-list data/datalists/ucf101_split1_test.txt --weights ucf101_iframe_model_iframe_model_best.pth.tar --save-scores iframe_score_file

I have used nvidia-smi to make sure nothing else is running on the GPU.

Following is the actual error message:

    RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 10.92 GiB total capacity; 10.12 GiB already allocated; 245.50 MiB free; 21.69 MiB cached)

What could be the issue and how can it be fixed?

EDIT: By removing the following two lines from test.py, it starts running without a memory issue, but it is taking ages to process:

    net = torch.nn.DataParallel(net.cuda(devices[0]), device_ids=devices)
    net.eval()

Yes, the above lines are for GPU-based parallel processing. Still, is there a solution to my problem?
I have run this model's testing on GPUs with as little as 8GB of memory by adding the following flag to the testing command given in the question:

    --test-crops 1
https://stackoverflow.com/questions/57067605/
Pytorch tensor indexing
I am currently working on converting some code from tensorflow to pytorch, and I encountered a problem with the tf.gather function; there is no direct equivalent for it in pytorch. What I am trying to do is basically indexing: I have two tensors, a feature tensor A of shape [minibatch, 60, 2] and an indexing tensor B of shape [minibatch, 8]. In Tensorflow, it is directly done with

    out = tf.gather(A, B, batch_dims=1)

How do I achieve this in pytorch? I have tried A[B] indexing. This does not work, and A[0][B[0]] works but its output is of shape [8, 2]; I need the shape [minibatch, 8, 2]. It would probably work if I could stack tensors like [stack, 8, 2], but I have no idea how to do it.
I think you are looking for torch.gather:

    out = torch.gather(A, 1, B[..., None].expand(*B.shape, A.shape[-1]))
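A quick shape check of that one-liner on random data (torch.gather needs the index tensor to match the output shape, hence the expand to [minibatch, 8, 2]):

    import torch

    A = torch.randn(32, 60, 2)
    B = torch.randint(0, 60, (32, 8))
    out = torch.gather(A, 1, B[..., None].expand(*B.shape, A.shape[-1]))
    print(out.shape)                         # torch.Size([32, 8, 2])
    print(torch.equal(out[0], A[0][B[0]]))   # True: matches the per-sample indexing from the question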
https://stackoverflow.com/questions/57071002/
How to fix 'Expected object of scalar type Float but got scalar type Double for argument #4 'mat1''?
I am trying to build an lstm model. My model code is below. My input has 4 features, a sequence length of 5, and a batch size of 32.

    class RNN(nn.Module):
        def __init__(self, feature_dim, output_size, hidden_dim, n_layers, dropout=0.5):
            """
            Initialize the PyTorch RNN Module
            :param feature_dim: The number of input dimensions of the neural network
            :param output_size: The number of output dimensions of the neural network
            :param hidden_dim: The size of the hidden layer outputs
            :param dropout: dropout to add in between LSTM/GRU layers
            """
            super(RNN, self).__init__()

            # set class variables
            self.output_size = output_size
            self.n_layers = n_layers
            self.hidden_dim = hidden_dim

            # define model layers
            self.lstm = nn.LSTM(feature_dim, hidden_dim, n_layers, batch_first=True)
            self.fc = nn.Linear(hidden_dim, output_size)
            self.dropout = nn.Dropout(dropout)

        def forward(self, nn_input, hidden):
            """
            Forward propagation of the neural network
            :param nn_input: The input to the neural network
            :param hidden: The hidden state
            :return: Two Tensors, the output of the neural network and the latest hidden state
            """
            # Get Batch Size
            batch_size = nn_input.size(0)

            # Pass through LSTM layer
            lstm_out, hidden = self.lstm(nn_input, hidden)
            # Stack up LSTM outputs
            lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

            # Add dropout and pass through fully connected layer
            x = self.dropout(lstm_out)
            x = self.fc(lstm_out)

            # reshape to be batch_size first
            output = x.view(batch_size, -1, self.output_size)
            # get last batch of labels
            out = output[:, -1]

            # return one batch of output word scores and the hidden state
            return out, hidden

        def init_hidden(self, batch_size):
            '''
            Initialize the hidden state of an LSTM/GRU
            :param batch_size: The batch_size of the hidden state
            :return: hidden state of dims (n_layers, batch_size, hidden_dim)
            '''
            # initialize state with zero weights, and move to GPU if available
            weight = next(self.parameters()).data
            if is_gpu_available:
                hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device),
                          weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device))
            else:
                hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                          weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
            return hidden

When I train, I get this error:

    RuntimeError                              Traceback (most recent call last)
    /usr/local/bin/kernel-launchers/python/scripts/launch_ipykernel.py in <module>
          3
          4 # training the model
    ----> 5 trained_rnn = train_rnn(rnn, batch_size, optimizer, num_epochs, show_every_n_batches)
          6
          7 # saving the trained model

    /usr/local/bin/kernel-launchers/python/scripts/launch_ipykernel.py in train_rnn(rnn, batch_size, optimizer, n_epochs, show_every_n_batches)
         18
         19         # forward, back prop
    ---> 20         loss, hidden = forward_back_prop(rnn, optimizer, inputs, labels, hidden)
         21         # record loss
         22         batch_losses.append(loss)

    /usr/local/bin/kernel-launchers/python/scripts/launch_ipykernel.py in forward_back_prop(rnn, optimizer, inp, target, hidden)
         22
         23     # get the output from the model
    ---> 24     output, h = rnn(inp, h)
         25
         26     # calculate the loss and perform backprop

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        491             result = self._slow_forward(*input, **kwargs)
        492         else:
    --> 493             result = self.forward(*input, **kwargs)
        494         for hook in self._forward_hooks.values():
        495             hook_result = hook(self, input, result)

    /usr/local/bin/kernel-launchers/python/scripts/launch_ipykernel.py in forward(self, nn_input, hidden)
         36
         37         # Pass through LSTM layer
    ---> 38         lstm_out, hidden = self.lstm(nn_input, hidden)
         39         # Stack up LSTM outputs
         40         lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        491             result = self._slow_forward(*input, **kwargs)
        492         else:
    --> 493             result = self.forward(*input, **kwargs)
        494         for hook in self._forward_hooks.values():
        495             hook_result = hook(self, input, result)

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
        557             return self.forward_packed(input, hx)
        558         else:
    --> 559             return self.forward_tensor(input, hx)
        560
        561

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in forward_tensor(self, input, hx)
        537             unsorted_indices = None
        538
    --> 539         output, hidden = self.forward_impl(input, hx, batch_sizes, max_batch_size, sorted_indices)
        540
        541         return output, self.permute_hidden(hidden, unsorted_indices)

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in forward_impl(self, input, hx, batch_sizes, max_batch_size, sorted_indices)
        520         if batch_sizes is None:
        521             result = _VF.lstm(input, hx, self._get_flat_weights(), self.bias, self.num_layers,
    --> 522                               self.dropout, self.training, self.bidirectional, self.batch_first)
        523         else:
        524             result = _VF.lstm(input, batch_sizes, hx, self._get_flat_weights(), self.bias,

    RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'

I am not able to figure out the cause of this error. How do I fix it? Please help. Also, is this the correct way of implementing the LSTM, or is there a better way to achieve the same?
torch.nn.LSTM does not need any hidden-state initialization, as the hidden state is initialized to zeros by default (see the documentation). Furthermore, torch.nn.Module already has a predefined cuda() method, so one can simply move the module to GPU; hence you can safely delete init_hidden(self, batch_size).

You have this error because your input is of type torch.Double, while modules by default use torch.Float (as it's accurate enough, faster, and smaller than torch.Double). You can cast your input tensors by calling .float(); in your case it could look like this:

    def forward(self, nn_input, hidden):
        nn_input = nn_input.float()
        ...  # rest of your code

Finally, there is no need for the hidden argument if it's always zeroes; you can simply use:

    lstm_out, hidden = self.lstm(nn_input)  # no hidden here, as hidden is zeroes by default as well
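An alternative to casting inside forward is to fix the dtype where the batches are built, since the Double usually comes from NumPy's default float64 (a sketch; the features array stands in for whatever NumPy data feeds the loader):

    import numpy as np
    import torch

    features = np.random.rand(32, 5, 4)         # float64 by default: batch of 32, seq len 5, 4 features
    batch = torch.from_numpy(features).float()  # cast once, at the source
    print(batch.dtype)                          # torch.float32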
https://stackoverflow.com/questions/57076709/
Pytorch tensor to numpy gives "()" as shape
I have a pytorch tensor:

    span_end = tensor([[[13]]])

I do the following:

    span_end = span_end.view(1).squeeze().data.numpy()
    print(type(span_end))
    print(span_end.shape)

This gives me the following output:

    <class 'numpy.ndarray'>
    ()

Then later, when I try to access the 0th element of span_end, I get an IndexError because the shape is somehow null. What am I doing wrong here?
tensor.squeeze() will remove all dimensions of size 1, which in this case is all of them, so it results in a tensor with no dimensions. Removing that call will work:

    import torch

    span_end = torch.tensor([[[13]]])
    span_end = span_end.view(1).numpy()
    print(type(span_end))
    print(span_end.shape)
    print(span_end[0])

Outputs:

    <class 'numpy.ndarray'>
    (1,)
    13
https://stackoverflow.com/questions/57078778/
img should be PIL Image. Got
I'm trying to iterate through a loader to check that it's working; however, the below error is given:

    TypeError: img should be PIL Image. Got <class 'torch.Tensor'>

I've tried adding both transforms.ToTensor() and transforms.ToPILImage(), and it gives me an error asking for the opposite: i.e., with ToPILImage() it will ask for a tensor, and vice versa.

    # Imports here
    %matplotlib inline
    import matplotlib.pyplot as plt
    from torch import nn, optim
    import torch.nn.functional as F
    import torch
    from torchvision import transforms, datasets, models
    import seaborn as sns
    import pandas as pd
    import numpy as np

    data_dir = 'flowers'
    train_dir = data_dir + '/train'
    valid_dir = data_dir + '/valid'
    test_dir = data_dir + '/test'

    # Creating transform for training set
    train_transforms = transforms.Compose(
        [transforms.Resize(255),
         transforms.CenterCrop(224),
         transforms.ToTensor(),
         transforms.RandomHorizontalFlip(),
         transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    # Creating transform for test set
    test_transforms = transforms.Compose(
        [transforms.Resize(255),
         transforms.CenterCrop(224),
         transforms.ToTensor(),
         transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    # Transforming all data
    train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
    test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
    valid_data = datasets.ImageFolder(valid_dir, transform=test_transforms)

    # Creating data loaders for test and training sets
    trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
    testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

    images, labels = next(iter(trainloader))

It should allow me to simply see the image once I run plt.imshow(images[0]), if it's working correctly.
transforms.RandomHorizontalFlip() works on PIL.Images, not torch.Tensor. In your code above, you are applying transforms.ToTensor() prior to transforms.RandomHorizontalFlip(), which results in a tensor. But, as per the official pytorch documentation here, transforms.RandomHorizontalFlip() horizontally flips the given PIL Image randomly with a given probability. So, just change the order of your transformations in the above code, like below: train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
https://stackoverflow.com/questions/57079219/
Does the input of model require gradient?
In pytorch, when do we need to set requires_grad of an input tensor to True?
Usually, the inputs are fixed - we do not change the images, we only infer the labels/outputs from the fixed input images. Since they are fixed - there is no need to compute a gradient w.r.t the input, only w.r.t trainable parameters. Having said that, there are cases where you want to change the inputs. For instance, when you want to visualize features. See, e.g., this nice post.
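As a minimal sketch of that second case (the model and shapes here are placeholders, not taken from the question): once the input has requires_grad=True, calling backward() populates its .grad, which you can then use, e.g., for feature visualization or gradient-based input optimization:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                    # stand-in for any trained model
x = torch.randn(1, 10, requires_grad=True)  # the input now participates in autograd
out = model(x)
out.sum().backward()
print(x.grad.shape)                         # gradient w.r.t. the input: torch.Size([1, 10])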
https://stackoverflow.com/questions/57088243/
Predicting sequence of grid coordinates with PyTorch
I have a similar open question here on Cross Validated (though not implementation focused, which I intend this question to be, so I think they are both valid). I'm working on a project that uses sensors to monitor a persons GPS location. The coordinates will then be converted to a simple-grid representation. What I want to try and do is after recording a users routes, train a neural network to predict the next coordinates, i.e. take the example below where a user repeats only two routes over time, Home->A and Home->B. I want to train an RNN/LSTM with sequences of varying lengths e.g. (14,3), (13,3), (12,3), (11,3), (10,3), (9,3), (8,3), (7,3), (6,3), (5,3), (4,3), (3,3), (2,3), (1,3) and then also predict with sequences of varying lengths e.g. for this example route if I called route = [(14,3), (13,3), (12,3), (11,3), (10,3)] //pseudocode pred = model.predict(route) pred should give me (9,3) (or ideally even a longer prediction e.g. ((9,3), (8,3), (7,3), (6,3), (5,3), (4,3), (3,3), (2,3), (1,3)) How do I feed such training sequences to the init and forward operations identified below? self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True) out, hidden = self.rnn(x, hidden) Also, should the entire route be a tensor or each set of coordinates within the route a tensor?
I'm not very experienced with RNNs, but I'll give it a try. A few things to pay attention to before we start: 1. Your data is not normalized. 2. The output prediction you want (even after normalization) is not bounded to the [-1, 1] range and therefore you cannot have tanh or ReLU activations acting on the output predictions. To address your problem, I propose a recurrent net that, given the current state (a 2D coordinate), predicts the next state (2D coordinates). Note that since this is a recurrent net, there is also a hidden state associated with each location. At first, the hidden state is zero, but as the net sees more steps, it updates its hidden state. I propose a simple net to address your problem. It has a single RNN layer with 8 hidden states, and a fully connected layer on top to output the prediction. class MyRnn(nn.Module): def __init__(self, in_d=2, out_d=2, hidden_d=8, num_hidden=1): super(MyRnn, self).__init__() self.rnn = nn.RNN(input_size=in_d, hidden_size=hidden_d, num_layers=num_hidden) self.fc = nn.Linear(hidden_d, out_d) def forward(self, x, h0): r, h = self.rnn(x, h0) y = self.fc(r) # no activation on the output return y, h You can use your two sequences as training data; each sequence is a tensor of shape Tx1x2 where T is the sequence length, and each entry is two dimensional (x-y). To predict (during training): rnn = MyRnn() pred, out_h = rnn(seq[:-1, ...], torch.zeros(1, 1, 8)) # given time t predict t+1 err = criterion(pred, seq[1:, ...]) # compare prediction to t+1 Once the model is trained, you can show it the first k steps and let it continue predicting the next steps: rnn.eval() with torch.no_grad(): pred, h = rnn(s[:k,...], torch.zeros(1, 1, 8, dtype=torch.float)) # pred[-1, ...] is the predicted next step prev = pred[-1:, ...] for j in range(k+1, s.shape[0]): pred, h = rnn(prev, h) # note how we keep track of the hidden state of the model. it is no longer init to zero. prev = pred I put everything together in a colab notebook so you can play with it. For simplicity, I ignored the data normalization here, but you can find it in the colab notebook. What's next? These types of predictions are prone to error accumulation. This should be addressed during training, by shifting the inputs from the ground truth "clean" sequences to the actual predicted sequences, so the model will be able to compensate for its errors.
https://stackoverflow.com/questions/57091026/
Object detection torchvision : IOError: [Errno 2] No such file or directory:
I'm working on clothes detection in an image. For this, I'm using the package TorchVision on Python, and I want to start from a pretrained model and finetune it for my classes (different kinds of clothes). My images are in a zip file. When I run the function train_one_epoch, I obtain this error: IOError: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "<ipython-input-16-4557e2f6ca8b>", line 51, in __getitem__ img = Image.open(img_path).convert("RGB") File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 2530, in open fp = builtins.open(filename, "rb") IOError: [Errno 20] Not a directory: 'gdrive/My Drive/donnees_kaggle_iMaterialist/train.zip/062fc2042f49d10fa625151cb652c3da.jpg' Here is a part of my code: num_classes = 46 + 1 device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') dataset_train = FashionDataset('gdrive/My Drive/donnees_kaggle_iMaterialist/train.zip/', # "../input/imaterialist-fashion-2019-FGVC6/train/", 'gdrive/My Drive/donnees_kaggle_iMaterialist/train.csv.zip', #"../input/imaterialist-fashion-2019-FGVC6/train.csv", 256, 256, transforms=get_transform(train=True)) model_ft = get_instance_segmentation_model(num_classes) model_ft.to(device) data_loader = torch.utils.data.DataLoader( dataset_train, batch_size=4, shuffle=True, num_workers=8, collate_fn=lambda x: tuple(zip(*x))) params = [p for p in model_ft.parameters() if p.requires_grad] optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) num_epochs = 10 for epoch in range(num_epochs): train_one_epoch(model_ft, optimizer, data_loader, device, epoch, print_freq=10) lr_scheduler.step() torch.save(model_ft.state_dict(), "model.bin")
You are getting the above error because train.zip is a zip file, not a folder. So, first unzip the train.zip file using the code below: import zipfile zr1 = zipfile.ZipFile("/content/gdrive/My Drive/donnees_kaggle_iMaterialist/train.zip", 'r') zr1.extractall("/content/gdrive/My Drive/donnees_kaggle_iMaterialist/train/") zr1.close() And, then change 'gdrive/My Drive/donnees_kaggle_iMaterialist/train.zip/' to 'gdrive/My Drive/donnees_kaggle_iMaterialist/train/' Also, train.csv.zip is a zip file; extract it first before actually using it. Use the code below: zr2 = zipfile.ZipFile("/content/gdrive/My Drive/donnees_kaggle_iMaterialist/train.csv.zip", 'r') zr2.extractall("/content/gdrive/My Drive/donnees_kaggle_iMaterialist/") zr2.close() And, then change 'gdrive/My Drive/donnees_kaggle_iMaterialist/train.csv.zip' to something like 'gdrive/My Drive/donnees_kaggle_iMaterialist/train.csv.csv'. Note one thing here: the extracted csv file will be named train.csv.csv (with extension), since the zip file's name was train.csv.zip.
https://stackoverflow.com/questions/57095189/
Pytorch equivalent of tf.map_fn with parallel_iterations?
I'm executing a PyTorch function for n_iters times on a GPU. Currently I'm using a for loop for this. However, when n_iters is large, this is very inefficient. I'm wondering whether there is a PyTorch equivalent of tf.map_fn with the parallel_iterations functionality so that I can execute all the iterations in parallel?
I have searched intensively and was not able to find any function in pytorch that's equivalent to tf.map_fn with a user-settable number of parallel_iterations. While exploring, I found that there is a function named nn.DataParallel, but it replicates the model or operation that you want to run on multiple GPUs and then returns the results, which is not equivalent to parallel_iterations in tf.map_fn. For now in Pytorch, the for loop is the only way to go.
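That said, the loop can often be avoided without a map_fn at all: if each iteration applies the same function to different data, you can usually fold the iterations into a batch dimension and let one kernel do the work. A rough sketch, assuming the per-iteration function is expressible with batched tensor ops (f here is a made-up placeholder, not from the question):

import torch

def f(x):                     # hypothetical per-iteration computation (elementwise here)
    return x * 2 + 1

inputs = [torch.randn(3) for _ in range(1000)]   # n_iters separate inputs

# instead of: results = [f(x) for x in inputs]
batched = torch.stack(inputs)   # shape (1000, 3)
results = f(batched)            # one parallel kernel instead of 1000 sequential calls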
https://stackoverflow.com/questions/57104152/
While using GPU for PyTorch models, getting the CUDA error: Unknown error?
I am trying to use a pre-trained model using PyTorch. While loading the model to GPU, it is giving the following error: Traceback (most recent call last): File "model\vgg_model.py", line 45, in <module> vgg_model1 = VGGFeatureExtractor(True).double().to(device) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 386, in to return self._apply(convert) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 193, in _apply module._apply(fn) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 193, in _apply module._apply(fn) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 199, in _apply param.data = fn(param.data) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 384, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\cuda\__init__.py", line 163, in _lazy_init torch._C._cuda_init() RuntimeError: CUDA error: unknown error I have a Windows 10 Laptop, Nvidia 940m GPU, Latest Pytorch and CUDA Toolkit 9.0 (Also tried on 10.0). I have tried re-installing the GPU drivers, restarted my machine, re-installed PyTorch, Torchvision and CUDA Toolkit. While using the following to see if PyTorch is detecting a GPU: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') I am getting the following output: device(type='cuda'). What could be the possible issues? I have tried the solution mentioned here: https://github.com/pytorch/pytorch/issues/20990 and the issue still persists. I simply put the torch.cuda.current_device() after import torch but the issue still persists.
Strangely, this worked by using CUDA Toolkit 10.1. I don't know why the latest one is not the default one on PyTorch website in the section where they provide the commands to download the libraries. Used the following command to install the libraries: conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
https://stackoverflow.com/questions/57113188/
Using nn.Linear() and nn.BatchNorm1d() together
I don't understand how BatchNorm1d works when the data is 3D, (batch size, H, W). Example Input size: (2,50,70) Layer: nn.Linear(70,20) Output size: (2,50,20) If I then include a batch normalisation layer it requires num_features=50: BN : nn.BatchNorm1d(50) and I don't understand why it isn't 20: BN : nn.BatchNorm1d(20) Example 1) class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.bn11 = nn.BatchNorm1d(50) self.fc11 = nn.Linear(70,20) def forward(self, inputs): out = self.fc11(inputs) out = torch.relu(self.bn11(out)) return out model = Net() inputs = torch.Tensor(2,50,70) outputs = model(inputs) Example 2) class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.bn11 = nn.BatchNorm1d(20) self.fc11 = nn.Linear(70,20) def forward(self, inputs): out = self.fc11(inputs) out = torch.relu(self.bn11(out)) return out model = Net() inputs = torch.Tensor(2,50,70) outputs = model(inputs) Example 1 works. Example 2 throws the error: RuntimeError: running_mean should contain 50 elements not 20 2D example: Input size: (2,70) Layer: nn.Linear(70,20) BN: nn.BatchNorm1d(20) I thought the 20 in the BN layer was due to there being 20 nodes output by the linear layer and each one requires a running means/std for the incoming values. Why in the 3D case, if the linear layer has 20 output nodes, the BN layer doesn't have 20 features?
One can find the answer in the torch.nn.Linear documentation. It takes input of shape (N, *, I) and returns (N, *, O), where I stands for the input dimension, O for the output dimension, and * is any number of dimensions in between. If you pass torch.Tensor(2,50,70) into nn.Linear(70,20), you get output of shape (2, 50, 20), and when you use BatchNorm1d it calculates running statistics over the first non-batch dimension, which here is 50. That's the reason behind your error.
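If you do want to normalize over the 20 linear-output features instead, a common workaround (a sketch, not part of the original answer) is to move the feature axis into dimension 1 before the batch norm, since BatchNorm1d on 3D input expects (N, C, L) and normalizes over C:

import torch
import torch.nn as nn

x = torch.randn(2, 50, 70)
fc = nn.Linear(70, 20)
bn = nn.BatchNorm1d(20)          # now over the 20 linear-output features

out = fc(x)                      # (2, 50, 20)
out = out.permute(0, 2, 1)       # (2, 20, 50): features moved to dim 1
out = bn(out)
out = out.permute(0, 2, 1)       # back to (2, 50, 20)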
https://stackoverflow.com/questions/57114974/
How to declare a scalar as a parameter in pytorch?
I am new to pytorch. I want to know how to declare a scalar as a parameter. I am wondering what's the difference between the following two ways? x = torch.randn(1,1, requires_grad=True) and tensor = torch.randn(1,1) x = Variable(tensor, requires_grad=True)
As per the pytorch official documentation here, The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables. var.data is the same thing as tensor.data. Methods such as var.backward(), var.detach(), var.register_hook() now work on tensors with the same method names. In addition, one can now create tensors with requires_grad=True using factory methods such as torch.randn(), torch.zeros(), torch.ones(), and others like the following: autograd_tensor = torch.randn((2, 3, 4), requires_grad=True)
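As a side note (not part of the original answer), if the goal is a trainable scalar inside a model, nn.Parameter is the usual tool: it is registered with the module, picked up by optimizers, and has requires_grad=True by default. A minimal sketch (the module and initial value are made up):

import torch
import torch.nn as nn

class ScaledModel(nn.Module):
    def __init__(self):
        super(ScaledModel, self).__init__()
        self.alpha = nn.Parameter(torch.tensor(0.1))   # 0-dim scalar parameter

    def forward(self, x):
        return self.alpha * x

model = ScaledModel()
print(list(model.parameters()))   # alpha shows up and will be updated by an optimizer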
https://stackoverflow.com/questions/57121445/
cuDNN error: CUDNN_STATUS_EXECUTION_FAILED while using flair
I have been using flair (https://github.com/zalandoresearch/flair#example-usage) and tried the example usage to experiment with it, but I don't know why I am not able to use the GPU. I tried the following: >>> from flair.data import Sentence >>> from flair.models import SequenceTagger >>> sentence = Sentence('I love Berlin .') >>> tagger = SequenceTagger.load('ner') 2019-07-20 17:52:15,062 loading file /home/vz/.flair/models/en-ner-conll03-v0.4.pt Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/nn.py", line 103, in load model = cls._init_model_with_state_dict(state) File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 205, in _init_model_with_state_dict locked_dropout=use_locked_dropout, File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 166, in __init__ self.to(flair.device) File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 386, in to return self._apply(convert) File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply module._apply(fn) File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply module._apply(fn) File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply module._apply(fn) [Previous line repeated 1 more time] File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 127, in _apply self.flatten_parameters() File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters self.batch_first, bool(self.bidirectional)) RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED Can anyone please help me figure out how to fix this error? Thanks in advance.
The error was with my machine's cuDNN requirement. I would suggest everyone install pytorch with conda, so the install command should be something like this: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch This eradicates any kind of issues with the installation.
https://stackoverflow.com/questions/57124735/
Cannot Iterate through PyTorch MNIST dataset
I am trying load the MNIST dataset in Pytorch and use the built-in dataloader to iterate through the training examples. However I get an error when calling next() on the iterator. I don't have this problem with CIFAR10. import torch import torchvision import torchvision.transforms as transforms transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 128 dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transform, download=True) dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4) dataiter = iter(dataloader) dataiter.next() # ERROR # RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28] I am using Python 3.7.3 with PyTorch 1.1.0
The MNIST dataset consists of grayscale images, i.e., each image has just 1 channel, while the CIFAR10 dataset consists of color images, i.e., each image has 3 channels. So, in the case of the MNIST dataset, replace transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) with transforms.Normalize([0.5], [0.5]).
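Applied to the snippet in the question, the corrected transform would look like this (one mean and one std per channel, and MNIST has a single channel):

import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5])   # 1 channel, so 1 mean and 1 std
])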
https://stackoverflow.com/questions/57130264/
Pytorch Linear Layer now automatically reshape the input?
I remember in the past, nn.Linear only accepts 2D tensors. But today, I discover that nn.Linear now accepts 3D, or even tensors with arbitrary dimensions. X = torch.randn((20,20,20,20,10)) linear_layer = nn.Linear(10,5) output = linear_layer(X) print(output.shape) >>> torch.Size([20, 20, 20, 20, 5]) When I check the documentation for Pytorch, it does say that it now takes Input: :math:(N, *, H_{in}) where :math:* means any number of additional dimensions and :math:H_{in} = \text{in\_features} So it seems to me that Pytorch nn.Linear now reshape the input by x.view(-1, input_dim) automatically. But I cannot find any x.shape or x.view in the source code: class Linear(Module): __constants__ = ['bias'] def __init__(self, in_features, out_features, bias=True): super(Linear, self).__init__() self.in_features = in_features self.out_features = out_features self.weight = Parameter(torch.Tensor(out_features, in_features)) if bias: self.bias = Parameter(torch.Tensor(out_features)) else: self.register_parameter('bias', None) self.reset_parameters() def reset_parameters(self): init.kaiming_uniform_(self.weight, a=math.sqrt(5)) if self.bias is not None: fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) init.uniform_(self.bias, -bound, bound) @weak_script_method def forward(self, input): return F.linear(input, self.weight, self.bias) def extra_repr(self): return 'in_features={}, out_features={}, bias={}'.format( self.in_features, self.out_features, self.bias is not None ) Can anyone confirms this?
torch.nn.Linear uses torch.nn.functional.linear function under the hood, that's where the operations are taking places (see documentation). It looks like this (removed docstrings and decorators for brevity): def linear(input, weight, bias=None): if input.dim() == 2 and bias is not None: # fused op is marginally faster ret = torch.addmm(bias, input, weight.t()) else: output = input.matmul(weight.t()) if bias is not None: output += bias ret = output return ret First case is addmm, which implements beta*mat + alpha*(mat1 @ mat2) and is supposedly faster (see here for example). Second operation is matmul, and as one can read in their docs it performs various operations based on the shape of tensors provided (five cases, not going to copy them blatantly here). In summary it preserves dimensions between first batch and last features dimension. No view() is used whatsoever, especially not this x.view(-1, input_dim), check the code below: import torch tensor1 = torch.randn(10, 3, 4) tensor2 = torch.randn(10, 4, 5) print(torch.matmul(tensor1, tensor2).shape) print(torch.matmul(tensor1, tensor2).view(-1, tensor1.shape[1]).shape) which gives: torch.Size([10, 3, 5]) # preserves input's 3 torch.Size([50, 3]) # destroys the batch even
https://stackoverflow.com/questions/57138540/
How to fix "RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.LongTensor"
I encountered this bug while running the machine translation code. RuntimeError Traceback (most recent call last) in 5 decoder = Decoder(len(out_vocab), embed_size, num_hiddens, num_layers, 6 attention_size, drop_prob) ----> 7 train(encoder, decoder, dataset, lr, batch_size, num_epochs) in train(encoder, decoder, dataset, lr, batch_size, num_epochs) 13 dec_optimizer.zero_grad() 14 l = batch_loss(encoder, decoder, X, Y, loss) ---> 15 l.backward() 16 enc_optimizer.step() 17 dec_optimizer.step() /usr/lib64/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 105 products. Defaults to False. 106 """ --> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph) 108 109 def register_hook(self, hook): /usr/lib64/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 91 Variable._execution_engine.run_backward( 92 tensors, grad_tensors, retain_graph, create_graph, ---> 93 allow_unreachable=True) # allow_unreachable flag 94 95 RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.LongTensor I think the bug is in the batch_loss function, but I don't know why and can't fix it. def batch_loss(encoder, decoder, X, Y, loss): batch_size = X.shape[0] enc_state = None enc_outputs, enc_state = encoder(X, enc_state) # initialize the decoder's hidden state dec_state = decoder.begin_state(enc_state) # the decoder's input at the initial time step is BOS dec_input = torch.tensor([out_vocab.stoi[BOS]] * batch_size) # we will use the mask variable to ignore the loss on PAD labels mask, num_not_pad_tokens = torch.ones(batch_size), 0 l = torch.tensor([0]) for y in Y.t(): dec_output, dec_state = decoder(dec_input, dec_state, enc_outputs) l = l + (mask * loss(dec_output, y)).sum() dec_input = y # teacher forcing num_not_pad_tokens += mask.sum().item() # once EOS is encountered, all following tokens in the sequence are PAD, so set the mask at those positions to 0 mask = mask * (y != out_vocab.stoi[EOS]).float() return l / num_not_pad_tokens def train(encoder, decoder, dataset, lr, batch_size, num_epochs): d2lt.params_init(encoder, init=nn.init.xavier_uniform_) d2lt.params_init(decoder, init=nn.init.xavier_uniform_) enc_optimizer = optim.Adam(encoder.parameters(), lr=lr) dec_optimizer = optim.Adam(decoder.parameters(), lr=lr) loss = nn.CrossEntropyLoss(reduction='none') data_iter = tdata.DataLoader(dataset, batch_size, shuffle=True) for epoch in range(num_epochs): l_sum = 0.0 for X, Y in data_iter: enc_optimizer.zero_grad() dec_optimizer.zero_grad() l = batch_loss(encoder, decoder, X, Y, loss) l.backward() enc_optimizer.step() dec_optimizer.step() l_sum += l.item() if (epoch + 1) % 10 == 0: print("epoch %d, loss %.3f" % (epoch + 1, l_sum / len(data_iter))) Looking forward to a positive reply.
Thanks, Proyag. Simply replacing l = torch.tensor([0]) with l = torch.tensor([0], dtype=torch.float) solved my problem.
https://stackoverflow.com/questions/57142401/
How to compute product between two sets of features in pytorch using a single loop?
I want to compute the product between two sets of feature matrices X and Y of dimensions (H,W,12) each. Inefficiently I would do: H = [] for i in range(12): for j in range(12): h = X[:,:,i]*Y[:,:,j] H.append(h) which will output H of dimension (H,W,144). How can this be done in pytorch without iterating in two loops? I have tried tensordot-based solutions but can't replicate the behavior.
I am not sure this is the most efficient, but you can do something like this (warning: ugly code ahead =]): import torch # I choose not to use random -- easier to verify, IMO a = torch.Tensor([[[1,2],[3,4],[5,6]],[[1,2],[3,4],[5,6]]]) b = torch.Tensor([[[1,2],[3,4],[5,6]],[[1,2],[3,4],[5,6]]]) c = torch.bmm( a.view(-1, a.size(-1), 1), b.view(-1, 1, b.size(-1)) ).view(*(a.shape[:2]), -1) print(c) print(a.shape) print(b.shape) print(c.shape) Output: tensor([[[ 1., 2., 2., 4.], [ 9., 12., 12., 16.], [25., 30., 30., 36.]], [[ 1., 2., 2., 4.], [ 9., 12., 12., 16.], [25., 30., 30., 36.]]]) torch.Size([2, 3, 2]) # a torch.Size([2, 3, 2]) # b torch.Size([2, 3, 4]) # c Basically, the outer product. Let me know if you need me to explain. Timings While using the torch.bmm, 16 out of 32 cores were being used. I used a GeForce RTX 2080 Ti to run the CUDA version (GPU usage was ~97% during execution). Note that the dimensions used on GPU timings are x10, otherwise it is just too fast. Script: import timeit setup = ''' import torch a = torch.randn(({H}, {W}, 12)) b = torch.randn(({H}, {W}, 12)) ''' setup_cuda = setup.replace("))", ")).to(torch.device('cuda'))") bmm = ''' c = torch.bmm( a.view(-1, a.size(-1), 1), b.view(-1, 1, b.size(-1)) ).view(*(a.shape[:2]), -1) ''' loop = ''' c = [] for i in range(a.size(-1)): for j in range(b.size(-1)): c.append(a[:, :, i] * b[:, :, j]) c = torch.stack(c).permute(1, 2, 0) ''' min_dim = 10 max_dim = 100 num_repeats = 10 print('BMM') for d in range(min_dim, max_dim+1, 10): print(d, min(timeit.Timer(bmm, setup=setup.format(H=d, W=d)).repeat(num_repeats, 1000))) print('LOOP') for d in range(min_dim, max_dim+1, 10): print(d, min(timeit.Timer(loop, setup=setup.format(H=d, W=d)).repeat(num_repeats, 1000))) print('BMM - CUDA') for d in range(min_dim*10, (max_dim*10)+1, 100): print(d, min(timeit.Timer(bmm, setup=setup_cuda.format(H=d, W=d)).repeat(num_repeats, 1000))) Output: BMM 10 0.022082214010879397 20 0.034024904016405344 30 0.08957623899914324 40 0.1376199919031933 50 0.20248223491944373 60 0.2657837320584804 70 0.3533527449471876 80 0.42361779196653515 90 0.6103016039123759 100 0.7161333339754492 LOOP 10 1.7369094720343128 20 1.8517447559861466 30 1.9145489090587944 40 2.0530637570191175 50 2.2066439649788663 60 2.394576688995585 70 2.6210166650125757 80 2.9242434420157224 90 3.5709626079769805 100 5.413458575960249 BMM - CUDA 100 0.014253990724682808 200 0.015094103291630745 300 0.12792395427823067 400 0.307440347969532 500 0.541196970269084 600 0.8697826713323593 700 1.2538292426615953 800 1.6859236396849155 900 2.2016236428171396 1000 2.764942280948162
https://stackoverflow.com/questions/57147514/
RuntimeError: Expected hidden size (2, 24, 50), got (2, 30, 50)
I am trying to build a model that learns scores (real numbers) assigned to some sentences in a data set. I use RNNs (in PyTorch) for this purpose. I have defined a model: class RNNModel1(nn.Module): def forward(self, input ,hidden_0): embedded = self.embedding(input) output, hidden = self.rnn(embedded, hidden_0) output=self.linear(hidden) return output , hidden The train function is: def train(model,optimizer,criterion,BATCH_SIZE,train_loader,clip): model.train(True) total_loss = 0 hidden = model._init_hidden(BATCH_SIZE) for i, (batch_of_data, batch_of_labels) in enumerate(train_loader, 1): hidden=hidden.detach() model.zero_grad() output,hidden= model(batch_of_data,hidden) loss = criterion(output, sorted_batch_target_scores) total_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm(model.parameters(), clip) optimizer.step() return total_loss/len(train_loader.dataset) When I run the code I receive this error: RuntimeError: Expected hidden size (2, 24, 50), got (2, 30, 50) Batch size=30, Hidden size=50, Number of Layers=1, Bidirectional=True. I receive that error in the last batch of data. I checked the description of RNNs in PyTorch to solve this problem. RNNs in PyTorch have two input arguments and two output arguments. The input arguments are input and h_0. h_0 is a tensor that includes the initial hidden state for each element in the batch, of size (num_layers*num_directions, batch, hidden size). The output arguments are output and h_n. h_n is a tensor that includes the hidden state for t=seq_len, of size (num_layers*num_directions, batch, hidden size). In all batches (except the last batch) the size of h_0 and h_n is the same, but in the last batch the number of elements may be less than the batch size. Therefore the size of h_n is (num_layers*num_directions, remaining_elements_in_last_batch, hidden size) but the size of h_0 is still (num_layers*num_directions, batch_size, hidden size). So I receive that error in the last batch of data. How can I solve this problem and handle the situation in which the size of h_0 and h_n is different? Thanks in advance.
This error happens when the number of samples in the data set is not a multiple of the batch size. Ignoring the last batch solves the problem. To identify the last batch, check the number of elements in each batch: if it is less than BATCH_SIZE, it is the last batch in the data set. if(len(batch_of_data)==BATCH_SIZE): output,hidden= model(batch_of_data,hidden)
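A simpler alternative worth mentioning (not part of the original answer, but standard DataLoader behavior) is to let the loader discard the incomplete final batch itself via drop_last, so every batch matches the hidden state's batch dimension:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 5), torch.randn(100, 1))  # stand-in data
loader = DataLoader(dataset, batch_size=30, shuffle=True, drop_last=True)
for x, y in loader:
    print(x.shape)   # always torch.Size([30, 5]); the 10 leftover samples are dropped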
https://stackoverflow.com/questions/57148617/
Trying to understand cross_entropy loss in PyTorch
This is a very newbie question but I'm trying to wrap my head around cross_entropy loss in Torch so I created the following code: x = torch.FloatTensor([ [1.,0.,0.] ,[0.,1.,0.] ,[0.,0.,1.] ]) print(x.argmax(dim=1)) y = torch.LongTensor([0,1,2]) loss = torch.nn.functional.cross_entropy(x, y) print(loss) which outputs the following: tensor([0, 1, 2]) tensor(0.5514) What I don't understand is given my input matches the expected output why is the loss not 0?
That is because the input you give to the cross entropy function is not probabilities, as you assumed, but logits, which are transformed into probabilities internally with this (softmax) formula: probas = np.exp(logits)/np.sum(np.exp(logits), axis=1) So the matrix of probabilities pytorch will use in your case is: [0.5761168847658291, 0.21194155761708547, 0.21194155761708547] [0.21194155761708547, 0.5761168847658291, 0.21194155761708547] [0.21194155761708547, 0.21194155761708547, 0.5761168847658291]
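You can verify this numerically: applying softmax and taking the negative log of the probability assigned to the correct class reproduces the 0.5514 value from the question:

import torch
import torch.nn.functional as F

x = torch.FloatTensor([[1., 0., 0.],
                       [0., 1., 0.],
                       [0., 0., 1.]])
y = torch.LongTensor([0, 1, 2])

probs = F.softmax(x, dim=1)          # logits -> probabilities
print(probs[0])                      # tensor([0.5761, 0.2119, 0.2119])
print(-torch.log(probs[0, 0]))       # tensor(0.5514): per-row loss
print(F.cross_entropy(x, y))         # tensor(0.5514): mean over the three rows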
https://stackoverflow.com/questions/57161524/
PyTorch Binary Classification - same network structure, 'simpler' data, but worse performance?
To get to grips with PyTorch (and deep learning in general) I started by working through some basic classification examples. One such example was classifying a non-linear dataset created using sklearn (full code available as notebook here) n_pts = 500 X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2) x_data = torch.FloatTensor(X) y_data = torch.FloatTensor(y.reshape(500, 1)) This is then accurately classified using a pretty basic neural net class Model(nn.Module): def __init__(self, input_size, H1, output_size): super().__init__() self.linear = nn.Linear(input_size, H1) self.linear2 = nn.Linear(H1, output_size) def forward(self, x): x = torch.sigmoid(self.linear(x)) x = torch.sigmoid(self.linear2(x)) return x def predict(self, x): pred = self.forward(x) if pred >= 0.5: return 1 else: return 0 As I have an interest in health data I then decided to try and use the same network structure to classify a basic real-world dataset. I took heart rate data for one patient from here, and altered it so all values > 91 would be labelled as anomalies (i.e. a 1, and everything <= 91 labelled a 0). This is completely arbitrary, but I just wanted to see how the classification would work. The complete notebook for this example is here. What is not intuitive to me is why the first example reaches a loss of 0.0016 after 1,000 epochs, whereas the second example only reaches a loss of 0.4296 after 10,000 epochs. Perhaps I am being naive in thinking that the heart rate example would be much easier to classify. Any insights to help me understand why this is not what I am seeing would be great!
TL;DR Your input data is not normalized. Use x_data = (x_data - x_data.mean()) / x_data.std() Increase the learning rate: optimizer = torch.optim.Adam(model.parameters(), lr=0.01) You'll get convergence in only 1000 iterations. More details The key difference between the two examples you have is that the data x in the first example is centered around (0, 0) and has very low variance. On the other hand, the data in the second example is centered around 92 and has relatively large variance. This initial bias in the data is not taken into account when you randomly initialize the weights, which is done based on the assumption that the inputs are roughly normally distributed around zero. It is almost impossible for the optimization process to compensate for this gross deviation - thus the model gets stuck in a sub-optimal solution. Once you normalize the inputs, by subtracting the mean and dividing by the std, the optimization process becomes stable again and rapidly converges to a good solution. For more details about input normalization and weights initialization, you can read section 2.2 in He et al Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (ICCV 2015). What if I cannot normalize the data? If, for some reason, you cannot compute mean and std data in advance, you can still use nn.BatchNorm1d to estimate and normalize the data as part of the training process. For example class Model(nn.Module): def __init__(self, input_size, H1, output_size): super().__init__() self.bn = nn.BatchNorm1d(input_size) # adding batchnorm self.linear = nn.Linear(input_size, H1) self.linear2 = nn.Linear(H1, output_size) def forward(self, x): x = torch.sigmoid(self.linear(self.bn(x))) # batchnorm the input x x = torch.sigmoid(self.linear2(x)) return x This modification, without any change to the input data, yields similar convergence after only 1000 epochs: A minor comment For numerical stability, it is better to use nn.BCEWithLogitsLoss instead of nn.BCELoss. To this end, you need to remove the torch.sigmoid from the forward() output; the sigmoid will be computed inside the loss. See, for example, this thread regarding the related sigmoid + cross entropy loss for binary predictions.
https://stackoverflow.com/questions/57161576/
How to get rid of Variable API in PyTorch.autograd?
I am forwarding and backpropping tensor data X through two simple nn.Module PyTorch model instances, model1 and model2. I can't get this process to work without usage of the deprecated Variable API. So this works just fine: y1 = model1(X) v = Variable(y1.data, requires_grad=training) # Its all about this line! y2 = model2(v) criterion = nn.NLLLoss() loss = criterion(y2, y) loss.backward() y1.backward(v.grad) self.step() But this will throw an error: y1 = model1(X) y2 = model2(y1) criterion = nn.NLLLoss() loss = criterion(y2, y) loss.backward() y1.backward(y1.grad) # it breaks here self.step() >>> RuntimeError: grad can be implicitly created only for scalar outputs I just can't seem to find a relevant difference between v in the first implementation, and y1 in the second. In both cases requires_grad is set to True. The only thing I could find was that y1.grad_fn=<ThnnConv2DBackward> and v.grad_fn=<ThnnConv2DBackward> What am I missing here? What (tensor attributes?) do I not know about, and if Variable is deprecated, what other implementation would work?
After some investigation I came to the following two solutions. The solution provided elsewhere in this thread retained the computation graph manually, without an option to free it, thus running fine initially but causing OOM errors later on. The first solution is to tie the models together using the built-in torch.nn.Sequential, as such: model = torch.nn.Sequential(Model1(), Model2()) It's as easy as that. It looks clean and behaves exactly like an ordinary model would. The alternative is to simply tie them together manually: model1 = Model1() model2 = Model2() y1 = model1(X) y2 = model2(y1) loss = criterion(y2, y) loss.backward() My fear that this would only backpropagate model2 turned out to be unsubstantiated, since model1 is also stored in the computation graph that is backpropagated over. This implementation enabled increased transparency of the interface between the two models, compared to the previous implementation.
https://stackoverflow.com/questions/57167492/
Assigning a parameter to the GPU sets is_leaf as false
If I create a Parameter in PyTorch, then it is automatically assigned as a leaf variable: x = torch.nn.Parameter(torch.Tensor([0.1])) print(x.is_leaf) This prints out True. From what I understand, if x is a leaf variable, then it will be updated by the optimiser. But if I then assign x to the GPU: x = torch.nn.Parameter(torch.Tensor([0.1])) x = x.cuda() print(x.is_leaf) This prints out False. So now I cannot assign x to the GPU and keep it as a leaf node. Why does this happen?
Answer is in is_leaf documentation and here is your exact case: >>> b = torch.rand(10, requires_grad=True).cuda() >>> b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor Citing documentation further: For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None. In your case, Tensor was not created by you, but was created by PyTorch's cuda() operation (leaf is the pre-cuda b).
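If you want a GPU-resident leaf you can optimize, create the tensor on the device first and wrap it in nn.Parameter afterwards (an nn.Parameter is always a leaf); a minimal sketch, assuming a CUDA device is available:

import torch

x = torch.nn.Parameter(torch.tensor([0.1], device='cuda'))
print(x.is_leaf)   # True: no operation was applied after the parameter was created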
https://stackoverflow.com/questions/57188409/
How to do random search for hyperparameters on different GPUs in parallel?
Assuming my model uses only one GPU but Virtual Machine has 4. How to leverage all GPUs for this code? channel_1_range = [8, 16, 32, 64] channel_2_range = [8, 16, 32, 64] kernel_size_1_range = [3, 5, 7] kernel_size_2_range = [3, 5, 7] max_count = 40 for count in range(max_count): reg = 10**np.random.uniform(-3, 0) learning_rate = 10**np.random.uniform(-6, -3) channel_1 = channel_1_range[np.random.randint(low=0, high=len(channel_1_range))] channel_2 = channel_2_range[np.random.randint(low=0, high=len(channel_2_range))] kernel_size_1 = kernel_size_1_range[np.random.randint(low=0, high=len(kernel_size_1_range))] kernel_size_2 = kernel_size_2_range[np.random.randint(low=0, high=len(kernel_size_2_range))] model = ThreeLayerConvNet(in_channel=3, channel_1=channel_1, kernel_size_1=kernel_size_1, \ channel_2=channel_2, kernel_size_2=kernel_size_2, num_classes=10) optimizer = optim.Adam(model.parameters(), lr=learning_rate) engine = Engine(loader_train=loader_train, loader_val=loader_val, device=device, dtype=dtype, print_every=100, \ verbose=False) engine.train(model, optimizer, epochs=1, reg=reg) print("Reg: {0:.2E}, LR: {1:.2E}, Ch_1: {2:2} [{4}], Ch_2: {3:2} [{5}], Acc: {6:.2f} [{7:.2f}], {8:.2f} secs". \ format(reg, learning_rate, channel_1, channel_2, kernel_size_1, kernel_size_2, \ engine.accuracy, engine.accuracy_train, engine.duration)) One option is to move this to standalone console app, start N instances (N == number of GPUs) and aggregate results (one output file). Is it possible to do it directly in Python so I could continue to use jupyter notebook?
In pytorch you can distribute your models on different GPUs. I think in your case it's the device parameter that allows you to specify the actual GPU: device1 = torch.device('cuda:0') device2 = torch.device('cuda:1') . . . devicen = torch.device('cuda:n') I don't remember the exact details but if my memory serves me well, you might need to make your code non-blocking by using threading or multiprocessing (better go with multiprocessing to be sure; the GIL might cause you some problems otherwise if you fully utilize your processor). In your case that would mean parallelising your for loop. For instance, by having a Queue containing all models and then spawning threads/processes that consume them (where each spawned process working on the queue corresponds to one GPU). So to answer your question, yes you can do it in pure Python (I did it a while back, so I'm 100% positive). You can even let one GPU process multiple models (but make sure to calculate your VRAM correctly beforehand). Whether it's actually worth it, compared to just starting multiple jobs, is up to you though. As a little sidenote, if you run it as a 'standalone' script, it might still use the same GPU if the GPU number isn't automatically adjusted; otherwise PyTorch might try using DataParallel distribution...
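To make that concrete, here is a rough sketch of one worker process per GPU consuming configs from a shared queue. The train_one_config and sample_random_config helpers are hypothetical placeholders standing in for the loop body and the random sampling in the question:

import torch
import torch.multiprocessing as mp

def worker(gpu_id, configs, results):
    device = torch.device('cuda:%d' % gpu_id)
    while True:
        try:
            cfg = configs.get_nowait()            # grab the next hyperparameter config
        except Exception:                         # queue empty: this worker is done
            break
        acc = train_one_config(cfg, device)       # hypothetical: builds model/engine on `device`
        results.put((cfg, acc))

if __name__ == '__main__':
    mp.set_start_method('spawn', force=True)
    configs, results = mp.Queue(), mp.Queue()
    for _ in range(40):                           # max_count random configs
        configs.put(sample_random_config())       # hypothetical random sampler
    procs = [mp.Process(target=worker, args=(g, configs, results))
             for g in range(torch.cuda.device_count())]
    for p in procs: p.start()
    for p in procs: p.join()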
https://stackoverflow.com/questions/57189133/
Use class from file based of name in string form (Python)
Currently, I am importing every class I have in a python file into my main "Runner" script. The goal is, based on a given string, to instantiate the class that has the same name as that string. For example, if there is a class called 'Test': class Test(): and the string is "Test", it should instantiate that class.
It seems to be a reflection case, which is explained here. Example from the reference: module = __import__(module_name) class_ = getattr(module, class_name) instance = class_()
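Applied to the question's Test class, and assuming it lives in a module named my_classes (a made-up name here), the importlib variant looks like this; if the class is defined in the current module, globals()[class_name]() works as well:

import importlib

class_name = "Test"
module = importlib.import_module("my_classes")  # hypothetical module that defines Test
cls = getattr(module, class_name)
instance = cls()   # same as calling Test() directly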
https://stackoverflow.com/questions/57189532/
Multiply a specific column in each row by -1
I have a pytorch tensor A, that's of size (n,m) and a list of indices for size n, such that each entry of 0 <= indices[i] < m. For each row i of A, I want to multiply A[i, indices[i]] *= -1, in a vectorized way. Is there an easy way to do this? A = torch.tensor([[1,2,3],[4,5,6]]) indices = torch.tensor([1, 2]) #desired result A = [[1,-2,3],[4,5,-6]]
Sure there is, fancy indexing is the way to go: import torch A = torch.tensor([[1, 2, 3], [4, 5, 6]]) indices = torch.tensor([1, 2]).long() A[range(A.shape[0]), indices] *= -1 Remember that indices must be of torch.LongTensor type. If you have a float tensor, you can cast it using the .long() member function.
https://stackoverflow.com/questions/57190215/
How to multiply a tensor with a vector?
I have 2 tensors a = torch.tensor([1,2]) b = torch.tensor([[[10,20], [30,40]], [[1,2], [3,4]]]) and I would like to combine them in such a way that a ? b = tensor([[[10,20], [30,40]], [[ 2, 4], [ 6, 8]]]) (and then sum over the 0th dimension, in the end I want to do a weighted sum) I've tried: """ no idea how to interpret that """ a @ b tensor([[ 70, 100], [ 7, 10]]) b @ a tensor([[ 50, 110], [ 5, 11]]) for i in range(b.size()[0]): # works but I don't think this will work with autograd b[i] *= a[i] a * b # multiplies right side by 2 tensor([[[10, 40], [30, 80]], [[ 1, 4], [ 3, 8]]]) a.unsqueeze(1) # multiplies bottom side by 2 tensor([[[10, 20], [60, 80]], [[ 1, 2], [ 6, 8]]]) a.unsqueeze(2) * b # dimension out of range
This should work: c = a.unsqueeze(1).unsqueeze(1) * b
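Spelling out why this works: the two unsqueeze calls turn a from shape (2,) into (2, 1, 1), which then broadcasts against b of shape (2, 2, 2), scaling each 2x2 matrix by its own weight; summing over dim 0 afterwards gives the weighted sum mentioned in the question:

import torch

a = torch.tensor([1, 2])
b = torch.tensor([[[10, 20], [30, 40]],
                  [[ 1,  2], [ 3,  4]]])

c = a.unsqueeze(1).unsqueeze(1) * b   # a becomes (2, 1, 1) and broadcasts over b
print(c)              # tensor([[[10, 20], [30, 40]], [[2, 4], [6, 8]]])
print(c.sum(dim=0))   # the weighted sum over dim 0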
https://stackoverflow.com/questions/57206785/
Is there a way to do Pytorch element wise equality treating each dimension as an element?
I have two tensors and I want to check for equality, treating each row (an array in one dimension) as the element. I have 2 tensors: lee = torch.Tensor(([1., 1., 0.], [0., 1., 1.], [0., 0., 0.], [1., 1., 1.])) lo = torch.Tensor(([1., 1., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.])) I've tried using torch.eq(lee, lo) which returns a tensor like tensor([[1, 1, 1], [1, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=torch.uint8) Is there a way to have the output become tensor([1, 0, 1, 0]), so that each entry is 1 only when the corresponding rows match completely? edit: I've come up with this solution lee = lee.tolist() lo = lo.tolist() out = [] for i, j in enumerate(lee): if j == lo[i]: out.append(1) else: out.append(0) and out will be [1, 0, 1, 0] But is there an easier way?
You can simply use torch.all(tensor, dim). code: l1 = torch.Tensor(([1., 1., 0.], [0., 1., 1.], [0., 0., 0.], [1., 1., 1.])) l2 = torch.Tensor(([1., 1., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.])) print(torch.eq(l1, l2)) print(torch.all(torch.eq(l1, l2), dim=0)) # equivalent to dim = -2 print(torch.all(torch.eq(l1, l2), dim=1)) # equivalent to dim = -1 output: tensor([[1, 1, 1], [1, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=torch.uint8) tensor([0, 0, 0], dtype=torch.uint8) tensor([1, 0, 1, 0], dtype=torch.uint8) # your desired output
https://stackoverflow.com/questions/57208913/
How to solve strange cuda error in PyTorch?
I moved to PyTorch from Keras. I'm very new to the whole moving-to-CUDA thing. I've spent hours surfing the web and haven't been able to find anything. The fix is probably just a line or two. I'd appreciate it if someone knows how to solve this issue. Here's my code. First I define my u-net model as a subclass of nn.Module, like the following code: import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils class unet(nn.Module): def __init__(self): super(unet, self).__init__() self.conv1 = nn.Conv3d(1, 32, 3, padding=1) self.conv1_1 = nn.Conv3d(32, 32, 3, padding=1) self.conv2 = nn.Conv3d(32, 64, 3, padding=1) self.conv2_2 = nn.Conv3d(64, 64, 3, padding=1) self.conv3 = nn.Conv3d(64, 128, 3, padding=1) self.conv3_3 = nn.Conv3d(128, 128, 3, padding=1) self.convT1 = nn.ConvTranspose3d(128, 64, 3, stride=(2,2,2), padding=1, output_padding=1) self.conv4 = nn.Conv3d(128, 64, 3, padding=1) self.conv4_4 = nn.Conv3d(64, 64, 3, padding=1) self.convT2 = nn.ConvTranspose3d(64, 32, 3,stride=(2,2,2), padding=1, output_padding=1) self.conv5 = nn.Conv3d(64, 32, 3, padding=1) self.conv5_5 = nn.Conv3d(32, 32, 3, padding=1) self.conv6 = nn.Conv3d(32, 1 ,3, padding=1) def forward(self, inputs): conv1 = F.relu(self.conv1(inputs)) conv1 = F.relu(self.conv1_1(conv1)) pool1 = F.max_pool3d(conv1, 2) conv2 = F.relu(self.conv2(pool1)) conv2 = F.relu(self.conv2_2(conv2)) pool2 = F.max_pool3d(conv2, 2) conv3 = F.relu(self.conv3(pool2)) conv3 = F.relu(self.conv3_3(conv3)) conv3 = self.convT1(conv3) up1 = torch.cat((conv3, conv2), dim=1) conv4 = F.relu(self.conv4(up1)) conv4 = F.relu(self.conv4_4(conv4)) conv4 = self.convT2(conv4) up2 = torch.cat((conv4, conv1), dim=1) conv5 = F.relu(self.conv5(up2)) conv5 = F.relu(self.conv5_5(conv5)) conv6 = F.relu(self.conv6(conv5)) return conv6 Then I run my unet like the following code. Note that when defining the module I set it to cuda. I also set the input data and its labels to cuda.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = unet().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) loss_fn = nn.MSELoss() datasets = torch.utils.data.TensorDataset(data_recon, data_truth) train_loader = DataLoader(datasets, batch_size=2, shuffle=True) def training_loop(n_epochs, optimizer, model, loss_fn, train_loader): for epoch in range(1, n_epochs + 1): loss_train = 0 for imgs, labels in train_loader: imgs.to(device) labels.to(device) outputs = model(imgs) loss = loss_fn(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loss_train += loss.item() print('{} Epoch {}, Training loss {}'.format(datetime.datetime.now(), epoch, float(loss_train))) training_loop(50, optimizer, model, loss_fn, train_loader) But I get this error: RuntimeError Traceback (most recent call last) in ----> 1 training_loop(50, optimizer, model, loss_fn, train_loader) in training_loop(n_epochs, optimizer, model, loss_fn, train_loader) 5 imgs.to(device) 6 labels.to(device) ----> 7 outputs = model(imgs) 8 loss = loss_fn(outputs, labels) 9 /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: --> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) in forward(self, inputs) 18 19 def forward(self, inputs): ---> 20 conv1 = F.relu(self.conv1(inputs)) 21 conv1 = F.relu(self.conv1_1(conv1)) 22 pool1 = F.max_pool3d(conv1, 2) /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: --> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input) 474 self.dilation, self.groups) 475 return F.conv3d(input, self.weight, self.bias, self.stride, --> 476 self.padding, self.dilation, self.groups) 477 478 RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' I've spent hours, surfing the web and haven't been able to find anything? The fix is probably something a line or two. I'd appreciate it if someone knows how to solve this issue?
The problem is with these lines: imgs.to(device) labels.to(device) .to(device) returns a new Tensor and won't change imgs and labels in place, so the model (on the GPU) still receives CPU tensors and the cuda error is expected. You can simply fix it by assigning the new tensors as follows: imgs = imgs.to(device) labels = labels.to(device)
https://stackoverflow.com/questions/57223818/
Why is my classifier not giving me binary results?
I created a linear ReLU network that is supposed to over-fit to my data. I use BCEWithLogitsLoss as the loss function. I was using it to classify 3d points. Since the data were small enough, I didn't care to split them into batches, and it worked fine. However, now that I've implemented batches, it seems that the predicted values aren't what I expect (i.e. 0 or 1); instead it gives me numbers like -25.4562. I didn't change anything else in the network, only the batches. I tried the binary loss function BCELoss, however there seems to be a bug in my version of pytorch, so I can't use it. You can take a look at my code below: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # We load the training data Samples, Ocupancy = common.load_samples() for i in range(0,Ocupancy.shape[0]): if Ocupancy[i] > 1 or Ocupancy[i] < 0: print("upsie") max = np.amax(Samples) min = np.amin(Samples) x_test = torch.from_numpy(Samples.astype(np.float32)).to(device) y_test = torch.from_numpy(Ocupancy.astype(np.float32)).to(device) train_data = CustomDataset(x_test, y_test) train_loader = DataLoader(dataset=train_data, batch_size= 22500, shuffle=False) # Batches_size equal to the number of points in each slice phi = common.MLP(3, 1).to(device) criterion = torch.nn.BCEWithLogitsLoss() optimizer = torch.optim.Adam(phi.parameters(), lr = 0.01) epoch = 10 fit_start_time = time.time() for epoch in range(epoch): for x_batch, y_batch in train_loader: #optimizer.zero_grad() x_train = x_batch.to(device) y_train = y_batch.to(device) y_pred = phi(x_batch) print(y_pred) # Compute Loss loss = criterion(y_pred.squeeze(), y_batch.squeeze()) print('Epoch {}: train loss: {}'.format(epoch, loss.item())) # Backward pass loss.backward() optimizer.step() fit_end_time = time.time() print("Total time = %f" % (fit_end_time - fit_start_time)) min = -2 max = 2 resolution = 0.05 X,Y,Z = np.mgrid[min:max:resolution,min:max:resolution,min:max:resolution] # sample way more xyz = torch.from_numpy(np.vstack([X.ravel(), Y.ravel(),Z.ravel()]).transpose().astype(np.float32)).to(device) eval = LabelData(xyz) eval_loader = DataLoader(dataset=eval, batch_size= 22500, shuffle=False) # Make bigger batches # feed the network bit by bit? i = 0 for x_batch in eval_loader: phi.eval() labels = phi(x_batch).to(device) print(labels) visualization_iso(X,Y,Z,labels) I expect the predicted values to be 0 or 1, or at least a probability, however it gives me big numbers that I don't understand, like 19.5953. Please take a look at my code and if you spot any big mistakes please tell me so. I'm really confused, since it worked quite well before I expanded the size of the data I was using. Regards
I might be wrong, but I am trying to answer based on your code in the question. You are using BCEWithLogitsLoss, which means that the model is expected to output logits. Logits are the output just before applying the sigmoid activation. Recall that the sigmoid activation is used to convert outputs to probabilities (essentially to be between 0 and 1). Logits can be any real number. Based on this, I think you should pass the output of the model through a sigmoid activation, i.e., F.sigmoid(phi(x_batch)). Or you can also just check if your model's output is greater than 0 or less than 0. If greater than 0, the label should be 1.
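In code, the post-processing step would look something like this (the linear layer and batch are stand-ins for the question's MLP and data, not the original code):

import torch
import torch.nn as nn

phi = nn.Linear(3, 1)            # stand-in for the question's MLP
x_batch = torch.randn(8, 3)

logits = phi(x_batch)            # raw outputs: any real numbers, e.g. -25.4 or 19.6
probs = torch.sigmoid(logits)    # squashed into (0, 1), interpretable as probabilities
preds = (probs >= 0.5).float()   # hard 0/1 labels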
https://stackoverflow.com/questions/57224147/
Is there a way to update the weights of a layer/variable directly after a training step?
There are ways to get and update the weight directly outside of training, but what I am looking to do is after the each training step and after the gradients have updated a particular variable or layer, I would like to save those weights to file and then replace the layer or variable weights with brand new ones. And then continue with the next step (forward pass using the new values in the variable or layer, then backward pass with the calculated loss/gradients) of training. I have thought about just calling each training step individually, but I am wondering if that is somehow very time/memory inefficient.
You can try to use a Callback to do that. Define the function you want: def afterBatch(batch, logs): model.save_weights('weights'+str(batch)) #maybe you want also a method to save the current epoch.... #option 1 model.layers[select_a_layer].set_weights(listOfNumpyWeights) #not sure this works #option 2 K.set_value(model.layers[select_a_layer].kernel, newValueInNumpy) #depending on the layer you have kernel, bias and maybe other weight names #take a look at the source code of the layer you are changing the weights Use a LambdaCallback: from keras.callbacks import LambdaCallback batchCallback = LambdaCallback(on_batch_end = aterBatch) model.fit(x, y, callbacks = [batchCallback, ....]) There are weight updates after every batch (this might be too much if you are saving weights every time). You can also try on_epoch_end instead of on_batch_end.
https://stackoverflow.com/questions/57228000/
In PyTorch, what is the input into nll_loss?
I am looking at the tutorial here: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html import torch.nn.functional as F loss = F.nll_loss(output, target) In the above two lines of code, what exactly is "target"? They load the data set for target but never discuss what it is exactly. The documentation is also hard to understand.
Check yourself by running the code below: test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([ transforms.ToTensor(), ])), batch_size=1, shuffle=True) for data, target in test_loader: print(data, target) break Here, data is basically a grayscale MNIST image and target is the label between 0 and 9. So, in loss = F.nll_loss(output, target), output is the model prediction (what the model predicted when given an image) and target is the actual label of the given image. Furthermore, in the above example, check the lines below: output = model(data) # shape [1, 10] init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability # If the initial prediction is wrong, don't bother attacking, just move on if init_pred.item() != target.item(): continue # Calculate the loss loss = F.nll_loss(output, target) In the above code, only those output-target pairs where the model is predicting correctly are passed into the F.nll_loss loss function. In case it is unable to predict the label correctly, then all the operations (including loss calculation) after that are skipped and it continues with the next example in the test_loader.
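One more point the tutorial leaves implicit: F.nll_loss expects log-probabilities as output (which is why the tutorial's model ends with log_softmax), and log_softmax followed by nll_loss is exactly cross_entropy. A quick check:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)            # e.g. a batch of 4 examples, 10 classes
target = torch.tensor([3, 0, 9, 1])    # true class index per example

log_probs = F.log_softmax(logits, dim=1)
loss1 = F.nll_loss(log_probs, target)
loss2 = F.cross_entropy(logits, target)   # cross_entropy == log_softmax + nll_loss
print(torch.allclose(loss1, loss2))       # True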
https://stackoverflow.com/questions/57229669/
How can I randomly shuffle the labels of a Pytorch Dataset?
I am new to Pytorch, and I am having troubles with some technicalities. I have downloaded the MNIST dataset, using the following command: train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) I now need to run some experiments on this dataset, but using random labels. How can I shuffle/reassign them randomly? I am trying to do it manually, but it tells me that " 'tuple' object does not support item assignment". How can I do it then? Second question: How can I remove a training point from the dataset? It gives me the same error, when I try to do it. Thank you!!
If you only want to shuffle the targets, you can use the target_transform argument. For example: train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), target_transform=lambda y: torch.randint(0, 10, (1,)).item(), download=True) If you want some more elaborate tweaking of the dataset, you can wrap mnist completely: class MyTwistedMNIST(torch.utils.data.Dataset): def __init__(self, my_args): super(MyTwistedMNIST, self).__init__() self.orig_mnist = dset.MNIST(...) def __getitem__(self, index): x, y = self.orig_mnist[index] # get the original item my_x = # change input digit image x ? my_y = # change the original label y ? return my_x, my_y def __len__(self): return self.orig_mnist.__len__() If there are elements of the original mnist you want to completely discard, then by wrapping around the original mnist, your MyTwistedMNIST class can return a len smaller than self.orig_mnist.__len__(), reflecting the number of actual mnist examples you want to handle. Moreover, you will need to map the new index of examples to the original mnist index.
https://stackoverflow.com/questions/57231943/
Why casting input and model to float16 doesn't work?
I'm trying to change inputs and a deep learning model to float16, since I'm using a T4 GPU and they work much faster with fp16. Here's part of the code: I first have my model, and then made a dummy data point just to get the data casting figured out first (I ran it with the whole batch and got the same error). model = CRNN().to(device) model = model.type(torch.cuda.HalfTensor) data_recon = torch.from_numpy(data_recon) data_truth = torch.from_numpy(data_truth) dummy = data_recon[0:1,:,:,:,:] # Gets just one batch dummy = dummy.to(device) dummy = dummy.type(torch.cuda.HalfTensor) model(dummy) And here's the error I get: > --------------------------------------------------------------------------- RuntimeError Traceback (most recent call > last) <ipython-input-27-1fe8ecc524aa> in <module> > ----> 1 model(dummy) > > /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py > in __call__(self, *input, **kwargs) > 491 result = self._slow_forward(*input, **kwargs) > 492 else: > --> 493 result = self.forward(*input, **kwargs) > 494 for hook in self._forward_hooks.values(): > 495 hook_result = hook(self, input, result) > > <ipython-input-12-06f39f9304a1> in forward(self, inputs, test) > 57 > 58 net['t%d_x0'%(i-1)] = net['t%d_x0'%(i-1)].view(times, batch, self.filter_size, width, > height) > ---> 59 net['t%d_x0'%i] = self.bcrnn(inputs, net['t%d_x0'%(i-1)], test) > 60 net['t%d_x0'%i] = net['t%d_x0'%i].view(-1, self.filter_size, width, height) > 61 > > /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py > in __call__(self, *input, **kwargs) > 491 result = self._slow_forward(*input, **kwargs) > 492 else: > --> 493 result = self.forward(*input, **kwargs) > 494 for hook in self._forward_hooks.values(): > 495 hook_result = hook(self, input, result) > > <ipython-input-11-b687949e9ce5> in forward(self, inputs, > input_iteration, test) > 31 hidden = initial_hidden > 32 for i in range(times): > ---> 33 hidden = self.CRNN(inputs[i], input_iteration[i], hidden) > 34 output_forward.append(hidden) > 35 output_forward = torch.cat(output_forward) > > /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py > in __call__(self, *input, **kwargs) > 491 result = self._slow_forward(*input, **kwargs) > 492 else: > --> 493 result = self.forward(*input, **kwargs) > 494 for hook in self._forward_hooks.values(): > 495 hook_result = hook(self, input, result) > > <ipython-input-10-15c0b221226b> in forward(self, inputs, > hidden_iteration, hidden) > 23 def forward(self, inputs, hidden_iteration, hidden): > 24 in_to_hid = self.i2h(inputs) > ---> 25 hid_to_hid = self.h2h(hidden) > 26 ih_to_ih = self.ih2ih(hidden_iteration) > 27 > > /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py > in __call__(self, *input, **kwargs) > 491 result = self._slow_forward(*input, **kwargs) > 492 else: > --> 493 result = self.forward(*input, **kwargs) > 494 for hook in self._forward_hooks.values(): > 495 hook_result = hook(self, input, result) > > /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in > forward(self, input) > 336 _pair(0), self.dilation, self.groups) > 337 return F.conv2d(input, self.weight, self.bias, self.stride, > --> 338 self.padding, self.dilation, self.groups) > 339 > 340 > > RuntimeError: Input type (torch.cuda.FloatTensor) and weight type > (torch.cuda.HalfTensor) should be the same
Check out your implementation of CRNN. My guess is that you have a "hidden" state tensor stored in the model, not as a "buffer" but just as a regular tensor. Therefore, when casting the model to float16, the hidden state remains float32 and causes you this error. Try to store the hidden state as a buffer in the module (see register_buffer for more info). Alternatively, you can explicitly cast to float16 any member tensor in the module by overloading the .to() method of your model.
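A minimal sketch of the buffer approach (the module name and shapes here are hypothetical, not the asker's actual CRNN):
import torch
import torch.nn as nn

class CRNNCell(nn.Module):
    def __init__(self, channels, height, width):
        super().__init__()
        # buffers are moved/cast together with the parameters by .to(), .cuda(), .half(), etc.
        self.register_buffer('hidden', torch.zeros(1, channels, height, width))

cell = CRNNCell(128, 15, 15).cuda().half()  # hidden is now a torch.cuda.HalfTensor too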
https://stackoverflow.com/questions/57236261/
What does "unsqueeze" do in Pytorch?
The PyTorch documentation says: Returns a new tensor with a dimension of size one inserted at the specified position. [...] >>> x = torch.tensor([1, 2, 3, 4]) >>> torch.unsqueeze(x, 0) tensor([[ 1, 2, 3, 4]]) >>> torch.unsqueeze(x, 1) tensor([[ 1], [ 2], [ 3], [ 4]])
If you look at the shape of the array before and after, you see that before it was (4,) and after it is (1, 4) (when the second parameter is 0) and (4, 1) (when the second parameter is 1). So a 1 was inserted in the shape of the array at axis 0 or 1, depending on the value of the second parameter. That is the opposite of np.squeeze() (nomenclature borrowed from MATLAB), which removes axes of size 1 (singletons).
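A quick shape check makes this concrete:
import torch

x = torch.tensor([1, 2, 3, 4])
print(x.shape)                                     # torch.Size([4])
print(torch.unsqueeze(x, 0).shape)                 # torch.Size([1, 4])
print(torch.unsqueeze(x, 1).shape)                 # torch.Size([4, 1])
print(torch.squeeze(torch.unsqueeze(x, 0)).shape)  # torch.Size([4]) - squeeze undoes it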
https://stackoverflow.com/questions/57237352/
"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?
I am trying to use a pre-trained model. Here's where the problem occurs. Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input? RuntimeError Traceback (most recent call last) <ipython-input-51-d7abe3ef1355> in <module>() 33 34 # Forward pass the data through the model ---> 35 output = model(data) 36 init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability 37 5 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input) 336 _pair(0), self.dilation, self.groups) 337 return F.conv2d(input, self.weight, self.bias, self.stride, --> 338 self.padding, self.dilation, self.groups) 339 340 RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead Where inception = models.inception_v3() model = inception.to(device)
As Usman Ali wrote in his comment, pytorch (and most other DL toolboxes) expects a batch of images as an input. Thus you need to call output = model(data[None, ...]) This inserts a singleton "batch" dimension into your input data. Please also note that the model you are using might expect a different input size (3x299x299 for inception_v3) rather than 3x224x224.
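An equivalent way to add that batch dimension is unsqueeze:
output = model(data.unsqueeze(0))  # [3, 224, 224] -> [1, 3, 224, 224]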
https://stackoverflow.com/questions/57237381/
I have a GPU and CUDA installed in Windows 10 but Pytorch's torch.cuda.is_available() returns false; how can I correct this?
I have PyTorch installed on a Windows 10 machine with a Nvidia GTX 1050 GPU. I have installed the CUDA Toolkit and tested it using Nvidia instructions and that has gone smoothly, including execution of the suggested tests. However, torch.cuda.is_available() returns False. How can I fix this?
I also had the same issue. Running a=torch.cuda.FloatTensor() gave the assertion error AssertionError: Torch not compiled with CUDA enabled, which made it clear that I was running PyTorch without CUDA support. Steps: Make sure you have uninstalled PyTorch by invoking the following command: pip uninstall torch Go to https://pytorch.org/get-started/locally/ and select your system configuration (as shown in the figure). Copy the exact command from the Run this command dialog and run it in your terminal.
https://stackoverflow.com/questions/57238344/
Variable length of text with cnn pytorch
I am a newbie with pytorch and I am wondering what the best practice is for variable-length sentence sequences in CNNs. I want to use a CNN for feature selection on top of word embeddings generated by fasttext and then feed the output into an LSTM. Now I know pytorch has a dynamic graph, and I was wondering if there is a way to do this other than padding.
Although PyTorch has dynamic graph construction, it is not possible to dynamically construct different graphs for different sequences in the same batch. As such, it is possible to perform stochastic gradient descent, using individual sequences with arbitrary length, but it is not possible to perform batch gradient descent without padding. To make training more efficient and avoid training over padded values, PyTorch has both the torch.nn.utils.rnn.pack_padded_sequence function and the torch.nn.utils.rnn.pad_packed_sequence function to remove and then restore padding for batch operations. # pack_padded_sequence so that padded items in the sequence won't be shown to the LSTM X = torch.nn.utils.rnn.pack_padded_sequence(x, X_lengths, batch_first=True) # now run through LSTM X, self.hidden = self.lstm(X, self.hidden) # undo the packing operation X, _ = torch.nn.utils.rnn.pad_packed_sequence(X, batch_first=True)
https://stackoverflow.com/questions/57243915/
What are the difference between .bin and .pt pytorch saved model types?
Sometimes I see .bin files for pretrained pytorch, like the one here https://github.com/allenai/scibert#pytorch-models However, the files are usually saved as .pt files. What's the difference between these two parameter weights file formats? Why are there two?
There is no difference as it's just an extension. When it comes to UNIX-like OSes one can open the file no matter the extension (see here); Windows, on the other hand, is built with them in mind (here). torch can read .bin or .pt or .anything, so it's probably a convention employed by the creators of that repository. The standard approach is to use .pt or .pth, though the second extension collides with Python's path configuration files (plain-text .pth files read by the site module), so .pt seems the best idea for now (see this github issue).
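You can verify this yourself - torch.save and torch.load do not care about the extension at all (the filenames here are just examples):
import torch

tensor = torch.ones(3)
torch.save(tensor, 'weights.bin')  # works
torch.save(tensor, 'weights.pt')   # works just the same
print(torch.load('weights.bin'))   # tensor([1., 1., 1.])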
https://stackoverflow.com/questions/57245332/
Can a GPU support multiple jobs without delay?
So I am running a PyTorch deep learning job using the GPU, but the job is pretty light. My GPU has 8 GB but the job only uses 2 GB. Also GPU-Util is close to 0%. |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A | | 0% 36C P2 45W / 210W | 1155MiB / 8116MiB | 0% Default | +-------------------------------+----------------------+----------------------+ Based on GPU-Util and memory I might be able to fit in another 3 jobs. However, I am not sure if that will affect the overall runtime. If I run multiple jobs on the same GPU, does that affect the overall runtime? I tried it once and I think there was a delay.
Yes you can. One option is to use NVIDIA's Multi-Process Service (MPS) to run four copies of your model on the same card. This is the best description I have found of how to do it: How do I use Nvidia Multi-process Service (MPS) to run multiple non-MPI CUDA applications? If you are using your card for inference only, then you can host several models (either copies, or different models) on the same card using NVIDIA's TensorRT Inference Service.
https://stackoverflow.com/questions/57246067/
Cannot find torch/extension.h
I created a pytorch extension, following this link, but it throws the error fatal error: torch/extension.h: No such file or directory. ubuntu 18.04 code::blocks 17.04 gcc 7.4.0 #include <torch/extension.h> I expect the code to run. Is there any documentation on how to fix this problem?
torch/extension.h is the canonical header file that creates Python bindings for C++/CUDA extensions. It is shipped with PyTorch itself, so try upgrading your PyTorch version if it's not the newest one.
https://stackoverflow.com/questions/57246555/
How to split data into train and test sets using torchvision.datasets.ImageFolder?
In my custom dataset, each class of image is in its own folder, which torchvision.datasets.ImageFolder can handle, but how do I split the dataset into train and test?
You can use torch.utils.data.Subset to split your ImageFolder dataset into train and test based on indices of the examples. For example: orig_set = torchvision.datasets.ImageFolder(...) # your dataset n = len(orig_set) # total number of examples n_test = int(0.1 * n) # take ~10% for test test_set = torch.utils.data.Subset(orig_set, range(n_test)) # take first 10% train_set = torch.utils.data.Subset(orig_set, range(n_test, n)) # take the rest
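Note that taking the first 10% is only a sound test split if the dataset order carries no meaning; since ImageFolder lists one class folder after another, you will usually want to shuffle the indices first. A sketch:
import torch

indices = torch.randperm(n).tolist()  # shuffled indices 0..n-1
test_set = torch.utils.data.Subset(orig_set, indices[:n_test])
train_set = torch.utils.data.Subset(orig_set, indices[n_test:])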
https://stackoverflow.com/questions/57246630/
using huggingface's pytorch-transformers GPT-2 for classification tasks
I want to use GPT-2 to make a text classifier model. I am not really sure what head I should add after I have extracted features through GPT-2. For example, I have a sequence. import pytorch_transformers as pt import torch text=test.iloc[1,1] text 'If a fire wanted fanning, it could readily be fanned with a newspaper, and as the government grew weaker, I have no doubt that leather and iron acquired durability in proportion, for, in a very short time, there was not a pair of bellows in all Rotterdam that ever stood in need of a stitch or required the assistance of a hammer.' len(text) 74 tokenizer = pt.GPT2Tokenizer.from_pretrained('gpt2') model = pt.GPT2Model.from_pretrained('gpt2') zz = tokenizer.tokenize(text) z1=torch.tensor([tokenizer.convert_tokens_to_ids(zz)]) z1 tensor([[ 1532, 257, 2046, 2227, 4336, 768, 11, 340, 714, 14704, 307, 277, 3577, 351, 257, 7533, 11, 290, 355, 262, 1230, 6348, 17642, 11, 314, 423, 645, 4719, 326, 11620, 290, 6953, 9477, 26578, 287, 9823, 11, 329, 11, 287, 257, 845, 1790, 640, 11, 612, 373, 407, 257, 5166, 286, 8966, 1666, 287, 477, 18481, 353, 11043, 326, 1683, 6204, 287, 761, 286, 257, 24695, 393, 2672, 262, 6829, 286, 257, 15554, 13]]) output,hidden=model(z1) output.shape torch.Size([1, 74, 768]) The output of GPT-2 is n x m x 768 for me, where n is the batch size and m is the number of tokens in the sequence (for example I can pad/truncate to 128), so I cannot do what the paper says for a classification task, i.e. just add a fully connected layer at the tail. And I searched on google; few GPT-2 classification tasks are mentioned. I am not sure what is correct. Should I do flatten/max pooling/average pooling before the fully connected layer, or something else?
"so I cannot do what the paper says for a classification task, i.e. just add a fully connected layer at the tail" - this is the answer to your question. Usually, transformers like BERT and RoBERTa have bidirectional self-attention and a [CLS] token that we feed into the classifier. Since GPT-2 is left-to-right, you need to feed the final token of the embedding sequence instead. P.S. - Can you put the link to the paper?
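A minimal sketch of such a head, reusing the shapes from the question ([batch, seq_len, 768]) and assuming 2 classes and no padding (with padded batches you would index each sequence's last real token instead of -1):
import torch
import torch.nn as nn

classifier = nn.Linear(768, 2)  # 768 = GPT-2 hidden size, 2 classes assumed

output, past = model(z1)         # output: [batch, seq_len, 768]
last_token = output[:, -1, :]    # hidden state of the final token: [batch, 768]
logits = classifier(last_token)  # [batch, 2], feed into CrossEntropyLoss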
https://stackoverflow.com/questions/57248098/
Backward function in PyTorch
I have a question about pytorch's backward function; I don't think I'm getting the right output: import numpy as np import torch from torch.autograd import Variable a = Variable(torch.FloatTensor([[1,2,3],[4,5,6]]), requires_grad=True) out = a * a out.backward(a) print(a.grad) the output is tensor([[ 2., 8., 18.], [32., 50., 72.]]) Maybe it's 2*a*a, but I think the output is supposed to be tensor([[ 2., 4., 6.], [8., 10., 12.]]) i.e. 2*a, because d(x^2)/dx = 2x
Please read the documentation on backward() carefully to better understand it. By default, pytorch expects backward() to be called for the last output of the network - the loss function. The loss function always outputs a scalar, and therefore the gradients of the scalar loss w.r.t. all other variables/parameters are well defined (using the chain rule). Thus, by default, backward() is called on a scalar tensor and expects no arguments. For example: a = torch.tensor([[1,2,3],[4,5,6]], dtype=torch.float, requires_grad=True) for i in range(2): for j in range(3): out = a[i,j] * a[i,j] out.backward() print(a.grad) yields tensor([[ 2., 4., 6.], [ 8., 10., 12.]]) As expected: d(a^2)/da = 2a. However, when you call backward on the 2-by-3 out tensor (no longer a scalar function) - what do you expect a.grad to be? You'd actually need a 2-by-3-by-2-by-3 output: d out[i,j] / d a[k,l](!) Pytorch does not support derivatives of such non-scalar functions. Instead, pytorch assumes out is only an intermediate tensor and somewhere "upstream" there is a scalar loss function, which, through the chain rule, provides d loss/ d out[i,j]. This "upstream" gradient is of size 2-by-3 and this is actually the argument you provide to backward in this case: out.backward(g) where g_ij = d loss/ d out_ij. The gradients are then calculated by the chain rule d loss / d a[i,j] = (d loss/d out[i,j]) * (d out[i,j] / d a[i,j]) Since you provided a as the "upstream" gradients you got a.grad[i,j] = 2 * a[i,j] * a[i,j] If you were to provide the "upstream" gradients to be all ones out.backward(torch.ones(2,3)) print(a.grad) yields tensor([[ 2., 4., 6.], [ 8., 10., 12.]]) As expected. It's all in the chain rule.
https://stackoverflow.com/questions/57248777/
I want to convert the code below (a neural network) from Keras to PyTorch
I want to use PyTorch instead of Keras but I failed to do it myself. Keras def _model(self): model = Sequential() model.add(Dense(units=64, input_dim=self.state_size, activation="relu")) model.add(Dense(units=32, activation="relu")) model.add(Dense(units=8, activation="relu")) model.add(Dense(self.action_size, activation="linear")) model.compile(loss="mse", optimizer=Adam(lr=0.001)) return model Pytorch class Model(nn.Module): def __init__(self, input_dim): super(Model, self).__init__() self.fc1 = nn.ReLU(input_dim, 64) self.fc2 = nn.ReLU(64,32) self.fc3 = nn.Relu(32, 8) self.fc4 = nn.Linear(8, 3) model = Model() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
model = Model() You need to provide an argument when you call Model(), since your __init__(self, input_dim) requires one. It should be: model = Model(<integer input dimension for your network>)
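Beyond that, nn.ReLU is an activation with no weights, so it cannot replace Keras' Dense layers - those map to nn.Linear. A corrected sketch of the whole model (keeping the 3 output units from the original code, which presumably correspond to self.action_size; the input dimension of 4 is just a placeholder for your state_size):
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, input_dim):
        super(Model, self).__init__()
        # Keras Dense(units, activation="relu") becomes nn.Linear + F.relu in forward
        self.fc1 = nn.Linear(input_dim, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 8)
        self.fc4 = nn.Linear(8, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return self.fc4(x)  # linear output, like activation="linear" in Keras

model = Model(input_dim=4)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)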
https://stackoverflow.com/questions/57260097/
Using automatic differentiation libraries to compute partial derivatives of an arbitrary tensor
(Note: this is not a question about back-propagation.) I am trying to solve a non-linear PDE on a GPU using PyTorch tensors in place of Numpy arrays. I want to calculate the partial derivatives of an arbitrary tensor, akin to the action of the center finite-difference numpy.gradient function. I have other ways around this problem, but since I am already using PyTorch, I'm wondering if it is possible to use the autograd module (or, in general, any other autodifferentiation module) to perform this action. I have created a tensor-compatible version of the numpy.gradient function - which runs a lot faster. But perhaps there is a more elegant way of doing this. I can't find any other sources that address this question, either to show that it's possible or impossible; perhaps this reflects my ignorance of the autodifferentiation algorithms.
I've had this same question myself: when numerically solving PDEs, we need access to spatial gradients (which the numpy.gradient function can give us) all the time - could it be possible to use automatic differentiation to compute the gradients, instead of using finite-difference or some flavor of it? "I'm wondering if it is possible to use the autograd module (or, in general, any other autodifferentiation module) to perform this action." The answer is no: as soon as you discretize your problem in space or time, then time and space become discrete variables with a grid-like structure, and are not explicit variables which you feed into some function to compute the solution to the PDE. For example, if I wanted to compute the velocity field of some fluid flow u(x,t), I would discretize in space and time, and I would have u[:,:] where the indices represent positions in space and time. Automatic differentiation can compute the derivative of a function u(x,t). So why can't it compute the spatial or time derivative here? Because you've discretized your problem. This means you don't have a function for u for arbitrary x, but rather a function of u at some grid points. You can't differentiate automatically with respect to the spacing of the grid points. As far as I can tell, the tensor-compatible function you've written is probably your best bet. You can see that a similar question has been asked in the PyTorch forums here and here. Or you could do something like dx = x[:,:,1:]-x[:,:,:-1] if you're not worried about the endpoints.
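For completeness, the tensor version of the centered scheme can be as small as this for one dimension (a sketch mirroring numpy.gradient: second-order central differences in the interior, one-sided differences at the boundaries - not the asker's faster implementation):
import torch

def central_diff_1d(u, dx=1.0):
    # du/dx with central differences inside, forward/backward at the ends
    grad = torch.empty_like(u)
    grad[1:-1] = (u[2:] - u[:-2]) / (2 * dx)
    grad[0] = (u[1] - u[0]) / dx
    grad[-1] = (u[-1] - u[-2]) / dx
    return grad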
https://stackoverflow.com/questions/57261254/
Pytorch pretrained model not recognizing my image
I have an image in torch.Tensor format. I want to feed it directly into a pre-trained classifier, like Inception v3. However, it's being predicted incorrectly (no error message, just wrong output). I guess this is because I didn't normalize it (in accordance with: https://pytorch.org/docs/stable/torchvision/models.html), so that's what I'm trying to do. The problem is, the normalization requires numpy input. However, to get numpy, I do this and it gives me an error: ----> 9 image = data.numpy()[0].transpose((1, 2, 0)) # [image_size, image_size, RGB] RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. I cannot call detach because I require gradients to flow through the image (which was generated by other functions). Is there a way to bypass converting this to numpy? If not, how do I keep the gradients flowing?
One option is to scale images into the interval [0, 1] and then normalise with the mean and standard deviation vectors as tensors, so everything stays differentiable: import torch data = (data - data.min()) / (data.max() - data.min()) # rescale to [0, 1] mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1) # shaped (3, 1, 1) to broadcast over a CHW image std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1) data = (data - mean) / std # normalise with tensors
https://stackoverflow.com/questions/57261314/
ImportError: cannot import name 'warmup_linear'
While trying to import warmup_linear, I'm getting this error ImportError: cannot import name 'warmup_linear' Import - from pytorch_pretrained_bert.optimization import BertAdam, warmup_linear Requirements file boto3==1.9.198 botocore==1.12.198 certifi==2019.6.16 chardet==3.0.4 docutils==0.14 h5py==2.9.0 idna==2.8 jmespath==0.9.4 Keras==2.2.4 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 numpy==1.17.0 Pillow==6.1.0 python-dateutil==2.8.0 pytorch-pretrained-bert==0.6.2 PyYAML==5.1.1 regex==2019.6.8 requests==2.22.0 s3transfer==0.2.1 scipy==1.3.0 seqeval==0.0.12 six==1.12.0 torch==1.1.0 torchvision==0.3.0 tqdm==4.32.2 urllib3==1.25.3 What needs to be done to import 'warmup_linear'?
Version 0.4.0 doesn't have this issue: pip install pytorch_pretrained_bert==0.4.0 Downgrading to 0.6.1 also solved it.
https://stackoverflow.com/questions/57266256/
Pytorch: turning a [1,x] sized tensor into an [x] sized tensor
While trying to load in Pytorch 0.4.0 a model that has probably been produced by Pytorch 0.3.1, I keep getting errors like this: While copying the parameter named "conv1_7x7_s2_bn.bias", whose dimensions in the model are torch.Size([64]) and whose dimensions in the checkpoint are torch.Size([1, 64]). I thought that if I applied transpose to each tensor it would work, but it is still failing, as the dimension turns into [64, 1] rather than [64], which I need. How can I remove the redundant dimension and thus turn the 1-row matrix into a vector? Note: When calling torch.flatten, I get: AttributeError: module 'torch' has no attribute 'flatten'
Removing size-1 (singleton) dimensions is called "squeezing". NumPy does it, Tensorflow does it and PyTorch does it. So the correct command is: torch.squeeze(tensor)
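For the case in the question, passing the dimension explicitly is a bit safer, since it removes only that one axis:
import torch

t = torch.zeros(1, 64)
print(t.squeeze(0).shape)  # torch.Size([64]) - removes only dim 0
print(t.squeeze().shape)   # torch.Size([64]) - removes every size-1 dim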
https://stackoverflow.com/questions/57269885/
Pytorch: setting part of the training output to zero
When I am training a network, I want to set a specific part of the output to zero. For example, temp = nn.Conv2d(3,6) with 3 input channels and batch size 16 would give an output tensor shaped like this: temp = tensor(16,6,36,36) I want to set half of the channels to zero: temp[:,:3,:,:] = 0 But I got this error. RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 128, 15, 15]], which is output 0 of ReluBackward1, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! How can I approach this problem? I've tried with this code def weight_init(self,x,label): if label.data[0]: x[:,:64,:,:] =0 return x else: x[:,64:,:,:] =0 return x
You are performing an in-place operation on a tensor, and this operation is being tracked for the future gradient computation. You just need to temporarily switch off the tracking with torch.no_grad(): Code: def weight_init(self,x,label): with torch.no_grad(): if label.data[0]: x[:,:64,:,:] =0 return x else: x[:,64:,:,:] =0 return x
https://stackoverflow.com/questions/57272024/
Multi-GPU training of AllenNLP coreference resolution
I'm trying to replicate (or come close) to the results obtained by the End-to-end Neural Coreference Resolution paper on the CoNLL-2012 shared task. I intend to do some enhancements on top of this, so I decided to use AllenNLP's CoreferenceResolver. This is how I'm initialising & training the model: import torch from allennlp.common import Params from allennlp.data import Vocabulary from allennlp.data.dataset_readers import ConllCorefReader from allennlp.data.dataset_readers.dataset_utils import Ontonotes from allennlp.data.iterators import BasicIterator, MultiprocessIterator from allennlp.data.token_indexers import SingleIdTokenIndexer, TokenCharactersIndexer from allennlp.models import CoreferenceResolver from allennlp.modules import Embedding, FeedForward from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper from allennlp.modules.seq2vec_encoders import CnnEncoder from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder from allennlp.modules.token_embedders import TokenCharactersEncoder from allennlp.training import Trainer from allennlp.training.learning_rate_schedulers import LearningRateScheduler from torch.nn import LSTM, ReLU from torch.optim import Adam def read_data(directory_path): data = [] for file_path in Ontonotes().dataset_path_iterator(directory_path): data += dataset_reader.read(file_path) return data INPUT_FILE_PATH_TEMPLATE = "data/CoNLL-2012/v4/data/%s" dataset_reader = ConllCorefReader(10, {"tokens": SingleIdTokenIndexer(), "token_characters": TokenCharactersIndexer()}) training_data = read_data(INPUT_FILE_PATH_TEMPLATE % "train") validation_data = read_data(INPUT_FILE_PATH_TEMPLATE % "development") vocabulary = Vocabulary.from_instances(training_data + validation_data) model = CoreferenceResolver(vocab=vocabulary, text_field_embedder=BasicTextFieldEmbedder({"tokens": Embedding.from_params(vocabulary, Params({"embedding_dim": embeddings_dimension, "pretrained_file": "glove.840B.300d.txt"})), "token_characters": TokenCharactersEncoder(embedding=Embedding(num_embeddings=vocabulary.get_vocab_size("token_characters"), embedding_dim=8, vocab_namespace="token_characters"), encoder=CnnEncoder(embedding_dim=8, num_filters=50, ngram_filter_sizes=(3, 4, 5), output_dim=100))}), context_layer=PytorchSeq2SeqWrapper(LSTM(input_size=400, hidden_size=200, num_layers=1, dropout=0.2, bidirectional=True, batch_first=True)), mention_feedforward=FeedForward(input_dim=1220, num_layers=2, hidden_dims=[150, 150], activations=[ReLU(), ReLU()], dropout=[0.2, 0.2]), antecedent_feedforward=FeedForward(input_dim=3680, num_layers=2, hidden_dims=[150, 150], activations=[ReLU(), ReLU()], dropout=[0.2, 0.2]), feature_size=20, max_span_width=10, spans_per_word=0.4, max_antecedents=250, lexical_dropout=0.5) if torch.cuda.is_available(): cuda_device = 0 model = model.cuda(cuda_device) else: cuda_device = -1 iterator = BasicIterator(batch_size=1) iterator.index_with(vocabulary) optimiser = Adam(model.parameters(), weight_decay=0.1) Trainer(model=model, train_dataset=training_data, validation_dataset=validation_data, optimizer=optimiser, learning_rate_scheduler=LearningRateScheduler.from_params(optimiser, Params({"type": "step", "step_size": 100})), iterator=iterator, num_epochs=150, patience=1, cuda_device=cuda_device).train() After reading the data I've trained the model but ran out of GPU memory: RuntimeError: CUDA out of memory. Tried to allocate 4.43 GiB (GPU 0; 11.17 GiB total capacity; 3.96 GiB already allocated; 3.40 GiB free; 3.47 GiB cached). 
Therefore, I attempted to make use of multiple GPUs to train this model. I'm making use of Tesla K80s (which have 12GiB memory). I've tried making use of AllenNLP's MultiprocessIterator, by initialising the iterator as MultiprocessIterator(BasicIterator(batch_size=1), num_workers=torch.cuda.device_count()). However, only 1 GPU is being used (by monitoring the memory usage through the nvidia-smi command) & got the error below. I also tried fiddling with its parameters (increasing num_workers or decreasing output_queue_size) & the ulimit (as mentioned by this PyTorch issue) to no avail. Process Process-3: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/.local/lib/python3.6/site-packages/allennlp/data/iterators/multiprocess_iterator.py", line 32, in _create_tensor_dicts output_queue.put(tensor_dict) File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/.local/lib/python3.6/site-packages/allennlp/data/iterators/multiprocess_iterator.py", line 32, in _create_tensor_dicts output_queue.put(tensor_dict) File "<string>", line 2, in put File "<string>", line 2, in put File "/usr/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod raise convert_to_error(kind, result) File "/usr/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod raise convert_to_error(kind, result) multiprocessing.managers.RemoteError: --------------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/managers.py", line 228, in serve_client request = recv() File "/usr/lib/python3.6/multiprocessing/connection.py", line 251, in recv return _ForkingPickler.loads(buf.getbuffer()) File "/home/user/.local/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 276, in rebuild_storage_fd fd = df.detach() File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach return reduction.recv_handle(conn) File "/usr/lib/python3.6/multiprocessing/reduction.py", line 182, in recv_handle return recvfds(s, 1)[0] File "/usr/lib/python3.6/multiprocessing/reduction.py", line 161, in recvfds len(ancdata)) RuntimeError: received 0 items of ancdata --------------------------------------------------------------------------- I also tried achieving this through PyTorch's DataParallel, by wrapping the model's context_layer, mention_feedforward, antecedent_feedforward with a custom DataParallelWrapper (to provide compatibility with the AllenNLP-assumed class functions). Still, only 1 GPU is used & it eventually runs out of memory as before. class DataParallelWrapper(DataParallel): def __init__(self, module): super().__init__(module) def get_output_dim(self): return self.module.get_output_dim() def get_input_dim(self): return self.module.get_input_dim() def forward(self, *inputs): return self.module.forward(inputs)
After some digging through the code I found out that AllenNLP does this under the hood directly through its Trainer. The cuda_device can either be a single int (in the case of single-processing) or a list of ints (in the case of multi-processing): cuda_device : Union[int, List[int]], optional (default = -1) An integer or list of integers specifying the CUDA device(s) to use. If -1, the CPU is used. So all GPU devices needed should be passed on instead: if torch.cuda.is_available(): cuda_device = list(range(torch.cuda.device_count())) model = model.cuda(cuda_device[0]) else: cuda_device = -1 Note that the model still has to be manually moved to the GPU (via model.cuda(...)), as it would otherwise try to use multiple CPUs instead.
https://stackoverflow.com/questions/57277214/
How to install nvidia apex on Google Colab
What I did was follow the instructions on the official GitHub site: !git clone https://github.com/NVIDIA/apex !cd apex !pip install -v --no-cache-dir ./ It gives me the error: ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. Exception information: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 326, in run self.name, wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 268, in populate_requirement_set wheel_cache=wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/constructors.py", line 248, in install_req_from_line "nor 'pyproject.toml' found." % name pip._internal.exceptions.InstallationError: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
(wanted to just add a comment but I don't have enough reputation...) it works for me but the cd is actually not required. Also, I needed the two global options as suggested here: https://github.com/NVIDIA/apex/issues/86 %%writefile setup.sh git clone https://github.com/NVIDIA/apex pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex then !sh setup.sh
https://stackoverflow.com/questions/57284345/
Derivative in both arguments of torch.nn.BCELoss()
When using a torch.nn.BCELoss() on two arguments that are both results of some earlier computation, I get some curious error, which this question is about: RuntimeError: the derivative for 'target' is not implemented The MCVE is as follows: import torch import torch.nn.functional as F net1 = torch.nn.Linear(1,1) net2 = torch.nn.Linear(1,1) loss_fcn = torch.nn.BCELoss() x = torch.zeros((1,1)) y = F.sigmoid(net1(x)) #make sure y is in range (0,1) z = F.sigmoid(net2(y)) #make sure z is in range (0,1) loss = loss_fcn(z, y) #works if we replace y with y.detach() loss.backward() It turns out if we call .detach() on y the error disappears. But this results in a different computation, now in the .backward()-pass, the gradients with respect to the second argument of the BCELoss will not be computed. Can anyone explain what I'm doing wrong in this case? As far as I know all pytorch modules in torch.nn should support computing gradients. And this error message seems to tell me that the derivative is not implemented for y, which is somehow strange, as you can compute the gradient of y, but not of y.detach() which seems to be contradictory.
It seems I misunderstood the error message. It is not y that doesn't allow the computation of gradients; it is BCELoss() that doesn't have the ability to compute gradients with respect to its second (target) argument. A similar problem was discussed here.
https://stackoverflow.com/questions/57285077/
I can't load my model because I can't put a PosixPath
I'm setting up a script and I need to use some functions from the fast-ai package. The fact is that I'm on Windows, and when I define my paths, the fast-ai function named load_learner can't load the model. I've tried to change the function inside the package to: state = pickle.load(open(str(path) + '/' + str(fname), 'rb')) instead of: state = pickle.load(open(path/fname, 'rb')) but I obtain this error: File "lib\site-packages\fastai\basic_train.py", line 462, in load_learner state = pickle.load(open(path/fname, 'rb')) File "\lib\pathlib.py", line 1006, in __new__ % (cls.__name__,)) NotImplementedError: cannot instantiate 'PosixPath' on your system My paths are defined as: folder_path = './models/model1' fname = 'model.pkl' and I call the function as: model = load_learner(folder_path, fname) How can I use Windows paths in this function? UPDATE 1 The answer posted was correct only on Linux. I still have the issue on Windows. I didn't find a way to get past the PosixPath on Windows. The only solution I found is to change the internals of the packages from my modules, but that is not a safe way to solve this kind of issue. Thanks in advance.
Just redirect PosixPath to WindowsPath. import pathlib temp = pathlib.PosixPath # keep a reference so you can restore pathlib.PosixPath = temp after loading pathlib.PosixPath = pathlib.WindowsPath I am also loading fastai models and this trick works.
https://stackoverflow.com/questions/57286486/
How to install torchtext 0.4.0 on conda
The torchtext 0.4.0 library exists (can be downloaded thru pip), but conda install torchtext=0.4.0 will not work. How can I download torchtext to a anaconda environment?
Create a conda env with python: $ conda create -n torchtext python=3.7 Activate it: $ conda activate torchtext torchtext 0.4.0 installation currently is not available via conda, so we will use pip; anyway, pip installs torchtext inside your activated conda env. Let's find the available torchtext versions: (torchtext)$ pip search torchtext torchtext (0.3.1) - Text utilities and datasets for PyTorch As you can see there is no 0.4.0. But we can install it from the official github repo: (torchtext)$ pip install git+https://github.com/pytorch/text.git Verify the installation: (torchtext)$ conda list ... torch 1.1.0 pypi_0 pypi torchtext 0.4.0 pypi_0 pypi tqdm 4.32.2 pypi_0 pypi ... (torchtext)$ python -c "import torchtext; print('torchtext is fine!')" torchtext is fine!
https://stackoverflow.com/questions/57286492/
PyTorch Module with attrs cannot get parameter list
The attr's package somehow ruins pytorch's parameter() method for a module. I am wondering if anyone has any work-arounds or solutions, so that the two packages can seamlessly integrate? If not, any advice on which github to post the issue to? My instinct would be to post this onto attr's github, but the stack trace is almost entirely relevant to pytorch's codebase. Python 3.7.3 attrs== 19.1.0 torch==1.1.0.post2 torchvision==0.3.0 import attr import torch class RegularModule(torch.nn.Module): pass @attr.s class AttrsModule(torch.nn.Module): pass module = RegularModule() print(list(module.parameters())) module = AttrsModule() print(list(module.parameters())) The actual output is: $python attrs_pytorch.py [] Traceback (most recent call last): File "attrs_pytorch.py", line 18, in <module> print(list(module.parameters())) File "/usr/local/anaconda3/envs/bgg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 814, in parameters for name, param in self.named_parameters(recurse=recurse): File "/usr/local/anaconda3/envs/bgg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 840, in named_parameters for elem in gen: File "/usr/local/anaconda3/envs/bgg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 784, in _named_members for module_prefix, module in modules: File "/usr/local/anaconda3/envs/bgg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 975, in named_modules if self not in memo: TypeError: unhashable type: 'AttrsModule' The expected output is: $python attrs_pytorch.py [] []
You may get it to work with one workaround and using dataclasses (which you should, as it's in the standard Python library since 3.7, which you are apparently using). Though I think a simple __init__ is more readable. One could do something similar using the attrs library (disabling hashing); I just prefer the solution using standard libraries if possible. The reason (if you manage to handle hashing-related errors) is that you are not calling torch.nn.Module.__init__(), which generates the _parameters attribute and other framework-specific data. First, solving hashing with dataclasses: @dataclasses.dataclass(eq=False) class AttrsModule(torch.nn.Module): pass This solves hashing issues as, as stated by the documentation, section about hash and eq: By default, dataclass() will not implicitly add a hash() method unless it is safe to do so. which is needed by PyTorch so the model can be used in the C++ backend (correct me if I'm wrong), furthermore: If eq is false, hash() will be left untouched meaning the hash() method of the superclass will be used (if the superclass is object, this means it will fall back to id-based hashing). So you are fine using torch.nn.Module's __hash__ function (refer to the documentation of dataclasses if any further errors arise). This leaves you with the error: AttributeError: 'AttrsModule' object has no attribute '_parameters' Because the torch.nn.Module constructor is not called. Quick and dirty fix: @dataclasses.dataclass(eq=False) class AttrsModule(torch.nn.Module): def __post_init__(self): super().__init__() __post_init__ is a function called after __init__ (who would have guessed), where you can initialize torch-specific parameters. Still, I would advise against using those two modules together. For example, you are destroying PyTorch's __repr__ using your code, so repr=False should be passed to the dataclasses.dataclass constructor, which gives this final code (obvious collisions between libraries eliminated I hope): import dataclasses import torch class RegularModule(torch.nn.Module): pass @dataclasses.dataclass(eq=False, repr=False) class AttrsModule(torch.nn.Module): def __post_init__(self): super().__init__() module = RegularModule() print(list(module.parameters())) module = AttrsModule() print(list(module.parameters())) For more on attrs please see hynek's answer and his blog post.
https://stackoverflow.com/questions/57291307/
Normalization (Feature scaling) of Point Cloud Dataset
I have a point cloud dataset where a single data sample is represented as N * 3, where N is the number of points. Similarly, I have "M" point clouds in the dataset. The range of these point clouds varies largely. Some have very large values (e.g., on the order of 10^6 for all N points) while some have very small values (e.g., on the order of 10^1 for all N points). I want to normalize each point cloud. How should I do that? Q1. Should I normalize (min-max) each point cloud (a single point cloud N*3) individually along the x, y, z dimensions by choosing the min and max from this point cloud only? In this scenario, for all "M" point clouds we have different min-max values. The same is done for the output point cloud. Please view the image for more understanding https://i.stack.imgur.com/tKauw.jpg Q2. Or should I normalize (min-max) all point clouds along the x, y, z dimensions by choosing the min and max (from M * N * 3 in the x, y, z columns) from the entire dataset? In this scenario, all "M" point clouds share the same min-max values. Please view the image for more understanding https://i.stack.imgur.com/0HAhn.jpg
You should use option 1. The point of normalisation is to standardise the inputs to your network - in the case of min-max normalisation this will map your 3 features (I assume xyz) to the interval [0, 1]. Option 2 is undesirable as it standardises the normalisation instead. Since the centroids of your point clouds are highly variable, a single global min-max would preserve that variability and increase the difficulty of input discrimination for your model. You could also consider standardising using the per-axis standard deviation instead of min-max.
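A minimal sketch of option 1 for a single (N, 3) cloud (per-axis min-max to [0, 1]; apply it to each of the M clouds independently):
import torch

def normalise_cloud(pc):
    # pc: (N, 3) tensor; min/max are taken per axis (x, y, z)
    mins, _ = pc.min(dim=0)
    maxs, _ = pc.max(dim=0)
    return (pc - mins) / (maxs - mins)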
https://stackoverflow.com/questions/57295276/
Combine index and value of a tensor to form a new tensor
I have a tensor like a = torch.tensor([1,2,0,1,2]). I want to calculate a tensor b which has indices and values of tensor a such that: b = tensor([ [0,1], [1,2], [2,0], [3,1], [4,2] ]). Edit: a[i] is >= 0.
One way of doing this is: b = torch.IntTensor(list(zip(range(a.size(0)), a.numpy()))) Output: tensor([[0, 1], [1, 2], [2, 0], [3, 1], [4, 2]], dtype=torch.int32) Alternatively, you can also use torch.cat() as below: a = torch.tensor([1,2,0,1,2]) indices = torch.arange(a.size(0)) res = torch.cat([indices.view(-1, 1), a.view(-1, 1)], 1) Output: tensor([[0, 1], [1, 2], [2, 0], [3, 1], [4, 2]])
https://stackoverflow.com/questions/57306205/
Why does setting backward(retain_graph=True) use up lot GPU memory?
I need to backpropagate through my neural network multiple times, so I set backward(retain_graph=True). However, this is causing RuntimeError: CUDA out of memory I don't understand why this is. Is the number of variables or weights doubling? Shouldn't the amount of memory used remain the same regardless of how many times backward() is called?
The source of the issue: You are right that no matter how many times we call the backward function, the memory should not increase theoretically. Yet your issue is not because of the backpropagation, but the retain_graph variable that you have set to True when calling the backward function. When you run your network by passing a set of input data, you call the forward function, which will create a "computation graph". A computation graph contains all the operations that your network has performed. Then when you call the backward function, the saved computation graph will "basically" be run backward to know which weight should be adjusted in which direction (these are the gradients). So PyTorch saves the computation graph in memory in order to call the backward function. After the backward function has been called and the gradients have been calculated, we free the graph from memory, as explained in the doc https://pytorch.org/docs/stable/autograd.html : retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph. Then usually during training we apply the gradients to the network in order to minimise the loss, then we re-run the network, and so we create a new computation graph. Yet we have only one graph in memory at any given time. The issue: If you set retain_graph to True when you call the backward function, you will keep in memory the computation graphs of ALL the previous runs of your network. And since on every run of your network you create a new computation graph, if you store them all in memory, you can and will eventually run out of memory. On the first iteration and run of your network, you will have only one graph in memory. Yet on the 10th run of the network, you have 10 graphs in memory. And on the 10000th run you have 10000 in memory. It is not sustainable, and it is understandable why it is not recommended in the docs. So even if it may seem that the issue is the backpropagation, it is actually the storing of the computation graphs, and since we usually call the forward and backward functions once per iteration or network run, the confusion is understandable. Solution: What you need to do is find a way to make your network and architecture work without using retain_graph. Using it will make it almost impossible to train your network, since each iteration increases the usage of your memory and decreases the speed of training, and in your case even causes you to run out of memory. You did not mention why you need to backpropagate multiple times, yet it is rarely needed, and I do not know of a case where it cannot be "worked around". For example, if you need to access variables or weights of previous runs you could save them inside variables and later access them, instead of trying to do a new backpropagation. You likely need to backpropagate multiple times for another reason; yet believe me, having been in this situation myself, there is likely a way to accomplish what you are trying to do without storing the previous computation graphs. If you want to share why you need to backpropagate multiple times, maybe others and I could help you more. More about the backward process: If you want to learn more about the backward process it is called the "Jacobian-vector product". It is a bit complex and is handled by PyTorch.
I do not yet fully understand it, yet this resource seems like a good starting point, as it seems less intimidating than the PyTorch documentation (in terms of algebra): https://mc.ai/how-pytorch-backward-function-works/
https://stackoverflow.com/questions/57317366/
Why torch.sum() before doing .backward()?
I can see what this code below from this video is trying to do. But the sum in y = torch.sum(x**2) confuses me. With the sum operation, y becomes a tensor with one single value. As I understand .backward() as calculating derivatives, why would we want to use sum and reduce y to one value? import torch import matplotlib.pyplot as plt x = torch.linspace(-10.0,10.0,10, requires_grad=True) Y = x**2 y = torch.sum(x**2) y.backward() plt.plot(x.detach().numpy(), Y.detach().numpy(), label="Y") plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label="derivatives") plt.legend()
You can only compute partial derivatives for a scalar function. What backward() gives you is d loss/d parameter, and you expect a single gradient value per parameter/variable. Had your loss function been a vector function, i.e., mapping from multiple inputs to multiple outputs, you would have ended up with multiple gradients per parameter/variable. Please see this answer for more information.
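Equivalently, instead of summing first you can hand backward() an explicit "upstream" gradient of ones, which yields the same per-element derivatives:
import torch

x = torch.linspace(-10.0, 10.0, 10, requires_grad=True)
Y = x ** 2
Y.backward(torch.ones_like(Y))  # same x.grad as summing first: 2x
print(x.grad)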
https://stackoverflow.com/questions/57320830/
ValueError: optimizer got an empty parameter list
I create the following simple linear class: class Decoder(nn.Module): def __init__(self, K, h=()): super().__init__() h = (K,)+h+(K,) self.layers = [nn.Linear(h1,h2) for h1,h2 in zip(h, h[1:])] def forward(self, x): for layer in self.layers[:-1]: x = F.relu(layer(x)) return self.layers[-1](x) However, when I try to put the parameters in a optimizer class I get the error ValueError: optimizer got an empty parameter list. decoder = Decoder(4) LR = 1e-3 opt = optim.Adam(decoder.parameters(), lr=LR) Is there something I'm doing obviously wrong with the class definition?
Since you store your layers in a regular pythonic list inside your Decoder, Pytorch has no way of telling these members of the self.list are actually sub modules. Convert this list into pytorch's nn.ModuleList and your problem will be solved class Decoder(nn.Module): def __init__(self, K, h=()): super().__init__() h = (K,)+h+(K,) self.layers = nn.ModuleList(nn.Linear(h1,h2) for h1,h2 in zip(h, h[1:]))
https://stackoverflow.com/questions/57320958/
pytorch - loss.backward() and optimizer.step() in eval mode with batch norm layers?
I have a ResNet-8 network I am using for a project of Domain Adaptation over images, basically I have trained the network over a dataset and now I want to evaluate it over another dataset simulating a real time environment where I try to predict one image at a time, but here comes the fun part: The way I want to do the evaluation on the target dataset is by doing, for each image, a forward pass in train mode so that the batch norm layers statistics are updated (with torch.no_grad(), since I don't want to update the network parameters then, but only "adapt" the batch norm layers), and then do another forward pass in eval mode to get the actual prediction so that the batch norm layers will use mean and variance based on the whole set of images seen so far (and not only those of that batch, a single image in this case): optimizer.zero_grad() model.train() with torch.no_grad(): output_train = model(inputs) model.eval() output_eval = model(inputs) loss = criterion(output_eval, targets) The idea is that I do domain adaptation just by updating the batch norm layers to the new target distribution. Then after doing this let's say I get an accuracy of 60%. Now if I add these two other lines I am able to achieve something like 80% accuracy: loss.backward() optimizer.step() Therefore my question is what happens if I do backward() and step() while in eval mode? Because I know about the different behaviour of batch norm and dropout layers between train and eval mode and I know about torch.no_grad() and how gradients are calculated and then parameters updated by the optimizer, but I wasn't able to find any information about my specific problem. I think that since the model is then set in eval mode, those two lines should be useless, but something clearly happens, does this have something to do with the affine parameters of the batch norm layers? UPDATE: Ok I misunderstood something: eval mode does not block parameters from being updated, it only changes the behaviour of some layers (batch norm and dropout) during the forward pass, am I right? Therefore with those two lines I am actually training the network, hence the better accuracy. Anyway does this change something if batch norm affine is set to true? Are those parameters considered as "normal" parameters to be updated during optimizer.step() or is it different?
eval mode does not block parameters to be updated, it only changes the behaviour of some layers (batch norm and dropout) during the forward pass, am I right? True. Therefore with those two lines I am actually training the network, hence the better accuracy. Anyway does this change something if batch norm affine is set to true? Are those parameters considered as "normal" parameters to be updated during optimizer.step() or is it different? BN parameters are updated during optimizer step. Look: if self.affine: self.weight = Parameter(torch.Tensor(num_features)) self.bias = Parameter(torch.Tensor(num_features)) else: self.register_parameter('weight', None) self.register_parameter('bias', None)
https://stackoverflow.com/questions/57323023/
Convolutional Autoencoder in Pytorch for Dummies
I am here to ask some more general questions about Pytorch and Convolutional Autoencoders. If I only use Convolutional Layers (FCN), do I even have to care about the input shape? And then how do I choose the number of featuremaps best? Does a ConvTranspose2d Layer automatically unpool? Can you spot any errors or unconventional code in my example? By the way, I want to make a symmetrical Convolutional Autoencoder to colorize black and white images with different image sizes. self.encoder = nn.Sequential ( # conv 1 nn.Conv2d(in_channels=3, out_channels=512, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.MaxPool2d(kernel_size=2, stride=2), # 1/2 nn.BatchNorm2d(512), # conv 2 nn.Conv2d(in_channels=512, out_channels=256, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.MaxPool2d(kernel_size=2, stride=2), # 1/4 nn.BatchNorm2d(256), # conv 3 nn.Conv2d(in_channels=256, out_channels=128, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.MaxPool2d(kernel_size=2, stride=2), # 1/8 nn.BatchNorm2d(128), # conv 4 nn.Conv2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.MaxPool2d(kernel_size=2, stride=2), #1/16 nn.BatchNorm2d(64) ) self.encoder = nn.Sequential ( # conv 5 nn.ConvTranspose2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.BatchNorm2d(128), # conv 6 nn.ConvTranspose2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.BatchNorm2d(256), # conv 7 nn.ConvTranspose2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1), nn.ReLU, nn.BatchNorm2d(512), # conv 8 nn.ConvTranspose2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), nn.Softmax() ) def forward(self, x): h = x h = self.encoder(h) h = self.decoder(h) return h
No, you don't need to care about input width and height with a fully convolutional model. But you should probably ensure that each downsampling operation in the encoder is matched by a corresponding upsampling operation in the decoder. I'm not sure what you mean by unpooling. If you mean upsampling (increasing spatial dimensions), then this is what the stride parameter is for. In PyTorch, a transpose convolution with stride=2 will upsample by a factor of 2. Note, however, that instead of a transpose convolution, many practitioners prefer to use bilinear upsampling followed by a regular convolution. This is one reason why. If, on the other hand, you mean actual unpooling, then you should look at the documentation of torch.MaxUnpool2d. You need to collect maximal value indices from the MaxPool2d operation and feed them into MaxUnpool2d. The general consensus seems to be that you should increase the number of feature maps as you downsample. Your code appears to do the reverse. Consecutive powers of 2 seem like a good place to start. It's hard to suggest a better rule of thumb. You probably need to experiment a little. As for spotting errors: nn.ReLU must be instantiated (nn.ReLU(), with parentheses) to be usable inside nn.Sequential, and the second nn.Sequential block is assigned to self.encoder when it should be self.decoder (otherwise it overwrites the encoder and forward() fails). In other notes, I'm not sure why you apply softmax to the encoder output.
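For reference, the bilinear-upsampling-plus-convolution pattern mentioned above could look like this (a sketch, keeping the 3x3 kernels and channel counts from the question's first decoder block; note the instantiated nn.ReLU()):
import torch.nn as nn

up_block = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.BatchNorm2d(128),
)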
https://stackoverflow.com/questions/57324308/
pytorch: "multi-target not supported" error message
So I want to classify some (3, 50, 50) pictures. First I loaded the dataset from the file without a dataloader or batches, and it worked. Now, after adding both things, I get this error: RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 I found a lot of answers on the internet, mostly to use target.squeeze(1), but it doesn't work for me. My target-batch looks like the following: tensor([[1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0]], device='cuda:0') Shouldn't that be okay? Here is the full code (notice that I'm only creating the structure of the model, which I'm going to apply to the full and correct dataset afterwards, because I don't have the full data yet, only 32 pictures and no labels; that's why I added torch.tensor([1, 0]) as a placeholder for all labels): import torch import torch.utils.data import torch.nn as nn import torch.nn.functional as F import torch.optim from torch.autograd import Variable import numpy as np from PIL import Image class Model(nn.Module): def __init__(self): super(Model, self).__init__() # model structur: self.conv1 = nn.Conv2d(3, 10, kernel_size=(5,5), stride=(1,1)) self.conv2 = nn.Conv2d(10, 20, kernel_size=(5,5), stride=(1,1)) # with mapool: output = 20 * (9,9) feature-maps -> flatten self.fc1 = nn.Linear(20*9*9, 250) self.fc2 = nn.Linear(250, 100) self.fc3 = nn.Linear(100, 2) def forward(self, x): # conv layers x = F.relu(self.conv1(x)) # shape: 1, 10, 46, 46 x = F.max_pool2d(x, 2, 2) # shape: 1, 10, 23, 23 x = F.relu(self.conv2(x)) # shape: 1, 20, 19, 19 x = F.max_pool2d(x, 2, 2) # shape: 1, 20, 9, 9 # flatten to dense layer: x = x.view(-1, 20*9*9) # dense layers x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) output = F.log_softmax(self.fc3(x), dim=1) return output class Run: def __init__(self, epochs, learning_rate, dropout, momentum): # load model self.model = Model().cuda() # hyperparameters: self.epochs = epochs self.learning_rate = learning_rate self.dropout = dropout def preporcessing(self): dataset_folder = "/media/theodor/hdd/Programming/BWKI/dataset/bilder/" dataset = [] for i in range(0, 35): sample_image = Image.open(dataset_folder + str(i) + ".png") data = torch.from_numpy(np.array(sample_image)).type("torch.Tensor").reshape(3, 50, 50) target = torch.tensor([[1, 0]]) sample = (data, target) dataset.append(sample) train_loader = torch.utils.data.DataLoader(dataset, batch_size=8) return train_loader def train(self): train_set = self.preporcessing() criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(self.model.parameters(), lr=self.learning_rate) for epoch in range(self.epochs): epoch_loss = 0 for i, data in enumerate(train_set, 0): sample, target = data # set data as cuda varibale sample = Variable(sample.float().cuda()) target = Variable(target.cuda()) # initialize optimizer optimizer.zero_grad() # predict output = self.model(sample) # backpropagation print(output, target.squeeze(1)) loss = criterion(output, target.squeeze(1)) # ERROR MESSAGE: RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 loss.backward() optimizer.step() epoch_loss += loss.item() print("loss after epoch [", epoch, "|", self.epochs, "] :", epoch_loss) run = Run(10, 0.001, 0.5, 0.9) run.train() So I expected it to start training (of course not learning anything because the labels are wrong).
For nn.CrossEntropyLoss the target has to be a single class index from the interval [0, #classes - 1] instead of a one-hot encoded target vector. Your target is [1, 0], thus PyTorch thinks you want multiple labels per input, which is not supported. Replace your one-hot-encoded targets: [1, 0] --> 0 [0, 1] --> 1
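A minimal sketch of that conversion (assuming your targets stay in the one-hot format shown above; argmax along the class dimension recovers the index):

import torch

one_hot = torch.tensor([[1, 0], [1, 0], [0, 1]])  # hypothetical one-hot targets
class_indices = torch.argmax(one_hot, dim=1)      # tensor([0, 0, 1])
# class_indices is what nn.CrossEntropyLoss expects: shape (batch,)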
https://stackoverflow.com/questions/57325844/
Sum all diagonals in feature maps in parallel in PyTorch
Let's say I have a tensor shaped (1, 64, 128, 128) and I want to create a tensor of shape (1, 64, 255) holding the sums of all diagonals for every (128, 128) matrix (there are 1 main, 127 below, 127 above diagonals so in total 255). What I am currently doing is the following: x = torch.rand(1, 64, 128, 128) diag_sums = torch.zeros(1, 64, 255) j = 0 for k in range(-127, 128): diag_sums[j, :, k + 127] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2) This is obviously very slow, since it is using Python loops and is not done in parallel with respect to k. I don't think this can be done using torch.diagonal since the function explicitly uses a single int for the offset parameter. If I could pass a list there, this would work, but I guess it would be complicated to implement (requiring changes in PyTorch itself). I think it could be possible to implement this using torch.einsum, but I cannot think of a way to do it. So this is my question: how do I get the tensor described above?
Have you considered using torch.nn.functional.conv2d? You can sum the diagonals with a diagonal filter sliding across the tensor with appropriate zero padding. import torch import torch.nn.functional as nnf x = torch.rand(1, 64, 128, 128) # the input tensor from the question # construct a diagonal filter using the `eye` function, shape it appropriately f = torch.eye(x.shape[2])[None, None,...].repeat(x.shape[1], 1, 1, 1) # compute the diagonal sum with appropriate zero padding conv_diag_sums = nnf.conv2d(x, f, padding=(x.shape[2]-1,0), groups=x.shape[1])[..., 0] Note that the result has a slightly different order than the one you computed in the loop: diag_sums = torch.zeros(1, 64, 255) for k in range(-127, 128): diag_sums[0, :, 127-k] = torch.diagonal(x, offset=k, dim1=-2, dim2=-1).sum(dim=2) # compare (conv_diag_sums == diag_sums).all() returns True - they are the same.
https://stackoverflow.com/questions/57347896/
Cannot run Flask on Docker (ModuleNotFoundError)
I have been trying to run my Python API (using Flask) with Docker for a while, but keep running into this issue: ModuleNotFoundError: No module named 'flask' Running this on Mac OS X (10.14.5) with Docker version 19.03.1, build 74b1e89. My Dockerfile looks like this: FROM pytorch/pytorch:1.1.0-cuda10.0-cudnn7.5-runtime RUN apt-get update && apt-get install -y python3 RUN apt-get install -y python3-pip RUN apt-get install -y build-essential RUN python3 --version RUN pip3 --version COPY . /app WORKDIR /app RUN pip3 install -r requirements.txt RUN pip3 install flask # just to be sure Flask gets installed RUN which flask RUN ls -alh ENTRYPOINT ["python3"] CMD ["api.py"] Note that I'm extending the PyTorch docker image, because my API will be running an ML model using PyTorch. I tried starting from other images such as ufoym/deepo but with the same Flask error. I stripped my api.py file to look like this (running it locally is no problem): from flask import Flask, request, jsonify #import classifier # commented out for now because I just want the API to work first app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def process(): # TODO return 'test' if __name__ == '__main__': app.run(debug=True, host='0.0.0.0') When I build my Docker image, I get the following output: $ docker build -t flower-image-classifier:latest . Sending build context to Docker daemon 269.9MB Step 1/14 : FROM pytorch/pytorch:1.1.0-cuda10.0-cudnn7.5-runtime ---> 299bfb9e54db Step 2/14 : RUN apt-get update && apt-get install -y python3 ---> Running in f1d7db593022 Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB] Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB] Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB] Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB] Get:5 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [908 kB] Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB] Get:7 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB] Get:8 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB] Get:9 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.7 kB] Get:10 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [573 kB] Get:11 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [6117 B] Get:12 http://archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages [176 kB] Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [1292 kB] Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [13.1 kB] Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [980 kB] Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [19.1 kB] Get:17 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [7942 B] Get:18 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [8532 B] Fetched 16.0 MB in 9s (1744 kB/s) Reading package lists... Reading package lists... Building dependency tree... Reading state information... 
The following additional packages will be installed: dh-python file libmagic1 libmpdec2 libpython3-stdlib libpython3.5-minimal libpython3.5-stdlib mime-support python3-minimal python3.5 python3.5-minimal Suggested packages: python3-doc python3-tk python3-venv python3.5-venv python3.5-doc binfmt-support The following NEW packages will be installed: dh-python file libmagic1 libmpdec2 libpython3-stdlib libpython3.5-minimal libpython3.5-stdlib mime-support python3 python3-minimal python3.5 python3.5-minimal 0 upgraded, 12 newly installed, 0 to remove and 29 not upgraded. Need to get 4885 kB of archives. After this operation, 28.4 MB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5-minimal amd64 3.5.2-2ubuntu0~16.04.5 [524 kB] Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3.5-minimal amd64 3.5.2-2ubuntu0~16.04.5 [1598 kB] Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-minimal amd64 3.5.1-3 [23.3 kB] Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 mime-support all 3.59ubuntu1 [31.0 kB] Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpdec2 amd64 2.4.2-1 [82.6 kB] Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5-stdlib amd64 3.5.2-2ubuntu0~16.04.5 [2134 kB] Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3.5 amd64 3.5.2-2ubuntu0~16.04.5 [165 kB] Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython3-stdlib amd64 3.5.1-3 [6818 B] Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dh-python all 2.20151103ubuntu1.1 [74.1 kB] Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3 amd64 3.5.1-3 [8710 B] Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmagic1 amd64 1:5.25-2ubuntu1.2 [216 kB] Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 file amd64 1:5.25-2ubuntu1.2 [21.2 kB] debconf: delaying package configuration, since apt-utils is not installed Fetched 4885 kB in 7s (663 kB/s) Selecting previously unselected package libpython3.5-minimal:amd64. (Reading database ... 11013 files and directories currently installed.) Preparing to unpack .../libpython3.5-minimal_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking libpython3.5-minimal:amd64 (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package python3.5-minimal. Preparing to unpack .../python3.5-minimal_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking python3.5-minimal (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package python3-minimal. Preparing to unpack .../python3-minimal_3.5.1-3_amd64.deb ... Unpacking python3-minimal (3.5.1-3) ... Selecting previously unselected package mime-support. Preparing to unpack .../mime-support_3.59ubuntu1_all.deb ... Unpacking mime-support (3.59ubuntu1) ... Selecting previously unselected package libmpdec2:amd64. Preparing to unpack .../libmpdec2_2.4.2-1_amd64.deb ... Unpacking libmpdec2:amd64 (2.4.2-1) ... Selecting previously unselected package libpython3.5-stdlib:amd64. Preparing to unpack .../libpython3.5-stdlib_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking libpython3.5-stdlib:amd64 (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package python3.5. Preparing to unpack .../python3.5_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking python3.5 (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package libpython3-stdlib:amd64. Preparing to unpack .../libpython3-stdlib_3.5.1-3_amd64.deb ... 
Unpacking libpython3-stdlib:amd64 (3.5.1-3) ... Selecting previously unselected package dh-python. Preparing to unpack .../dh-python_2.20151103ubuntu1.1_all.deb ... Unpacking dh-python (2.20151103ubuntu1.1) ... Processing triggers for libc-bin (2.23-0ubuntu11) ... Setting up libpython3.5-minimal:amd64 (3.5.2-2ubuntu0~16.04.5) ... Setting up python3.5-minimal (3.5.2-2ubuntu0~16.04.5) ... Setting up python3-minimal (3.5.1-3) ... Selecting previously unselected package python3. (Reading database ... 11957 files and directories currently installed.) Preparing to unpack .../python3_3.5.1-3_amd64.deb ... Unpacking python3 (3.5.1-3) ... Selecting previously unselected package libmagic1:amd64. Preparing to unpack .../libmagic1_1%3a5.25-2ubuntu1.2_amd64.deb ... Unpacking libmagic1:amd64 (1:5.25-2ubuntu1.2) ... Selecting previously unselected package file. Preparing to unpack .../file_1%3a5.25-2ubuntu1.2_amd64.deb ... Unpacking file (1:5.25-2ubuntu1.2) ... Processing triggers for libc-bin (2.23-0ubuntu11) ... Setting up mime-support (3.59ubuntu1) ... Setting up libmpdec2:amd64 (2.4.2-1) ... Setting up libpython3.5-stdlib:amd64 (3.5.2-2ubuntu0~16.04.5) ... Setting up python3.5 (3.5.2-2ubuntu0~16.04.5) ... Setting up libpython3-stdlib:amd64 (3.5.1-3) ... Setting up libmagic1:amd64 (1:5.25-2ubuntu1.2) ... Setting up file (1:5.25-2ubuntu1.2) ... Setting up python3 (3.5.1-3) ... running python rtupdate hooks for python3.5... running python post-rtupdate hooks for python3.5... Setting up dh-python (2.20151103ubuntu1.1) ... Processing triggers for libc-bin (2.23-0ubuntu11) ... Removing intermediate container f1d7db593022 ---> 562a488914b2 Step 3/14 : RUN apt-get install -y python3-pip ---> Running in 3884de8e3194 Reading package lists... Building dependency tree... Reading state information... The following additional packages will be installed: libexpat1 libexpat1-dev libpython3-dev libpython3.5 libpython3.5-dev python-pip-whl python3-dev python3-pkg-resources python3-setuptools python3-wheel python3.5-dev Suggested packages: python-setuptools-doc The following NEW packages will be installed: libexpat1-dev libpython3-dev libpython3.5 libpython3.5-dev python-pip-whl python3-dev python3-pip python3-pkg-resources python3-setuptools python3-wheel python3.5-dev The following packages will be upgraded: libexpat1 1 upgraded, 11 newly installed, 0 to remove and 28 not upgraded. Need to get 40.7 MB of archives. After this operation, 62.4 MB of additional disk space will be used. 
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1 amd64 2.1.0-7ubuntu0.16.04.4 [71.4 kB] Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1-dev amd64 2.1.0-7ubuntu0.16.04.4 [115 kB] Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5 amd64 3.5.2-2ubuntu0~16.04.5 [1360 kB] Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5-dev amd64 3.5.2-2ubuntu0~16.04.5 [37.3 MB] Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython3-dev amd64 3.5.1-3 [6926 B] Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip-whl all 8.1.1-2ubuntu0.4 [1110 kB] Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3.5-dev amd64 3.5.2-2ubuntu0~16.04.5 [413 kB] Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-dev amd64 3.5.1-3 [1186 B] Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python3-pip all 8.1.1-2ubuntu0.4 [109 kB] Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-pkg-resources all 20.7.0-1 [79.0 kB] Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-setuptools all 20.7.0-1 [88.0 kB] Get:12 http://archive.ubuntu.com/ubuntu xenial/universe amd64 python3-wheel all 0.29.0-1 [48.1 kB] debconf: delaying package configuration, since apt-utils is not installed Fetched 40.7 MB in 1min 5s (624 kB/s) (Reading database ... 12000 files and directories currently installed.) Preparing to unpack .../libexpat1_2.1.0-7ubuntu0.16.04.4_amd64.deb ... Unpacking libexpat1:amd64 (2.1.0-7ubuntu0.16.04.4) over (2.1.0-7ubuntu0.16.04.3) ... Selecting previously unselected package libexpat1-dev:amd64. Preparing to unpack .../libexpat1-dev_2.1.0-7ubuntu0.16.04.4_amd64.deb ... Unpacking libexpat1-dev:amd64 (2.1.0-7ubuntu0.16.04.4) ... Selecting previously unselected package libpython3.5:amd64. Preparing to unpack .../libpython3.5_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking libpython3.5:amd64 (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package libpython3.5-dev:amd64. Preparing to unpack .../libpython3.5-dev_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking libpython3.5-dev:amd64 (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package libpython3-dev:amd64. Preparing to unpack .../libpython3-dev_3.5.1-3_amd64.deb ... Unpacking libpython3-dev:amd64 (3.5.1-3) ... Selecting previously unselected package python-pip-whl. Preparing to unpack .../python-pip-whl_8.1.1-2ubuntu0.4_all.deb ... Unpacking python-pip-whl (8.1.1-2ubuntu0.4) ... Selecting previously unselected package python3.5-dev. Preparing to unpack .../python3.5-dev_3.5.2-2ubuntu0~16.04.5_amd64.deb ... Unpacking python3.5-dev (3.5.2-2ubuntu0~16.04.5) ... Selecting previously unselected package python3-dev. Preparing to unpack .../python3-dev_3.5.1-3_amd64.deb ... Unpacking python3-dev (3.5.1-3) ... Selecting previously unselected package python3-pip. Preparing to unpack .../python3-pip_8.1.1-2ubuntu0.4_all.deb ... Unpacking python3-pip (8.1.1-2ubuntu0.4) ... Selecting previously unselected package python3-pkg-resources. Preparing to unpack .../python3-pkg-resources_20.7.0-1_all.deb ... Unpacking python3-pkg-resources (20.7.0-1) ... Selecting previously unselected package python3-setuptools. Preparing to unpack .../python3-setuptools_20.7.0-1_all.deb ... Unpacking python3-setuptools (20.7.0-1) ... Selecting previously unselected package python3-wheel. Preparing to unpack .../python3-wheel_0.29.0-1_all.deb ... 
Unpacking python3-wheel (0.29.0-1) ... Processing triggers for libc-bin (2.23-0ubuntu11) ... Setting up libexpat1:amd64 (2.1.0-7ubuntu0.16.04.4) ... Setting up libexpat1-dev:amd64 (2.1.0-7ubuntu0.16.04.4) ... Setting up libpython3.5:amd64 (3.5.2-2ubuntu0~16.04.5) ... Setting up libpython3.5-dev:amd64 (3.5.2-2ubuntu0~16.04.5) ... Setting up libpython3-dev:amd64 (3.5.1-3) ... Setting up python-pip-whl (8.1.1-2ubuntu0.4) ... Setting up python3.5-dev (3.5.2-2ubuntu0~16.04.5) ... Setting up python3-dev (3.5.1-3) ... Setting up python3-pip (8.1.1-2ubuntu0.4) ... Setting up python3-pkg-resources (20.7.0-1) ... Setting up python3-setuptools (20.7.0-1) ... Setting up python3-wheel (0.29.0-1) ... Processing triggers for libc-bin (2.23-0ubuntu11) ... Removing intermediate container 3884de8e3194 ---> 388a6aa0bb5f Step 4/14 : RUN apt-get install -y build-essential ---> Running in 2e8c8dc95d9e Reading package lists... Building dependency tree... Reading state information... build-essential is already the newest version (12.1ubuntu2). 0 upgraded, 0 newly installed, 0 to remove and 28 not upgraded. Removing intermediate container 2e8c8dc95d9e ---> 3c08d3467777 Step 5/14 : RUN python3 --version ---> Running in 027dcf1dc8b6 Python 3.6.8 :: Anaconda, Inc. Removing intermediate container 027dcf1dc8b6 ---> 0a8389f2fe13 Step 6/14 : RUN pip3 --version ---> Running in 1df781383970 pip 8.1.1 from /usr/lib/python3/dist-packages (python 3.5) Removing intermediate container 1df781383970 ---> ad30a1cd5323 Step 7/14 : COPY . /app ---> db6769b7ba80 Step 8/14 : WORKDIR /app ---> Running in c234276bf1f6 Removing intermediate container c234276bf1f6 ---> face402e69b6 Step 9/14 : RUN pip3 install -r requirements.txt ---> Running in d89df8c75e5a Collecting Flask==1.1.1 (from -r requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/9b/93/628509b8d5dc749656a9641f4caf13540e2cdec85276964ff8f43bbb1d3b/Flask-1.1.1-py2.py3-none-any.whl (94kB) Collecting numpy==1.16.4 (from -r requirements.txt (line 2)) Downloading https://files.pythonhosted.org/packages/bb/ef/d5a21cbc094d3f4d5b5336494dbcc9550b70c766a8345513c7c24ed18418/numpy-1.16.4-cp35-cp35m-manylinux1_x86_64.whl (17.2MB) Collecting Pillow==6.1.0 (from -r requirements.txt (line 3)) Downloading https://files.pythonhosted.org/packages/d6/98/0d360dbc087933679398d73187a503533ec0547ba4ffd2115365605559cc/Pillow-6.1.0-cp35-cp35m-manylinux1_x86_64.whl (2.1MB) Collecting Werkzeug>=0.15 (from Flask==1.1.1->-r requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/d1/ab/d3bed6b92042622d24decc7aadc8877badf18aeca1571045840ad4956d3f/Werkzeug-0.15.5-py2.py3-none-any.whl (328kB) Collecting itsdangerous>=0.24 (from Flask==1.1.1->-r requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/76/ae/44b03b253d6fade317f32c24d100b3b35c2239807046a4c953c7b89fa49e/itsdangerous-1.1.0-py2.py3-none-any.whl Collecting click>=5.1 (from Flask==1.1.1->-r requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl (81kB) Collecting Jinja2>=2.10.1 (from Flask==1.1.1->-r requirements.txt (line 1)) Downloading https://files.pythonhosted.org/packages/1d/e7/fd8b501e7a6dfe492a433deb7b9d833d39ca74916fa8bc63dd1a4947a671/Jinja2-2.10.1-py2.py3-none-any.whl (124kB) Collecting MarkupSafe>=0.23 (from Jinja2>=2.10.1->Flask==1.1.1->-r requirements.txt (line 1)) Downloading 
https://files.pythonhosted.org/packages/6e/57/d40124076756c19ff2269678de7ae25a14ebbb3f6314eb5ce9477f191350/MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl Installing collected packages: Werkzeug, itsdangerous, click, MarkupSafe, Jinja2, Flask, numpy, Pillow Successfully installed Flask-1.1.1 Jinja2-2.10.1 MarkupSafe-1.1.1 Pillow-6.1.0 Werkzeug-0.15.5 click-7.0 itsdangerous-1.1.0 numpy-1.16.4 You are using pip version 8.1.1, however version 19.2.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Removing intermediate container d89df8c75e5a ---> 95766b1e174e Step 10/14 : RUN pip3 install flask ---> Running in 97e806168780 Requirement already satisfied (use --upgrade to upgrade): flask in /usr/local/lib/python3.5/dist-packages Requirement already satisfied (use --upgrade to upgrade): itsdangerous>=0.24 in /usr/local/lib/python3.5/dist-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.10.1 in /usr/local/lib/python3.5/dist-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.15 in /usr/local/lib/python3.5/dist-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): click>=5.1 in /usr/local/lib/python3.5/dist-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): MarkupSafe>=0.23 in /usr/local/lib/python3.5/dist-packages (from Jinja2>=2.10.1->flask) You are using pip version 8.1.1, however version 19.2.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Removing intermediate container 97e806168780 ---> dd8d93e7b8bc Step 11/14 : RUN which flask ---> Running in 0ac4678d5ecd /usr/local/bin/flask Removing intermediate container 0ac4678d5ecd ---> e9325cc606ec Step 12/14 : RUN ls -alh ---> Running in e249ecafd7fa total 1.8M drwxr-xr-x 6 root root 4.0K Aug 4 14:29 . drwxr-xr-x 1 root root 4.0K Aug 4 14:30 .. 
-rw-r--r-- 1 root root 11K Aug 2 11:15 .DS_Store drwxr-xr-x 5 root root 4.0K Jul 28 11:45 .git -rw-r--r-- 1 root root 18 Aug 2 13:52 .gitignore -rw-r--r-- 1 root root 878 Aug 4 14:27 Dockerfile -rw-r--r-- 1 root root 1004K Jul 24 12:49 Image Classifier Project.html -rw-r--r-- 1 root root 690K Jul 24 12:50 Image Classifier Project.ipynb -rw-r--r-- 1 root root 1.1K Feb 6 2018 LICENSE -rw-r--r-- 1 root root 686 Aug 2 14:04 README.md drwxr-xr-x 2 root root 4.0K Aug 4 13:05 __pycache__ -rw-r--r-- 1 root root 1018 Aug 4 14:12 api.py drwxr-xr-x 2 root root 4.0K Jul 20 12:32 assets -rw-r--r-- 1 root root 2.2K Feb 6 2018 cat_to_name.json drwxr-xr-x 2 root root 4.0K Jul 24 12:06 checkpoints -rw-r--r-- 1 root root 245 Aug 2 21:48 environment.yml -rw-r--r-- 1 root root 921 Aug 4 09:11 environment0.yml -rw-r--r-- 1 root root 8.6K Aug 4 13:02 flower_classifier.py -rw-r--r-- 1 root root 1.1K Aug 2 19:17 predict.py -rw-r--r-- 1 root root 41 Aug 4 12:54 requirements.txt -rw-r--r-- 1 root root 1.6K Aug 2 18:58 train.py Removing intermediate container e249ecafd7fa ---> dfe9737330ac Step 13/14 : ENTRYPOINT ["python3"] ---> Running in 57782c4a0d03 Removing intermediate container 57782c4a0d03 ---> 7e6b0d1d5656 Step 14/14 : CMD ["api.py"] ---> Running in 8400de10dc2d Removing intermediate container 8400de10dc2d ---> 33733bebf60d Successfully built 33733bebf60d Successfully tagged flower-image-classifier:latest When I run the container, I get the following error: $ docker run -p 5000:5000 flower-image-classifier Traceback (most recent call last): File "api.py", line 1, in <module> from flask import Flask, request, jsonify ModuleNotFoundError: No module named 'flask' My question to the Docker/Python experts here: how is it possible that flask is not found, when pip has clearly installed it according to the logs? Note that I also tried starting from the continuumio/miniconda3 image at first, just like in this issue: Docker Flask ModuleNotFoundError: No module named 'flask' I created this environment.yml: name: fic channels: - pytorch - defaults dependencies: - flask=1.1.1 - numpy=1.16.4 - numpy-base=1.16.4 - pillow=6.1.0 - python==3.6.9 - pytorch=1.1.0 - torchvision=0.3.0 - pip: - pip==19.2.1 prefix: //anaconda3/envs/fic Then I modified my Dockerfile a bit to extend from that image, and to run this to install the dependencies with Conda: RUN echo "source activate $(head -1 ./environment.yml | cut -d' ' -f2)" > ~/.bashrc ENV PATH /opt/conda/envs/$(head -1 ./environment.yml | cut -d' ' -f2)/bin:$PATH Unfortunately, this downloads & extracts everything every time I build the Docker image, which takes a while and the downloads often randomly fail: CondaError: Downloaded bytes did not match Content-Length. And when it finally built successfully, I still got the same ModuleNotFoundError for Flask. What is wrong here? Thanks for reading
You just need to specify the correct Python version by changing your entrypoint to: ENTRYPOINT ["python3.5"] The build log shows why: the base image ships Anaconda, so python3 resolves to Python 3.6.8 (step 5 of your build), while the apt-installed pip3 belongs to the system Python 3.5 (step 6). Flask was therefore installed into /usr/local/lib/python3.5/dist-packages, where the Python 3.6 interpreter that runs api.py never looks.
https://stackoverflow.com/questions/57357154/
What is the point of the comma operator when a tuple only has one entry in python?
This section of code is taken from one of the PyTorch tutorials; I have just removed the non-essential parts so it doesn't error out and added some print statements. My question is: why do the two print statements I added produce slightly different results? Is this a tuple with nothing in the second half of it? I am confused by the comma without anything after it before the assignment operator. import torch class MyReLU(torch.autograd.Function): @staticmethod def forward(ctx, input): ctx.save_for_backward(input) return input.clamp(min=0) @staticmethod def backward(ctx, grad_output): input, = ctx.saved_tensors print("ctx ", ctx.saved_tensors) print("inputs ", input) grad_input = grad_output.clone() grad_input[input < 0] = 0 return grad_input relu = MyReLU.apply y_pred = relu(x.mm(w1)).mm(w2) loss = (y_pred - y).pow(2).sum() loss.backward() Output ctx (tensor([[-34.2381, 18.6334, 8.8368, ..., 13.7337, -31.5657, -11.8838], [-25.5597, -6.2847, 9.9412, ..., -75.0621, 5.0451, -32.9348], [-56.6591, -40.0830, 2.4311, ..., -2.8988, -18.9742, -74.0132], ..., [ -6.4023, -30.3526, -73.9649, ..., 1.8587, -23.9617, -11.6951], [ -3.6425, 34.5828, 27.7200, ..., -34.3878, -19.7250, 11.1960], [ 16.0137, -24.0628, 14.4008, ..., -5.4443, 9.9499, -18.1259]], grad_fn=<MmBackward>),) inputs tensor([[-34.2381, 18.6334, 8.8368, ..., 13.7337, -31.5657, -11.8838], [-25.5597, -6.2847, 9.9412, ..., -75.0621, 5.0451, -32.9348], [-56.6591, -40.0830, 2.4311, ..., -2.8988, -18.9742, -74.0132], ..., [ -6.4023, -30.3526, -73.9649, ..., 1.8587, -23.9617, -11.6951], [ -3.6425, 34.5828, 27.7200, ..., -34.3878, -19.7250, 11.1960], [ 16.0137, -24.0628, 14.4008, ..., -5.4443, 9.9499, -18.1259]], grad_fn=<MmBackward>)
It's just an edge-case of unpacking a single-element list or tuple. a, = [1] print(type(a), a) # <class 'int'> 1 Without the comma, a would have been assigned the entire list: a = [1] print(type(a), a) # <class 'list'> [1] And the same goes for a tuple: a, = (1,) # have to use , with literal single-tuples, because (1) is just 1 print(type(a), a) # <class 'int'> 1 a = (1,) # have to use , with literal single-tuples, because (1) is just 1 print(type(a), a) # <class 'tuple'> (1,)
https://stackoverflow.com/questions/57357963/
What considerations should be taken into account with directly setting Pytorch model variables and optimizer parameters?
From the PyTorch forums https://discuss.pytorch.org/t/layer-weight-vs-weight-data/24271/2 it is mentioned that setting variable weights directly could result in "Using .data on the other side would work, but is generally not recommended, as changing it after the model was used would yield weird results and autograd cannot throw an error." I am wondering what would cause weird results. I am also thinking of setting optimizer parameters directly, specifically the momentum/sum of gradients for optimizers which have those parameters. Are there any considerations needed for that case as well?
It is perfectly legal to update PyTorch layer weights directly. Here is how we can alter the weights without a problem: lin = nn.Linear(10, 2) torch.nn.init.xavier_uniform_(lin.weight) The code above actually runs under torch.no_grad(): def _no_grad_uniform_(tensor, a, b): with torch.no_grad(): return tensor.uniform_(a, b) See how torch.no_grad() helps us in the next example. lin = nn.Linear(10, 2) with torch.no_grad(): lin.weight[0][0] = 1. x = torch.randn(1, 10) output = lin(x) output.mean().backward() And if we don't use it: lin = nn.Linear(10, 2) lin.weight[0][0] = 1. x = torch.randn(1, 10) output = lin(x) output.mean().backward() We end up with: RuntimeError: leaf variable has been moved into the graph interior So you can do it, but only inside with torch.no_grad():. This is because every operation on a PyTorch tensor is recorded by autograd if requires_grad is set to True. If we assign lin.weight[0][0] = 1. outside of no_grad, the assignment is recorded as grad_fn=<CopySlices>. We don't want that recorded, because it is part of our layer setup and not part of our computation.
https://stackoverflow.com/questions/57362625/
Can ReduceLrOnPlateau scheduler in pytorch use test set metric for decreasing learning rate?
Hi, I am currently learning the use of schedulers in deep learning in PyTorch. I came across the following code: import torch import torch.nn as nn import torchvision.transforms as transforms import torchvision.datasets as dsets # Set seed torch.manual_seed(0) # Where to add a new import from torch.optim.lr_scheduler import ReduceLROnPlateau ''' STEP 1: LOADING DATASET ''' train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) test_dataset = dsets.MNIST(root='./data', train=False, transform=transforms.ToTensor()) ''' STEP 2: MAKING DATASET ITERABLE ''' batch_size = 100 n_iters = 6000 num_epochs = n_iters / (len(train_dataset) / batch_size) num_epochs = int(num_epochs) train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False) ''' STEP 3: CREATE MODEL CLASS ''' class FeedforwardNeuralNetModel(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim): super(FeedforwardNeuralNetModel, self).__init__() # Linear function self.fc1 = nn.Linear(input_dim, hidden_dim) # Non-linearity self.relu = nn.ReLU() # Linear function (readout) self.fc2 = nn.Linear(hidden_dim, output_dim) def forward(self, x): # Linear function out = self.fc1(x) # Non-linearity out = self.relu(out) # Linear function (readout) out = self.fc2(out) return out ''' STEP 4: INSTANTIATE MODEL CLASS ''' input_dim = 28*28 hidden_dim = 100 output_dim = 10 model = FeedforwardNeuralNetModel(input_dim, hidden_dim, output_dim) ''' STEP 5: INSTANTIATE LOSS CLASS ''' criterion = nn.CrossEntropyLoss() ''' STEP 6: INSTANTIATE OPTIMIZER CLASS ''' learning_rate = 0.1 optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, nesterov=True) ''' STEP 7: INSTANTIATE STEP LEARNING SCHEDULER CLASS ''' # lr = lr * factor # mode='max': look for the maximum validation accuracy to track # patience: number of epochs - 1 where loss plateaus before decreasing LR # patience = 0, after 1 bad epoch, reduce LR # factor = decaying factor scheduler = ReduceLROnPlateau(optimizer, mode='max', factor=0.1, patience=0, verbose=True) ''' STEP 7: TRAIN THE MODEL ''' iter = 0 for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): # Load images as Variable images = images.view(-1, 28*28).requires_grad_() # Clear gradients w.r.t. parameters optimizer.zero_grad() # Forward pass to get output/logits outputs = model(images) # Calculate Loss: softmax --> cross entropy loss loss = criterion(outputs, labels) # Getting gradients w.r.t. parameters loss.backward() # Updating parameters optimizer.step() iter += 1 if iter % 500 == 0: # Calculate Accuracy correct = 0 total = 0 # Iterate through test dataset for images, labels in test_loader: # Load images to a Torch Variable images = images.view(-1, 28*28) # Forward pass only to get logits/output outputs = model(images) # Get predictions from the maximum value _, predicted = torch.max(outputs.data, 1) # Total number of labels total += labels.size(0) # Total correct predictions # Without .item(), it is a uint8 tensor which will not work when you pass this number to the scheduler correct += (predicted == labels).sum().item() accuracy = 100 * correct / total # Print Loss # print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.data[0], accuracy)) # Decay Learning Rate, pass validation accuracy for tracking at every epoch print('Epoch {} completed'.format(epoch)) print('Loss: {}. Accuracy: {}'.format(loss.item(), accuracy)) print('-'*20) scheduler.step(accuracy) I am using the above strategy. The only thing I am not able to understand is how they are using the test data to track accuracy and decrease the learning rate on that basis via the scheduler. It is the last line of the code. Can we, during training, show the test accuracy to the scheduler and ask it to reduce the learning rate? I found a similar thing in the resnet main.py on GitHub too. Can someone please clarify?
I think there might be some confusion regarding the term test here. Difference between test and validation data What the code actually refers to by test is the validation set, not the actual test set. The difference is that the validation set is used during training to see how well the model generalizes. Normally people just cut off a part of the training data and use that for validation. To me it seems like your code is using the same data for training and validation, but that's just my assumption because I don't know what ./data looks like. To work in a strictly scientific way, your model should never see actual test data during training, only training and validation data. This way we can assess the model's actual ability to generalize on unseen data after training. Reducing learning rate based on validation accuracy The reason why you use validation data (called test data in your case) to reduce the learning rate is probably that if you did this using the actual training data and training accuracy, the model would be more likely to overfit. Why? A plateau of the training accuracy does not necessarily imply a plateau of the validation accuracy, and the other way round. You could be stepping in a promising direction regarding the validation accuracy (and thus in a direction of parameters that generalize well) and suddenly reduce the learning rate just because there was a plateau (or none) in the training accuracy.
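As a minimal sketch of "cutting off a part of the training data" (reusing train_dataset from the question; the 90/10 split ratio is an arbitrary assumption):

import torch

val_size = len(train_dataset) // 10           # hypothetical 10% validation share
train_size = len(train_dataset) - val_size
train_subset, val_subset = torch.utils.data.random_split(
    train_dataset, [train_size, val_size])

train_loader = torch.utils.data.DataLoader(train_subset, batch_size=100, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_subset, batch_size=100, shuffle=False)
# ...train on train_loader, compute val_acc on val_loader...
# scheduler.step(val_acc)  # step the scheduler on validation accuracy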
https://stackoverflow.com/questions/57375240/
How to transform BERT's network output to readable text?
I am trying to understand how to use BERT for QnA and found a tutorial on how to start in PyTorch (here). Now I would like to use these snippets to get started, but I do not understand how to project the output back onto the example text. text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" (...) # Predict the start and end positions logits with torch.no_grad(): start_logits, end_logits = questionAnswering_model(tokens_tensor, segments_tensors) # Or get the total loss start_positions, end_positions = torch.tensor([12]), torch.tensor([14]) multiple_choice_loss = questionAnswering_model( tokens_tensor, segments_tensors, start_positions=start_positions, end_positions=end_positions) start_logits (shape: [1, 16]):tensor([[ 0.0196, 0.1578, 0.0848, 0.1333, -0.4113, -0.0241, -0.1060, -0.3649, 0.0955, -0.4644, -0.1548, 0.0967, -0.0659, 0.1055, -0.1488, -0.3649]]) end_logits (shape: [1, 16]):tensor([[ 0.1828, -0.2691, -0.0594, -0.1618, 0.0441, -0.2574, -0.2883, 0.2526, -0.0551, -0.0051, -0.1572, -0.1670, -0.1219, -0.1831, -0.4463, 0.2526]]) If my assumption is correct, start_logits and end_logits need to be projected back onto the text, but how do I compute this? Additionally, do you have any resources/guides/tutorials you could recommend for going further into QnA (besides the google-research/bert GitHub and the BERT paper)? Thank you in advance.
I think you are trying to use BERT for Q&A where the answer is a span of the original text. The original paper uses this for the SQuAD dataset where this is the case. start_logits and end_logits have the logits for a token being the start/end of the answer, so you can take the argmax and it will be the index of the token in the text. There is this NAACL tutorial on transfer learning from the author of the repo you linked https://colab.research.google.com/drive/1iDHCYIrWswIKp-n-pOg69xLoZO09MEgf#scrollTo=qQ7-pH1Jp5EG It's using classification as the target task but you might still find it useful.
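A minimal sketch of that projection (tokenized_text is assumed to be the token list the tokenizer produced for the input string; the variable name is an assumption, not part of the original snippet):

import torch

# indices of the most likely start and end tokens
start_index = torch.argmax(start_logits, dim=1).item()
end_index = torch.argmax(end_logits, dim=1).item()

# slice the answer span out of the tokenized input and join it back into text
answer_tokens = tokenized_text[start_index:end_index + 1]
answer = ' '.join(answer_tokens)
print(answer)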
https://stackoverflow.com/questions/57378005/
Pytorch is not found & cannot be installed in pycharm
I need the pytorch module to run my project. But when I tried to install it via the command prompt in two different ways, it shows an error. C:\Users\Toothless>pip install torchvision --user Collecting torchvision Using cached https://files.pythonhosted.org/packages/b7/ff/091b4503d5f228bd1120db784e2c071617211b965a8a78018e75750c7199/torchvision-0.3.0-cp37-cp37m-win_amd64.whl Requirement already satisfied: six in c:\users\toothless\appdata\local\programs\python\python37\lib\site-packages (from torchvision) (1.12.0) Collecting pillow>=4.1.1 (from torchvision) Downloading https://files.pythonhosted.org/packages/ae/96/6f83deebfcd20a5d4ad35e4e989814a16559d8715741457e670aae1a5a09/Pillow-6.1.0-cp37-cp37m-win_amd64.whl (2.0MB) |████████████████████████████████| 2.0MB 27kB/s Requirement already satisfied: numpy in c:\users\toothless\appdata\local\programs\python\python37\lib\site-packages (from torchvision) (1.17.0) Collecting torch>=1.1.0 (from torchvision) ERROR: Could not find a version that satisfies the requirement torch>=1.1.0 (from torchvision) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=1.1.0 (from torchvision) C:\Users\Toothless>pip3 install torchvision Collecting torchvision Using cached https://files.pythonhosted.org/packages/b7/ff/091b4503d5f228bd1120db784e2c071617211b965a8a78018e75750c7199/torchvision-0.3.0-cp37-cp37m-win_amd64.whl Requirement already satisfied: six in c:\users\toothless\appdata\local\programs\python\python37\lib\site-packages (from torchvision) (1.12.0) Collecting torch>=1.1.0 (from torchvision) ERROR: Could not find a version that satisfies the requirement torch>=1.1.0 (from torchvision) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=1.1.0 (from torchvision) Again, I've tried to install it through PyCharm. I've been following this question but conda is not listed in my environment variables. Edited: When I tried the command pip install torch torchvision --user it gives this error - ERROR: torchvision 0.3.0 has requirement torch>=1.1.0, but you'll have torch 0.1.2.post2 which is incompatible. Edited: I've also tried pip install torchvision but it shows this error - C:\Users\Toothless>pip install torchvision Collecting torchvision Using cached https://files.pythonhosted.org/packages/b7/ff/091b4503d5f228bd1120db784e2c071617211b965a8a78018e75750c7199/torchvision-0.3.0-cp37-cp37m-win_amd64.whl Collecting torch>=1.1.0 (from torchvision) ERROR: Could not find a version that satisfies the requirement torch>=1.1.0 (from torchvision) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=1.1.0 (from torchvision) What should I do now to get a working PyTorch?
Finally I've installed my desired PyTorch and torchvision. I installed them via pip. The command line I used to install PyTorch: pip3 install https://download.pytorch.org/whl/cpu/torch-1.1.0-cp37-cp37m-win_amd64.whl And for torchvision: pip3 install https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp37-cp37m-win_amd64.whl I found these commands on the PyTorch website.
https://stackoverflow.com/questions/57382913/
Getting the value at the intersection of a row and column in a pytorch tensor matrix
I am new to PyTorch and am looking to get a value at an index from a matrix. There is a matrix called psfm_s that has been initialized with psfm_s=Var(torch.randn(12,20),requires_grad=True) For example, I would like to get the number in the first column (out of 12 columns) and the number in the first row (out of 20 rows). I have tried doing something like index=torch.tensor([0,0]) num_at_index=psfm_s[index] to get the desired number, but that just gets me a tensor with a bunch of numbers in it; I'm not really sure what happens with this method. I just want the one number at the desired index. How can I go about doing this, if it's even possible? Thanks for the help!
To reproduce the described code in its completeness (for future reference, please provide a minimal reproducible example in your question), and taking the already correct solution from @jodag in the comments, consider this code snippet: from torch.autograd import Variable import torch psfm_s = Variable(torch.randn(12,20), requires_grad=True) single_value = psfm_s[0,0].item() print(single_value) # prints a single random number from your tensor For some background information, consider the official docs: Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist(). This operation is not differentiable. Consequently, getting a complete row (or column) would look like this: from torch.autograd import Variable import torch psfm_s = Variable(torch.randn(12,20), requires_grad=True) single_row_tensor = psfm_s[0,:] single_row_list = single_row_tensor.tolist() single_row_numpy_1 = single_row_tensor.data.numpy() single_row_numpy_2 = single_row_tensor.detach().numpy() # the following doesn't work, as it is a torch.Variable with gradient history: single_row_fail = single_row_tensor.numpy() In case you want to get a NumPy array, you have to be careful not to directly call .numpy(), as this causes issues with the gradient history of the Variable. You can either use .data.numpy() or .detach().numpy(). There seems to be some discussion as to which one is preferred, but both should work for your case.
https://stackoverflow.com/questions/57386558/
Why do I have error with trained model from ubuntu to windows?
I have trained the model on a supercomputer (Ubuntu). After training, I used the model on Windows 10 and got this error: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes. warnings.warn(msg, SourceChangeWarning) I can't load the model I trained. pytorch version on Ubuntu: 1.1.0a0+9a7bcac pytorch version on Windows: 0.4.1 What is going wrong and how can I fix it?
EDIT after comment discussion: It seems like your Windows version is outdated and there is thus a version conflict occurring. I would strongly suggest updating the version on Windows to a post-1.0 release, which should fix the problem. According to this link, you can likely ignore the warning (not an error), as long as your model still seems to work as intended. The usual culprit for such changes is that you have inconsistent versions of PyTorch on your two systems, and you might therefore encounter this warning. Generally, the versions are supposed to be fully backward compatible, but of course there is no guarantee for this. It has nothing to do with the fact that you are on Linux and/or Windows, unless the source code detects changes in the line break character (which is the main difference from what I remember), although I think this is very unlikely to be the case.
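As a hedged side note (not part of the original answer): saving and loading only the state_dict, rather than pickling the whole module, sidesteps source-change warnings entirely, since only tensors are serialized and the class definition comes from your local code:

import torch

# on the training machine: save only the weights
torch.save(model.state_dict(), 'weights.pth')

# on the inference machine: rebuild the architecture, then load the weights
model = MyModel()  # MyModel stands in for your own model class (hypothetical name)
model.load_state_dict(torch.load('weights.pth'))
model.eval()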
https://stackoverflow.com/questions/57388643/
Why do we need padding in seq2seq network
To handle sequences of different lengths, I would like to know: why do we need to pad the words of a sequence to the same length? If the answer is "Yes, you need padding", can I set the padding at another index? For example, if I have an index-to-word mapping like this: {0:"<s>",1:"<e>",2:"AAA",3:"BBB",.......,500:"zzz"} where <s> is the starting word of the sentence and <e> is the ending word of the sentence. Can I set the padding flag to the last index? {0:"<s>",1:"<e>",2:"AAA",3:"BBB",.......,500:"zzz",501:"<pad>"}
Why do we need to pad the sequences to the same length? Because basically all layers with parameters perform some form of matrix multiplication (actually: tensor multiplication) at some point in their logic. Now, try it yourself. Multiply matrices where not all rows or columns have the same length. E.g. what is this supposed to be? | 1 2 3 | | 1 | | 4 5 | * | 2 | = ??? | 3 | It is simply not possible to do this, unless you put some value in the gap. Some people may even argue that this thing on the left-hand side is not even a matrix. Can I set the padding at another index? Can I set the padding flag to the last index? Sure. You can take whatever value you want for padding. Ideally, you should use a value that otherwise has no meaning in the context of your problem and thus cannot be confused with any "real" value.
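A minimal sketch of padding with the <pad> index 501 proposed in the question (pad_sequence and the padding_idx argument of nn.Embedding are the standard PyTorch tools for this):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

PAD_IDX = 501  # the hypothetical <pad> index from the question

# three sequences of different lengths: <s> ... <e>
seqs = [torch.tensor([0, 2, 3, 1]), torch.tensor([0, 3, 1]), torch.tensor([0, 1])]
batch = pad_sequence(seqs, batch_first=True, padding_value=PAD_IDX)
# tensor([[  0,   2,   3,   1],
#         [  0,   3,   1, 501],
#         [  0,   1, 501, 501]])

# padding_idx keeps the <pad> embedding at zero and gives it no gradient
emb = nn.Embedding(num_embeddings=502, embedding_dim=8, padding_idx=PAD_IDX)
out = emb(batch)  # shape: (3, 4, 8)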
https://stackoverflow.com/questions/57393033/
Loading pytorch model in C++, problem with libtorch.dll
I'm trying to load a neural network model trained with PyTorch into a C++ program. There is a tutorial on how to do it, but I can't get it working. The console appears, and then I get: "The code execution cannot proceed, because the object xxx.dll was not found". Sometimes it is c10.dll, other times torch.dll or caffe2.dll. I checked my C/C++ and Linker properties like 50 times. I checked it using both the debug and release versions of libtorch. I ran it on Debug x86, Debug x64, Release x86, Release x64. And I added those files manually to the Debug folder (where the .exe file is). None of it worked. This is my code: #include "pch.h" #include <torch/script.h> #include <memory> #include <iostream> int main() { std::cout << "Hello World!\n"; } This is how I set the directory for header files: $(SolutionDir)libtorch\include ...and the linker directory for additional libraries: $(SolutionDir)libtorch\lib ...and all .lib files I added: torch.lib;onnxifi_loader.lib;onnxifi_dummy.lib;onnx_proto.lib;onnx.lib;libprotoc.lib;libprotobuf-lite.lib;libprotobuf.lib;foxi_loader.lib;foxi_dummy.lib;cpuinfo.lib;clog.lib;caffe2_module_test_dynamic.lib;caffe2_detectron_ops.lib;caffe2.lib;c10.lib; It's not that some functions or classes from this library are not working; I cannot even run a simple "Hello World" program. I even downloaded some random .dll file to check if it is a problem with this particular library, and the other .dll worked with no problems. I set up the project manually, and also followed the instructions from the link I sent (creating the project with CMake), and still I have this error. I've been working on it for a few hours and I'm pretty annoyed about this. I ran out of ideas. I really don't know what else I could be missing in this situation. I'm running VS 2017 Community, version 15.9.14, on Windows 10.
OK, I've actually come up with the solution on my own. For some reason I did not understand at the time, I had to put the .dll files in my project folder. Setting up the path for additional libraries in the linker properties seems not to work for those libraries, at least on my PC. A likely explanation: the linker's library directories only affect .lib files at link time, while at run time Windows searches for DLLs in the executable's directory and in the folders on PATH, so the libtorch .dll files have to sit somewhere Windows actually looks. This is very confusing, because with the other, random library I downloaded for a test, I could place the file wherever I wanted; all I needed to do was set up the correct path in the Linker properties. But not with THIS particular library (libtorch). Anyway, the problem is solved. Hope that someday someone finds this useful :)
https://stackoverflow.com/questions/57396623/
pytorch predictions stability
This is my predict function. Is there anything wrong with it? Predictions are not stable; every time I run it on the same data, I get different predictions. def predict(model, device, inputs, batch_size=1024): model = model.to(device) dataset = torch.utils.data.TensorDataset(*inputs) loader = torch.utils.data.DataLoader( dataset, batch_size=batch_size, pin_memory=False ) predictions = [] for i, batch in enumerate(loader): with torch.no_grad(): pred = model(*(item.to(device) for item in batch)) pred = pred.detach().cpu().numpy() predictions.append(pred) return np.concatenate(predictions)
As Usman Ali suggested, you need to set your model to eval mode by calling model.eval() before your prediction function. What eval mode does: Sets the module in evaluation mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. When you finish your prediction and wish to continue training, don't forget to reset your model to training mode by calling model.train() There are several layers in models that may introduce randomness into the forward pass of the net. One such example is the dropout layers. A dropout layer "drops" p percent of its neurons at random to increase the model's generalization. Additionally, BatchNorm (and possibly other adaptive normalization layers) keeps track of the statistics of the data and therefore has a different "behavior" in train mode or in eval mode.
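A sketch of the question's predict function with the two mode switches added (everything else is unchanged from the question):

import numpy as np
import torch

def predict(model, device, inputs, batch_size=1024):
    model = model.to(device)
    model.eval()  # disable dropout, use running batch-norm statistics
    dataset = torch.utils.data.TensorDataset(*inputs)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, pin_memory=False)
    predictions = []
    with torch.no_grad():
        for batch in loader:
            pred = model(*(item.to(device) for item in batch))
            predictions.append(pred.cpu().numpy())
    model.train()  # restore training mode if training continues afterwards
    return np.concatenate(predictions)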
https://stackoverflow.com/questions/57404655/
how to add BatchNormalization with SWA:stochastic weights average?
I am a beginner in deep learning and PyTorch. I don't understand how to use BatchNormalization when using SWA. pytorch.org says in https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/: Note that the SWA averages of the weights are never used to make predictions during training, and so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with opt.swap_swa_sgd() Does this mean it's suitable to add a BatchNormalization layer after using SWA? # this is what I mean, for example opt = torchcontrib.optim.SWA(base_opt) for i in range(100): opt.zero_grad() loss_fn(model(input), target).backward() opt.step() if i > 10 and i % 5 == 0: opt.update_swa() opt.swap_swa_sgd() #save model once torch.save(model,"swa_model.pt") #model_load saved_model=torch.load("swa_model.pt") # does it mean adding a BatchNormalization layer?? model2=saved_model model2.add_module("Batch1",nn.BatchNorm1d(10)) # decay learning_rate more learning_rate=0.005 optimizer = torch.optim.SGD(model2.parameters(), lr=learning_rate) # train model again for epoch in range(num_epochs): loss = train(train_loader) val_loss, val_acc = valid(test_loader) I appreciate your reply. Following your advice, I tried to make an example model adding optimizer.bn_update() # add optimizer.bn_update() to the model criterion = nn.CrossEntropyLoss() learning_rate=0.01 base_opt = torch.optim.SGD(model.parameters(), lr=0.1) optimizer = SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05) def train(train_loader): #mode:train model.train() running_loss = 0 for batch_idx, (images, labels) in enumerate(train_loader): optimizer.zero_grad() outputs = model(images) #loss loss = criterion(outputs, labels) running_loss += loss.item() loss.backward() optimizer.step() optimizer.swap_swa_sgd() train_loss = running_loss / len(train_loader) return train_loss def valid(test_loader): model.eval() running_loss = 0 correct = 0 total = 0 #torch.no_grad with torch.no_grad(): for batch_idx, (images, labels) in enumerate(test_loader): outputs = model(images) loss = criterion(outputs, labels) running_loss += loss.item() _, predicted = torch.max(outputs, 1) correct += (predicted == labels).sum().item() total += labels.size(0) val_loss = running_loss / len(test_loader) val_acc = float(correct) / total return val_loss, val_acc num_epochs=30 loss_list = [] val_loss_list = [] val_acc_list = [] for epoch in range(num_epochs): loss = train(train_loader) val_loss, val_acc = valid(test_loader) optimizer.bn_update(train_loader, model) print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f' % (epoch, loss, val_loss, val_acc)) # logging loss_list.append(loss) val_loss_list.append(val_loss) val_acc_list.append(val_acc) # optimizer.bn_update() optimizer.bn_update(train_loader, model) # go on evaluating the model...
What the documentation is telling you is that since SWA computes averages of the weights but those weights aren't used for prediction during training, the batch normalization layers won't see those weights. This means they haven't computed the respective statistics for them (as they were never able to), which is important because the weights are used during actual prediction (i.e. not during training). This means they assume you have batch normalization layers in your model and want to train it using SWA. This is (more or less) not straightforward due to the reasons above. One approach is given as follows: To compute the activation statistics you can just make a forward pass on your training data using the SWA model once the training is finished. Or you can use their helper function: In the SWA class we provide a helper function opt.bn_update(train_loader, model). It updates the activation statistics for every batch normalization layer in the model by making a forward pass on the train_loader data loader. You only need to call this function once in the end of training. In case you are using PyTorch's DataLoader class you can simply supply the model (after training) and the training loader to the bn_update function, which updates all batch normalization statistics for you. You only need to call this function once, at the end of training. Steps to proceed: Train your model that includes batch normalization layers using SWA. After your model has finished training, call opt.bn_update(train_loader, model) using your training data and providing your trained model.
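A minimal ordering sketch of those steps (reusing model, train, train_loader and num_epochs from the question; torchcontrib's documented SWA API; train is assumed not to call swap_swa_sgd() itself):

import torch
import torchcontrib

base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)

for epoch in range(num_epochs):
    train(train_loader)        # ordinary training steps through opt.step()

opt.swap_swa_sgd()             # copy the SWA averages into the model, once
opt.bn_update(train_loader, model)  # recompute batch-norm statistics, once, at the end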
https://stackoverflow.com/questions/57406061/
Pytorch: saving model or state_dict gives different on-disk space occupation
I was playing around with the function torch.save and I noticed something curious. Let's say I load a model from the torchvision repository: model = torchvision.models.mobilenet_v2() If I save the model in this way: torch.save(model,'model.pth') I get a 14MB file, while if I do: torch.save(model.state_dict(),'state_dict.pth') the file size blows up to ~500MB. Since I didn't find any reference to this behaviour, I was wondering what causes the increase in size. Is it something related to compression? Does saving the whole state_dict store extra stuff like uninitialized gradients? P.S. the same happens for other models like vgg16
If you ask for what is in the model: vars(vgg16) Out: {'_backend': <torch.nn.backends.thnn.THNNFunctionBackend at 0x232c78759b0>, '_parameters': OrderedDict(), '_buffers': OrderedDict(), '_backward_hooks': OrderedDict(), '_forward_hooks': OrderedDict(), '_forward_pre_hooks': OrderedDict(), '_state_dict_hooks': OrderedDict(), '_load_state_dict_pre_hooks': OrderedDict(), '_modules': OrderedDict([('features', Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace) (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (18): ReLU(inplace) (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace) (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (25): ReLU(inplace) (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (27): ReLU(inplace) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) )), ('avgpool', AdaptiveAvgPool2d(output_size=(7, 7))), ('classifier', Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace) (2): Dropout(p=0.5) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace) (5): Dropout(p=0.5) (6): Linear(in_features=4096, out_features=1000, bias=True) ))]), 'training': True} You will see it is more than just the state dict. vgg16.state_dict() The state dict is inside _modules (vgg16._modules['features'].state_dict()). This is why when you save the model you save not just the state dict, but also all the aforementioned stuff such as parameters, buffers, hooks... But if you don't use the parameters, buffers, and hooks at inference time, you may avoid saving these. The sizes when saving: torch.save(model,'model.pth') torch.save(model.state_dict(),'state_dict.pth') should be: model.pth > state_dict.pth because the state dict is included in the model.
https://stackoverflow.com/questions/57407939/
Reduction parameter of the negative log likelihood
What is the intuitive explanation of the reduction parameter in the negative log likelihood loss function in PyTorch? The parameter can take values such as 'mean' or 'sum'. Is it summing over the elements of the batch? torch.nn.functional.nll_loss(outputs.mean(0), target, reduction="sum")
From the documentation: Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' If you use 'none', the output will have one loss value per batch element. If you use 'mean', it will be the mean (the sum divided by the number of elements). If you use 'sum', it will be the sum of all elements. You can also verify this with the following code: import torch logit = torch.rand(100,10) target = torch.randint(10, size=(100,)) m = torch.nn.functional.nll_loss(logit, target) s = torch.nn.functional.nll_loss(logit, target, reduction="sum") l = torch.nn.functional.nll_loss(logit, target, reduction="none") print(torch.abs(m-s/100)) print(torch.abs(l.mean()-m)) The output should be 0 or very close to 0.
https://stackoverflow.com/questions/57415707/
Image size of 256x256 (not 299x299) fed into Inception v3 model (PyTorch) and works?
I am testing out the pretrained Inception v3 model on PyTorch. I fed it an image of size 256x256 and also resized it up to 299x299. In both cases, the image was classified correctly. Can someone explain why the PyTorch pretrained model can accept an image that's not 299x299?
It's because the PyTorch implementation of Inception v3 uses an adaptive average pooling layer right before the fully-connected layer. If you take a look at the Inception3 class in torchvision/models/inception.py, the operation of most interest with respect to your question is x = F.adaptive_avg_pool2d(x, (1, 1)). Since the average pooling is adaptive, the output shape is fixed regardless of the height and width of x before pooling. In other words, after this operation we always get a tensor of size [b, c, 1, 1], where b and c are the batch size and number of channels respectively. This way the input to the fully connected layer is always the same size, so no exceptions are raised.

That said, if you're using the pretrained Inception v3 weights, then the model was originally trained for input of size 299x299. Using inputs of different sizes may have a negative impact on loss/accuracy, although smaller input images will almost certainly decrease computational time and memory footprint since the feature maps will be smaller.
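A minimal sketch illustrating the point (the channel count and spatial sizes here are made up for demonstration, not taken from the actual network):

import torch
import torch.nn.functional as F

# Two feature maps with different spatial sizes but the same channel count.
x_small = torch.randn(1, 512, 6, 6)
x_large = torch.randn(1, 512, 8, 8)

# Adaptive average pooling always produces the requested output size.
print(F.adaptive_avg_pool2d(x_small, (1, 1)).shape)  # torch.Size([1, 512, 1, 1])
print(F.adaptive_avg_pool2d(x_large, (1, 1)).shape)  # torch.Size([1, 512, 1, 1])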
https://stackoverflow.com/questions/57421842/
CNN: taking the most confident prediction among many
I'm training a CNN for image classification. The same object (and hence the same label) is present in the test set twice (like two viewpoints). I'd like to take advantage of this when predicting the class. Right now the final layer is a Linear layer (PyTorch) and I'm using cross-entropy as the loss function. I was wondering what the best way is to take the most confident prediction for each object. Should I first compute the LogSoftMax and take the class with the highest probability (among both arrays of predictions), or should I take the logits directly?
Since LogSoftMax preserves order, the largest logit will always correspond to the highest confidence. Therefore there's no need to perform the operation if all you're interested in is finding the index of the most confident class.

Probably the easiest way to get the index of the most confident class is by using torch.argmax, e.g.

batch_size = 5
num_logits = 10

y = torch.randn(batch_size, num_logits)
preds = torch.argmax(y, dim=1)

which in this case results in

>>> print(preds)
tensor([9, 7, 2, 4, 6])
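Note that raw logits from two separate forward passes aren't directly comparable (softmax is shift-invariant within each prediction), so to pick the more confident of the two views you could compare softmax probabilities. A sketch, assuming y1 and y2 are the logit vectors for the two views of one object:

import torch
import torch.nn.functional as F

# Hypothetical logits for the two views of the same object.
y1 = torch.randn(10)
y2 = torch.randn(10)

# Convert to probabilities so the two predictions are comparable.
p1, p2 = F.softmax(y1, dim=0), F.softmax(y2, dim=0)

# Keep the class predicted by whichever view is more confident.
pred = torch.argmax(p1) if p1.max() > p2.max() else torch.argmax(p2)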
https://stackoverflow.com/questions/57423336/
I have a question about TRANSLATION WITH A SEQUENCE TO SEQUENCE in the pytorch tutorials
I am currently learning about seq2seq translation. I am trying to understand and follow the PyTorch tutorial from this website: https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#attention-decoder. On the website, they talk about the attention technique. I would like to know which technique they use, Luong or Bahdanau? Another question: why do they apply a ReLU layer before the GRU cell? Finally, the red box in the figure is called a context vector, right?
I would like to know which technique they use, Luong or Bahdanau?

Luong attention is multiplicative, so the tutorial should be using Bahdanau (additive) attention, as it concatenates the inputs and then applies a linear layer. See http://ruder.io/deep-learning-nlp-best-practices/index.html#attention for more about attention types.

Why do they apply a ReLU layer before the GRU cell?

This is the activation after the Linear layer. I think tanh was used originally, but ReLU became preferable. I think the other ReLU after the embeddings in the plain Decoder is there by mistake, though: https://github.com/spro/practical-pytorch/issues/4

The red box in the figure is called a context vector, right?

Yes.
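For reference, a minimal sketch of Bahdanau-style (additive) attention scoring; the dimensions and variable names here are made up for illustration, not taken from the tutorial:

import torch
import torch.nn as nn

hidden_dim, seq_len = 256, 10

# Additive attention: concatenate the decoder state with each encoder
# output, apply a linear layer with tanh, then project to a scalar score.
attn = nn.Linear(hidden_dim * 2, hidden_dim)
v = nn.Linear(hidden_dim, 1, bias=False)

decoder_state = torch.randn(1, hidden_dim)
encoder_outputs = torch.randn(seq_len, hidden_dim)

scores = v(torch.tanh(attn(torch.cat(
    [decoder_state.expand(seq_len, -1), encoder_outputs], dim=1))))
weights = torch.softmax(scores.squeeze(1), dim=0)

# The context vector (the "red box") is the attention-weighted
# sum of the encoder outputs.
context = weights @ encoder_outputs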
https://stackoverflow.com/questions/57427310/
axes don't match array / size mismatch, m1: [132096 x 344], m2: [118336 x 128]
This is linear auto-encoder code. The original picture is 344x344 RGB. After the training process is over, I want to show the decoded picture using the code below, but it raises ValueError: axes don't match array. PyTorch, Google Colab (GPU):

EPOCH = 20
BATCH_SIZE = 128
LR = 0.005         # learning rate

torch.cuda.empty_cache()

data_transforms = torchvision.transforms.Compose([
    torchvision.transforms.RandomResizedCrop(344),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor()])

path1 = 'drive/My Drive/Colab/image/test/'
train_data = torchvision.datasets.ImageFolder(path1, transform=data_transforms)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)

class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(3*344*344, 128),
            nn.Tanh(),          # activation
            nn.Linear(128, 64),
            nn.Tanh(),
            nn.Linear(64, 12),
            nn.Tanh(),
            nn.Linear(12, 3),   # compress to 3 features which can be visualized in plt
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.Tanh(),
            nn.Linear(12, 64),
            nn.Tanh(),
            nn.Linear(64, 128),
            nn.Tanh(),
            nn.Linear(128, 3*344*344),
            nn.Sigmoid(),       # compress to a range (0, 1)
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return encoded, decoded

autoencoder = AutoEncoder()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=LR)
loss_func = nn.MSELoss()

for epoch in range(EPOCH):
    for step, (x, b_label) in enumerate(train_loader):
        b_x = x.view(-1, 3*344*344)   # batch x, shape (batch, 3*344*344)
        b_y = x.view(-1, 3*344*344)   # batch y, shape (batch, 3*344*344)

        encoded, decoded = autoencoder(b_x)

        loss = loss_func(decoded, b_y)   # mean square error
        optimizer.zero_grad()            # clear gradients for this training step
        loss.backward()                  # backpropagation, compute gradients
        optimizer.step()                 # apply gradients

###################################################
######## below is used to plot decoded pic ########
with torch.no_grad():
    for img, label in train_loader:
        fig = plt.figure()
        imggg = np.transpose(img[0], (1, 2, 0))
        ax1 = fig.add_subplot(121)
        ax1.imshow(imggg)
        if torch.cuda.is_available():
            img = Variable(img.to())
        else:
            img = Variable(img)
        encoded, decoded = autoencoder(img)
        decodeddd = np.transpose(decoded.cpu()[0], (1, 2, 0))
        ax2 = fig.add_subplot(122)
        ax2.imshow(decodeddd)

I expect two pictures as output, but only the original one shows; the decoded one doesn't. The training process works well, but I don't know what the problem with the picture's size is.
The decoder returns a flat output of shape BATCH_SIZE x 355008, so decoded.cpu()[0] is a 1-D tensor, and np.transpose(..., (1, 2, 0)) fails with "axes don't match array" because there are no three axes to permute. We first need to reshape that flat dimension back into 3 x 344 x 344 before applying the transpose. Replacing decodeddd with the following should do the trick:

decodeddd = np.transpose(decoded.cpu()[0].view(3, 344, 344), (1, 2, 0))
https://stackoverflow.com/questions/57454113/