LSTM Sequence length
I want to ask whether there is an optimal sequence length for an LSTM network, either in general or for time series prediction problems. I read about the vanishing and exploding gradient problems that very long RNNs suffer from, which LSTMs tried to solve and succeeded to a certain extent. I also heard about techniques to handle very long sequences with LSTMs and RNNs in general, like truncating sequences, summarizing sequences, truncated backpropagation through time, or even using an Encoder-Decoder architecture. I ask because I couldn't find a research paper about this, only a blog post that stated an optimal sequence length between 10 and 30.
Do some model selection. TL;DR: just try it out. Because training is already very computationally expensive, the easiest way to find out how successful a model would be is to test it. The combination that works best cannot be easily predetermined, especially not with such a vague description (or no description at all) of what the actual problem looks like. From this answer: it totally depends on the nature of your data and its inner correlations; there is no rule of thumb. However, given that you have a large amount of data, a 2-layer LSTM can model a large body of time series problems/benchmarks. So in your case, you might want to try sequence lengths from 10 to 30. But I'd also evaluate how your training algorithm performs outside the range recommended by the post you linked.
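As a minimal sketch of that model-selection idea (the helpers make_windows, build_model, train and evaluate are hypothetical stand-ins for your own pipeline, not real library calls):

# Treat sequence length as just another hyperparameter and grid-search it.
candidate_lengths = [10, 20, 30, 50, 100]
results = {}
for seq_len in candidate_lengths:
    train_windows = make_windows(train_series, seq_len)  # sliding windows of length seq_len
    val_windows = make_windows(val_series, seq_len)
    model = build_model(seq_len)
    train(model, train_windows)
    results[seq_len] = evaluate(model, val_windows)  # e.g. validation RMSE
best_len = min(results, key=results.get)  # lowest validation error wins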
https://stackoverflow.com/questions/51101177/
Net does not change weights during training, pytorch
I'm implementing DDPG and got stuck training my two nets. All in all I've got 4 nets, called actor, actor_target, critic and critic_target. I'm training the actor and critic in the training loop and do soft updates for the other two nets with:

def update_weights(self, source, tau):
    for target, source in zip(self.parameters(), source.parameters()):
        target.data.copy_(tau * source.data + (1 - tau) * target.data)

My training loop looks like:

tensor_next_states = torch.tensor(next_states).view(-1, 1)
prediction_target = self.actor_target(tensor_next_states).data.numpy()
target_critic_output = self.critic_target(
    construct_tensor(next_states, prediction_target))
y = torch.tensor(rewards).view(-1, 1) + \
    self.gamma * target_critic_output
output_critic = self.critic(
    torch.tensor(construct_tensor(states, actions), dtype=torch.float))

# compute loss and update critic
self.critic.zero_grad()
loss_critic = self.criterion_critic(y, output_critic)
loss_critic.backward()
self.critic_optim.step()

# compute loss and update actor
tensor_states = torch.tensor(states).view(-1, 1)
ouput_actor = self.actor(tensor_states).data.numpy()
self.actor.zero_grad()
loss_actor = (-1.) * \
    self.critic(construct_tensor(states, ouput_actor)).mean()
loss_actor.backward()
self.actor_optim.step()

# update target
self.actor_target.update_weights(self.actor, self.tau)
self.critic_target.update_weights(self.critic, self.tau)

using SGD as the optimizer and self.criterion_critic = F.mse_loss. construct_tensor(a, b) constructs a tensor like [a[0], b[0], a[1], b[1], ...]. I noticed that the RMSE on the test set is the same before and after training. So I debugged a lot and noticed in update_weights that the weights of the trained net and the target net are the same - so I concluded that the training does not have any effect on the weights of the trained net. I already checked that the computed losses are not zero but actual floats, tried replacing the zero_grad() calls, and tried moving the computed losses to self - none of which had any impact. Has anyone already encountered this behavior and/or has any tips or knows how to fix this?
Update: Full code: import datetime import random from collections import namedtuple import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim def combine_tensors(s, a): """ Combines the two given tensors :param s: tensor1 :param a: tensor2 :return: combined tensor """ target = [] if not len(a[0].shape) == 0: for i in range(len(s)): target.append(torch.cat((s[i], a[i])).data.numpy()) else: for i in range(len(s)): target.append(torch.cat((s[i], a[i].float().view(-1))) \ .data.numpy()) return torch.tensor(target, device=device) class actor(nn.Module): """ Actor - gets a state (2-dim) and returns probabilities about which action to take (4 actions -> 4 outputs) """ def __init__(self): super(actor, self).__init__() # define net structure self.input_layer = nn.Linear(2, 4) self.hidden_layer_1 = nn.Linear(4, 8) self.hidden_layer_2 = nn.Linear(8, 16) self.hidden_layer_3 = nn.Linear(16, 32) self.output_layer = nn.Linear(32, 4) # initialize them nn.init.xavier_uniform_(self.input_layer.weight) nn.init.xavier_uniform_(self.hidden_layer_1.weight) nn.init.xavier_uniform_(self.hidden_layer_2.weight) nn.init.xavier_uniform_(self.hidden_layer_3.weight) nn.init.xavier_uniform_(self.output_layer.weight) nn.init.constant_(self.input_layer.bias, 0.1) nn.init.constant_(self.hidden_layer_1.bias, 0.1) nn.init.constant_(self.hidden_layer_2.bias, 0.1) nn.init.constant_(self.hidden_layer_3.bias, 0.1) nn.init.constant_(self.output_layer.bias, 0.1) def forward(self, state): state = F.relu(self.input_layer(state)) state = F.relu(self.hidden_layer_1(state)) state = F.relu(self.hidden_layer_2(state)) state = F.relu(self.hidden_layer_3(state)) state = F.softmax(self.output_layer(state), dim=0) return state class critic(nn.Module): """ Critic - gets a state (2-dim) and an action and returns value """ def __init__(self): super(critic, self).__init__() # define net structure self.input_layer = nn.Linear(3, 8) self.hidden_layer_1 = nn.Linear(8, 16) self.hidden_layer_2 = nn.Linear(16, 32) self.hidden_layer_3 = nn.Linear(32, 16) self.output_layer = nn.Linear(16, 1) # initialize them nn.init.xavier_uniform_(self.input_layer.weight) nn.init.xavier_uniform_(self.hidden_layer_1.weight) nn.init.xavier_uniform_(self.hidden_layer_2.weight) nn.init.xavier_uniform_(self.hidden_layer_3.weight) nn.init.xavier_uniform_(self.output_layer.weight) nn.init.constant_(self.input_layer.bias, 0.1) nn.init.constant_(self.hidden_layer_1.bias, 0.1) nn.init.constant_(self.hidden_layer_2.bias, 0.1) nn.init.constant_(self.hidden_layer_3.bias, 0.1) nn.init.constant_(self.output_layer.bias, 0.1) def forward(self, state_, action_): state_ = combine_tensors(state_, action_) state_ = F.relu(self.input_layer(state_)) state_ = F.relu(self.hidden_layer_1(state_)) state_ = F.relu(self.hidden_layer_2(state_)) state_ = F.relu(self.hidden_layer_3(state_)) state_ = self.output_layer(state_) return state_ Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward')) class ReplayMemory(object): """ Memory """ def __init__(self, capacity): self.capacity = capacity self.memory = [] self.position = 0 def push(self, *args): if len(self.memory) < self.capacity: self.memory.append(None) self.memory[self.position] = Transition(*args) self.position = (self.position + 1) % self.capacity def sample(self, batch_size): return random.sample(self.memory, batch_size) def __len__(self): return len(self.memory) def compute_action(actor_trainined, state, eps=0.1): """ Computes an action given the actual 
policy, the state and an eps. Eps is resposible for the amount of exploring :param actor_trainined: actual policy :param state: :param eps: float in [0,1] :return: """ denoise = random.random() if denoise > eps: action_probs = actor_trainined(state.float()) return torch.argmax(action_probs).view(1).int() else: return torch.randint(0, 4, (1,)).view(1).int() def compute_next_state(_action, _state): """ Computes the next state given an action and a state :param _action: :param _state: :return: """ state_ = _state.clone() if _action.item() == 0: state_[1] += 1 elif _action.item() == 1: state_[1] -= 1 elif _action.item() == 2: state_[0] -= 1 elif _action.item() == 3: state_[0] += 1 return state_ def update_weights(target, source, tau): """ Soft-Update of weights :param target: :param source: :param tau: :return: """ for target, source in zip(target.parameters(), source.parameters()): target.data.copy_(tau * source.data + (1 - tau) * target.data) def update(transition__, replay_memory, batch_size_, gamma_): """ Performs one update step :param transition__: :param replay_memory: :param batch_size_: :param gamma_: :return: """ replay_memory.push(*transition__) if replay_memory.__len__() < batch_size_: return transitions = replay_memory_.sample(batch_size_) batch = Transition(*zip(*transitions)) states = torch.stack(batch.state) actions = torch.stack(batch.action) rewards = torch.stack(batch.reward) next_states = torch.stack(batch.next_state) action_target = torch.argmax(actor_target(next_states.float()), 1).int() y = ( rewards.float().view(-1, 1) + gamma_ * critic_target(next_states.float(), action_target.float()) .float() ) critic_trained.zero_grad() crit_ = critic_trained(states.float(), actions.float()) # nn stuff does not work here! -> doing mse myself.. # loss_critic = (torch.sum((y.float() - crit_.float()) ** 2.) # / y.data.nelement()) loss_critic = F.l1_loss(y.float(), crit_.float()) loss_critic.backward() optimizer_critic.step() actor_trained.zero_grad() loss_actor = ((-1.) * critic_trained(states.float(), torch.argmax( actor_trained(states.float()), 1 ).int().float())).mean() loss_actor.backward() optimizer_actor.step() def get_eps(epoch): """ Computes the eps for action choosing dependant on the epoch :param epoch: number of epoch :return: """ if epoch <= 10: eps_ = 1. elif epoch <= 20: eps_ = 0.8 elif epoch <= 40: eps_ = 0.6 elif epoch <= 60: eps_ = 0.4 elif epoch <= 80: eps_ = 0.2 else: eps_ = 0.1 return eps_ def compute_reward_2(state_, next_state_, terminal_state_): """ Better (?) reward function that "compute_reward" If next_state == terminal_state -> reward = 100 If next_state illegal -> reward = -100 if next_state is further away from terminal_state than state_ -> -2 else 1 :param state_: :param next_state_: :param terminal_state_: :return: """ if torch.eq(next_state_, terminal_state_).all(): reward_ = 100 elif torch.eq(next_state_.abs(), 15).any(): reward_ = -100 else: if (state_.abs() > next_state_.abs()).any(): reward_ = 1. else: reward_ = -2 return torch.tensor(reward_, device=device, dtype=torch.float) def compute_reward(next_state_, terminal_state_): """ Computes some reward :param next_state_: :param terminal_state_: :return: """ if torch.eq(next_state_, terminal_state_).all(): return torch.tensor(100., device=device, dtype=torch.float) elif next_state_[0] == 15 or next_state_[1] == 15: return torch.tensor(-100., device=device, dtype=torch.float) else: return (-1.) 
* next_state_.abs().sum().float() def fill_memory_2(): """ Fills the memory with random transitions which got a "good" action chosen """ terminal_state_ = torch.tensor([0, 0], device=device, dtype=torch.int) while replay_memory_.__len__() < batch_size: state_ = torch.randint(-4, 4, (2,)).to(device).int() if state_[0].item() == 0 and state_[1].item == 0: continue # try to find a "good" action if state_[0].item() == 0: if state_[1].item() > 0: action_ = torch.tensor(1, device=device, dtype=torch.int) else: action_ = torch.tensor(0, device=device, dtype=torch.int) elif state_[1].item() == 0: if state_[0].item() > 0: action_ = torch.tensor(2, device=device, dtype=torch.int) else: action_ = torch.tensor(3, device=device, dtype=torch.int) else: random_bit = random.random() if random_bit > 0.5: if state_[1].item() > 0: action_ = torch.tensor(1, device=device, dtype=torch.int) else: action_ = torch.tensor(0, device=device, dtype=torch.int) else: if state_[0].item() > 0: action_ = torch.tensor(2, device=device, dtype=torch.int) else: action_ = torch.tensor(3, device=device, dtype=torch.int) action_ = action_.view(1).int() next_state_ = compute_next_state(action_, state_) reward_ = compute_reward_2(state_, next_state_, terminal_state_) transition__ = Transition(state=state_, action=action_, reward=reward_, next_state=next_state_) replay_memory_.push(*transition__) def fill_memory(): """ Fills the memory with random transitions """ while replay_memory_.__len__() < batch_size: state_ = torch.randint(-14, 15, (2,)).to(device).int() if state_[0].item() == 0 and state_[1].item == 0: continue terminal_state_ = torch.tensor([0, 0], device=device, dtype=torch.int) action_ = torch.randint(0, 4, (1,)).view(1).int() next_state_ = compute_next_state(action_, state_) reward_ = compute_reward_2(state_, next_state_, terminal_state_) transition__ = Transition(state=state_, action=action_, reward=reward_, next_state=next_state_) replay_memory_.push(*transition__) if __name__ == '__main__': # get device if possible device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # set seed seed_ = 0 random.seed(seed_) # seed of python if device == "cuda": # cuda seed torch.cuda.manual_seed(seed_) else: # cpu seed torch.manual_seed(seed_) # initialize the nets actor_trained = actor().to(device) actor_target = actor().to(device) # copy -> _trained eqaul _target actor_target.load_state_dict(actor_trained.state_dict()) optimizer_actor = optim.RMSprop(actor_trained.parameters()) # move them to the device critic_trained = critic().to(device) critic_target = critic().to(device) critic_target.load_state_dict((critic_trained.state_dict())) actor_target.load_state_dict((actor_trained.state_dict())) # used optimizer optimizer_critic = optim.RMSprop(critic_trained.parameters(), momentum=0.9, weight_decay=0.001) # replay memory capacity_replay_memory = 16384 replay_memory_ = ReplayMemory(capacity_replay_memory) # hyperparams batch_size = 1024 gamma = 0.7 tau = 0.01 num_epochs = 256 # fill replay memory such that batching is possible fill_memory_2() # Print params printing_while_training = True printing_while_testing = False print('######################## Training ########################') starting_time = datetime.datetime.now() for i in range(num_epochs): # random state starting_state = torch.randint(-14, 15, (2,)).to(device).int() # skip if terminal state if starting_state[0].item() == 0 and starting_state[0].item() == 0: continue state = starting_state.clone() # terminal state terminal_state = torch.tensor([0, 0], 
device=device, dtype=torch.int) iteration = 0 # get eps for exploring eps = get_eps(i) running_reward = 0. # training loos while True: # compute action and next state action = compute_action(actor_trained, state, eps) next_state = compute_next_state(action, state) # finished if next state is terminal state if torch.eq(next_state, terminal_state).all(): reward = compute_reward_2(state, next_state, terminal_state) running_reward += reward.item() transition_ = Transition(state=state, action=action, reward=reward, next_state=next_state) replay_memory_.push(*transition_) if printing_while_training: print('{}: Finished after {} iterations with reward {} ' 'in state {} starting from {}' .format(i + 1, iteration + 1, running_reward, next_state.data.numpy(), starting_state.data.numpy())) break # abort if illegal state elif torch.eq(next_state.abs(), 15).any() or iteration == 99: reward = compute_reward_2(state, next_state, terminal_state) running_reward += reward transition_ = Transition(state=state, action=action, reward=reward, next_state=next_state) replay_memory_.push(*transition_) if printing_while_training: print('{}: Aborted after {} iterations with reward {} ' 'in state {} starting from {}' .format(i + 1, iteration + 1, running_reward, next_state.data.numpy(), starting_state.data.numpy())) break # compute immediate reward reward = compute_reward_2(state, next_state, terminal_state) # save it - only for logging purposes running_reward += reward.item() # construct transition transition_ = Transition(state=state, action=action, reward=reward, next_state=next_state) # update model update(transition_, replay_memory_, batch_size, gamma) # perform soft updates update_weights(actor_target, actor_trained, tau) update_weights(critic_target, critic_trained, tau) state = next_state iteration += 1 print('Ended after: {}'.format(datetime.datetime.now() - starting_time)) print('######################## Testing ########################') starting_time = datetime.datetime.now() test_states = [torch.tensor([i, j], device=device, dtype=torch.int) for i in range(-15, 14) for j in range(-15, 14)] finished = 0 aborted = 0 aborted_reward = [] finished_reward = [] for starting_state in test_states: state = starting_state.clone() terminal_state = torch.tensor([0, 0], device=device, dtype=torch.int) iteration = 0 reward = 0. while True: action = torch.argmax(actor_target(state.float())).view(1).int() next_state = compute_next_state(action, state) if torch.eq(next_state, terminal_state).all(): reward += compute_reward_2(state, next_state, terminal_state) finished_reward.append(reward.item()) if printing_while_testing: print('{}: Finished after {} iterations with reward {} ' 'in state {} starting from {}' .format(starting_state.data.numpy(), iteration + 1, reward.item(), next_state.data.numpy(), starting_state.data.numpy())) finished += 1 break elif torch.eq(next_state.abs(), 15).any(): reward += compute_reward_2(state, next_state, terminal_state) aborted_reward.append(reward.item()) if printing_while_testing: print('{}: Aborted after {} iterations with reward {} ' 'in state {} starting from {}' .format(starting_state.data.numpy(), iteration + 1, reward.item(), next_state.data.numpy(), starting_state.data.numpy())) aborted += 1 break elif iteration > 500: if printing_while_testing: print('Aborting due to more than 500 iterations! 
' 'Started from {}'.format( starting_state.data.numpy())) aborted += 1 break reward += compute_reward_2(state, next_state, terminal_state) state = next_state iteration += 1 print('Ended after: {}'.format(datetime.datetime.now() - starting_time)) print('Finished: {}, aborted: {}'.format(finished, aborted)) print('Reward mean finished: {}, aborted: {}' .format(np.mean(finished_reward), np.mean(aborted_reward))) I already tried using another reward function, but it did not have any effect. Additionally, I tried using some less aggressive exploration and optim.SGD instead of optim.RMSprop - both had no effect.
This might not be the direct answer or a recipe for your code to work, but I have some initial worries that might help you debug the code. The biggest issue is, I believe, that you perform several conversions to datatypes that are not tensors. For example, you call your combine_tensors() function a couple of times, and it converts the given tensors to numpy() and creates a new tensor when returning the value. At other times you call your networks to perform the forward pass and give them tensors converted with the float() function as the argument. There are calls to int() on tensors as well. All these calls cause the loss of the tensor's operation graph that is used for computing the gradient on the backward() call. This is described in the PyTorch documentation and should be understood before writing RL algorithms in this framework. It is essential to work on tensors the whole time you are in the train function - from the time you convert the experience batch to tensors to the time you call the backward functions. This alone still does not guarantee that the learning will be performed correctly. For example, when you use the target networks to estimate the loss for the critic, you should detach the results to prevent gradient computations in the target networks (although, if you use an optimizer and register only the critic parameters, it is more of a performance issue, as the step() call won't update the target network's parameters). Once both issues are resolved in your code, you might observe more correct behaviour. My additional comment is that I don't really understand parts of your code and I think it is not a correct DDPG implementation (e.g. you use argmax() on the actor network output and provide this to the critic network, and this does not look like the correct way). I'd advise you to take a step back and get a bit more understanding of the PyTorch framework and its ideas, as well as look for some baseline implementations of DDPG, to make sure you know how to perform the calculations step by step.
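To illustrate the first point, here is a minimal sketch of a critic update that stays on tensors end to end and detaches the target-network outputs. Variable and network names are assumptions for the sketch, not taken from the question's code:

import torch
import torch.nn.functional as F

# Convert the sampled batch to tensors once, then never leave tensor land.
states = torch.tensor(batch_states, dtype=torch.float32)
actions = torch.tensor(batch_actions, dtype=torch.float32)
rewards = torch.tensor(batch_rewards, dtype=torch.float32).view(-1, 1)
next_states = torch.tensor(batch_next_states, dtype=torch.float32)

with torch.no_grad():  # target nets contribute no gradients
    target_q = critic_target(next_states, actor_target(next_states))
y = rewards + gamma * target_q

critic_optim.zero_grad()
loss_critic = F.mse_loss(critic(states, actions), y)
loss_critic.backward()  # graph is intact: no numpy()/float()/int() round-trips
critic_optim.step()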
https://stackoverflow.com/questions/51102220/
Pytorch gradients exist but weights not updating
So, I have a deep convolutional network with an LSTM layer, and after the LSTM layer it splits off to compute two different functions (using two different linear layers) whose results are then added together to form the final network output. When I compute the loss of the network so that I can have it compute the gradients and update the weights, I have it do a few operations and then compute the loss between the derived value and the calculated target value.

def update(output, target):
    # target output is calculated outside the function
    # operations on output
    loss(output, target).backward()
    self.optimizer.step()

The network has some loss (sometimes of a very small order of magnitude, but sometimes also of higher orders of magnitude), for example a few of the losses:

tensor(1.00000e-04 * 5.7420) tensor(2.7190) tensor(0.9684)

It also has gradients, as calculated here:

for param in self.parameters():
    print(param.grad.data.sum())

Which outputs:

tensor(1.00000e-03 * 1.9996) tensor(1.00000e-03 * 2.6101) tensor(1.00000e-02 * -1.3879) tensor(1.00000e-03 * -4.5834) tensor(1.00000e-02 * 2.1762) tensor(1.00000e-03 * 3.6246) tensor(1.00000e-03 * 6.6234) tensor(1.00000e-02 * 2.9373) tensor(1.00000e-02 * 1.2680) tensor(1.00000e-03 * 1.8791) tensor(1.00000e-02 * 1.7322) tensor(1.00000e-02 * 1.7322) tensor(0.) tensor(0.) tensor(1.00000e-03 * -6.7885) tensor(1.00000e-02 * 9.7793)

And:

tensor(2.4620) tensor(0.9544) tensor(-26.2465) tensor(0.2280) tensor(-219.2602) tensor(-2.7870) tensor(-50.8203) tensor(3.2548) tensor(19.6163) tensor(-18.6029) tensor(3.8564) tensor(3.8564) tensor(0.) tensor(0.) tensor(0.8040) tensor(-0.1157)

But when I compare the weights before and after running the optimizer, I get the result that the weights are equal to each other. Code to see if the weights change:

before = list(neuralnet.parameters())
neuralnet.update()
after = list(neuralnet.parameters())
for i in range(len(before)):
    print(torch.equal(before[i].data, after[i].data))

The above returns True for each iteration.
https://discuss.pytorch.org/t/gradients-exist-but-weights-not-updating/20484/2?u=wr01 has the answer I sought. The problem was that neuralnet.parameters() does not clone the list of parameters, so when the weights were updated, they were updated in the before variable as well.
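For anyone hitting the same thing, a small sketch of a comparison that actually snapshots the weights (cloning each parameter instead of keeping references):

before = [p.detach().clone() for p in neuralnet.parameters()]
neuralnet.update()
after = [p.detach().clone() for p in neuralnet.parameters()]
for b, a in zip(before, after):
    print(torch.equal(b, a))  # now prints False wherever a weight actually changed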
https://stackoverflow.com/questions/51104648/
Custom weight initialization in PyTorch
What would be the right way to implement a custom weight initialization method in PyTorch? I believe I can't directly add any method to torch.nn.init, but I wish to initialize my model's weights with my own proprietary method.
You can define a method to initialize the weights according to each layer:

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv2d') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

And then just apply it to your network:

model = create_your_model()
model.apply(weights_init)
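A variant of the same idea that dispatches on the layer type instead of the class-name string; arguably more robust, though this is just a sketch of one option, not the only correct way:

import torch.nn as nn

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, 0.0, 0.02)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.normal_(m.weight, 1.0, 0.02)
        nn.init.constant_(m.bias, 0)

model.apply(weights_init)  # .apply() recurses over all submodules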
https://stackoverflow.com/questions/51125856/
How do I create a normal distribution in pytorch?
I want to create a random normal distribution with a given mean and std.
You can easily use the torch.Tensor.normal_() method. Let's create Z, a 1d tensor of size 5, filled with random elements sampled from the normal distribution parameterized by mean = 4 and std = 0.5:

torch.empty(5).normal_(mean=4, std=0.5)

Result:

tensor([4.1450, 4.0104, 4.0228, 4.4689, 3.7810])
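Two equivalent alternatives, for reference (assuming a reasonably recent PyTorch):

torch.randn(5) * 0.5 + 4  # scale and shift a standard normal
torch.normal(mean=4.0 * torch.ones(5), std=0.5 * torch.ones(5))  # element-wise mean/std tensors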
https://stackoverflow.com/questions/51136581/
Difference between tensor.permute and tensor.view in PyTorch?
What is the difference between tensor.permute() and tensor.view()? They seem to do the same thing.
Input:

In [12]: aten = torch.tensor([[1, 2, 3], [4, 5, 6]])

In [13]: aten
Out[13]: tensor([[ 1,  2,  3],
                 [ 4,  5,  6]])

In [14]: aten.shape
Out[14]: torch.Size([2, 3])

torch.view() reshapes the tensor to a different but compatible shape. For example, our input tensor aten has the shape (2, 3). This can be viewed as tensors of shapes (6, 1), (1, 6), etc.:

# reshaping (or viewing) the 2x3 matrix as a column vector of shape 6x1
In [15]: aten.view(6, -1)
Out[15]: tensor([[ 1], [ 2], [ 3], [ 4], [ 5], [ 6]])

In [16]: aten.view(6, -1).shape
Out[16]: torch.Size([6, 1])

Alternatively, it can also be reshaped or viewed as a row vector of shape (1, 6) as in:

In [19]: aten.view(-1, 6)
Out[19]: tensor([[ 1,  2,  3,  4,  5,  6]])

In [20]: aten.view(-1, 6).shape
Out[20]: torch.Size([1, 6])

Whereas tensor.permute() is only used to swap the axes. The example below will make things clear:

In [39]: aten
Out[39]: tensor([[ 1,  2,  3],
                 [ 4,  5,  6]])

In [40]: aten.shape
Out[40]: torch.Size([2, 3])

# swapping the axes/dimensions 0 and 1
In [41]: aten.permute(1, 0)
Out[41]: tensor([[ 1,  4],
                 [ 2,  5],
                 [ 3,  6]])

# since we permuted the axes/dims, the shape changed from (2, 3) => (3, 2)
In [42]: aten.permute(1, 0).shape
Out[42]: torch.Size([3, 2])

You can also use negative indexing to do the same thing, as in:

In [45]: aten.permute(-1, 0)
Out[45]: tensor([[ 1,  4],
                 [ 2,  5],
                 [ 3,  6]])

In [46]: aten.permute(-1, 0).shape
Out[46]: torch.Size([3, 2])
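One practical difference worth adding (my note, not part of the transcript above): view() requires the tensor to be contiguous in memory, while permute() generally produces a non-contiguous result, so chaining the two usually needs an explicit .contiguous() call:

ap = aten.permute(1, 0)
# ap.view(6)             # would typically raise a RuntimeError (non-contiguous tensor)
ap.contiguous().view(6)  # works: tensor([1, 4, 2, 5, 3, 6])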
https://stackoverflow.com/questions/51143206/
Is there a bug with PyTorch training for large batch sizes, or with this script?
I am following this PyTorch tutorial by Joshua L. Mitchell. The grand finale of the tutorial is the following PyTorch training script. One element, the batch size, I have parameterized in the first line of the script, which I run in a newly started Jupyter notebook. The key parameter in question is BIGGER_BATCH, initially set to 4: BIGGER_BATCH=4 import numpy as np import torch # Tensor Package (for use on GPU) import torch.nn as nn ## Neural Network package import torch.optim as optim # Optimization package import torchvision # for dealing with vision data import torchvision.transforms as transforms # for modifying vision data to run it through models from torch.autograd import Variable # for computational graph import torch.nn.functional as F # Non-linearities package import matplotlib.pyplot as plt # for plotting def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) transform = transforms.Compose( # we're going to use this to transform our data to make each sample more uniform [ transforms.ToTensor(), # converts each sample from a (0-255, 0-255, 0-255) PIL Image format to a (0-1, 0-1, 0-1) FloatTensor format transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # for each of the 3 channels of the image, subtract mean 0.5 and divide by stdev 0.5 ]) # the normalization makes each SGD iteration more stable and overall makes convergence easier trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) # this is all we need to get/wrangle the dataset! testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=BIGGER_BATCH, shuffle=False) testloader = torch.utils.data.DataLoader(testset, batch_size=BIGGER_BATCH, shuffle=False) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # each image can have 1 of 10 labels class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 10, 5) # Let's add more feature maps - that might help self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(10, 20, 5) # And another conv layer with even more feature maps self.fc1 = nn.Linear(20 * 5 * 5, 120) # and finally, adjusting our first linear layer's input to our previous output self.fc2 = nn.Linear(120, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) # we're changing our nonlinearity / activation function from sigmoid to ReLU for a slight speedup x = self.pool(x) x = self.conv2(x) x = F.relu(x) x = self.pool(x) # after this pooling layer, we're down to a torch.Size([4, 20, 5, 5]) tensor. x = x.view(-1, 20 * 5 * 5) # so let's adjust our tensor again. 
x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) return x net = Net().cuda() NUMBER_OF_EPOCHS = 25 LEARNING_RATE = 1e-2 loss_function = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=LEARNING_RATE) for epoch in range(NUMBER_OF_EPOCHS): train_loader_iter = iter(trainloader) for batch_idx, (inputs, labels) in enumerate(train_loader_iter): net.zero_grad() inputs, labels = Variable(inputs.float().cuda()), Variable(labels.cuda()) output = net(inputs) loss = loss_function(output, labels) loss.backward() optimizer.step() if epoch % 5 is 0: print("Iteration: " + str(epoch + 1)) dataiter = iter(testloader) images, labels = dataiter.next() imshow(torchvision.utils.make_grid(images[0:4])) outputs = net(Variable(images.cuda())) _, predicted = torch.max(outputs.data, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) correct = 0 total = 0 for data in testloader: images, labels = data labels = labels.cuda() outputs = net(Variable(images.cuda())) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) This gets a good, expected result of 58% accuracy: Predicted: cat ship ship plane GroundTruth: cat ship ship plane Accuracy of the network on the 10000 test images: 58 % Now if I change the first line of the above script to BIGGER_BATCH=4096 and I restart the kernel and run the script I get 19% accuracy, consistently: Predicted: car ship ship ship GroundTruth: cat ship ship plane Accuracy of the network on the 10000 test images: 19 % Note that I am not shuffling the inputs, so I can't attribute this change to the order of inputs in the training: trainloader = torch.utils.data.DataLoader(trainset, batch_size=BIGGER_BATCH, shuffle=False) testloader = torch.utils.data.DataLoader(testset, batch_size=BIGGER_BATCH, shuffle=False) What accounts for the huge drop in accuracy when I increase the batch size? Is there something wrong in the script, or something wrong in PyTorch, or something else I'm not thinking of?
Sorry, I just realized this was a dumb question. With the larger batch I am doing far fewer gradient updates - by a factor of 1,024. That's why the accuracy is much lower. I could adjust the learning rate, but there is clearly a tradeoff between batch size and learning rate that I am only just learning about now.
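For reference, a common heuristic here is the "linear scaling rule": scale the learning rate proportionally to the batch size. This is only a sketch - in practice the scaled rate is usually capped and warmed up rather than applied blindly, and the cap below is an arbitrary assumption:

base_lr, base_batch = 1e-2, 4
scaled_lr = base_lr * BIGGER_BATCH / base_batch  # 10.24 for BIGGER_BATCH=4096 - far too large as-is
optimizer = optim.SGD(net.parameters(), lr=min(scaled_lr, 0.1))  # cap chosen arbitrarily for the sketch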
https://stackoverflow.com/questions/51180423/
Import System Modules in a Conda Environment
I have installed caffe and PyTorch 0.3 in my system environment, and there is a project that only works with caffe and an older version, PyTorch 0.2. To solve that, I installed PyTorch 0.2 in a conda environment, and I wonder if there is any way to save me from installing caffe again in this conda environment. In other words, can I use PyTorch 0.2 in this conda environment while importing the system caffe? And how?
Activate your environment and try the following:

conda install package-name --offline

Also, in case you wish to clone the root or some other environment into a conda environment, you can use --clone. For instance, when you wish to clone the root:

conda create -n pytorch_02 --clone root
https://stackoverflow.com/questions/51186374/
Pytorch: Dropout Layers and Packed Sequences
(PyTorch 0.4.0) How does one apply a manual dropout layer to a packed sequence (specifically in an LSTM on a GPU)? Passing the packed sequence (which comes from the LSTM layer) directly does not work, as the dropout layer doesn't know quite what to do with it and returns something that is not a packed sequence. Passing the data of the packed sequence seems like it should work, but results in the attribute error shown below the code sample.

def __init__(self, ....):
    super(Model1, self).__init__()
    ....
    self.drop = torch.nn.Dropout(p=0.5, inplace=False)

def forward(self, inputs, lengths):
    pack1 = nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
    out1, self.hidden1 = self.lstm1(pack1, (self.hidden1[0].detach(), self.hidden1[1].detach()))
    out1.data = self.drop(out1.data)

This results in:

AttributeError: can't set attribute

Perversely, I can make this an inplace operation (again, on the data directly, not the full packed sequence) and it technically works (i.e., it runs) on the CPU, but gives a warning on the GPU that the inplace operation is modifying a needed gradient. This gives me no confidence that the CPU version works correctly (does it? Is it missing a warning? It would not be the first time I caught PyTorch silently chugging along doing something it should flag a warning for), and in any case GPU support is vital. So: Are the different behaviors between CPU and GPU expected? What is the overall correct way to do this on a GPU? What is the overall correct way to do this on a CPU?
You can use pad_packed_sequence:

def forward(self, inputs, lengths):
    pack1 = nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
    out1, self.hidden1 = self.lstm1(pack1, (self.hidden1[0].detach(),
                                            self.hidden1[1].detach()))
    out1, _ = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)
    out1 = self.drop(out1)
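If layers further down the forward pass expect a packed sequence again, one could re-pack after the dropout; a small sketch, reusing the same lengths from above:

pack2 = nn.utils.rnn.pack_padded_sequence(out1, lengths, batch_first=True)
# hand pack2 (instead of out1) to the next recurrent layer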
https://stackoverflow.com/questions/51227017/
Neural networks pytorch
I am very new to PyTorch and am implementing my own image classifier network. However, I see that for each epoch the training accuracy is very good, but the validation accuracy is 0 (I noted this up to the 5th epoch). I am using the Adam optimizer with learning rate .001, and I also resample the whole data set after each epoch into training and validation sets. Please help me find where I am going wrong. Here is my code: ### where is data? data_dir_train = '/home/sup/PycharmProjects/deep_learning/CNN_Data/training_set' data_dir_test = '/home/sup/PycharmProjects/deep_learning/CNN_Data/test_set' # Define your batch_size batch_size = 64 allData = datasets.ImageFolder(root=data_dir_train,transform=transformArr) # We need to further split our training dataset into training and validation sets. def split_train_validation(): # Define the indices num_train = len(allData) indices = list(range(num_train)) # start with all the indices in training set split = int(np.floor(0.2 * num_train)) # define the split size #train_idx, valid_idx = indices[split:], indices[:split] # Random, non-contiguous split validation_idx = np.random.choice(indices, size=split, replace=False) train_idx = list(set(indices) - set(validation_idx)) # define our samplers -- we use a SubsetRandomSampler because it will return # a random subset of the split defined by the given indices without replacement train_sampler = SubsetRandomSampler(train_idx) validation_sampler = SubsetRandomSampler(validation_idx) #train_loader = DataLoader(allData,batch_size=batch_size,sampler=train_sampler,shuffle=False,num_workers=4) #validation_loader = DataLoader(dataset=allData,batch_size=1, sampler=validation_sampler) return (train_sampler,validation_sampler) Training from torch.optim import Adam import torch import createNN import torch.nn as nn import loadData as ld from torch.autograd import Variable from torch.utils.data import DataLoader # check if cuda - GPU support available cuda = torch.cuda.is_available() #create model, optimizer and loss function model = createNN.ConvNet(class_num=2) optimizer = Adam(model.parameters(),lr=.001,weight_decay=.0001) loss_func = nn.CrossEntropyLoss() if cuda: model.cuda() # function to save model def save_model(epoch): torch.save(model.load_state_dict(),'imageClassifier_{}.model'.format(epoch)) print('saved model at epoch',epoch) def exp_lr_scheduler ( epoch , init_lr = args.lr, weight_decay = args.weight_decay, lr_decay_epoch = cf.lr_decay_epoch): lr = init_lr * ( 0.5 ** (epoch // lr_decay_epoch)) def train(num_epochs): best_acc = 0.0 for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) model.train() acc = 0.0 loss = 0.0 total = 0 # train model with training data for i,(images,labels) in enumerate(train_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients optimizer.step() # prediction class predVal , predClass =
torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics train_acc = acc/total train_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch,acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, loss)) # Valid model with validataion data model.eval() acc = 0.0 loss = 0.0 total = 0 for i,(images,labels) in enumerate(validation_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients optimizer.step() # prediction class predVal, predClass = torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics valid_acc = acc / total valid_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, valid_loss)) if(best_acc<valid_acc): best_acc = valid_acc save_model(epoch) # at 30th epoch we save the model if (epoch == 30): save_model(epoch) train(20)
I think you did not take into account that acc += torch.sum(predClass == labels.data) returns a tensor instead of a float value. Depending on the version of PyTorch you are using, you should change it to:

acc += torch.sum(predClass == labels.data).cpu().data[0] # pytorch 0.3
acc += torch.sum(predClass == labels.data).item() # pytorch 0.4

Although your code seems to work for an old PyTorch version, I would recommend you upgrade to 0.4. I also noticed other problems/typos in your code. You are loading the dataset for every epoch:

for epoch in range(num_epochs):
    print('\n\nEpoch {}'.format(epoch))
    train_sampler, validation_sampler = ld.split_train_validation()
    train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False)
    validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler)
    ...

That should not happen; it should be enough to load it once:

train_sampler, validation_sampler = ld.split_train_validation()
train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False)
validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler)

for epoch in range(num_epochs):
    print('\n\nEpoch {}'.format(epoch))
    ...

In the training part you have (this does not happen in the validation):

train_acc = acc/total
train_loss = loss / total
print('Mean train acc = {} over epoch = {}'.format(epoch,acc))
print('Mean train loss = {} over epoch = {}'.format(epoch, loss))

where you are printing acc instead of train_acc. Also, in the validation part you are printing

print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc))

when it should be something like 'Mean val acc'. After changing these lines of code, using a standard model I created and the CIFAR dataset, the training seems to converge: accuracy increases every epoch while the mean loss value decreases. I hope I could help you!
https://stackoverflow.com/questions/51234035/
Loss to minimize the overlap of two collections/sets
I wonder if there is a loss function that can measure the overlap of two collections/sets (order doesn't matter). E.g. the ground truth is a set [a, b, c] and my model prediction is a set [b, e, f]; the overlap is [b]. My goal is to maximize the overlap of my prediction. Is there a loss function that can measure the size of the overlap, so that I can minimize the negative of the metric and thereby maximize the overlap? (I know one solution may follow REINFORCE learning: treat the overlap as a reward for each data sample and use the reward to weight the loss, but is there another solution?) Thank you.
As noted by P-Gn, the problem with such coefficients is their differentiability. However, it is possible to define similar measures that are differentiable. IOU (intersection over union), as proposed by Prune, is a good measure. For deep learning tasks, the similar Dice coefficient is more popular: 2 * len(A intersect B) / (len(A) + len(B)), which ranges between 0 for no overlap and 1 for identical sets. For binary vectors this can be formulated as 2 * abs(a.b) / (a**2 + b**2), where the vectors are one-hot encoded representations of the sets. Now, if the last layer in your neural network has a softmax activation (as when you use cross entropy), you can interpret the output as the probability of a particular element belonging to your predicted set. The previous formula is then still a good measure of the intersection between your sets, but stays differentiable. The so-called Dice loss (1 - Dice coefficient) was first introduced in this paper, where you can read more about it.
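A minimal, hedged PyTorch sketch of such a soft Dice loss, assuming probs are softmax outputs and target is a one-hot encoding, both of shape (batch, num_classes):

def dice_loss(probs, target, eps=1e-8):
    # soft intersection and set sizes per sample, following 2*|a.b| / (a^2 + b^2)
    intersection = (probs * target).sum(dim=1)
    denom = (probs ** 2).sum(dim=1) + (target ** 2).sum(dim=1)
    dice = 2 * intersection / (denom + eps)  # in [0, 1], differentiable
    return 1 - dice.mean()  # minimize 1 - dice to maximize the overlap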
https://stackoverflow.com/questions/51255745/
GPU program failed to execute : cublas runtime error
I am trying to train a network via pytorch on CUDA enabled GeForce GTX 1070 gpu. I don't understand the error nor have I found any similar problem anywhere. I don't know if its cuda's issue or something in my code. Traceback (most recent call last): File "main.py", line 497, in <module> main() File "main.py", line 167, in main train(train_loader, model, criterion, optimizer, epoch, normalizer) File "main.py", line 244, in train output = model(*input_var) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\1546544\Desktop\ML\model.py", line 147, in forward atom_fea = conv_func(atom_fea, nbr_fea, nbr_fea_idx) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\1546544\Desktop\ML\model.py", line 66, in forward total_gated_fea = self.fc_full(total_nbr_fea) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\functional.py", line 837, in linear output = input.matmul(weight.t()) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\autograd\variable.py", line 386, in matmul return torch.matmul(self, other) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\functional.py", line 192, in matmul output = torch.mm(tensor1, tensor2) RuntimeError: cublas runtime error : the GPU program failed to execute at C:/Anaconda2/conda-bld/pytorch_1519496000060/work/torch/lib/THC/THCBlas.cu:247
I faced the same problem. I fixed it by correcting my dataset labels. I mean, the training labels were incorrect for my dataset; that's why it failed during the backward() pass. So checking the expected labels after loading them from disk/database might be helpful.
https://stackoverflow.com/questions/51280089/
2d array as index in Pytorch
I want to ‘grow’ a matrix using a set of rules. Example of rules: 0->[[1,1,1],[0,0,0],[2,2,2]], 1->[[2,2,2],[2,2,2],[2,2,2]], 2->[[0,0,0],[0,0,0],[0,0,0]] Example of growing a matrix: [[0]]->[[1,1,1],[0,0,0],[2,2,2]]-> [[2,2,2,2,2,2,2,2,2],[2,2,2,2,2,2,2,2,2],[2,2,2,2,2,2,2,2,2], [1,1,1,1,1,1,1,1,1],[0,0,0,0,0,0,0,0,0],[2,2,2,2,2,2,2,2,2], [0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0]] This is the code I’ve been trying to get to work in Pytorch rules = np.random.randint(256,size=(10,256,3,3,3)) rules_tensor = torch.randint(256,size=(10, 256, 3, 3, 3), dtype=torch.uint8, device = torch.device('cuda')) rules = rules[0] rules_tensor = rules_tensor[0] seed = np.array([[128]]) seed_tensor = seed_tensor = torch.cuda.ByteTensor([[128]]) decode = np.empty((3**3, 3**3, 3)) decode_tensor = torch.empty((3**3, 3**3, 3), dtype=torch.uint8, device = torch.device('cuda')) for i in range(3): grow = seed grow_tensor = seed_tensor for j in range(1,4): grow = rules[grow,:,:,i].reshape(3**j,-1) grow_tensor = rules_tensor[grow_tensor,:,:,i].reshape(3**j,-1) decode[..., i] = grow decode_tensor[..., i] = grow_tensor I can’t seem to select indices the same way as in Numpy in this line: grow = rules[grow,:,:,i].reshape(3**j,-1) Is there a way to do the following in Pytorch?
You could consider using torch.index_select(), flattening your index tensor before reshaping the result: Code: import torch import numpy as np rules_np = np.array([ [[1,1,1],[0,0,0],[2,2,2]], # for value 0 [[2,2,2],[2,2,2],[2,2,2]], # for value 1 [[0,0,0],[0,0,0],[0,0,0]]]) # for value 2, etc. rules = torch.from_numpy(rules_np).long() rule_shape = rules[0].shape seed = torch.zeros(1).long() num_growth = 2 print("Seed:") print(seed) grow = seed for i in range(num_growth): grow = (torch.index_select(rules, 0, grow.view(-1)) .view(grow.shape + rule_shape) .squeeze()) print("Growth #{}:".format(i)) print(grow) Log: Seed: tensor([ 0]) Growth #0: tensor([[ 1, 1, 1], [ 0, 0, 0], [ 2, 2, 2]]) Growth #1: tensor([[[[ 2, 2, 2], [ 2, 2, 2], [ 2, 2, 2]], [[ 2, 2, 2], [ 2, 2, 2], [ 2, 2, 2]], [[ 2, 2, 2], [ 2, 2, 2], [ 2, 2, 2]]], [[[ 1, 1, 1], [ 0, 0, 0], [ 2, 2, 2]], [[ 1, 1, 1], [ 0, 0, 0], [ 2, 2, 2]], [[ 1, 1, 1], [ 0, 0, 0], [ 2, 2, 2]]], [[[ 0, 0, 0], [ 0, 0, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 0], [ 0, 0, 0]]]])
https://stackoverflow.com/questions/51304809/
How to utilize all GPUs when dealing with pytorch code?
I have 2 GPUs, and when I am working with PyTorch code, only one GPU is used. I tried CUDA_VISIBLE_DEVICES=0,1 python xxx.py, but got a 'CUDA_VISIBLE_DEVICES: command not found' error. I have also tried adding the following lines to the py file in question:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

but still only one GPU is utilized.
You need to parallelize the training data to each GPU separately. Data parallelism is implemented using torch.nn.DataParallel. An example from the PyTorch documentation:

import torch
import torch.nn as nn

class DataParallelModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Linear(10, 20)
        # wrap block2 in DataParallel
        self.block2 = nn.Linear(20, 20)
        self.block2 = nn.DataParallel(self.block2)
        self.block3 = nn.Linear(20, 20)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        return x
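Alternatively, and more commonly, one can wrap the whole model rather than a single block; a minimal sketch:

model = Net()  # the question's own model, or any nn.Module
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # each forward pass splits the batch across GPUs
model = model.cuda()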
https://stackoverflow.com/questions/51322688/
Which function in Tensorflow is similar to expand_as in Pytorch
I am trying to adapt some code from Pytorch to Tensorflow, though I'm not so familiar with the latter. I use expand_as in my old code. Is there any similar function which can be used in Tensorflow?
You are looking for tf.broadcast_to which has been introduced in TF 1.9 (initially under the name tf.contrib.framework.broadcast_to).
https://stackoverflow.com/questions/51346819/
Pytorch, `backward` RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed
I'm implementing DDPG with PyTorch (0.4) and got stuck backproping the loss. So, first my Code performing the update: def update_nets(self, transitions): """ Performs one update step :param transitions: list of sampled transitions """ # get batches batch = transition(*zip(*transitions)) states = torch.stack(batch.state) actions = torch.stack(batch.action) next_states = torch.stack(batch.next_state) rewards = torch.stack(batch.reward) # zero gradients self._critic.zero_grad() # compute critic's loss y = rewards.view(-1, 1) + self._gamma * \ self.critic_target(next_states, self.actor_target(next_states)) loss_critic = F.mse_loss(y, self._critic(states, actions), size_average=True) # backpropagte it loss_critic.backward() self._optim_critic.step() # zero gradients self._actor.zero_grad() # compute actor's loss loss_actor = ((-1.) * self._critic(states, self._actor(states))).mean() # backpropagate it loss_actor.backward() self._optim_actor.step() # do soft updates self.perform_soft_update(self.actor_target, self._actor) self.perform_soft_update(self.critic_target, self._critic) Where self._actor, self._crtic, self.actor_target and self.critic_target are Nets. If I run this, I get the following error in the second iteration: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. at line 221, in update_nets loss_critic.backward() line 93, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) line 89, in backward allow_unreachable=True) # allow_unreachable flag and I don't know what is causing it. What I know till now is, that the loss_critic.backward() call causes the error. I already debugged loss_critic - it got a valid value. If I replace the loss computation with a simple loss_critic = torch.tensor(1., device=self._device, dtype=torch.float, requires_grad=True) Tensor containing the value 1 it works fine. Also, I already checked that I'm not saving some results which could cause the error. Additionally updating the actor with the loss_actor doesn't cause any problems. Does anyone know what is going wrong here? Thanks! Update I replaced # zero gradients self._critic.zero_grad() and # zero gradients self._actor.zero_grad() with # zero gradients self._critic.zero_grad() self._actor.zero_grad() self.critic_target.zero_grad() self.actor_target.zero_grad() (both calls) but it is still failing with the same error. Additionally, the code performing the update at the end of one iteration def perform_soft_update(self, target, trained): """ Preforms the soft update :param target: Net to be updated :param trained: Trained net - used for update """ for param_target, param_trained in \ zip(target.parameters(), trained.parameters()): param_target.data.copy_( param_target.data * ( 1.0 - self._tau) + param_trained * self._tau )
I found the solution. I saved tensors in my replay_buffer for training purposes, which I used in every iteration, resulting in the code snippet:

# get batches
batch = transition(*zip(*transitions))
states = torch.stack(batch.state)
actions = torch.stack(batch.action)
next_states = torch.stack(batch.next_state)
rewards = torch.stack(batch.reward)

This "saving" of tensors was the cause of the problem. So I changed my code to save only the data (tensor.data.numpy().tolist()) and only put it back into a tensor when I need it. In more detail: in DDPG I evaluate the policy every iteration and do one learning step with a batch. Now I save the evaluation in the replay buffer via:

action = self.action(state)
...
self.replay_buffer.push(state.data.numpy().tolist(), action.data.numpy().tolist(), ...)

And use it like:

batch = transition(*zip(*transitions))
states = self.to_tensor(batch.state)
actions = self.to_tensor(batch.action)
next_states = self.to_tensor(batch.next_state)
rewards = self.to_tensor(batch.reward)
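An equivalent way to get the same effect, for reference, is to detach the tensors before storing them, so the replay buffer holds data with no autograd history attached (a sketch, not the exact code above):

self.replay_buffer.push(state.detach().clone(),
                        action.detach().clone(),
                        next_state.detach().clone(),
                        reward.detach().clone())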
https://stackoverflow.com/questions/51349272/
Is column selection in pytorch differentiable?
Is column selection in PyTorch differentiable? E.g., if I want to select a single column from each row to make a new (rows x 1) array and then backprop using this new array, will the backprop work properly?

qvalues = qvalues[range(5), [0, 1, 0, 1, 0]]

if element selection is done as shown above from a 5x2 tensor?
I think it is. Let me make an example with code. First we create the qvalues tensor and we say we want to compute its gradients qvalues = torch.rand((5, 5), requires_grad=True) Now we create the tensor to index it and obtain a 5x2 tensor as its result (I think this is the same selection you wanted to perform with qvalues[range(5),[0,1,0,1,0]]): y = torch.LongTensor([1, 3]) new_qvalues = qvalues[:, y] We see that the slice new_qvalues of the original qvalues will compute the gradient print(new_qvalues.requires_grad) # True Now we perform our mathematical operations. In this example code, I am doing the square of new_qvalues because we know that its gradient (derivative) will be 2 * new_qvalues. qvalues_a = new_qvalues ** 2 Now, we have to compute the gradients of qvalues_a. We set retain_graph=True to store the .grad of each tensor and avoid freeing the buffers on the backward pass. qvalues_a.backward(torch.ones(new_qvalues.shape), retain_graph=True) Now, we can go back to the original qvalues and see if the gradients have been calculated print(qvalues) print(qvalues.grad) # result of the print statemets #tensor([[ 0.9677, 0.4303, 0.2036, 0.3870, 0.6085], # [ 0.8876, 0.8695, 0.2028, 0.3283, 0.1560], # [ 0.1764, 0.4718, 0.5418, 0.5167, 0.6200], # [ 0.7610, 0.9322, 0.5584, 0.5589, 0.8901], # [ 0.8146, 0.7296, 0.8036, 0.5277, 0.5754]]) #tensor([[ 0.0000, 0.8606, 0.0000, 0.7739, 0.0000], # [ 0.0000, 1.7390, 0.0000, 0.6567, 0.0000], # [ 0.0000, 0.9435, 0.0000, 1.0334, 0.0000], # [ 0.0000, 1.8645, 0.0000, 1.1178, 0.0000], # [ 0.0000, 1.4592, 0.0000, 1.0554, 0.0000]]) We can observe how the gradients have been computed only in the selected indexes. To be sure about it we create some fast test by comparing that the value of qvalues.grad for the selected slice is equal to the derivate 2 * new_qvalues. assert torch.equal(qvalues.grad[:, y], 2 * new_qvalues) And it does not throw any error, so I would assume that you can get the gradient of the slice.
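For the exact per-row selection in the question (one chosen column per row), torch.gather is a common and equally differentiable alternative; a short sketch:

cols = torch.tensor([0, 1, 0, 1, 0])
picked = qvalues.gather(1, cols.view(-1, 1)).squeeze(1)  # shape (5,); gradients flow back only to the picked entries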
https://stackoverflow.com/questions/51361407/
Calculate covariance matrix for complex data in two channels (no complex data type)
I have complex-valued data given in 2 channels of a matrix (one is the real, one the imaginary part, so the matrix dimensions are (height, width, 2), since Pytorch does not have native complex data types. I now want to calculate the covariance matrix. The stripped-down numpy calculation adapted for Pytorch is this: def cov(m, y=None): if m.ndimension() > 2: raise ValueError("m has more than 2 dimensions") if y.ndimension() > 2: raise ValueError('y has more than 2 dimensions') X = m if X.shape[0] == 0: return torch.tensor([]).reshape(0, 0) if y is not None: X = torch.cat((X, y), dim=0) ddof = 1 avg = torch.mean(X, dim=1) fact = X.shape[1] - ddof if fact <= 0: import warnings warnings.warn("Degrees of freedom <= 0 for slice", RuntimeWarning, stacklevel=2) fact = 0.0 X -= avg[:, None] X_T = X.t() c = dot(X, X_T) c *= 1. / fact return c.squeeze() Now in numpy, this would transparently work with complex numbers, but I cannot simply feed a 3-d array with the last dimension being (real, imag) and hope it will work. How can I adapt the calculation to obtain the complex covariance matrix with real and imaginary channels?
[For PyTorch implementation of cov() for complex matrices, skip the explanations and go to the last snippet] Description of cov() operation Let M be a HxW matrix, where each of the H rows corresponds to a variable of W complex observations. Now let cov(M) be the HxH covariance matrix of the H variables of M (definition of numpy.cov()). It can be computed as follows (ignoring edge cases): cov(M) = 1 / (W - 1) . M * M.T with * matrix multiplication operator, and M.T tranpose of M. Note: to clarify the next equations, let cov_prod(X, Y) = 1 / (W - 1) . X * Y.T, with X, Y HxW matrices. We thus have cov(M) = cov_prod(M, M). So far, nothing new, this corresponds to the code you wrote (minus the y weighting and data checks for edge cases). Let's double-check that the Pytorch implementation of this formula corresponds to the Numpy one, for real-valued data: import torch import numpy as np def cov(m, y=None): if y is not None: m = torch.cat((m, y), dim=0) m_exp = torch.mean(m, dim=1) x = m - m_exp[:, None] cov = 1 / (x.size(1) - 1) * x.mm(x.t()) return cov # Real-valued matrix: M_np = np.random.rand(3, 2) # Same matrix as torch.Tensor: M = torch.from_numpy(M_np) cov_real_np = np.cov(M_np) cov_real = cov(M) eq = np.allclose(cov_real_np, cov_real.numpy()) print("Numpy & Torch real covariance results equal? > {}".format(eq)) # Numpy & PyTorch real covariance results equal? > True Extension to complex matrices Now, how does this works for complex matrices? From here, let M be of complex values, i.e. composed of H row-variables of W complex observations. Furthermore, let A and B be the real-valued matrices such that M = A + i.B. I will not go into the mathematical demonstration, which you can find here thanks to @zimzam, but in that case cov(M) can be decomposed as: cov(M) = [cov_prod(A, A) + cov_prod(B, B)] + i.[-cov_prod(A, B) + cov_prod(B, A)] This makes it straightforward to compute separately the real and imaginary components of cov(M), given the real and imaginary components of M (A and B). Implementation & Validation Find below an optimized implementation: import torch import numpy as np def cov_complex(m_comp): # (adding further parameters such as `y` is left for exercise) # Supposing real and img are stored separately in the last dim: real, img = m_comp[..., 0], m_comp[..., 1] x_real = real - torch.mean(real, dim=1)[:, None] x_img = img - torch.mean(img, dim=1)[:, None] x_real_T = x_real.t() x_img_T = x_img.t() frac = 1 / (x_real.size(1) - 1) cov_real = frac * (x_real.mm(x_real_T) + x_img.mm(x_img_T)) cov_img = frac * (-x_real.mm(x_img_T) + x_img.mm(x_real_T)) return torch.stack((cov_real, cov_img), dim=-1) # Matrix with real/img values stored separately in last dimension: M_np = np.random.rand(3, 2, 2) # Same matrix converted to np.complex format: M_comp_np = M_np.view(dtype=np.complex128)[...,0] # Same matrix as torch.Tensor: M = torch.from_numpy(M_np) cov_com_np = np.cov(M_comp_np) cov_com = cov_complex(M) eq = np.allclose(cov_com_np, cov_com.numpy().view(dtype=np.complex128)[...,0]) print("Numpy & Torch complex covariance results equal? > {}".format(eq)) # Numpy & PyTorch complex covariance results equal? > True
https://stackoverflow.com/questions/51416825/
Basic DNN with highly imbalanced dataset -- network predicts same labels
I will try to explain my issue at a high level, and I hope I'd be able to get some better understanding of the ML behind it all. I am working with aggregated features extracted from audio files, so each feature vector is of size (1xN). The output would be a single sentiment label, Positive, Neutral, or Negative. I mapped these to 2, 1, 0 respectively (the labels are discrete by design, but maybe I could make them continuous?) The dataset I am using is 90% neutral, 6% negative, and 4% positive, and I split these into train/dev/test. I wrote up a basic DNN in PyTorch, and have been training using CrossEntropyLoss and SGD (with nesterov momentum). The issue I am running into is that the network, after seeing only ~10% of the data, starts to predict only neutral labels. The class weights converge to something like tensor([[-0.9255], [ 1.9352], [-1.1473]]) no matter what 1xN feature vectors you feed in. I would appreciate guidance on how to address this issue. For reference, the architecture is DNNModel( (in_layer): Linear(in_features=89, out_features=1024, bias=True) (fcs): Sequential( (0): Linear(in_features=1024, out_features=512, bias=True) (1): Linear(in_features=512, out_features=256, bias=True) (2): Linear(in_features=256, out_features=128, bias=True) ) (out_layer): Sequential( (0): SequenceWise ( Linear(in_features=128, out_features=3, bias=True)) ) ) def forward(self, x): x = F.relu(self.in_layer(x)) for fc in self.fcs: x = F.relu(fc(x)) x = self.out_layer(x) return x Not sure if the NN actually makes sense -- could it be the relus between each hidden layer or the bias? Or something else. Thanks! EDIT: Moved to Data Science Stack Exchange, since this is more relevant there. link
There are various ways to get through this problem. You could try re-sampling your dataset. This can be done in two ways: you could either try under-sampling, i.e. deleting instances of the over-represented class, or over-sampling, i.e. adding more instances of the under-represented class. This is probably the simplest way, but if you are willing to try something else, you could use penalised models. In penalised models we impose an additional cost on the model for making classification mistakes on the minority class during training. This additional cost (or penalty) can make the model pay more attention to the minority class. There are penalised versions of algorithms such as penalised-SVM and others. For more information on the penalised-SVM algorithm you could follow this link: https://stats.stackexchange.com/questions/122095/does-support-vector-machine-handle-imbalanced-dataset
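Since the question is about PyTorch, the same penalisation idea can be applied directly through the weight argument of CrossEntropyLoss. A minimal sketch, assuming the class frequencies from the question (labels 0 = negative, 1 = neutral, 2 = positive):

import torch
import torch.nn as nn

# Weight each class roughly inversely to its frequency (6% / 90% / 4%):
class_weights = torch.tensor([1.0 / 0.06, 1.0 / 0.90, 1.0 / 0.04])
class_weights = class_weights / class_weights.sum()  # normalization is optional

criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 3)            # dummy batch of 8 predictions over 3 classes
targets = torch.randint(0, 3, (8,))   # dummy labels
loss = criterion(logits, targets)     # mistakes on rare classes now cost more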
https://stackoverflow.com/questions/51433774/
can't import 'torchtext' module in jupyter notebook while using pytorch
I installed pytorch using anaconda3 in my created virtual conda environment named 'torchTest'. I installed all the modules needed, but the code doesn't work in Jupyter Python. I installed torchtext using 1. pip install https://github.com/pytorch/text/archive/master.zip 2. and also pip install torchtext. Everything I mentioned downloaded successfully on my Mac OS X, but I can't figure out what's wrong with my Jupyter notebook.
After having the same issue with torchtext from within my jupyterlab, I opened an issue on Github at the jupyterlab project as well as at the torchtext repository. My current solution is to add the PYTHONPATH from the Anaconda env. The Anaconda path is usually like that $HOME/anaconda/bin You can add it from within Jupyter Lab/Notebook like that: import sys sys.path.append("/some/path/to/add") import torchtext
https://stackoverflow.com/questions/51452412/
Installing PyTorch under conda fails with permissions error and Rolling back transaction
I'd like to use PyTorch in a Python program. The instructions for installing it require conda. After installing Conda I ran: >conda install -c pytorch pytorch (as instructed on the PyTorch [page][1]) It looked promising -- until the end. Solving environment: done ## Package Plan ## environment location: C:\ProgramData\Miniconda3 added / updated specs: - pytorch The following packages will be downloaded: package | build ---------------------------|----------------- icc_rt-2017.0.4 | h97af966_0 8.0 MB vs2015_runtime-15.5.2 | 3 2.2 MB pytorch-0.4.0 |py36_cuda80_cudnn7he774522_1 529.2 MB pytorch mkl-2018.0.3 | 1 178.1 MB numpy-1.14.5 | py36h9fa60d3_4 35 KB intel-openmp-2018.0.3 | 0 1.7 MB numpy-base-1.14.5 | py36h5c71026_4 3.8 MB vc-14.1 | h0510ff6_3 5 KB blas-1.0 | mkl 6 KB conda-4.5.8 | py36_0 1.0 MB mkl_fft-1.0.2 | py36hb217b18_0 113 KB mkl_random-1.0.1 | py36h77b88f5_1 268 KB ------------------------------------------------------------ Total: 724.4 MB The following NEW packages will be INSTALLED: blas: 1.0-mkl icc_rt: 2017.0.4-h97af966_0 intel-openmp: 2018.0.3-0 mkl: 2018.0.3-1 mkl_fft: 1.0.2-py36hb217b18_0 mkl_random: 1.0.1-py36h77b88f5_1 numpy: 1.14.5-py36h9fa60d3_4 numpy-base: 1.14.5-py36h5c71026_4 pytorch: 0.4.0-py36_cuda80_cudnn7he774522_1 pytorch The following packages will be UPDATED: conda: 4.5.4-py36_0 --> 4.5.8-py36_0 vc: 14-h0510ff6_3 --> 14.1-h0510ff6_3 vs2015_runtime: 14.0.25123-3 --> 15.5.2-3 Proceed ([y]/n)? y Downloading and Extracting Packages icc_rt-2017.0.4 | 8.0 MB | ############################################################################## | 100% vs2015_runtime-15.5. | 2.2 MB | ############################################################################## | 100% pytorch-0.4.0 | 529.2 MB | ############################################################################# | 100% mkl-2018.0.3 | 178.1 MB | ############################################################################# | 100% numpy-1.14.5 | 35 KB | ############################################################################## | 100% intel-openmp-2018.0. | 1.7 MB | ############################################################################## | 100% numpy-base-1.14.5 | 3.8 MB | ############################################################################## | 100% vc-14.1 | 5 KB | ############################################################################## | 100% blas-1.0 | 6 KB | ############################################################################## | 100% conda-4.5.8 | 1.0 MB | ############################################################################## | 100% mkl_fft-1.0.2 | 113 KB | ############################################################################## | 100% mkl_random-1.0.1 | 268 KB | ############################################################################## | 100% Preparing transaction: done Verifying transaction: done But then this. Executing transaction: failed ERROR conda.core.link:_execute(502): An error occurred while uninstalling package 'defaults::conda-4.5.4-py36_0'. PermissionError(13, 'Access is denied') Attempting to roll back. Rolling back transaction: done PermissionError(13, 'Access is denied') Apparently it was at least partly installed because PyCharm was able to see it. But when I asked PyCharm to install it in an environment, I got this error. RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566). Please follow the instructions at http://pytorch.org/ to install with miniconda instead. 
It suggests an alternative way to install PyTorch. So I tried that. >conda install pytorch torchvision -c pytorch Solving environment: failed PackagesNotFoundError: The following packages are not available from current channels: - torchvision Current channels: - https://conda.anaconda.org/pytorch/win-64 - https://conda.anaconda.org/pytorch/noarch - https://repo.anaconda.com/pkgs/main/win-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/free/win-64 - https://repo.anaconda.com/pkgs/free/noarch - https://repo.anaconda.com/pkgs/r/win-64 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/pro/win-64 - https://repo.anaconda.com/pkgs/pro/noarch - https://repo.anaconda.com/pkgs/msys2/win-64 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. But when I do that and search for PyTorch, I eventually find myself back at the original instructions. When I search for Torchvision, no Windows versions are listed.
Try the following steps in Windows: Create a virtual environment using the command: conda create -n py_env python=3.5 Then activate it: activate py_env (on Windows; on Linux/macOS use source activate py_env) conda install pytorch-cpu -c pytorch pip install torchvision Note: You can use any name instead of py_env. Thanks
https://stackoverflow.com/questions/51469194/
Dimension of hidden layer LSTM Pytorch
I was reading the implementation of LSTM in Pytorch. The code goes like this: lstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3 inputs = [torch.randn(1, 3) for _ in range(5)] # make a sequence of length 5 # initialize the hidden state. hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3)) for i in inputs: # Step through the sequence one element at a time. # after each step, hidden contains the hidden state. out, hidden = lstm(i.view(1, 1, -1), hidden) I don't understand why the hidden state is defined by a tuple of two tensors instead of one. After all, the hidden layer is simply a layer of a feed-forward neural network, which is a single vector.
Apart from the hidden state, an LSTM also has a cell state, C. That is why a tuple is passed. See https://pytorch.org/docs/stable/nn.html#lstmcell. If you don't pass C, it is taken to be all zeros. Note that this is the case only for LSTM; GRU and RNN do not have C.
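A short sketch contrasting the two (my own toy shapes, not from the question): the LSTM state is a (h, C) tuple, while the GRU state is a single tensor:

import torch
import torch.nn as nn

inp = torch.randn(5, 1, 3)            # (seq_len, batch, input_size)
h0 = torch.zeros(1, 1, 3)             # hidden state h
c0 = torch.zeros(1, 1, 3)             # cell state C

lstm = nn.LSTM(3, 3)
out, (hn, cn) = lstm(inp, (h0, c0))   # LSTM returns/accepts the tuple (h, C)

gru = nn.GRU(3, 3)
out, hn = gru(inp, h0)                # GRU has no cell state: a single tensor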
https://stackoverflow.com/questions/51483113/
How to use PyTorch to calculate partial derivatives?
I want to use PyTorch to get the partial derivatives between output and input. Suppose I have a function Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5, and I train a network to replace this function, then I use autograd to calculate dYdx1, dYdx2: net = torch.load('net_723.pkl') x = torch.tensor([[1,-1]],requires_grad=True).type(torch.FloatTensor) y = net(x) grad_c = torch.autograd.grad(y,x,create_graph=True,retain_graph=True)[0] Then I get a wrong derivative as: >>>tensor([[ 7.5583, -5.3173]]) but when I use function to calculate, I get the right answer: Y = 5*x[0,0]**4 + 3*x[0,1]**3 + 7*x[0,0]**2 + 9*x[0,1] - 5 grad_c = torch.autograd.grad(Y,x,create_graph=True,retain_graph=True)[0] >>>tensor([[ 34., 18.]]) Why does this happen?
A neural network is a universal function approximator. What that means is that, given enough computational resources, training time, nodes, etc., you can approximate any function. Without any further information on how you trained your network in the first example, I would suspect that your network simply does not fit properly to the underlying function, meaning that the internal representation of your network actually models a different function! For the second code snippet, automatic differentiation does give you the exact partial derivative. It does so via a different method, see another one of my answers on SO, on the topic of AutoDiff/Autograd specifically.
https://stackoverflow.com/questions/51500010/
How to create sub network reference in pytorch?
The simple explanation for what I aim to do is the following Given network O with structure of --------- ---------- ---------- ---------- ---------- | Input | -> (x) -> | Blob A | -> (xa) -> | Blob B | -> (xb) -> | Blob C | -> (xc) -> | Output | --------- ---------- ---------- ---------- ---------- I want to create a sub network to calculate a noise loss function for Blob C. The operation is: given input xb with original output xc, pass xb + noise through Blob C again to get xc'. Then the mse_loss is computed between xc and xc'. I have tried creating nn.Sequential from the original model, but I am not sure whether that creates a new deep copy or a reference. If I have missed anything, please comment. Thank you
So after some testing, I found out that if the layer references are kept (like in some variable), and a new model is then created using nn.Sequential with that layer architecture, the new model will share the same layer references. So when the original network is updated, the new model is also updated. The code I used to test my hypothesis is the following class TestNN(nn.Module): def __init__(self): super(TestNN, self).__init__() self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1) self.relu1 = nn.ReLU() self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.relu2 = nn.ReLU() self.conv3 = nn.Conv2d(64, 3, kernel_size=3, padding=1) self.relu3 = nn.ReLU() def forward(self, x): in_x = x h = self.relu1(self.conv1(in_x)) h = self.relu2(self.conv2(h)) h = self.relu3(self.conv3(h)) return h net = TestNN() testInput = torch.from_numpy(np.random.rand(1, 3, 3, 3)).float() target = torch.from_numpy(np.random.rand(1, 3, 3, 3)).float() criterion = nn.MSELoss() optimizer = optim.SGD(net.parameters(), lr=0.01) def subnetwork(model, start_layer_idx, end_layer_idx): subnetwork = nn.Sequential() for idx, layer in enumerate(list(model)[start_layer_idx: end_layer_idx+1]): subnetwork.add_module("layer_{}".format(idx), layer) return subnetwork start = subnetwork(net.children(), 0, 1) middle = subnetwork(net.children(), 2, 3) end = subnetwork(net.children(), 4, 5) print(end(middle(start(testInput)))) print(net(testInput)) for idx in range(5): net.zero_grad() out = net(testInput) loss = criterion(out, target) print("[{}] {:4f}".format(idx, loss)) loss.backward() optimizer.step() print(end(middle(start(testInput)))) print(net(testInput)) The outputs of the chained sub-networks and of the full network match both before and after the training, so I concluded that my hypothesis is correct. To finish my objective, I created a 'transparent' loss like the one from this tutorial. import random as rng # the snippet below assumes Python's random module as rng class NoiseLoss(nn.Module): def __init__(self, subnet, noise_count = 20, noise_range=0.3): super(NoiseLoss, self).__init__() self.net = subnet self.noise_count = noise_count self.noise_range = noise_range def add_noise(self, x): b, c, h, w = x.size() noise = torch.zeros(c, h, w) for i in range(self.noise_count): row, col = rng.randint(0, h-1), rng.randint(0, w-1) for j in range(c): noise[j,row,col] = (2*(rng.random()%self.noise_range)) - self.noise_range noise = noise.float() xp = x.clone() for b_idx in range(b): xp[b_idx,:,:,:] = xp[b_idx,:,:,:] + noise return xp def forward(self, x): self.loss = F.mse_loss(x, self.add_noise(x)) print(self.loss) return x noise_losses = [] testLoss = NoiseLoss(subnetwork(net.children(), 2, 3)) middle.add_module('noise_loss_test', testLoss) noise_losses.append(testLoss) and modify my loop to ... print("[{}] {:4f}".format(idx, loss)) for nl in noise_losses: loss += nl.loss loss.backward(retain_graph=True) ... If I missed something, please leave a comment
https://stackoverflow.com/questions/51511074/
How to train neural network over days?
I need to train a CNN that will take 1-2 days to train on a remotely accessed GPU server. Will I simply need to leave my laptop on overnight for the training to be complete or is there a way to save the state of the training and resume from there the next day? (Implementation in pytorch)
I assume you ssh into your remote server. When training the model by running your script, say, $ python train.py, simply prepend nohup and background it: $ nohup python train.py & This tells your process to disregard the hangup signal when you exit the ssh session and shut down your laptop.
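nohup keeps the process alive, but since the question also asks about saving the state of the training and resuming the next day, the usual complement is periodic checkpointing. A minimal sketch (the file name and the epoch/model/optimizer variables are placeholders for whatever your training script uses):

import torch

# Inside the training loop, save a checkpoint every few epochs:
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pth')

# The next day, restore and continue where you left off:
ckpt = torch.load('checkpoint.pth')
model.load_state_dict(ckpt['model_state_dict'])
optimizer.load_state_dict(ckpt['optimizer_state_dict'])
start_epoch = ckpt['epoch'] + 1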
https://stackoverflow.com/questions/51527668/
Debug Pytorch Optimizer
When I run optimizer.step on my code, I get this error RuntimeError: sqrt not implemented for 'torch.LongTensor' C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k) 186 # but it's overkill for just that one bit of state. 187 def magic_deco(arg): --> 188 call = lambda f, *a, **k: f(*a, **k) 189 190 if callable(arg): C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns) 1178 else: 1179 st = clock2() -> 1180 exec(code, glob, local_ns) 1181 end = clock2() 1182 out = None <timed exec> in <module>() C:\Program Files\Anaconda3\lib\site-packages\torch\optim\adam.py in step(self, closure) 98 denom = max_exp_avg_sq.sqrt().add_(group['eps']) 99 else: --> 100 denom = exp_avg_sq.sqrt().add_(group['eps']) 101 102 bias_correction1 = 1 - beta1 ** state['step'] RuntimeError: sqrt not implemented for 'torch.LongTensor' I am using my own loss function. My question is how will I debug this error? Is there a quick way to see the type of all my variables? I am manually doing it and all of them are type float (including the output of my custom loss). I can't figure out why we are even getting an error related to a LongTensor. How does the optimizer.step function work in PyTorch? Just in case, below is most of the code. This is the model: class LSTM(nn.Module): def __init__(self, mel_channels=40, frames=81, hidden_dim=768, proj_dim=256): super(LSTM, self).__init__() self.hidden_dim = hidden_dim self.mel_channels = mel_channels self.frames = frames self.proj_dims = proj_dim weight = torch.tensor([10]) bias = torch.tensor([-5]) self.w = nn.Parameter(weight) self.b = nn.Parameter(bias) # The LSTM takes word embeddings as inputs, and outputs hidden states # with dimensionality hidden_dim. self.lstm1 = nn.LSTM(mel_channels, hidden_dim, batch_first=False) print("here1") self.lstm2 = nn.LSTM(proj_dim, hidden_dim, batch_first=False) self.lstm3 = nn.LSTM(proj_dim, hidden_dim, batch_first=False) self.lstms = [self.lstm1, self.lstm2, self.lstm3] self.proj1 = nn.Linear(hidden_dim, proj_dim) self.proj2 = nn.Linear(hidden_dim, proj_dim) self.proj3 = nn.Linear(hidden_dim, proj_dim) self.projs = [self.proj1, self.proj2, self.proj3] def init_states(self, batchsize): # Before we've done anything, we dont have any hidden state. # Refer to the Pytorch documentation to see exactly # why they have this dimensionality. 
# The axes semantics are (num_layers, minibatch_size, hidden_dim) return [(torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), (torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), (torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), ] def forward(self, inputs, states=None): time, batchsize, inputdim = list(inputs.shape) if states is None: states = self.init_states(batchsize) output = inputs print(output.type()) for i in range(3): print(output.type()) output, state = self.lstms[i](output, states[i]) output = self.projs[i](output) # perform normalization on this output here output = output[-1] print(output.type()) output = F.normalize(output, p=2, dim=-1) print(output.type()) self.state = state print(output.type()) return output def get_w(self): print(get_w.type()) return(self.w) def get_b(self): print(get_b.type()) return(self.b) def get_state(self): print(get_state()) return(self.state) This is the custom loss: class CustomLoss(_Loss): def __init__(self, size_average=True, reduce=True): super(CustomLoss, self).__init__(size_average, reduce) def forward(self, S, N, M, type='softmax',): return self.loss_cal(S, N, M, type) def loss_cal(self, S, N, M, type="softmax",): self.A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)] for i in range(N)], dim=0) if type == "softmax": self.B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8) total = torch.abs(torch.sum(self.A - self.B)) else: raise AssertionError("loss type should be softmax or contrast !") return total Finally, this is the main file model=LSTM() optimizer = optim.Adam(list(model.parameters()), lr=LEARNING_RATE) model = model.to(device) best_loss = 100. generator = SpeakerVerificationDataset() dataloader = DataLoader(generator, batch_size=4, shuffle=True, num_workers=0) loss_history = [] update_counter = 1 for epoch in range(NUM_EPOCHS): print("Epoch # : ", epoch + 1) for step in range(STEPS_PER_EPOCH): # get batch dataset for i_batch, sample_batched in enumerate(dataloader): print(sample_batched['MelData'].size()) inputs = sample_batched['MelData'].float() inputs=sample_batched['MelData'].view(180, M*N, 40).float() print((inputs.size())) inputs = inputs #print(here) # remove previous gradients optimizer.zero_grad() # get gradients and loss at this iteration #predictions,state,w,b = model(inputs) predictions = model(inputs) w = model.w b = model.b predictions = similarity(output=predictions,w=w,b=b) #loss = CustomLoss() S = predictions loss_func = CustomLoss() loss = loss_func.loss_cal(S=S,N=N,M=M) loss.backward() # update the weights print("start optimizing") optimizer.step() loss_history.append(loss.item()) print(update_counter, ":", loss_history[-1]) update_counter += 1 print() # save the weights torch.save(model.state_dict(), CHECKPOINT_PATH) print("Saving weights") print() print()
The error comes from here: weight = torch.tensor([10]) bias = torch.tensor([-5]) self.w = nn.Parameter(weight) self.b = nn.Parameter(bias) torch.tensor([10]) creates a LongTensor (64-bit integers), so the parameters w and b (and therefore Adam's running averages for them) are integer tensors, and sqrt is not implemented for torch.LongTensor. Had to change the literals to floats: weight = torch.tensor([10.0]) bias = torch.tensor([-5.0]) self.w = nn.Parameter(weight) self.b = nn.Parameter(bias)
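To answer the debugging part of the question ("is there a quick way to see the type of all my variables?"), one handy sketch is to iterate over the model's named parameters (the Linear layer below is only a stand-in for your own network):

import torch.nn as nn

model = nn.Linear(2, 2)  # stand-in; substitute your own model here
for name, p in model.named_parameters():
    print(name, p.dtype)

# nn.Parameter keeps the dtype of the tensor it wraps, so
# torch.tensor([10]) yields torch.int64 while torch.tensor([10.0])
# yields torch.float32; any int64 entry here is a red flag for Adam.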
https://stackoverflow.com/questions/51529463/
tensorflow stop_gradient equivalent in pytorch
What is the tf.stop_gradient() equivalent (provides a way to not compute gradient with respect to some variables during back-propagation) in pytorch?
Check out x.detach(): it returns a new tensor that shares the same data but is detached from the computation graph, so no gradient is back-propagated through it. That is the closest equivalent of tf.stop_gradient().
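A tiny sketch of the behaviour (my own toy example):

import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.detach()        # z shares data with y but is cut out of the graph
out = (y + z).sum()   # z acts like a constant here
out.backward()
print(x.grad)         # tensor([2., 2., 2.]): only the y path contributes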
https://stackoverflow.com/questions/51529974/
Can we deploy Pytorch models to Google Cloud ML Engine? If so, how?
I searched to see if anybody has done this. Somebody has provided a workaround, but I couldn't confirm it works. Here are the links: How to setup pytorch in google-cloud-ml https://discuss.pytorch.org/t/pytorch-in-google-cloud-ml-engine/12818 Currently, I am using Floydhub, but that is just to start with. I work mostly with Google Cloud products, so I want to keep my deep learning projects as close to them as possible.
The answer you referred to will likely work (I've also not fully tested it). The Cloud ML Engine team is working hard to simplify this. In the meantime, if you prefer working with a VM (just remember to shut it off when not in use), follow instructions on this page. For example, export IMAGE_FAMILY="pytorch-latest-cu91" export ZONE="us-west1-b" export INSTANCE_NAME="my-instance" gcloud compute instances create $INSTANCE_NAME \ --zone=$ZONE \ --image-family=$IMAGE_FAMILY \ --image-project=deeplearning-platform-release \ --maintenance-policy=TERMINATE \ --accelerator='type=nvidia-tesla-v100,count=1' \ --metadata='install-nvidia-driver=True'
https://stackoverflow.com/questions/51553365/
How to Invert AvgPool2d?
Is it possible to invert an avgPool2d operation in PyTorch, like maxunpool2d for a maxpool2d operation, and if so, how could that be done? I've already checked the documentation, and there isn't an option to return the indices, like in the maxpool2d operation, so I assume the unpooling won't be possible in a similar way. EDIT: I found a document by Intel which describes how the unpooling works. After checking the math regarding the avgpool2d function, the unpooling seems to be pretty straightforward: basically mirroring every input element onto multiple output elements and applying padding in order to get a correct output size.
I think you are looking for ConvTranspose2d, aka deconvolution: this layer allows you to "upsample" the pooled layer. Using fixed weights you can replicate the average-pooled values. You can also train this layer, hopefully getting something better.
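A minimal sketch of the fixed-weight idea for a single channel and a 2x2, stride-2 average pool (my own illustration; the multi-channel case would use the groups argument):

import torch
import torch.nn as nn

pool = nn.AvgPool2d(kernel_size=2, stride=2)

# "Unpool" by mirroring each pooled value back onto its 2x2 window:
unpool = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
with torch.no_grad():
    unpool.weight.fill_(1.0)  # copy each value into every cell of its window

x = torch.arange(16.0).view(1, 1, 4, 4)
pooled = pool(x)             # shape (1, 1, 2, 2)
restored = unpool(pooled)    # shape (1, 1, 4, 4); each 2x2 block holds its window's mean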
https://stackoverflow.com/questions/51574547/
Pytorch how to get the gradient of loss function twice
Here is what I'm trying to implement: We calculate loss based on F(X), as usual. But we also define "adversarial loss" which is a loss based on F(X + e). e is defined as dF(X)/dX multiplied by some constant. Both loss and adversarial loss are backpropagated for the total loss. In tensorflow, this part (getting dF(X)/dX) can be coded like below: grad, = tf.gradients( loss, X ) grad = tf.stop_gradient(grad) e = constant * grad Below is my pytorch code: class DocReaderModel(object): def __init__(self, embedding=None, state_dict=None): self.train_loss = AverageMeter() self.embedding = embedding self.network = DNetwork(opt, embedding) self.optimizer = optim.SGD(parameters) def adversarial_loss(self, batch, loss, embedding, y): self.optimizer.zero_grad() loss.backward(retain_graph=True) grad = embedding.grad grad.detach_() perturb = F.normalize(grad, p=2)* 0.5 self.optimizer.zero_grad() adv_embedding = embedding + perturb network_temp = DNetwork(self.opt, adv_embedding) # This is how to get F(X) network_temp.training = False network_temp.cuda() start, end, _ = network_temp(batch) # This is how to get F(X) del network_temp # I even deleted this instance. return F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1]) def update(self, batch): self.network.train() start, end, pred = self.network(batch) loss = F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1]) loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y) loss_total = loss + loss_adv self.optimizer.zero_grad() loss_total.backward() self.optimizer.step() I have a few questions: 1) I substituted tf.stop_gradient with grad.detach_(). Is this correct? 2) I was getting "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time." so I added retain_graph=True at the loss.backward. That specific error went away. However now I'm getting a memory error after a few epochs (RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58 ). I suspect I'm unnecessarily retaining the graph. Can someone let me know pytorch's best practice on this? Any hint or even a short comment will be highly appreciated.
I think you are trying to implement a generative adversarial network (GAN), but from the code, I can't follow what you are trying to achieve, as there are a few missing pieces for a GAN to work. I can see there's a discriminator network module, DNetwork, but the generator network module is missing. If I had to guess, when you say 'loss function twice', I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model. As an example, let's take a look at this Wasserstein GAN Jupyter notebook. I'll skip the less important bits and zoom into the important ones here: First, import PyTorch libraries and set up # Set up batch size, image size, and size of noise vector: bs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later Build a discriminator module class DCGAN_D(nn.Module): def __init__(self): ... truncated, the usual neural nets stuffs, layers, etc ... def forward(self, input): ... truncated, the usual neural nets stuffs, layers, etc ... Build a generator module class DCGAN_G(nn.Module): def __init__(self): ... truncated, the usual neural nets stuffs, layers, etc ... def forward(self, input): ... truncated, the usual neural nets stuffs, layers, etc ... Put them all together netG = DCGAN_G().cuda() netD = DCGAN_D().cuda() The optimizer needs to be told what variables to optimize. A module automatically keeps track of its variables. optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4) optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4) One forward step and one backward step for the Discriminator. Here, the network can calculate the gradient during the backward pass, depending on the input to this function. So, in my case, I have 3 types of losses: generator loss, discriminator real image loss, discriminator fake image loss. I can get the gradient of the loss function three times for 3 different net passes. def step_D(input, init_grad): # input can be from generator's generated image data or input image from dataset err = netD(input) err.backward(init_grad) # backward pass net to calculate gradient return err # loss Control trainable parameters [IMPORTANT] Trainable parameters in the model are those that require gradients. def make_trainable(net, val): for p in net.parameters(): p.requires_grad = val # note, i.e, this is later set to False below in netG update in the train loop. In TensorFlow, this part can be coded like below: grad = tf.gradients(loss, X) grad = tf.stop_gradient(grad) So, I think this will answer your first question, "I substituted tf.stop_gradient with grad.detach_(). Is this correct?" Train loop You can see here how the 3 different loss functions are being called.
def train(niter, first=True): for epoch in range(niter): # Make iterable from PyTorch DataLoader data_iter = iter(dataloader) i = 0 while i < n: ########################### # (1) Update D network ########################### make_trainable(netD, True) # train the discriminator d_iters times d_iters = 100 j = 0 while j < d_iters and i < n: j += 1 i += 1 # clamp parameters to a cube for p in netD.parameters(): p.data.clamp_(-0.01, 0.01) data = next(data_iter) ##### train with real ##### real_cpu, _ = data real_cpu = real_cpu.cuda() real = Variable( data[0].cuda() ) netD.zero_grad() # Real image discriminator loss errD_real = step_D(real, one) ##### train with fake ##### fake = netG(create_noise(real.size()[0])) input.data.resize_(real.size()).copy_(fake.data) # Fake image discriminator loss errD_fake = step_D(input, mone) # Discriminator loss errD = errD_real - errD_fake optimizerD.step() ########################### # (2) Update G network ########################### make_trainable(netD, False) netG.zero_grad() # Generator loss errG = step_D(netG(create_noise(bs)), one) optimizerG.step() print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f' % (epoch, niter, i, n, errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0])) "I was getting "RuntimeError: Trying to backward through the graph a second time..." PyTorch has this behaviour; to reduce GPU memory usage, during the .backward() call, all the intermediary results (if you have like saved activations, etc.) are deleted when they are not needed anymore. Therefore, if you try to call .backward() again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see). It depends on what you are trying to do. You can call .backward(retain_graph=True) to make a backward pass that will not delete intermediary results, and so you will be able to call .backward() again. All but the last call to backward should have the retain_graph=True option. Can someone let me know pytorch's best practice on this As you can see from the PyTorch code above and from the way things are being done in PyTorch which is trying to stay Pythonic, you can get a sense of PyTorch's best practice there.
https://stackoverflow.com/questions/51578235/
Pytorch- why is “accumulating” the default mode of .gradient?
Why didn't the authors just make it overwrite the gradient? Is there any specific reason for keeping it accumulated?
Because if you use the same network twice (or the same weights) in the forward pass, the gradients should accumulate and not be overridden. Also, since PyTorch's computation graph is defined by the run, it makes sense to accumulate. See https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/9
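A small sketch of the accumulating behaviour (my own toy example):

import torch

w = torch.ones(2, requires_grad=True)

(w * 2).sum().backward()
print(w.grad)    # tensor([2., 2.])

(w * 3).sum().backward()
print(w.grad)    # tensor([5., 5.]): gradients were added, not overwritten

w.grad.zero_()   # hence the usual optimizer.zero_grad() before each step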
https://stackoverflow.com/questions/51586819/
How to get backward gradients from sparse matrices in PyTorch networks?
I am following the PyTorch tutorial example: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html The example runs without any problems, but as soon as I switched it to my own dataset (which is a sparse tensor, because it is too big to be used as a dense tensor), I run into this error: RuntimeError Traceback (most recent call last) <ipython-input-127-8b4999644085> in <module>() 41 # Backward pass: compute gradient of the loss with respect to model 42 # parameters ---> 43 loss.backward() 44 45 # Calling the step function on an Optimizer makes an update to its ~/miniconda3/envs/py3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 91 products. Defaults to ``False``. 92 """ ---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph) 94 95 def register_hook(self, hook): ~/miniconda3/envs/py3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 87 Variable._execution_engine.run_backward( 88 tensors, grad_tensors, retain_graph, create_graph, ---> 89 allow_unreachable=True) # allow_unreachable flag 90 91 RuntimeError: Expected object of type torch.FloatTensor but found type torch.sparse.FloatTensor for argument #2 'mat2' I tried switching the optimizer (Adagrad, Adam), but it doesn't seem to work. Edit: Added more of the error output. The error occurs on backward()
Looks like this functionality is still being worked on: https://github.com/pytorch/pytorch/issues/2389
https://stackoverflow.com/questions/51603798/
Disparate result after setting requires_grad=True
I am currently using PyTorch for deep neural network. I wrote a toy neural network shown below and I found that whether or not I set requires_grad=True for label y makes huge difference. When y.requires_grad=True, the neural network diverges. I am wondering why this happens. import torch x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1) y = x.pow(2) + 10 * torch.rand(x.size()) x.requires_grad = True # this is where problem occurs y.requires_grad = True class Net(torch.nn.Module): def __init__(self, n_feature, n_hidden, n_output): super(Net, self).__init__() self.hidden = torch.nn.Linear(n_feature, n_hidden) self.predict = torch.nn.Linear(n_hidden, n_output) def forward(self, x): x = torch.relu(self.hidden(x)) x = self.predict(x) return x net = Net(1, 10, 1) optimizer = torch.optim.SGD(net.parameters(), lr=0.5) criterion = torch.nn.MSELoss() for t in range(200): y_pred = net(x) loss= criterion(y_pred, y) optimizer.zero_grad() loss.backward() print("Epoch {}: {}".format(t, loss)) optimizer.step()
It seems that you are using an outdated version of PyTorch. In more recent versions (0.4.0+), this will throw you the following error: AssertionError: nn criterions don't compute the gradient w.r.t. targets - please mark these tensors as not requiring gradients Essentially, it tells you that it will only work if you set the requires_grad flag to False for your targets. The reason why this works at all in prior versions is indeed very interesting, as is the reason it causes diverging behavior. My guess would be that a backwards pass would then also change your targets (instead of only changing your weights), which is obviously something you do not desire.
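In other words, the fix for the snippet in the question is simply not to set the flag on the targets. A minimal sketch (using a stand-in linear layer instead of the questioner's Net):

import torch

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)
y = x.pow(2) + 10 * torch.rand(x.size())   # targets: requires_grad stays False
x.requires_grad = True                     # fine for inputs, if you need d(loss)/dx

criterion = torch.nn.MSELoss()
pred = torch.nn.Linear(1, 1)(x)            # stand-in for net(x)
loss = criterion(pred, y)                  # no assertion: y does not require grad
loss.backward()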
https://stackoverflow.com/questions/51627601/
Output of DecoderRNN contains extra dimensions (Pytorch)
I have developed an Encoder(CNN)-Decoder (RNN) network for image captioning in pytorch. The decoder network takes in two inputs: the context feature vector from the Encoder and the word embeddings of the caption for training. The context feature vector is of size = embed_size, which is also the embedding size of each word in the caption. My question here is more concerned with the output of the class DecoderRNN. Please refer to the code below. class DecoderRNN(nn.Module): def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1): super(DecoderRNN, self).__init__() self.embed_size = embed_size self.hidden_size = hidden_size self.vocab_size = vocab_size self.num_layers = num_layers self.linear = nn.Linear(hidden_size, vocab_size) self.embed = nn.Embedding(vocab_size, embed_size) self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first = True) def forward(self, features, captions): embeddings = self.embed(captions) embeddings = torch.cat((features.unsqueeze(1), embeddings),1) hiddens,_ = self.lstm(embeddings) outputs = self.linear(hiddens) return outputs In the forward function, I send in a sequence of (batch_size, caption_length+1, embed_size) (concatenated tensor of context feature vector and the embedded caption). The output of the sequence should be captions and of the shape (batch_size, caption_length, vocab_size), but I am still receiving an output of shape (batch_size, caption_length+1, vocab_size). Can anyone please suggest what I should alter in my forward function so that the extra step in the 2nd dimension is not produced? Thanks in advance
Since in an LSTM (or in any RNN) there will be one output for each time step (or caption position here), I do not see any problem. What you need to do is make the input caption_length long at the second dimension to get the required output. (Alternatively, people usually add an <END OF SENTENCE> tag to the target, so the target length becomes caption_length + 1.)
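Under the shapes described in the question (batch_first=True, image feature prepended at step 0), either option is a one-line change inside forward; a hedged sketch of both:

# Option A: drop the last time step of the output so it aligns with the
# caption targets (the prediction at step t comes from input step t):
outputs = self.linear(hiddens)
outputs = outputs[:, :-1, :]   # (batch_size, caption_length, vocab_size)

# Option B: keep all caption_length + 1 steps and instead append an <END>
# token to each target caption, so targets are also caption_length + 1 long.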
https://stackoverflow.com/questions/51639976/
How to use PyTorch to calculate the gradients of outputs w.r.t. the inputs in a neural network?
I have a trained network. And I want to calculate the gradients of outputs w.r.t. the inputs. By querying the PyTorch Docs, torch.autograd.grad may be useful. So, I use the following code: x_test = torch.randn(D_in,requires_grad=True) y_test = model(x_test) d = torch.autograd.grad(y_test, x_test)[0] model is the neural network. x_test is the input of size D_in and y_test is a scalar output. I want to compare the calculated result with the numerical difference by scipy.misc.derivative. So, I calculated the partial derivative by setting an index. idx = 3 x_test = torch.randn(D_in,requires_grad=True) y_test = model(x_test) print(x_test[idx].item()) d = torch.autograd.grad(y_test, x_test)[0] print(d[idx].item()) def fun(x): x_input = x_test.detach() x_input[idx] = x with torch.no_grad(): y = model(x_input) return y.item() x0 = x_test[idx].item() print(x0) print(derivative(fun, x0, dx=1e-6)) But I got totally different results. The gradient calculated by torch.autograd.grad is -0.009522666223347187, while that by scipy.misc.derivative is -0.014901161193847656. Is there anything wrong about the calculation? Or am I using torch.autograd.grad wrongly?
In fact, it is very likely that your given code is completely correct. Let me explain this by redirecting you to a little background information on backpropagation, or rather in this case Automatic Differentiation (AutoDiff). The specific implementation of many packages is based on AutoGrad, a common technique to get the exact derivatives of a function/graph. It can do this by essentially "inverting" the forward computational pass to compute piece-wise derivatives of atomic function blocks, like addition, subtraction, multiplication, division, etc., and then "chaining them together". I explained AutoDiff and its specifics in a more detailed answer in this question. On the contrary, scipy's derivative function is only an approximation to this derivative by using finite differences. You would take the results of the function at close-by points, and then calculate a derivative based on the difference in function values for those points. This is why you see a slight difference in the two gradients, since this can be an inaccurate representation of the actual derivative.
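A minimal, self-contained illustration (my own toy polynomial, not the questioner's network) of why finite differences only approximate the true derivative:

def f(x):
    return 5 * x**4 + 7 * x**2   # f'(x) = 20x^3 + 14x

def central_diff(f, x0, dx):
    return (f(x0 + dx) - f(x0 - dx)) / (2 * dx)

x0 = 1.0
print(20 * x0**3 + 14 * x0)        # 34.0, the exact derivative (what autograd returns)
print(central_diff(f, x0, 1e-6))   # ~34.0, but limited by floating-point cancellation
print(central_diff(f, x0, 1e-1))   # ~34.2, truncation error grows with dx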
https://stackoverflow.com/questions/51666410/
Pytorch embedding RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index'
I'm getting this error saying RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' But what does it mean by argument #3 "index"? I can't find an "index" argument in torch.embedding (source here: https://pytorch.org/docs/stable/_modules/torch/nn/modules/sparse.html#Embedding) It seems like I'm passing the embedding the wrong parameters. I even changed the data type of my input like below but the error persists. batch['doc_tok'] = batch['doc_tok'].long() batch['query_tok'] = batch['query_tok'].long() Any comment (even a short one!) or just a list of keywords to look at will be highly appreciated! Here is a full traceback. Traceback (most recent call last): File "train_v2.py", line 110, in <module> main() File "train_v2.py", line 81, in main model.update(batch) File "/home/aerin/Desktop/squad_vteam/src/model.py", line 129, in update loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y) File "/home/aerin/Desktop/squad_vteam/src/model.py", line 104, in adversarial_loss start, end, _ = self.network(batch) File "/home/aerin/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/aerin/Desktop/squad_vteam/src/dreader.py", line 78, in forward doc_mask, query_mask = self.lexicon_encoder(batch) File "/home/aerin/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/aerin/Desktop/squad_vteam/src/encoder.py", line 116, in forward doc_emb, query_emb = emb(doc_tok), emb(query_tok) File "/home/aerin/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/aerin/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 108, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/aerin/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/functional.py", line 1076, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' Update: I even sent the whole model.network to cpu but am still getting the same error. batch['doc_tok']=batch['doc_tok'].long().cpu() batch['query_tok']=batch['query_tok'].long().cpu() self.network.cpu() print(batch['doc_tok'].dtype, batch['query_tok'].dtype) # They are both torch.int64 torch.int64 start, end, _ = self.network(batch) At this point, I'm suspecting this might be a bug... model.py code: https://github.com/byorxyz/san_mrc/blob/master/src/model.py Network defined: https://github.com/byorxyz/san_mrc/blob/master/src/dreader.py
It seems to me like your input/target tensors (batch['doc_tok'], etc.) and your network and its variables (index refers to the token indices handed to the Embedding layer) are on different devices: given the error, the index tensor is on the GPU while the embedding weight is on the CPU. If you want everything to run on GPU, you need both to: Load your data there, e.g. batch['doc_tok'].cuda() Load your model there, e.g. model.network.cuda() Same goes if you want to run on CPU, replacing with .cpu().
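A common device-agnostic pattern (PyTorch 0.4+) that avoids such mismatches entirely; this is a sketch plugging into the question's update() code, with the batch keys taken from the question:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model.network = model.network.to(device)               # move the whole model once
batch['doc_tok'] = batch['doc_tok'].long().to(device)  # and every input tensor
batch['query_tok'] = batch['query_tok'].long().to(device)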
https://stackoverflow.com/questions/51681243/
import torch error in google Colaboratory
I get the following error on google colaboratory. ImportErrorTraceback (most recent call last) in () ----> 1 import torch /usr/local/lib/python3.6/site-packages/torch/__init__.py in () 54 except ImportError: 55 pass ---> 56 from torch._C import * 57 58 __all__ += [name for name in dir(_C) ImportError: No module named _C I tried importing from a different directory. But, still get the same error. Any help?
Try restarting runtime. Colab sometimes gives errors like this :P
https://stackoverflow.com/questions/51698148/
How to release temporarily consumed GPU memory after each forward?
I have a class like this: class Stem(nn.Module): def __init__(self): super(Stem, self).__init__() self.out_1 = BasicConv2D(3, 32, kernelSize = 3, stride = 2) self.out_2 = BasicConv2D(32, 32, kernelSize = 3, stride = 1) self.out_3 = BasicConv2D(32, 64, kernelSize = 3, stride = 1, padding = 1) def forward(self, x): x = self.out_1(x) x = self.out_2(x) x = self.out_3(x) return x and the attributes out_1,2,3 of Stem are instances of the class below: class BasicConv2D(nn.Module): def __init__(self, inChannels, outChannels, kernelSize, stride, padding = 0): super(BasicConv2D, self).__init__() self.conv = nn.Conv2d(inChannels, outChannels, kernel_size = kernelSize, stride = stride, padding = padding, bias = False) self.bn = nn.BatchNorm2d(outChannels, eps = 0.001, momentum = 0.1, affine = True) self.relu = nn.ReLU(inplace = False) def forward(self, x): x = self.conv(x) x = self.bn(x) y = self.relu(x) return y When training, within Stem.forward(), nvidia-smi shows that each line consumes x MB of GPU memory, but after Stem.forward() has finished, the memory won't be released, causing the training to quickly crash, out of GPU memory. The question is thus: how to release the temporarily consumed GPU memory?
Your model does look good, so you might want to have a general look at how pytorch manages memory allocation. I suspect that you simply keep pointers to your return value (y) alive (e.g. by accumulating a loss or some such). As pytorch stores the entire attached computation graph, you never free the memory. See this question and in particular this answer for a more detailed discussion.
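To make that failure mode concrete, a small runnable sketch of the pattern the linked answer warns about (the Linear layer and variable names are my own):

import torch

model = torch.nn.Linear(4, 1)
total_loss = 0.0
for _ in range(3):
    loss = model(torch.randn(8, 4)).pow(2).mean()
    # Problematic: total_loss += loss  keeps each iteration's graph alive
    total_loss += loss.item()  # .item() detaches, so the graph can be freed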
https://stackoverflow.com/questions/51706902/
Pytorch installation issue under Anaconda
I followed the link here to install the fastai library using pip install git+https://github.com/fastai/fastai.git It gave me the following error message. These messages stay the same even after I installed Pytorch successfully using conda install pytorch-cpu -c pytorch and pip3 install torchvision. What can be the reason? Collecting torch<0.4 (from fastai==0.7.0) Using cached https://files.pythonhosted.org/packages/5f/e9/bac4204fe9cb1a002ec6140b47f51affda1655379fe302a1caef421f9846/torch-0.1.2.post1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\shuxi\AppData\Local\Temp\pip-install-7sjptuad\torch\setup.py", line 11, in <module> raise RuntimeError(README) RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566). Please follow the instructions at http://pytorch.org/ to install with miniconda instead. ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in C:\Users\shuxi\AppData\Local\Temp\pip-install-7sjptuad\torch\
To fix this, do: $ pip install --upgrade git+https://github.com/fastai/fastai.git OR $ pip install --no-cache-dir git+https://github.com/fastai/fastai.git Your command probably failed because you installed an old version of torch (0.1.2) some time ago. pip installs were not supported for that torch version; instead, it directed the user to open pytorch.org in the browser. In your case, pip is reusing this cached package. --upgrade forces pip to choose the latest version of all depending packages.
https://stackoverflow.com/questions/51735017/
pytorch Rnn.py RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
I'm getting an CUDNN_STATUS_INTERNAL_ERROR error like below. python train_v2.py Traceback (most recent call last): File "train_v2.py", line 113, in <module> main() File "train_v2.py", line 74, in main model.cuda() File "/home/ahkim/Desktop/squad_vteam/src/model.py", line 234, in cuda self.network.cuda() File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 249, in cuda return self._apply(lambda t: t.cuda(device)) File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply module._apply(fn) File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply module._apply(fn) File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply module._apply(fn) File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 112, in _apply self.flatten_parameters() File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 105, in flatten_parameters self.batch_first, bool(self.bidirectional)) RuntimeError: CUDNN_STATUS_INTERNAL_ERROR What should I try to resolve this issue? I tried deleting .nv but no success. nvidia-smi Wed Aug 8 10:56:29 2018 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 390.67 Driver Version: 390.67 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX TIT... Off | 00000000:04:00.0 Off | N/A | | 22% 21C P8 15W / 250W | 125MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 GeForce GTX TIT... Off | 00000000:05:00.0 Off | N/A | | 22% 24C P8 14W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 GeForce GTX TIT... Off | 00000000:08:00.0 Off | N/A | | 22% 23C P8 14W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 GeForce GTX TIT... Off | 00000000:09:00.0 Off | N/A | | 22% 23C P8 15W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 4 GeForce GTX TIT... Off | 00000000:85:00.0 Off | N/A | | 22% 24C P8 14W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 5 GeForce GTX TIT... Off | 00000000:86:00.0 Off | N/A | | 22% 23C P8 15W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 6 GeForce GTX TIT... Off | 00000000:89:00.0 Off | N/A | | 22% 21C P8 15W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 7 GeForce GTX TIT... 
Off | 00000000:8A:00.0 Off | N/A | | 22% 23C P8 15W / 250W | 11MiB / 12212MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1603 C /usr/bin/python 114MiB | +-----------------------------------------------------------------------------+ Update: The same code runs without error using Nvidia Driver Version: 396.26 (cuda V9.1.85. torch.backends.cudnn.version(): 7102). I'm getting an error using Driver Version: 390.67 (cuda V9.1.85. torch.backends.cudnn.version(): 7102)
Solved by the steps below. export LD_LIBRARY_PATH="/usr/local/cuda-9.1/lib64" Due to an NFS issue, keep the pytorch cache outside of NFS. For example: $ rm ~/.nv -rf $ mkdir -p /tmp/$USER/.nv $ ln -s /tmp/$USER/.nv ~/.nv
https://stackoverflow.com/questions/51752869/
Change dimensions of image when creating custom dataloader in pytorch
I was training my CNN on the CIFAR10 dataset. I extracted the 50,000 images of dimensions (32 x 32 x 3) and read them into a list. I converted them into numpy arrays and stored them in a list. I did the same with the labels for my training and testing. I then constructed my CNN of two layers and a single FC in pytorch. Before doing this I created my own custom data loader. While doing so, the dimensions of the images that I am feeding in are changing. The dimensions (32 x 32 x 3) are changing to (3 x 32 x 32) and I am not able to train my neural network. tensor_x = torch.stack([torch.Tensor(i) for i in train_images]) tensor_y = torch.stack([torch.Tensor(i) for i in train_labels]) dataset = data_utils.TensorDataset(tensor_x , tensor_y) train_dataloader = data_utils.DataLoader(dataset=dataset) tensor_x = torch.stack([torch.Tensor(i) for i in test_images]) tensor_y = torch.stack([torch.Tensor(i) for i in test_labels]) dataset = data_utils.TensorDataset(tensor_x , tensor_y) test_dataloader = data_utils.DataLoader(dataset=dataset) RuntimeError: Given groups=1, weight[64, 3, 3, 3], so expected input[1, 32, 32, 3] to have 3 channels, but got 32 channels instead
In PyTorch, image channels come first, so your image should be 3, 32, 32 and not 32, 32, 3. If the image is a numpy array, then you can do something like this: image = image.transpose((2, 0, 1))
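If the data is already a torch.Tensor rather than a numpy array, the equivalent operation is permute:

image = image.permute(2, 0, 1)   # (H, W, C) -> (C, H, W)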
https://stackoverflow.com/questions/51769200/
How to convert FloatTensor to ByteTensor with Pytorch?
I'm new to Pytorch and neural network programming, and I've encountered an issue I'm not able to solve on my own. My data are numpy arrays of 1 and 0. But when I try to train my net, I get this error: RuntimeError: Expected object of type torch.ByteTensor but found type torch.FloatTensor for argument #2 'mat2' The line where the error comes from is in the forward method of my net x = self.fc1(x) I've tried these to convert my tensors but I still get the error: x = x.type('torch.ByteTensor') and x.byte()
x.byte() returns what you need, but it's not an "inplace" method. Try doing: x = x.byte()
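The same applies to all of the tensor casting helpers; a small sketch:

import torch

x = torch.zeros(3)
x = x.byte()    # torch.uint8
x = x.float()   # torch.float32
x = x.long()    # torch.int64
# each call returns a new tensor; the reassignment is what makes it stick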
https://stackoverflow.com/questions/51769901/
How to use different data augmentation for Subsets in PyTorch
How to use different data augmentation (transforms) for different Subsets in PyTorch? For instance: train, test = torch.utils.data.random_split(dataset, [80000, 2000]) train and test will have the same transforms as dataset. How to use custom transforms for these subsets?
My current solution is not very elegant, but works: from copy import copy train_dataset, test_dataset = random_split(full_dataset, [train_size, test_size]) train_dataset.dataset = copy(full_dataset) test_dataset.dataset.transform = transforms.Compose([ transforms.Resize(img_resolution), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) train_dataset.dataset.transform = transforms.Compose([ transforms.RandomResizedCrop(img_resolution[0]), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) Basically, I'm defining a new dataset (which is a copy of the original dataset) for one of the splits, and then I define a custom transform for each split. Note: train_dataset.dataset.transform works since I'm using an ImageFolder dataset, which uses the .transform attribute to perform the transforms. If anybody knows a better solution, please share with us!
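One arguably cleaner alternative is a small wrapper dataset; a sketch, assuming the two Subsets come from random_split on a dataset created with transform=None, so the wrapper alone applies the augmentation (the class name TransformedSubset is my own):

import torch
from torch.utils.data import Dataset

class TransformedSubset(Dataset):
    """Applies a transform on top of an existing Subset."""
    def __init__(self, subset, transform):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, idx):
        x, y = self.subset[idx]
        return self.transform(x), y

    def __len__(self):
        return len(self.subset)

# Hypothetical usage, with train_split/test_split from random_split:
# train_dataset = TransformedSubset(train_split, train_transform)
# test_dataset = TransformedSubset(test_split, test_transform)

This avoids the copy() trick at the cost of a small wrapper class.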
https://stackoverflow.com/questions/51782021/
How do I use a saved model in Pytorch to predict the label of a never before seen image?
I have been trying to use my pretrained model to predict the label on a never before seen image. I have trained a CNN to classify flowers of 5 types using the Kaggle flower recognition dataset. I so far have trained my model to 97% accuracy and saved the model to a directory. I now want to download any image of a flower from those types and be able to use this pretrained model to predict the label. So far this is my code: (A code review for all of this would be very helpful as this is my first ever project) This is my CNN model that I train: from multiprocessing import freeze_support import torch from torch import nn from torch.autograd import Variable from torch.utils.data import DataLoader, Sampler from torchvision import datasets from torchvision.transforms import transforms from torch.optim import Adam import matplotlib.pyplot as plt import numpy as np # Hyperparameters. num_epochs = 20 num_classes = 5 batch_size = 100 learning_rate = 0.001 num_of_workers = 5 DATA_PATH_TRAIN = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\train\\' DATA_PATH_TEST = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\test\\' MODEL_STORE_PATH = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\model' trans = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.Resize(32), transforms.CenterCrop(32), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)) ]) # Flowers dataset. train_dataset = datasets.ImageFolder(root=DATA_PATH_TRAIN, transform=trans) test_dataset = datasets.ImageFolder(root=DATA_PATH_TEST, transform=trans) # Create custom random sampler class to iter over dataloader. train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_of_workers) test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_of_workers) # CNN we are going to implement. 
class Unit(nn.Module): def __init__(self, in_channels, out_channels): super(Unit, self).__init__() self.conv = nn.Conv2d(in_channels=in_channels, kernel_size=3, out_channels=out_channels, stride=1, padding=1) self.bn = nn.BatchNorm2d(num_features=out_channels) self.relu = nn.ReLU() def forward(self, input): output = self.conv(input) output = self.bn(output) output = self.relu(output) return output class CNNet(nn.Module): def __init__(self, num_class): super(CNNet, self).__init__() # Create 14 layers of the unit with max pooling in between self.unit1 = Unit(in_channels=3, out_channels=32) self.unit2 = Unit(in_channels=32, out_channels=32) self.unit3 = Unit(in_channels=32, out_channels=32) self.pool1 = nn.MaxPool2d(kernel_size=2) self.unit4 = Unit(in_channels=32, out_channels=64) self.unit5 = Unit(in_channels=64, out_channels=64) self.unit6 = Unit(in_channels=64, out_channels=64) self.unit7 = Unit(in_channels=64, out_channels=64) self.pool2 = nn.MaxPool2d(kernel_size=2) self.unit8 = Unit(in_channels=64, out_channels=128) self.unit9 = Unit(in_channels=128, out_channels=128) self.unit10 = Unit(in_channels=128, out_channels=128) self.unit11 = Unit(in_channels=128, out_channels=128) self.pool3 = nn.MaxPool2d(kernel_size=2) self.unit12 = Unit(in_channels=128, out_channels=128) self.unit13 = Unit(in_channels=128, out_channels=128) self.unit14 = Unit(in_channels=128, out_channels=128) self.avgpool = nn.AvgPool2d(kernel_size=4) # Add all the units into the Sequential layer in exact order self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4, self.unit5, self.unit6 , self.unit7, self.pool2, self.unit8, self.unit9, self.unit10, self.unit11, self.pool3, self.unit12, self.unit13, self.unit14, self.avgpool) self.fc = nn.Linear(in_features=128, out_features=num_class) def forward(self, input): output = self.net(input) output = output.view(-1, 128) output = self.fc(output) return output # Check if gpu support is available cuda_avail = torch.cuda.is_available() # Create model, optimizer and loss function model = CNNet(num_classes) # if cuda is available, move the model to the GPU if cuda_avail: model.cuda() # Define the optimizer and loss function optimizer = Adam(model.parameters(), lr=0.0001, weight_decay=0.0001) loss_fn = nn.CrossEntropyLoss() def save_models(epoch): torch.save(model.state_dict(), f"flowermodel_{epoch}.model") print("Checkpoint saved") def test(): model.eval() test_acc = 0.0 for i, (images, labels) in enumerate(test_loader): if cuda_avail: images = Variable(images.cuda()) labels = Variable(labels.cuda()) # Predict classes using images from the test set outputs = model(images) _, prediction = torch.max(outputs.data, 1) test_acc += torch.sum(prediction == labels.data).float() # Compute the average acc and loss over all 10000 test images test_acc = test_acc / 4242 * 100 return test_acc def train(num_epoch): best_acc = 0.0 for epoch in range(num_epoch): model.train() train_acc = 0.0 train_loss = 0.0 for i, (images, labels) in enumerate(train_loader): # Move images and labels to gpu if available if cuda_avail: images = Variable(images.cuda()) labels = Variable(labels.cuda()) # Clear all accumulated gradients optimizer.zero_grad() # Predict classes using images from the test set outputs = model(images) # Compute the loss based on the predictions and actual labels loss = loss_fn(outputs, labels) # Backpropagate the loss loss.backward() # Adjust parameters according to the computed gradients optimizer.step() train_loss += loss.cpu().data[0] * images.size(0) _, 
prediction = torch.max(outputs.data, 1) train_acc += torch.sum(prediction == labels.data).float() # Call the learning rate adjustment function #adjust_learning_rate(epoch) # Compute the average acc and loss over all 50000 training images train_acc = train_acc / 4242 * 100 train_loss = train_loss / 8484 # Evaluate on the test set test_acc = test() # Save the model if the test acc is greater than our current best if test_acc > best_acc: save_models(epoch) best_acc = test_acc # Print the metrics print(f"Epoch {epoch + 1}, Train Accuracy: {train_acc} , TrainLoss: {train_loss} , Test Accuracy: {test_acc}") if __name__ == '__main__': freeze_support() train(num_epochs) This is my image loader to view preprocessed images: from multiprocessing import freeze_support import torch from torch import nn import torchvision from torch.autograd import Variable from torch.utils.data import DataLoader, Sampler from torchvision import datasets from torchvision.transforms import transforms from torch.optim import Adam import matplotlib.pyplot as plt import numpy as np import PIL num_classes = 5 batch_size = 100 num_of_workers = 5 DATA_PATH_TRAIN = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\train' DATA_PATH_TEST = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\test' trans = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.Resize(32), transforms.CenterCrop(32), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)) ]) train_dataset = datasets.ImageFolder(root=DATA_PATH_TRAIN, transform=trans) train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_of_workers) def imshow(img): img = img / 2 + 0.5 # unnormalize #npimg = img.numpy() plt.imshow(np.transpose(img[0].numpy(), (1, 2, 0))) plt.show() def main(): # get some random training images dataiter = iter(train_loader) images, labels = dataiter.next() # show images imshow(images) if __name__ == "__main__": main() So far this is what I have for my classify new image file which will classify new images: from multiprocessing import freeze_support import torch from torch import nn from torch.autograd import Variable from torch.utils.data import DataLoader, Sampler from torchvision import datasets from torchvision.transforms import transforms from torch.optim import Adam import matplotlib.pyplot as plt import numpy as np def classify_new_image(): # Classify a new image using a pretrained model from the above training. # Location of the image we will classify. IMG_PATH = "C:\\Users\\Aeryes\\PycharmProjects\\simplecnn\\images\\pretrain_classify\\" # Pre-processing the new image using transform. min_img_size = 32 trans = transforms.Compose([transforms.Resize(min_img_size), transforms.CenterCrop(32), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) # Picture dataset. classify_dataset = datasets.ImageFolder(root=IMG_PATH, transform=trans) # Create custom random sampler class to iter over dataloader. classify_loader = DataLoader(dataset=classify_dataset, batch_size=1, shuffle=True, num_workers=5) # Check if gpu support is available cuda_avail = torch.cuda.is_available() model = torch.load('C:\\Users\\Aeryes\\PycharmProjects\\simplecnn\\src\\flowermodel_20.tar')['state_dict'] # if cuda is available, move the model to the GPU if cuda_avail: model.cuda() if __name__ == "__main__": classify_new_image() A big issue that I am also facing is making sense of the outputs. 
I printed the prediction variable from the CNN model and it gave me a tensor of numbers ranging from 0-4, which I presume correspond to the 5 classes I have in my data folders. If anyone can help me make sense of this I would be very grateful. Here is my direct question: How do I use my pre-trained model to predict never-before-seen images of flowers?
You can do it like this for a single image; note the model.eval() call, which puts the BatchNorm layers into inference mode: import torch from torchvision.transforms import transforms from PIL import Image from cnn_main import CNNet from pathlib import Path model = CNNet(5) checkpoint = torch.load(Path('C:/Users/Aeryes/PycharmProjects/simplecnn/src/19.model')) model.load_state_dict(checkpoint) model.eval() trans = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.Resize(32), transforms.CenterCrop(32), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)) ]) image = Image.open(Path('C:/Users/Aeryes/PycharmProjects/simplecnn/images/pretrain_classify/rose_classify.jpg')) input = trans(image) input = input.view(1, 3, 32, 32) output = model(input) prediction = int(torch.max(output.data, 1)[1].numpy()) print(prediction) if (prediction == 0): print ('daisy') if (prediction == 1): print ('dandelion') if (prediction == 2): print ('rose') if (prediction == 3): print ('sunflower') if (prediction == 4): print ('tulip')
https://stackoverflow.com/questions/51803437/
load a pretrained model pytorch - dict object has no attribute eval
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'): torch.save(state, filename) if is_best: shutil.copyfile(filename, 'model_best.pth.tar') save_checkpoint({ 'epoch': epoch + 1, 'arch': args.arch, 'state_dict': model.state_dict(), 'best_prec1': best_prec1, 'optimizer': optimizer.state_dict() }, is_best) I am saving my model like this. How can I load back the model so that I can use it in other places, like cnn visualization? This is how I am loading the model now: torch.load('model_best.pth.tar') But when I do this, I get this error: AttributeError: 'dict' object has no attribute 'eval' What am I missing here??? EDIT: I want to use the model that I trained to visualize the filters and grads. I am using this repo to make the vis. I replaced line 179 with torch.load('model_best.pth.tar')
First, you have to instantiate your model. torch.load() only gives you a dictionary, and that dictionary does not have an eval function, so you should load the weights into your model instead. import torch from modelfolder import yourmodel model = yourmodel() checkpoint = torch.load('model_best.pth.tar') try: checkpoint.eval() except AttributeError as error: print(error) ### 'dict' object has no attribute 'eval' model.load_state_dict(checkpoint['state_dict']) ### now you can evaluate it model.eval()
https://stackoverflow.com/questions/51811154/
In-place operations with PyTorch
I was wondering how to deal with in-place operations in PyTorch. As I remember, using in-place operations with autograd has always been problematic. Actually, I’m surprised that the code below works; even though I haven’t tested it, I believe this code would have raised an error in version 0.3.1. Basically, what I want to do is set a certain position of a tensor vector to a certain value, like: my_tensor[i] = 42 Working example code: # test parameter a a = torch.rand((2), requires_grad=True) print('a ', a) b = torch.rand(2) # calculation c = a + b # performing in-place operation c[0] = 0 print('c ', c) s = torch.sum(c) print('s ', s) # calling backward() s.backward() # optimizer step optim = torch.optim.Adam(params=[a], lr=0.5) optim.step() # changed parameter a print('changed a', a) Output: a tensor([0.2441, 0.2589], requires_grad=True) c tensor([0.0000, 1.1511], grad_fn=<CopySlices>) s tensor(1.1511, grad_fn=<SumBackward0>) changed a tensor([ 0.2441, -0.2411], requires_grad=True) So obviously in version 0.4.1 this works just fine without warnings or errors. Referring to this article in the documentation: autograd-mechanics Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them. But even though it works, the use of in-place operations is discouraged in most cases. So my questions are: How much does the usage of in-place operations affect performance? How do I get around using in-place operations in such cases where I want to set one element of a tensor to a certain value? Thanks in advance!
I am not sure how much in-place operations affect performance, but I can address the second query. You can use a mask instead of in-place ops (building the mask directly in torch avoids the float64/float32 mismatch you would get from wrapping a NumPy array): a = torch.rand((2), requires_grad=True) print('a ', a) b = torch.rand(2) # calculation c = a + b # zero out element 0 with a mask instead of an in-place assignment mask = torch.zeros(2) mask[1] = 1 c = c * mask ...
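Another non-in-place option, sketched here on the same toy example, is to rebuild the tensor with torch.cat so that autograd sees a fresh node instead of a slice assignment:

import torch

a = torch.rand(2, requires_grad=True)
b = torch.rand(2)
c = a + b

# rebuild c instead of assigning into a slice of it
c = torch.cat([torch.zeros(1), c[1:]])

s = torch.sum(c)
s.backward()
print(a.grad)  # tensor([0., 1.]) -- no gradient flows through the replaced element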
https://stackoverflow.com/questions/51818163/
How can I load and use a PyTorch (.pth.tar) model
I am not very familiar with Torch, and I primarily use Tensorflow. However, I need to use an Inception model that was retrained in Torch. Due to the large amount of computing resources required to retrain an Inception model for my particular application, I would like to use the model that was already retrained. This model is saved as a .pth.tar file. I would like to be able to first load this model. So far, I have been able to figure out that I must use the following: model = torch.load('iNat_2018_InceptionV3.pth.tar', map_location='cpu') This seems to work, because print(model) prints out a large set of numbers and other values, which I presume are the values for the weights and biases. After this, I need to be able to classify an image with it. I haven't been able to figure this out. How must I format the image? Should the image be converted into an array? After this, how must I pass the input data to the network?
you basically need to do the same as in tensorflow. That is, when you store a network, only the parameters (i.e. the trainable objects in your network) will be stored, but not the "glue", that is all the logic you need to use a trained model. So if you have a .pth.tar file, you can load it, thereby overriding the parameter values of a model already defined. That means that the general procedure of saving/loading a model is as follows: write your network definition (i.e. your nn.Module object) train or otherwise change the network's parameters in a way you want save the parameters using torch.save when you want to use that network, use the same definition of an nn.Module object to first instantiate a pytorch network then override the values of the network's parameters using torch.load Here's a discussion with some references on how to do this: pytorch forums And here's a super short mwe: # to store torch.save({ 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict(), }, 'filename.pth.tar') # to load checkpoint = torch.load('filename.pth.tar') model.load_state_dict(checkpoint['state_dict']) optimizer.load_state_dict(checkpoint['optimizer'])
https://stackoverflow.com/questions/51857274/
How to specify os-specific wheel files in the Pipfile
I'm trying to use the pipenv and pytorch. To install pytorch in windows, I have to write following codes into the Pipfile: [packages] torch = {file = "http://download.pytorch.org/whl/cpu/torch-0.4.1-cp37-cp37m-win_amd64.whl"} However, the wheel file is different for the linux. [packages] torch = {file = "http://download.pytorch.org/whl/cpu/torch-0.4.1.post2-cp37-cp37m-linux_x86_64.whl "} How to specify both of them in Pipfile?
This article demonstrates how to structure your Pipfile to use pytorch across multiple platforms. I tweaked their example to deal with whl files on my local file system: [packages] pyfoo = {path = "./../pyfoo/dist/pyfoo-1.1.0-cp37-cp37m-linux_x86_64.whl", platform_system = "== 'Linux'"} pyfoo-win = {path = "./../pyfoo/dist/pyfoo-1.1.0-py3-none-any.whl", platform_system = "== 'Windows'"} In this case, pyfoo is an internal library that has been built using python setup.py build and python setup.py bdist_wheel. The article uses some kind of pipenv-generated hash in place of pyfoo-win. I could not coerce pipenv into generating that hash, so I created my own string. It may just be the prefix of the hash in the lock file. Note: this approach is working for me, but there is a downside: both whl files need to be present when you install packages. pipenv sync bombs out if one of them is missing, even though it really only needs one of the two. Interestingly, the contents of the irrelevant file do not seem to matter. On my Linux machine, I echoed 'hello' into the Windows whl file, and pipenv was happy with it.
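Applied to the pytorch wheels from the question, the same pattern would look something like this (the URLs are taken from the question; torch-win is an illustrative placeholder name, just like pyfoo-win above):

[packages]
torch = {file = "http://download.pytorch.org/whl/cpu/torch-0.4.1.post2-cp37-cp37m-linux_x86_64.whl", platform_system = "== 'Linux'"}
torch-win = {file = "http://download.pytorch.org/whl/cpu/torch-0.4.1-cp37-cp37m-win_amd64.whl", platform_system = "== 'Windows'"}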
https://stackoverflow.com/questions/51876482/
Change Image format from NHWC to NCHW for Pytorch
In pytorch we need images in NCHW format but my images are NHWC. What is the procedure to feed this image to CNN? (I have found this solution which suggests to use "permute" but where and how should i use it?)
Using torch.Tensor.permute(): x = x.permute(0, 3, 1, 2) # from NHWC to NCHW
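A quick sketch of where this typically goes (the sizes are illustrative); note that permute() only changes the view of the data, so a .contiguous() call may be needed before operations such as view():

import torch

x = torch.randn(8, 224, 224, 3)   # NHWC batch
x = x.permute(0, 3, 1, 2)         # NCHW
x = x.contiguous()                # make the memory layout match the new shape
print(x.shape)                    # torch.Size([8, 3, 224, 224])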
https://stackoverflow.com/questions/51881481/
Model modification in Pytorch
I want to change weights in a certain kernels in a my saved CNN model. How can I change the values in specific kernels and save to a new model?
You can torch.load the weights you saved. You should get a state_dict dictionary in which the weights are stored. Use the state_dict keys to locate the weights you wish to change, modify them and then torch.save the modified state_dict (better use different filename ;).
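A minimal sketch of that procedure; the filename and the layer key ('conv1.weight') are illustrative assumptions, so inspect state_dict.keys() to find the real names in your model:

import torch

state_dict = torch.load('model.pth')          # hypothetical filename
print(state_dict.keys())                      # locate the layer you want

w = state_dict['conv1.weight']                # e.g. shape (out_ch, in_ch, kH, kW)
w[0] = 0.0                                    # overwrite kernel 0 of that layer
state_dict['conv1.weight'] = w

torch.save(state_dict, 'model_modified.pth')  # save under a different name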
https://stackoverflow.com/questions/52036035/
How to use backpropagation with a self-defined loss in pytorch?
I am trying to implement a Siamese network with a ranking loss between two images. If I define my own loss would I be able to do the backpropagation step as follows? When I run it sometimes it seems to me that it is giving the same results that the single network gives. with torch.set_grad_enabled(phase == 'train'): outputs1 = model(inputs1) outputs2 = model(inputs2) preds1 = outputs1; preds2 = outputs2; alpha = 0.02; w_r = torch.tensor(1).cuda(async=True); y_i, y_j, predy_i, predy_j = labels1,labels2,outputs1,outputs2; batchRankLoss = torch.tensor([max(0,alpha - delta(y_i[i], y_j[i])*predy_i[i] - predy_j[i])) for i in range(batchSize)],dtype = torch.float) rankLossPrev = torch.mean(batchRankLoss) rankLoss = Variable(rankLossPrev,requires_grad=True) loss1 = criterion(outputs1, labels1) loss2 = criterion(outputs2, labels2) #total loss = loss1 + loss2 + w_r*rankLoss totalLoss = torch.add(loss1,loss2) w_r = w_r.type(torch.LongTensor) rankLossPrev = rankLossPrev.type(torch.LongTensor) mult = torch.mul(w_r.type(torch.LongTensor),rankLossPrev).type(torch.FloatTensor) totalLoss = torch.add(totalLoss,mult.cuda(async = True)); # backward + optimize only if in training phase if phase == 'train': totalLoss.backward() optimizer.step() running_loss += totalLoss.item() * inputs1.size(0)
You have several lines where you generate new Tensors from a constructor or a cast to another data type. When you do this, you disconnect the chain of operations through which you'd like the backward() call to differentiate. This cast disconnects the graph because casting is non-differentiable: w_r = w_r.type(torch.LongTensor) Building a Tensor from a constructor will disconnect the graph: batchRankLoss = torch.tensor([max(0,alpha - delta(y_i[i], y_j[i])*predy_i[i] - predy_j[i])) for i in range(batchSize)],dtype = torch.float) From the docs, wrapping a Tensor in a Variable will set the grad_fn to None (also disconnecting the graph): rankLoss = Variable(rankLossPrev,requires_grad=True) Assuming that your criterion function is differentiable, gradients are currently flowing backward only through loss1 and loss2. Your other gradients will only flow as far as mult before they are stopped by a call to type(). This is consistent with your observation that your custom loss doesn't change the output of your neural network. To allow gradients to flow backward through your custom loss, you'll have to code the same logic while avoiding type() casts and calculate rankLoss without using a list comprehension.
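As a sketch of what that could look like, the list comprehension can be replaced with vectorized tensor ops so the graph stays connected (delta here is a stand-in assumption, +1 when the labels match and -1 otherwise; adapt it to your actual definition):

import torch

def rank_loss(y_i, y_j, predy_i, predy_j, alpha=0.02):
    # assumed delta: +1 where labels agree, -1 where they differ
    delta = torch.where(y_i == y_j, torch.ones_like(predy_i), -torch.ones_like(predy_i))
    # clamp plays the role of max(0, .) and is differentiable almost everywhere
    return torch.clamp(alpha - delta * predy_i - predy_j, min=0).mean()

y_i = torch.tensor([1.0, 0.0])
y_j = torch.tensor([1.0, 1.0])
p_i = torch.rand(2, requires_grad=True)
p_j = torch.rand(2, requires_grad=True)
loss = rank_loss(y_i, y_j, p_i, p_j)
loss.backward()   # gradients now reach p_i and p_j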
https://stackoverflow.com/questions/52059399/
Reducing batch size in pytorch
I am new to programming in pytorch. I am getting this error which says cuda out of memory. So I have to reduce the batch size. Can someone tell me how to do it in python code? I also don't know my current batch size. p.s. I am trying to run the Deep Image Prior's super-resolution. Here's the code. The error I am getting is when running the optimization. It says RuntimeError: Cuda out of memory. from __future__ import print_function import matplotlib.pyplot as plt %matplotlib inline import argparse import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' import numpy as np from models import * import torch import torch.optim import torch.nn as nn from torch.utils.data import Dataset, DataLoader import warnings warnings.filterwarnings("ignore") from skimage.measure import compare_psnr from models.downsampler import Downsampler from utils.sr_utils import * torch.backends.cudnn.enabled = True torch.backends.cudnn.benchmark =True dtype = torch.cuda.FloatTensor imsize = -1 factor = 16 # 8 enforse_div32 = 'CROP' # we usually need the dimensions to be divisible by a power of two (32 in this case) PLOT = True path_to_image = '/home/smitha/deep-image-prior/tnew.tif' imgs = load_LR_HR_imgs_sr(path_to_image , imsize, factor, enforse_div32) imgs['bicubic_np'], imgs['sharp_np'], imgs['nearest_np'] = get_baselines(imgs['LR_pil'], imgs['HR_pil']) if PLOT: plot_image_grid([imgs['HR_np'], imgs['bicubic_np'], imgs['sharp_np'], imgs['nearest_np']], 4,12); print ('PSNR bicubic: %.4f PSNR nearest: %.4f' % ( compare_psnr(imgs['HR_np'], imgs['bicubic_np']), compare_psnr(imgs['HR_np'], imgs['nearest_np']))) input_depth = 8 INPUT = 'noise' pad = 'reflection' OPT_OVER = 'net' KERNEL_TYPE='lanczos2' LR = 5 tv_weight = 0.0 OPTIMIZER = 'adam' if factor == 16: num_iter = 10 reg_noise_std = 0.01 elif factor == 8: num_iter = 40 reg_noise_std = 0.05 else: assert False, 'We did not experiment with other factors' net_input = get_noise(input_depth, INPUT, (imgs['HR_pil'].size[1], imgs['HR_pil'].size[0])).type(dtype).detach() NET_TYPE = 'skip' # UNet, ResNet net = get_net(input_depth, 'skip', pad, skip_n33d=128, skip_n33u=128, skip_n11=4, num_scales=5, upsample_mode='bilinear').type(dtype) mse = torch.nn.MSELoss().type(dtype) img_LR_var = np_to_torch(imgs['LR_np']).type(dtype) downsampler = Downsampler(n_planes=3, factor=factor, kernel_type=KERNEL_TYPE, phase=0.5, preserve_size=True).type(dtype) def closure(): global i, net_input if reg_noise_std > 0: net_input = net_input_saved + (noise.normal_() * reg_noise_std) out_HR = net(net_input) out_LR = downsampler(out_HR) total_loss = mse(out_LR, img_LR_var) if tv_weight > 0: total_loss += tv_weight * tv_loss(out_HR) total_loss.backward() # Log psnr_LR = compare_psnr(imgs['LR_np'], torch_to_np(out_LR)) psnr_HR = compare_psnr(imgs['HR_np'], torch_to_np(out_HR)) print ('Iteration %05d PSNR_LR %.3f PSNR_HR %.3f' % (i, psnr_LR, psnr_HR), '\r', end='') # History psnr_history.append([psnr_LR, psnr_HR]) if PLOT and i % 100 == 0: out_HR_np = torch_to_np(out_HR) plot_image_grid([imgs['HR_np'], imgs['bicubic_np'], np.clip(out_HR_np, 0, 1)], factor=13, nrow=3) i += 1 return total_loss psnr_history = [] volatile=True net_input_saved = net_input.detach().clone() noise = net_input.clone() i = 0 p = get_params(OPT_OVER, net, net_input) optimize(OPTIMIZER, p, closure, LR, num_iter) out_HR_np = np.clip(torch_to_np(net(net_input)), 0, 1) result_deep_prior = put_in_center(out_HR_np, imgs['orig_np'].shape[1:]) plot_image_grid([imgs['HR_np'], imgs['bicubic_np'], out_HR_np], factor=4, nrow=1);
The batch size depends on the model. Typically, it's the first dimension of your input tensors. Your model uses different names than I'm used to, some of which are general terms, so I'm not sure of your model topology or usage.
https://stackoverflow.com/questions/52069454/
Incorporating dim parameter of torch.topk in tf.nn.top_k
Pytorch provide torch.topk(input, k, dim=None, largest=True, sorted=True) function to calculate k largest elements of the given input tensor along a given dimension dim. I have a tensor of shape (16, 512, 4096) and I am using torch.topk in the following manner- # inputs.shape (16L, 512L, 4096L) dist, idx = torch.topk(inputs, 64, dim=2, largest=False, sorted=False) # dist.shape (16L, 512L, 64L), idx.shape (16L, 512L, 64L) I found similar tensorflow implementaion as following - tf.nn.top_k(input, k=1, sorted=True, name=None). My question is how to Incorporate dim=2 parameter in tf.nn.top_k so as to achieve the tensor of the same shape as calculated by pytorch?
tf.nn.top_k works on the last dimension of the input. This means that it should work as is for your example: dist, idx = tf.nn.top_k(inputs, 64, sorted=False) In general you can imagine the Tensorflow version to work like the Pytorch version with hardcoded dim=-1, i.e. the last dimension. However it looks like you actually want the k smallest elements. In this case we could do dist, idx = tf.nn.top_k(-1*inputs, 64, sorted=False) dist = -1*dist So we take the k largest of the negative inputs, which are the k smallest of the original inputs. Then we invert the negative on the values.
https://stackoverflow.com/questions/52126579/
PyTorch - better way to get back original tensor order after torch.sort
I want to get back the original tensor order after a torch.sort operation and some other modifications to the sorted tensor, so that the tensor is not anymore sorted. It is better to explain this with an example: x = torch.tensor([30., 40., 20.]) ordered, indices = torch.sort(x) # ordered is [20., 30., 40.] # indices is [2, 0, 1] ordered = torch.tanh(ordered) # it doesn't matter what operation is final = original_order(ordered, indices) # final must be equal to torch.tanh(x) I have implemented the function in this way: def original_order(ordered, indices): z = torch.empty_like(ordered) for i in range(ordered.size(0)): z[indices[i]] = ordered[i] return z Is there a better way to do this? In particular, it is possible to avoid the loop and compute the operation more efficiently? In my case I have a tensor of size torch.Size([B, N]) and I sort each of the B rows separately with a single call of torch.sort. So, I have to call original_order B times with another loop. Any, more pytorch-ic, ideas? EDIT 1 - Get rid of inner loop I solved part of the problem by simply indexing z with indices in this way: def original_order(ordered, indices): z = torch.empty_like(ordered) z[indices] = ordered return z Now, I just have to understand how to avoid the outer loop on B dimension. EDIT 2 - Get rid of outer loop def original_order(ordered, indices, batch_size): # produce a vector to shift indices by lenght of the vector # times the batch position add = torch.linspace(0, batch_size-1, batch_size) * indices.size(1) indices = indices + add.long().view(-1,1) # reduce tensor to single dimension. # Now the indices take in consideration the new length long_ordered = ordered.view(-1) long_indices = indices.view(-1) # we are in the previous case with one dimensional vector z = torch.zeros_like(long_ordered).float() z[long_indices] = long_ordered # reshape to get back to the correct dimension return z.view(batch_size, -1)
def original_order(ordered, indices): return ordered.gather(1, indices.argsort(1)) Example original = torch.tensor([ [20, 22, 24, 21], [12, 14, 10, 11], [34, 31, 30, 32]]) sorted, index = original.sort() unsorted = sorted.gather(1, index.argsort(1)) assert(torch.all(original == unsorted)) Why it works For simplicity, imagine t = [30, 10, 20], omitting tensor notation. t.sort() gives us the sorted tensor s = [10, 20, 30], as well as the sorting index i = [1, 2, 0] for free. i is in fact the output of t.argsort(). i tells us how to go from t to s. "To sort t into s, take element 1, then 2, then 0, from t". Argsorting i gives us another sorting index j = [2, 0, 1], which tells us how to go from i to the canonical sequence of natural numbers [0, 1, 2], in effect reversing the sort. Another way to look at it is that j tells us how to go from s to t. "To sort s into t, take element 2, then 0, then 1, from s". Argsorting a sorting index gives us its "inverse index", going the other way. Now that we have the inverse index, we dump that into torch.gather() with the correct dim, and that unsorts the tensor. Sources torch.gather torch.argsort I couldn't find this exact solution when researching this problem, so I think this is an original answer.
https://stackoverflow.com/questions/52127723/
tensorflow equivalent of torch.gather
I have a tensor of shape (16, 4096, 3). I have another tensor of indices of shape (16, 32768, 3). I am trying to collect the values along dim=1. This was initially done in pytorch using gather function as shown below- # a.shape (16L, 4096L, 3L) # idx.shape (16L, 32768L, 3L) b = a.gather(1, idx) # b.shape (16L, 32768L, 3L) Please note that the size of output b is the same as that of idx. However, when I apply gather function of tensorflow, I get a completely different output. The output dimension was found mismatching as shown below- b = tf.gather(a, idx, axis=1) # b.shape (16, 16, 32768, 3, 3) I also tried using tf.gather_nd but got in vain. See below- b = tf.gather_nd(a, idx) # b.shape (16, 32768) Why am I getting different shapes of tensors? I want to get the tensor of the same shape as calculated by pytorch. In other words, I want to know the tensorflow equivalent of torch.gather.
For the 2D case, there is a method to do it: # a.shape (16L, 10L) # idx.shape (16L, 1) idx = tf.stack([tf.range(tf.shape(idx)[0]), idx[:, 0]], axis=-1) b = tf.gather_nd(a, idx) However, for the ND case this method can get complex; see the sketch below.
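For the 3D case from the question, one way to extend the same index-stacking idea is sketched below (an illustration, assuming idx holds int32 indices into axis 1, mirroring torch.gather's out[i, j, k] = a[i, idx[i, j, k], k]):

import tensorflow as tf

def gather_along_axis1(a, idx):
    # build full (i, idx, k) coordinate triples for tf.gather_nd
    shape = tf.shape(idx)
    b, n, c = shape[0], shape[1], shape[2]
    ii = tf.tile(tf.reshape(tf.range(b), [-1, 1, 1]), tf.stack([1, n, c]))
    kk = tf.tile(tf.reshape(tf.range(c), [1, 1, -1]), tf.stack([b, n, 1]))
    return tf.gather_nd(a, tf.stack([ii, idx, kk], axis=-1))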
https://stackoverflow.com/questions/52129909/
Why do we normalize the image to mean=0.5, std=0.5?
I was lookking for GAN code in Github. The code I found uses pytorch. In this code, we first normalized the image to mean = 0.5, std = 0.5. Normally, normalize to min = 0 and max = 1. Or normal distribution with mean = 0 and std = 1. Why is this normalized to mean = 0.5 and std = 0.5? transformtransfo = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)) ])
The values of mean and std for transform.normalize are not the desired mean and std, but rather the values to subtract and divide by, i.e., the estimated mean and std. In your example you subtract 0.5 and then divide by 0.5 yielding an image with mean zero and values in range [-1, 1]
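A quick numeric check of that, for an input already scaled to [0, 1] by ToTensor():

import torch

x = torch.tensor([0.0, 0.5, 1.0])
y = (x - 0.5) / 0.5
print(y)   # tensor([-1., 0., 1.]) -- the values now span [-1, 1]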
https://stackoverflow.com/questions/52138920/
Why does the hash value of models change everytime I save it?
I am using torch.save() to save a model file. However, everytime I save it, it changes. Why so? netG_1 = torch.load('netG.pth') netG_2 = torch.load('netG.pth') torch.save(netG_1, 'netG_1.pth') torch.save(netG_2, 'netG_2.pth') Using md5sum *.pth: 779f0fefca47d17a0644033f9b65e594 netG_1.pth 476f502ec2d1186c349cdeba14983d09 netG_2.pth b0ceec8ac886a11b79f73fc04f51c6f9 netG.pth The model is an instance of this class: https://github.com/taoxugit/AttnGAN/blob/master/code/model.py#L397
A class which does not define the __hash__ method will have its instances hashed according to their id. As for CPython, this means everytime the instance is saved and reloaded, it changes hash since its position in memory changed. Here is a proof of concept. class Foo: pass instance = Foo() print('hash:', hex(hash(instance))) print('id: ', hex(id(instance))) Output hash: 0x22f3f57608 id: 0x22f3f576080 The exact transformation is hash(o) == id(o) // 16.
https://stackoverflow.com/questions/52152144/
DataLoader - shuffle implicit pairs
Is there a way to handle the DataLoader as a list ? The idea is that I want to shuffle implicit pairs of images, without setting the shuffling into True Basically, I have for example 10 scenes, each containing let's say 100 sequences, so they are represented inside the directory as '1_1.png', '1_2.png', '1_3.png', '....., '2_1.png', '2_2.png', '2_3.png', ...., '3_1.png', '3_2.png', '3_3.png', ..., ...., '10_1.png', '10_2.png', '10_3.png', ... I don't want complete shuffling of data, what I want simply is to shuffle but keeping pairs, so they are represented in the data loader as [ '1_3.png', '1_4.png', '2_2.png', '2_3.png', '10_1.png', '10_2.png', '1_2.png', '1_3.png', ...] and so on Please have a look at this question which I have already asked on Stack Overflow concerning shuffling array of implicit pairs, where you can understand what I mean As an example: if this is a list L = [['1_1'],['1_2'],['1_3'],['1_4'],['1_5'],['1_6'],['2_1'],['2_2'],['2_3'],['2_4'],['2_5'],['2_6'],['3_1'],['3_2'],['3_3'],['3_4'],['3_5'],['3_6']] then this is the output [['1_2'], ['1_3'], ['2_1'], ['2_2'], ['2_4'], ['2_5'], ['2_2'], ['2_3'], ['1_3'], ['1_4'], ['3_4'], ['3_5'], ['3_3'], ['3_4'], ['3_2'], ['3_3'], ['1_6'], ['2_1'], ['2_5'], ['2_6'], ['2_6'], ['3_1'], ['1_4'], ['1_5'], ['1_1'], ['1_2'], ['2_3'], ['2_4'], ['1_5'], ['1_6'], ['3_1'], ['3_2'], ['3_5'], ['3_6']] I want to achieve the same for a DataLoader The main idea, is that I want to train my network on sequential frames, but it doesn't have to be the complete sequence, but at least I need each step, two sequences are there
I think you are looking for data.Sampler: instead of the completely random default shuffle of data.DataLoader, you can provide your own "sampler" that samples examples from your Dataset. Looking at the input parameters of data.DataLoader: sampler (Sampler, optional) – defines the strategy to draw samples from the dataset. If specified, shuffle must be False. I think a good starting point is to look at the code of data.SubsetRandomSampler.
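A minimal sketch of such a sampler (an illustration, not tested against your data: it pairs every index i with i+1, so real code would also need to avoid pairing across scene boundaries):

import torch
from torch.utils.data import DataLoader, Sampler

class PairShuffleSampler(Sampler):
    def __init__(self, data_source):
        self.data_source = data_source

    def __iter__(self):
        n = len(self.data_source)
        starts = torch.randperm(n - 1)                  # shuffled pair starts
        idx = torch.stack([starts, starts + 1], dim=1)  # (n-1, 2) pairs
        return iter(idx.flatten().tolist())             # ..., i, i+1, ...

    def __len__(self):
        return 2 * (len(self.data_source) - 1)

# loader = DataLoader(dataset, batch_size=2, sampler=PairShuffleSampler(dataset))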
https://stackoverflow.com/questions/52183001/
How to load 2D data into an LSTM in pytorch
I have a series of sine waves that i have loaded in using a custom dataloader. The data is converted to a torch tensor using from_numpy. I then try to load the data using an enumerator over the train_loader. The iterator is shown below. for epoch in range(epochs): for i, data in enumerate(train_loader): input = np.array(data) train(epoch) The error i receive is: RuntimeError: input must have 3 dimensions, got 2 I know i need to have my input data in [sequence length, batch_size, input_size] for an LSTM but i have no idea how to format my array data of 1000 sine waves of length 10000. Below is my training method. def train(epoch): model.train() train_loss = 0 def closure(): optimizer.zero_grad() print(input.shape) output = model(Variable(input)) loss = loss_function(output) print('epoch: ', epoch.item(),'loss:', loss.item()) loss.backward() return loss optimizer.step(closure) I thought i would try add (seq_length, batch_size, input_size) in a tuple but this cant be fed into the network. Further to this my assumption was that the dataloader fed batch size into the system. Any help would be appreciated. edit: Here is my sample data: T = 20 L = 1000 N = 100 x = np.empty((N, L), 'int64') x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1) data = np.sin(x / 1.0 / T).astype('float64') torch.save(data, open('traindata.pt', 'wb'))
Can you share a simple example of your data just to confirm? Also, you have to have a different order for your shape. With batch_first=True, the first dimension is the batch_size, followed by the other dimensions, like [batch_size, sequence_length, input_dim]; with PyTorch's default batch_first=False, the expected order is [sequence_length, batch_size, input_dim]. One way to get the extra dimension, if you have a batch size of 1, is to use torch.unsqueeze(). This allows you to create a "fake" dimension: import torch as t x = t.Tensor([1,2,3]) print(x.shape) x = x.unsqueeze(dim=0) # adds a 0-th dimension of size 1 print(x.shape)
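Applied to the sine-wave data from the question, a sketch could look like this (the sizes come from the question; hidden_size is an arbitrary illustration):

import torch
import torch.nn as nn

data = torch.randn(100, 1000)   # stand-in for the 100 sine waves of length 1000
x = data.unsqueeze(-1)          # (batch=100, seq_len=1000, input_size=1)

lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
out, (h, c) = lstm(x)
print(out.shape)                # torch.Size([100, 1000, 32])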
https://stackoverflow.com/questions/52196554/
How to use double as the default type for floating numbers in PyTorch
I want all the floating numbers in my PyTorch code double type by default, how can I do that?
You are looking for torch.set_default_tensor_type: torch.set_default_tensor_type(torch.DoubleTensor) Alternatively, you can use torch.set_default_dtype: torch.set_default_dtype(torch.float64)
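A quick check of the effect:

import torch

torch.set_default_dtype(torch.float64)
print(torch.randn(3).dtype)         # torch.float64
print(torch.tensor([1.5]).dtype)    # torch.float64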
https://stackoverflow.com/questions/52199728/
Periodic pattern in loss when training neural networks using pytorch
I am using Pytorch train a model on MNIST, the loss curve has the periodic pattern shown in the figure. I've double checked the data loader and the dataset is shuffled in each epoch. Any suggestions for the possible reason? Thanks. Loss curve during training, training loss in blue and test loss in red
What I understand from the figure is that your loss is oscillating. So try decreasing your learning rate and adding some momentum term if your optimizer supports one. I cannot guarantee that this will work, but it is worth a try. Much of deep learning comes down to trial and error. Next time please ask such questions on https://ai.stackexchange.com or https://datascience.stackexchange.com.
https://stackoverflow.com/questions/52215458/
pytorch scoring set of images and evaluating results
In Pytorch by using GPU (cuda) need to score a set of images given trained NN. The following code is meant to score a set of transformed images one by one. model.to('cuda') model.eval() for ii, (inputs, classes) in enumerate(dataloaders['test']): inputs, classes = inputs, classes results = model.forward(inputs) ps = torch.exp(results) Error Stack: RuntimeError Traceback (most recent call last) <ipython-input-24-948390e2b25a> in <module>() 5 for ii, (inputs, classes) in enumerate(dataloaders['test']): 6 inputs, classes = inputs, classes ----> 7 results = model(inputs) 8 ps = torch.exp(results) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 489 result = self._slow_forward(*input, **kwargs) 490 else: --> 491 result = self.forward(*input, **kwargs) 492 for hook in self._forward_hooks.values(): 493 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/vgg.py in forward(self, x) 40 41 def forward(self, x): ---> 42 x = self.features(x) 43 x = x.view(x.size(0), -1) 44 x = self.classifier(x) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 489 result = self._slow_forward(*input, **kwargs) 490 else: --> 491 result = self.forward(*input, **kwargs) 492 for hook in self._forward_hooks.values(): 493 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input) 89 def forward(self, input): 90 for module in self._modules.values(): ---> 91 input = module(input) 92 return input 93 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 489 result = self._slow_forward(*input, **kwargs) 490 else: --> 491 result = self.forward(*input, **kwargs) 492 for hook in self._forward_hooks.values(): 493 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 299 def forward(self, input): 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 303 RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight' Model made on a GPU (cuda).
This solves the problem (the inputs and labels must be moved to the same device as the model) and also computes an accuracy rate: model.to('cuda') model.eval() accuracy = 0 for ii, (inputs, classes) in enumerate(dataloaders['test']): inputs, classes = inputs.to('cuda'), classes.to('cuda') # Forward pass only; no gradients are needed for scoring with torch.no_grad(): output = model.forward(inputs) ps = torch.exp(output) equality = (classes.data == ps.max(dim=1)[1]) accuracy += equality.type(torch.FloatTensor).mean() accuracy = accuracy/len(dataloaders['test']) print(accuracy)
https://stackoverflow.com/questions/52215500/
PyTorch NotImplementedError in forward
import torch import torch.nn as nn device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.layer = nn.Sequential( nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # 16x16x650 nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1), # 32x16x650 nn.ReLU(), nn.Dropout2d(0.5), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), # 64x16x650 nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # 64x8x325 nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU()) # 64x8x325 self.fc = nn.Sequential( nn.Linear(64*8*325, 128), nn.ReLU(), nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1), ) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out # HYPERPARAMETER learning_rate = 0.0001 num_epochs = 15 import data def main(): model = Model().to(device) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) total_step = len(data.train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(data.train_loader): images = images.to(device) labels = labels.to(device) outputs = model(images) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() if (i + 1) % 100 == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch + 1, num_epochs, i + 1, total_step, loss.item())) model.eval() with torch.no_grad(): correct = 0 total = 0 for images, labels in data.test_loader: images = images.to(device) labels = labels.to(device) outputs = model(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total)) if __name__ == '__main__': main() Error: File "/home/rladhkstn8/Desktop/SWID/tmp/pycharm_project_853/model.py", line 82, in <module> main() File "/home/rladhkstn8/Desktop/SWID/tmp/pycharm_project_853/model.py", line 56, in main outputs = model(images) File "/home/rladhkstn8/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/rladhkstn8/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 83, in forward raise NotImplementedError NotImplementedError I do not know where the problem is. I know that NotImplementedError should be implemented, but it happens when there is unimplemented code.
Please look carefully at the indentation of your __init__ function: your forward is defined inside __init__, not at the level of your module class. Because of that, your class never overrides nn.Module.forward, and the default implementation (which simply raises NotImplementedError) is called when you run model(images).
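For comparison, a minimal runnable sketch with the correct indentation (the layer sizes are illustrative, not the ones from the question):

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.layer = nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU())
        self.fc = nn.Linear(8 * 4 * 4, 1)

    # forward sits at class level, at the same indentation as __init__,
    # so it overrides nn.Module.forward
    def forward(self, x):
        out = self.layer(x)
        out = out.reshape(out.size(0), -1)
        return self.fc(out)

model = Model()
print(model(torch.randn(2, 1, 4, 4)).shape)   # torch.Size([2, 1])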
https://stackoverflow.com/questions/52241680/
Pytorch: join tensor and dictionary
Using Pytorch. Got the following tensor: (tensor([[-0.0030, -7.6063, -7.6334, -7.7098, -8.3540]], device='cuda:0'), tensor([[ 14, 85, 45, 82, 15]], device='cuda:0')) Need to join it with the following dictionary: {'14': 'a', '100': 'b', '45': 'c', '33': 'd', '15': 'e'} In order to get the following results: 'a','c','e'
Convert the dictionary keys to integers, then map the index tensor through the dictionary: dictionary = {int(key): dictionary[key] for key in dictionary} np.vectorize(dictionary.get)(indices.cpu().numpy()) (here dictionary is your mapping and indices is the second tensor from your question; indices that are missing from the dictionary will map to None).
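A self-contained sketch using the exact values from the question; note that indices 85 and 82 have no entry in the dictionary, so they are filtered out to reproduce the expected 'a', 'c', 'e':

import torch

dictionary = {'14': 'a', '100': 'b', '45': 'c', '33': 'd', '15': 'e'}
dictionary = {int(k): v for k, v in dictionary.items()}

idx = torch.tensor([[14, 85, 45, 82, 15]])
labels = [dictionary[i] for i in idx.view(-1).tolist() if i in dictionary]
print(labels)   # ['a', 'c', 'e']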
https://stackoverflow.com/questions/52241935/
pip - Unable to install Fastai
Whenever I run: pip install fastai I get the error "Command "python setup.py egg_info" failed with error code 1 in C:\Users\seja9890\AppData\Local\Temp\pip-install-_cw7ve61\torch\". Can someone please guide me where I might be going wrong? Ps.: I have tried updating setuptools and it doesn't help in my case.
Fastai doesn't work with Python 2, so make sure you installed pip3 (sudo apt install python3-pip on Ubuntu). Make sure Python 3 is at least 3.6 (this may change, since fastai may soon require 3.7), and then: pip3 install git+https://github.com/fastai/fastai.git or use pip3 install fastai, or in some cases you may need: pip3 install --no-deps fastai Note: at the moment of writing, PyTorch v1 and Python 3.6 are the minimal version requirements.
https://stackoverflow.com/questions/52244707/
Can neuroevolution of augmenting topologies (NEAT) neural networks be built in TensorFlow?
I am making a machine learning program for time series data analysis and using NEAT could help the work. I started to learn TensorFlow not long ago but it seems that the computational graphs in TensorFlow are usually fixed. Is there tools in TensorFlow to help build a dynamically evolving neural network? Or something like Pytorch would be a better alternative? Thanks.
It can't be implemented in the static graph mode of TensorFlow without significant tradeoffs because the topology of the neural networks in the population changes. Static graphs are suited for models whose architecture doesn't change during training. However, it can be done in TensorFlow Eager or PyTorch because they support dynamic computation graphs. Check this implementation in TensorFlow Eager: https://github.com/crisbodnar/TensorFlow-NEAT
https://stackoverflow.com/questions/52287254/
CUDA HOME in pytorch installation
I installed pytorch via conda with cuda 7.5 conda install pytorch=0.3.0 cuda75 -c pytorch >>> import torch >>> torch.cuda.is_available() True I didn't do any other installations for cuda other than this, since it looks like pytorch comes with cuda Now, I am trying to setup yolo2 https://github.com/longcw/yolo2-pytorch However, I am getting error in ./make.sh command this is the error OSError: The nvcc binary could not be located in your $PATH. Either add it to your path, or set $CUDAHOME I'm assuming I need to set CUDAHOME in my path, but I am not able to locate any cuda directory having nvcc binary. Any pointers on it?
The CUDA package which is distributed via anaconda is not a complete CUDA toolkit installation. It only includes the necessary libraries and tools to support numba and pyculib and other GPU accelerated binary packages they distribute, like tensorflow and pytorch. If you need a fully functional CUDA toolkit (and it seems you do), you will need to install one yourself. Word to the wise -- install the same version that you have installed within anaconda. With a tiny bit of PATH modification, everything should just work.
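Once a full toolkit is installed, the PATH modification the error message asks for would look something like this (assuming the default install location; adjust the path to your actual CUDA version):

export CUDAHOME=/usr/local/cuda
export PATH=$CUDAHOME/bin:$PATH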
https://stackoverflow.com/questions/52298146/
pytorch gradient / derivative / difference along axis like numpy.diff
I have been struggling with this for quite some time. All I want is a torch.diff() function. However, many matrix operations do not appear to be easily compatible with tensor operations. I have tried an enormous amount of various pytorch operation combinations, yet none of them work. Due to the fact that pytorch hasn't implemented this basic feature, I started by simply trying to subtract the element i+1 from element i along a specific axis. However, you can't simply do this element-wise (due to the tensor limitations), so I tried to construct another tensor, with the elements shifted along one axis: ix_plus_one = [0]+list(range(0,prediction.size(1)-1)) ix_differential_tensor = torch.LongTensor(ix_plus_one) diff_one_tensor = prediction[:,ix_differential_tensor] But now we have a different problem - indexing doesn't really work to mimic numpy in pytorch as it advertises, so you can't index with a "list-like" Tensor like this. I also tried using the tensor scatter functions So I'm still stuck with this simple problem of trying to get a gradient on a pytoch tensor. All of my searching leads to the marvelous capabilities of pytorchs' "autograd" function - which has nothing to do with this problem.
A 1D convolution with a fixed filter should do the trick: import numpy as np import torch filter = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=1, groups=1, bias=False) kernel = np.array([-1.0, 1.0], dtype=np.float32) kernel = torch.from_numpy(kernel).view(1, 1, 2) filter.weight.data = kernel filter.weight.requires_grad = False Then use filter like you would any other layer in torch.nn (the kernel is built as float32 so it matches the layer's default parameter dtype). Also, you might want to change padding to suit your specific needs.
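A quick usage sketch of the same idea on a toy sequence (padding is left at 0 here so the output length matches np.diff exactly; this is an illustration of the technique above):

import torch

diff = torch.nn.Conv1d(1, 1, kernel_size=2, bias=False)
diff.weight.data = torch.tensor([[[-1.0, 1.0]]])   # out[i] = x[i+1] - x[i]
diff.weight.requires_grad = False

x = torch.tensor([[[1.0, 4.0, 9.0, 16.0]]])        # shape (batch, channels, length)
print(diff(x))                                     # tensor([[[3., 5., 7.]]]), like np.diff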
https://stackoverflow.com/questions/52306279/
Python Jupyter notebook how to display matrix in vertical
When I try to index array, I use this code for printing column part with Numpy or Pytorch. import numpy as np a = np.random.randn(5,3) a[:,1] or import torch a = torch.Tensor(5,3) a[:,1] The output is displayed like this. array([-0.07478094, -1.87787326, 0.50407517, 1.13335836, 0.23140931]) But I want to display output as column.(because i indexed column) array([-0.07478094, -1.87787326, 0.50407517, 1.13335836, 0.23140931]) Furthermore, When I make tensor with torch.ones(5), the result is tensor([1., 1., 1., 1., 1.]) but I want to see the type of output on the buttom like this tensor([1., 1., 1., 1., 1.]) [torch.FloatTensor of size 5] The reason why i want to display this is that i can't distinguish tensor and numpy Can anyone tell me how to do this? Thanks.
Try this: np.vstack(a) (where a here is the 1-D slice, e.g. a[:, 1]). np.vstack stacks each element of a 1-D array as its own row, producing an array of shape (5, 1) that prints vertically. Hope this helps.
https://stackoverflow.com/questions/52343382/
Understanding PyTorch Tensor Shape
I have a simple question regarding the shape of tensor we define in PyTorch. Let's say if I say: input = torch.randn(32, 35) This will create a matrix with 32 row and 35 columns. Now when I define: input2 = torch.randn(1,2,32, 35) What can I say about the dimension of the new matrix input2? How can I define the rows and columns here? I mean do I have two matrices with shapes 32*35 packed by the tensor? I want to better understand the geometry behind this. Thanks.
Consider tensor shapes as the number of lists that a dimension holds. For instance, a tensor shaped (4, 4, 2) will have four elements, which will all contain 4 elements, which in turn have 2 elements. The first holds 4 elements. The second holds 4 elements. The third dimension holds 2 elements. Here's what the data would look like: [[[0.86471446, 0.26302726], [0.04137454, 0.00349315], [0.06559607, 0.45617865], [0.0219786, 0.27513594]], [[0.60555118, 0.10853228], [0.07059685, 0.32746256], [0.99684617, 0.07496456], [0.55169005, 0.39024103]], [[0.55891377, 0.41151245], [0.3434965, 0.12956237], [0.74908291, 0.69889266], [0.98600141, 0.8570597]], [[0.7903229, 0.93017741], [0.54663242, 0.72318166], [0.6099451, 0.96090241], [0.63772238, 0.78605599]]] In other words, four elements of four elements of two elements.
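A quick sketch of this on the tensor from the question:

import torch

t = torch.randn(1, 2, 32, 35)
print(t.shape)         # torch.Size([1, 2, 32, 35])
print(t[0].shape)      # torch.Size([2, 32, 35]) -- two 32x35 matrices
print(t[0, 1].shape)   # torch.Size([32, 35])    -- one of them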
https://stackoverflow.com/questions/52370008/
Why will GPU usage run low in NN training?
I'm running a NN training on my GPU with pytorch. But the GPU usage is strangely "limited" at about 50-60%. That's a waste of computing resources but I can't make it a bit higher. I'm sure that the hardware is fine because running 2 of my process at the same time,or training a simple NN (DCGAN,for instance) can both occupy 95% or more GPU.(which is how it supposed to be) My NN contains several convolution layers and it should use more GPU resources. Besides, I guess that the data from dataset has been feeding fast enough,because I used workers=64 in my dataloader instance and my disk works just fine. I just confused about what is happening. Dev details: GPU : Nvidia GTX 1080 Ti os:Ubuntu 64-bit
I can only guess without further research but it could be that your network is small in terms of layer-size (not number of layers) so each step of the training is not enough to occupy all the GPU resources. Or at least the ratio between the data size and the transfer speed (to the gpu memory) is bad and the GPU stays idle most of the time. tl;dr: the gpu jobs are not long enough to justify the memory transfers
https://stackoverflow.com/questions/52374287/
'no module named cv2' import error on pytorch
I am currently studying Pytorch and trying to use the cv2 module. I am using Jupyter notebook and Windows. I have installed opencv like this: !pip install opencv-python When I choose the kernel (by change kernel option) Python3 and import cv2 then there is no problem. But when I choose the kernel Pytorch and import cv2, then there is an error: ModuleNotFoundError: No module named 'cv2' This must be a basic question but I can't find out what the problem is.
I found the answer to this problem here: First run these commands: $ conda update anaconda-navigator $ conda update navigator-updater Then the issue in the instructions below will be resolved. On Windows, if you have Anaconda installed, you can simply do: pip install opencv-python or conda install -c https://conda.binstar.org/menpo opencv If you're on Linux you can do: pip install opencv-python or conda install opencv
https://stackoverflow.com/questions/52458725/
Derivative of ReLU
I'm learning PyTorch. Here is the first example in official tutorial. I got two questions, as shown in the block below, a) I understand that derivative of a ReLU function is 0 when x < 0 and 1 when x > 0. Is that right? But the code seems to keep the x > 0 part unchanged and set x < 0 part to 0. Why is that? b) Why transpose, i.e. x.T.mm(grad_h)? A transpose does't seem needed to me. I'm just confused. Thanks, # -*- coding: utf-8 -*- import torch dtype = torch.float device = torch.device("cpu") # device = torch.device("cuda:0") # Uncomment this to run on GPU # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random input and output data x = torch.randn(N, D_in, device=device, dtype=dtype) y = torch.randn(N, D_out, device=device, dtype=dtype) # Randomly initialize weights w1 = torch.randn(D_in, H, device=device, dtype=dtype) w2 = torch.randn(H, D_out, device=device, dtype=dtype) learning_rate = 1e-6 for t in range(500): # Forward pass: compute predicted y h = x.mm(w1) h_relu = h.clamp(min=0) y_pred = h_relu.mm(w2) # Compute and print loss loss = (y_pred - y).pow(2).sum().item() print(t, loss) # Backprop to compute gradients of w1 and w2 with respect to loss grad_y_pred = 2.0 * (y_pred - y) grad_w2 = h_relu.t().mm(grad_y_pred) grad_h_relu = grad_y_pred.mm(w2.t()) grad_h = grad_h_relu.clone() grad_h[h < 0] = 0 grad_w1 = x.t().mm(grad_h) # Update weights using gradient descent w1 -= learning_rate * grad_w1 w2 -= learning_rate * grad_w2
1- It is true that the derivative of a ReLU function is 0 when x < 0 and 1 when x > 0. But notice that the gradient is flowing from the output of the function all the way back to h. When you get back to calculating grad_h, it is computed as: grad_h = derivative of ReLU(h) * incoming gradient. Since the derivative is 1 for h > 0, grad_h is just equal to the incoming gradient there; where h < 0 the derivative is 0, which is exactly what grad_h[h < 0] = 0 implements. 2- The size of the x matrix is 64x1000 and the grad_h matrix is 64x100. You obviously cannot directly multiply x with grad_h; you need to take the transpose of x to get appropriate dimensions (1000x64 times 64x100 gives 1000x100, the shape of w1).
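A tiny autograd demonstration of point 1, as a sanity check of the masking behaviour:

import torch

h = torch.tensor([-1.0, 2.0], requires_grad=True)
h.clamp(min=0).sum().backward()
print(h.grad)   # tensor([0., 1.]) -- 0 where h < 0, 1 where h > 0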
https://stackoverflow.com/questions/52464922/
What is the difference between parameters and children?
It looks like parameters and children show the same info, so what is the difference between them? import torch print('torch.__version__', torch.__version__) m = torch.load('imagenet_resnet18.pth') print(m.parameters) print(m.children)
model.parameters() is a generator that yields tensors containing your model's parameters, while model.children() is a generator that yields the layers (sub-modules) of the model, from which you can extract the parameter tensors using <layername>.weight or <layername>.bias. Note that your snippet prints the bound methods themselves (m.parameters and m.children, without parentheses), which is why the two outputs look alike; call them to iterate over the actual values. Visit this link for a simple tutorial on accessing and freezing model layers.
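A quick way to see the difference on your resnet18:

from torchvision import models

m = models.resnet18()
print(type(next(m.parameters())))   # <class 'torch.nn.parameter.Parameter'> -- a tensor
print(type(next(m.children())))     # <class 'torch.nn.modules.conv.Conv2d'> -- a layer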
https://stackoverflow.com/questions/52465723/
How to get weights shape for each layer?
There is a good question how to get model summary in pytorch Model summary in pytorch but it doesn't output shape of weights. Is it possible also to output shape of weights for each layer?
Looks like it's possible, here is an example: import torch from torchvision import models m = models.resnet18() print(m) print('-'*60) for l in list(m.named_parameters()): print(l[0], ':', l[1].detach().numpy().shape) Which outputs: ResNet( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) 
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0) (fc): Linear(in_features=512, out_features=1000, bias=True) ) ------------------------------------------------------------ conv1.weight : (64, 3, 7, 7) bn1.weight : (64,) bn1.bias : (64,) layer1.0.conv1.weight : (64, 64, 3, 3) layer1.0.bn1.weight : (64,) layer1.0.bn1.bias : (64,) layer1.0.conv2.weight : (64, 64, 3, 3) layer1.0.bn2.weight : (64,) layer1.0.bn2.bias : (64,) layer1.1.conv1.weight : (64, 64, 3, 3) layer1.1.bn1.weight : (64,) layer1.1.bn1.bias : (64,) layer1.1.conv2.weight : (64, 64, 3, 3) layer1.1.bn2.weight : (64,) layer1.1.bn2.bias : (64,) layer2.0.conv1.weight : (128, 64, 3, 3) layer2.0.bn1.weight : (128,) layer2.0.bn1.bias : (128,) layer2.0.conv2.weight : (128, 128, 3, 3) layer2.0.bn2.weight : (128,) layer2.0.bn2.bias : (128,) layer2.0.downsample.0.weight : (128, 64, 1, 1) layer2.0.downsample.1.weight : (128,) layer2.0.downsample.1.bias : (128,) layer2.1.conv1.weight : (128, 128, 3, 3) layer2.1.bn1.weight : (128,) layer2.1.bn1.bias : (128,) layer2.1.conv2.weight : (128, 128, 3, 3) layer2.1.bn2.weight : (128,) layer2.1.bn2.bias : (128,) layer3.0.conv1.weight : (256, 128, 3, 3) layer3.0.bn1.weight : (256,) layer3.0.bn1.bias : (256,) layer3.0.conv2.weight : (256, 256, 3, 3) layer3.0.bn2.weight : (256,) layer3.0.bn2.bias : (256,) layer3.0.downsample.0.weight : (256, 128, 1, 1) layer3.0.downsample.1.weight : (256,) layer3.0.downsample.1.bias : (256,) layer3.1.conv1.weight : (256, 256, 3, 3) layer3.1.bn1.weight : (256,) layer3.1.bn1.bias : (256,) layer3.1.conv2.weight : (256, 256, 3, 3) layer3.1.bn2.weight : (256,) layer3.1.bn2.bias : (256,) layer4.0.conv1.weight : (512, 256, 3, 3) layer4.0.bn1.weight : (512,) layer4.0.bn1.bias : (512,) layer4.0.conv2.weight : (512, 512, 3, 3) layer4.0.bn2.weight : (512,) layer4.0.bn2.bias : (512,) layer4.0.downsample.0.weight : (512, 256, 1, 1) layer4.0.downsample.1.weight : (512,) layer4.0.downsample.1.bias : (512,) layer4.1.conv1.weight : (512, 512, 3, 3) layer4.1.bn1.weight : (512,) layer4.1.bn1.bias : (512,) layer4.1.conv2.weight : (512, 512, 3, 3) layer4.1.bn2.weight : (512,) layer4.1.bn2.bias : (512,) fc.weight : (1000, 512) fc.bias : (1000,)
https://stackoverflow.com/questions/52469566/
Is computing `loss.backward` for multiple losses performant in pytorch?
I would like to calculate the gradient of my model for several loss functions. I would like to find out if calculating successive backwards calls with retain_graph=True is cheap or expensive. In theory I would expect that the first call should be slower than those following the first, because the computational graph does not have to be reevaluated, but just a few matrix multiplications need to be made. In practice I found it hard to benchmark. My code: # Code in file nn/two_layer_net_nn.py import torch D_in = 40 model = torch.load('model.pytorch') device = torch.device('cpu') def loss1(y_pred,x): return (y_pred*(0.5-x.clamp(0,1))).sum() def loss2(y_pred,x): return (y_pred*(1-x.clamp(0,1))).sum() # Predict random input x = torch.rand(1,D_in, device=device,requires_grad=True) y_pred = model(x) # Is this %%timeit loss = loss1(y_pred,x) loss.backward(retain_graph=True) 202 µs ± 4.34 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) # Slower than this? %%timeit loss = loss2(y_pred,x) loss.backward(retain_graph=True) 216 µs ± 27.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) # Are successive backwards calls cheap? loss = lossX(y_pred,x) loss.backward(retain_graph=True) I think that %%timeit doesn't work because it will run several iterations and then average over it. How can I measure whether successive calls to backward will be fast? What does retain_graph=True actually mean for performance?
You mainly asked whether the first backward pass is slower than subsequent ones. There are two more questions that might as well be combined here: is it more efficient to combine losses, and does keeping the graph (if not its values) require more memory? Before that, however, let's clarify what retain_graph is actually for: multiple backward passes through the same graph, which you need if your computation happens to produce multiple outputs at multiple points in time. As an example, think of joint multi-task learning (see this question and its answers for a discussion on this). Back to the questions: In general, I would expect that it does not really matter whether you are retaining the graph. After all, it just keeps partial computations in memory for future use, without "doing" anything with them. That said - the first backward pass will take longer, as PyTorch will cache some of the computations needed when computing the gradients. So here's proof: import numpy as np import torch import torch.nn as nn import time import os import psutil D_in = 1024 model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 1024)) device = torch.device('cpu') def loss1(y_pred,x): return (y_pred*(0.5-x.clamp(0,1))).sum() def loss2(y_pred,x): return (y_pred*(1-x.clamp(0,1))).sum() def timeit(func, repetitions): time_taken = [] mem_used = [] for _ in range(repetitions): time_start = time.time() mem_used.append(func()) time_taken.append(time.time() - time_start) return np.round([np.mean(time_taken), np.min(time_taken), np.max(time_taken), \ np.mean(mem_used), np.min(mem_used), np.max(mem_used)], 4).tolist() # Predict random input x = torch.rand(1,D_in, device=device,requires_grad=True) def init(): out = model(x) loss = loss1(out, x) loss.backward() def func1(): x = torch.rand(1, D_in, device=device, requires_grad=True) loss = loss1(model(x),x) loss.backward() loss = loss2(model(x),x) loss.backward() del x process = psutil.Process(os.getpid()) return process.memory_info().rss def func2(): x = torch.rand(1, D_in, device=device, requires_grad=True) loss = loss1(model(x),x) + loss2(model(x),x) loss.backward() del x process = psutil.Process(os.getpid()) return process.memory_info().rss def func3(): x = torch.rand(1, D_in, device=device, requires_grad=True) loss = loss1(model(x),x) loss.backward(retain_graph=True) loss = loss2(model(x),x) loss.backward(retain_graph=True) del x process = psutil.Process(os.getpid()) return process.memory_info().rss def func4(): x = torch.rand(1, D_in, device=device, requires_grad=True) loss = loss1(model(x),x) + loss2(model(x),x) loss.backward(retain_graph=True) del x process = psutil.Process(os.getpid()) return process.memory_info().rss init() print(timeit(func1, 100)) print(timeit(func2, 100)) print(timeit(func3, 100)) print(timeit(func4, 100)) The results are (sorry for my lazy formatting): # time mean, time min, time max, memory mean, memory min, memory max [0.1165, 0.1138, 0.1297, 383456419.84, 365731840.0, 384438272.0] [0.127, 0.1233, 0.1376, 400914759.68, 399638528.0, 434044928.0] [0.1167, 0.1136, 0.1272, 400424468.48, 399577088.0, 401223680.0] [0.1263, 0.1226, 0.134, 400815964.16, 399556608.0, 434307072.0] However, if you skip the first backward pass (comment out the call to the init() function), the very first backward run in func1 will take longer: # time mean, time min, time max, memory mean, memory min, memory max [0.1208, 0.1136, 0.1579, 350157455.36, 349331456.0, 350978048.0] [0.1297, 0.1232, 0.1499, 393928540.16, 350052352.0, 401854464.0] [0.1197, 0.1152, 0.1547, 350787338.24, 349982720.0, 351629312.0] [0.1335, 0.1229, 0.1793, 382819123.2, 349929472.0, 401776640.0]
https://stackoverflow.com/questions/52478445/
Hello World Convolution on Images in PyTorch
I am trying to verify some results with PyTorch's 2D convolution with the following: Input matrix: X (10, 10, 3) [Dummy numpy image] Weight matrix: W (3, 3, 3) [My Conv Filter to test] Output matrix: Y (10, 10, 1) I have the following code but I am not able to assign the weights properly and run the model without errors. What am I doing wrong here?? import torch import torch.nn as nn import torchvision.transforms import numpy as np # Convert image to tensor image2tensor = torchvision.transforms.ToTensor() class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() # Test layer self.layer1 = nn.Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) def forward(self, x): out = self.layer1(x) return out # Test image image = np.ones((10, 10, 3)) tensor = image2tensor(image).unsqueeze(0) # Create new model conv = ConvNet() # Assign test weight - NOT WORKING!! weight = torch.nn.Parameter(torch.ones(3, 3, 3)) conv.layer1.weight.data = weight # Run the model output = conv(tensor)
Next time please post the corresponding error message. It turns out the weight tensor of a Conv2d needs four dimensions, (out_channels, in_channels, kernel_height, kernel_width), so for Conv2d(3, 1, ...) the correct shape is (1, 3, 3, 3): you are initializing the weight matrix with the wrong shape. Correct would be this instead: weight = torch.nn.Parameter(torch.ones(1, 3, 3, 3)) Additionally, for my version of PyTorch (0.4.1), I had to manually cast the tensor to float because it otherwise threw a different error. Avoid that by doing it like so: tensor = image2tensor(image).unsqueeze(0).float() # note the additional .float() Then it runs successfully for me.
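For reference, a minimal self-contained sketch of the corrected setup (hedged: bias=False and the no_grad copy are my choices here, not part of the original answer):

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.ones(1, 3, 3, 3))  # (out_channels, in_channels, kH, kW)

x = torch.ones(1, 3, 10, 10)  # (batch, channels, H, W)
print(conv(x).shape)  # torch.Size([1, 1, 10, 10])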
https://stackoverflow.com/questions/52489037/
Pytorch 0.4.1 invalid gradient at index 0 - expected shape[] but got [1]
I've been stuck on this problem for the whole day. torch.autograd.backward(loss_seq, grad_seq) raises an error. Output: Traceback (most recent call last): File "train_vgg.py", line 272, in <module> torch.autograd.backward(loss_seq, grad_seq) File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: invalid gradient at index 0 - expected shape [] but got [1] Input: loss_seq:[tensor(7.3761, device='cuda:1', grad_fn=<ThAddBackward>), tensor(4.3005, device='cuda:1', grad_fn=<ThAddBackward>), tensor(4.2209, device='cuda:1', grad_fn=<ThAddBackward>)] grad_seq:[tensor([1.], device='cuda:1'), tensor([1.], device='cuda:1'), tensor([1.], device='cuda:1')] Can someone tell me how to fix it? Input code: images = Variable(images).cuda(gpu) label_yaw = Variable(labels[:,0]).cuda(gpu) label_pitch = Variable(labels[:,1]).cuda(gpu) label_roll = Variable(labels[:,2]).cuda(gpu) pre_yaw, pre_pitch, pre_roll = model(images) # Cross entropy loss loss_yaw = criterion(pre_yaw, label_yaw) loss_pitch = criterion(pre_pitch, label_pitch) loss_roll = criterion(pre_roll, label_roll) loss_yaw += 0.005 * loss_reg_yaw loss_pitch += 0.005 * loss_reg_pitch loss_roll += 0.005 * loss_reg_roll loss_seq = [loss_yaw, loss_pitch, loss_roll] grad_seq = [torch.ones(1).cuda(gpu) for _ in range(len(loss_seq))] # crash here torch.autograd.backward(loss_seq, grad_seq)
I have solved this problem. The losses in loss_seq are 0-dimensional (scalar) tensors, so the gradients passed to backward must be scalars as well: shape [] rather than [1]. Only change: grad_seq = [torch.ones(1).cuda(gpu) for _ in range(len(loss_seq))] to: grad_seq = [torch.tensor(1.0).cuda(gpu) for _ in range(len(loss_seq))]
https://stackoverflow.com/questions/52547157/
PyTorch and CUDA driver
I have CUDA 9.2 installed. For example: (base) c:\>nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Wed_Apr_11_23:16:30_Central_Daylight_Time_2018 Cuda compilation tools, release 9.2, V9.2.88 I installed PyTorch on Windows 10 using: conda install pytorch cuda92 -c pytorch pip3 install torchvision I ran the test script: (base) c:\>python Python 3.6.5 |Anaconda custom (64-bit)| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import print_function >>> import torch >>> x = torch.rand(5, 3) >>> print(x) tensor([[0.7041, 0.5685, 0.4036], [0.3089, 0.5286, 0.3245], [0.3504, 0.8638, 0.1118], [0.6517, 0.9209, 0.6801], [0.0315, 0.1923, 0.8720]]) >>> quit() So far, so good. Then I ran: (base) c:\>python Python 3.6.5 |Anaconda custom (64-bit)| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.cuda.is_available() False >>> Why did PyTorch say CUDA was not available? The GPU is a compute capability 3.0 Quadro K3000M: (base) C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe Mon Oct 01 16:36:47 2018 NVIDIA-SMI 385.54 Driver Version: 385.54 -------------------------------+----------------------+---------------------- GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 0 Quadro K3000M WDDM | 00000000:01:00.0 Off | N/A N/A 35C P0 N/A / N/A | 29MiB / 2048MiB | 0% Default
Ever since v0.3.1 (https://github.com/pytorch/pytorch/releases/tag/v0.3.1), PyTorch binary releases have dropped support for old GPUs with CUDA capability 3.0. According to https://en.wikipedia.org/wiki/CUDA, the compute capability of the Quadro K3000M is 3.0. Therefore, you would have to build PyTorch from source or try other packages. Please refer to this thread for more information: https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803.
https://stackoverflow.com/questions/52562352/
Dataframe as datasource in torchtext
I have a dataframe with two columns (review and sentiment). I am using PyTorch and the torchtext library for preprocessing data. Is it possible to use a dataframe as the source to read data from in torchtext? I am looking for something similar to, but not, data.TabularDataset.splits(path='./data'). I have performed some operations (cleaning, changing to the required format) on the data, and the final data is in a dataframe. If not torchtext, what other package would you suggest that would help in preprocessing text data present in a dataframe? I could not find anything online. Any help would be great.
Adapting the Dataset and Example classes from torchtext.data: from torchtext.data import Field, Dataset, Example import pandas as pd class DataFrameDataset(Dataset): """Class for using pandas DataFrames as a datasource""" def __init__(self, examples, fields, filter_pred=None): """ Create a dataset from a pandas dataframe of examples and Fields Arguments: examples pd.DataFrame: DataFrame of examples fields {str: Field}: The Fields to use in this tuple. The string is a field name, and the Field is the associated field. filter_pred (callable or None): use only examples for which filter_pred(example) is true, or use all examples if None. Default is None """ self.examples = examples.apply(SeriesExample.fromSeries, args=(fields,), axis=1).tolist() if filter_pred is not None: self.examples = list(filter(filter_pred, self.examples)) # materialize so len() and indexing keep working self.fields = dict(fields) # Unpack field tuples for n, f in list(self.fields.items()): if isinstance(n, tuple): self.fields.update(zip(n, f)) del self.fields[n] class SeriesExample(Example): """Class to convert a pandas Series to an Example""" @classmethod def fromSeries(cls, data, fields): return cls.fromdict(data.to_dict(), fields) @classmethod def fromdict(cls, data, fields): ex = cls() for key, field in fields.items(): if key not in data: raise ValueError("Specified key {} was not found in " "the input data".format(key)) if field is not None: setattr(ex, key, field.preprocess(data[key])) else: setattr(ex, key, data[key]) return ex Then, first define fields using torchtext.data fields. For example: TEXT = data.Field(tokenize='spacy') LABEL = data.LabelField(dtype=torch.float) TEXT.build_vocab(train, max_size=25000, vectors="glove.6B.100d") LABEL.build_vocab(train) fields = { 'sentiment' : LABEL, 'review' : TEXT } before simply loading the dataframes: train_ds = DataFrameDataset(train_df, fields) valid_ds = DataFrameDataset(valid_df, fields)
https://stackoverflow.com/questions/52602071/
A better way to make pytorch code agnostic to running on a CPU or GPU?
The Migration guide recommends the following to make code CPU/GPU agnostic: > # at beginning of the script device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ... # then whenever you get a new Tensor or Module # this won't copy if they are already on the desired device input = data.to(device) model = MyModule(...).to(device) I did this and ran my code on a CPU-only device, but my model crashed when fed an input array, saying it was expecting a CPU tensor, not a GPU one. Somehow my model was automatically converting the CPU input array to a GPU array. Finally I traced it down to this command in my code: model = torch.nn.DataParallel(model).to(device) Even though I convert the model to 'cpu', the nn.DataParallel overrides this. The best solution I came up with was a conditional: if device.type=='cpu': model = model.to(device) else: model = torch.nn.DataParallel(model).to(device) This does not seem elegant. Is there a better way?
How about if torch.cuda.device_count() > 1: model = torch.nn.DataParallel(model) model = model.to(device) ? You don't need DataParallel if you have only one GPU.
https://stackoverflow.com/questions/52613383/
How to implement a location specific convolutional filters in Tensorflow or Pytorch?
I want to implement a convolutional layer with a different convolutional filter for each output location. Specifically, think of the case when the output is of size 16*16*128 (W * H * C). Instead of having one 3*3*128 filter we have 16*16 filters, each of size 3*3*128. This would lead to a huge number of parameters, but it can be the case that each of the 3*3*128 filters is the same except scaled by a different constant, and the constants can be learned through a side network. In this way the number of parameters won't be too large. A similar idea is mentioned briefly in Dynamic Filter Networks, but I cannot find an implementation of location-specific filters. My question is: if we want a location-specific convolutional filter, how do I implement it in TensorFlow or PyTorch? Do I need to write my own op, or is there some smart way to use the functions provided? If I have to write an op, is there any trick that can easily achieve this idea? Any help is appreciated!
A convolution, by definition, is not location specific - this is what makes it a convolution. If you wish to generalize convolution, bear in mind that eventually a convolution is a special case of a simple linear operation. Therefore, you can implement your "location specific" convolution as a fully-connected layer (nn.Linear) with very specific sparse weights.
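As a hedged illustration of the "same filter scaled by a per-location constant" idea from the question (this sketch is not part of the original answer): because convolution is linear, scaling the filter used at output location (i, j) by a constant is equivalent to scaling the convolution's output at (i, j) by that constant, so a shared filter plus a learned scale map implements it cheaply:

import torch
import torch.nn as nn

class LocationScaledConv(nn.Module):
    def __init__(self, in_ch=128, out_ch=128, h=16, w=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1, 1, h, w))  # one learned constant per location

    def forward(self, x):
        # broadcast the (1, 1, h, w) scale map over batch and channel dimensions
        return self.conv(x) * self.scale

m = LocationScaledConv()
print(m(torch.randn(2, 128, 16, 16)).shape)  # torch.Size([2, 128, 16, 16])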
https://stackoverflow.com/questions/52614786/
How to derive weight and bias in a neural network?
This is the Network: class Net(torch.nn.Module): def __init__(self, n_feature, n_hidden, n_output): super(Net, self).__init__() self.hidden = torch.nn.Linear(n_feature, n_hidden) self.predict = torch.nn.Linear(n_hidden, n_output) # output layer def forward(self, x): x = F.relu(self.hidden(x)) # activation function for hidden layer x = self.predict(x) # linear output return x net = Net(n_feature=1, n_hidden=10, n_output=1) pytorch_total_params = sum(p.numel() for p in net.parameters()) print(pytorch_total_params) w = list(net.parameters()) print(w) This is the running result: 31 [Parameter containing: tensor([[ 0.9534], [-0.0309], [-0.9570], [-0.4179], [-0.3757], [-0.4227], [-0.8866], [ 0.2107], [ 0.0222], [ 0.2531]], requires_grad=True), Parameter containing: tensor([-0.0358, -0.2533, 0.2979, 0.9777, 0.9606, 0.9460, 0.9059, 0.7582, -0.5286, 0.3367], requires_grad=True), Parameter containing: tensor([[-0.2863, -0.3157, 0.2086, -0.0011, -0.0415, -0.2574, -0.0683, -0.0788, -0.0339, -0.0195]], requires_grad=True), Parameter containing: tensor([0.2031], requires_grad=True)] I don't know why the number of parameters is 31, and I don't understand the numbers printed above (whether each is a weight or a bias), because I thought there would only be 2 parameters * 10: a weight and a bias for each of the 10 hidden units.
If you print the named parameters you can see to which layer a parameter belongs. Printing the named parameters: for p in net.named_parameters(): print(p) Creates the following output: ('hidden.weight', Parameter containing: tensor([[ 0.8324], [ 0.2166], [-0.9786], [ 0.3977], [ 0.9008], [-0.3102], [ 0.5052], [ 0.6589], [ 0.0828], [ 0.6505]], requires_grad=True)) ('hidden.bias', Parameter containing: tensor([ 0.6715, 0.5503, -0.6043, 0.1102, -0.2700, 0.7203, -0.6524, -0.6332, -0.2513, -0.1316], requires_grad=True)) ('predict.weight', Parameter containing: tensor([[ 0.1486, 0.1528, -0.0835, -0.3050, 0.1184, -0.0422, -0.2786, -0.2549, -0.1532, -0.0255]], requires_grad=True)) ('predict.bias', Parameter containing: tensor([0.2878], requires_grad=True)) As you can see, the layers are connected by 10 weights each, as you expected, but there is one bias per neuron on the right side of a 'connection'. So you have 10 bias parameters between your input and your hidden layer, and just one for the calculation of your final prediction. You calculate the input to each neuron j in the l-th layer as the weighted sum z_j^(l) = sum_k w_jk^(l) * a_k^(l-1) + b_j^(l). So you need a weight for every connection between the neurons of the two layers, but only one bias per neuron in the l-th layer. In your case: input to hidden: 10 weights and 10 biases, because your hidden layer has 10 neurons; hidden to output/predict: 10 weights and 1 bias, because you output a single value. That sums up to 31 parameters.
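A quick standalone sanity check of that arithmetic (this snippet rebuilds the two layers directly rather than reusing the question's Net class):

import torch.nn as nn

hidden, predict = nn.Linear(1, 10), nn.Linear(10, 1)
total = sum(p.numel() for layer in (hidden, predict) for p in layer.parameters())
print(total)  # (1*10 + 10) + (10*1 + 1) = 31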
https://stackoverflow.com/questions/52649188/
Debugging in Google Colab
I am running the following code snippet in Google Colab in a single cell: %debug # Create tensors of shape (10, 3) and (10, 2). x = torch.randn(10, 3) y = torch.randn(10, 2) # Build a fully connected layer. linear = nn.Linear(3, 2) print ('w: ', linear.weight) print ('b: ', linear.bias) I wish to debug a piece of code (step through it line by line) to understand what is going on. I wish to step inside the function nn.Linear. However, when I step through, it does not enter the function at all. Is there a way to step through nn.Linear line by line? Also, how exactly do I set a breakpoint in nn.Linear? Besides, I wish to step through the snippet line by line as well. However, the step command automatically steps through and executes the print statement as well.
Since Python 3.7 you can use the built-in breakpoint() function. If that is not available, you can use import pdb pdb.set_trace() instead. If you want to execute the next line you can try n (next) instead of s (step).
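For example, a minimal sketch of pausing right before the line of interest in the question's cell:

import pdb
import torch
import torch.nn as nn

x = torch.randn(10, 3)
y = torch.randn(10, 2)
pdb.set_trace()  # pauses here; 'n' runs the next line, 's' steps into the next call
linear = nn.Linear(3, 2)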
https://stackoverflow.com/questions/52656692/
PyTorch derivative has no degree
I am studying the tutorial from the PyTorch official docs and trying to understand the content, starting from "You can do many crazy things with autograd!": x = torch.randn(3, requires_grad=True) y = x * 2 i = 0 while y.data.norm() < 100: y = y * 2 i += 1 print(x) print(y) print(i) Output: tensor([-0.6933, 0.1126, 0.3913], requires_grad=True) tensor([-88.7455, 14.4082, 50.0871], grad_fn=<MulBackward>) 6 Find the derivative w.r.t. x at the point [0.1, 1.0, 0.0001]: gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float) y.backward(gradients) print(x.grad) Output: tensor([ 12.8000, 128.0000, 0.0128]) In my understanding i equals 6, so y = (2x)^7, and my derivative differs from PyTorch's: mine has an extra factor of 7 when I substitute the value into it. PyTorch's answer looks as if it simply substitutes x with the given point in dy/dx = 2^7 * x. Question: how is this derivative derived? References: How to use PyTorch to calculate partial derivatives? PyTorch Autograd automatic differentiation feature
If you look closely at the expressions, it turns out that y = x * 2^7 (one multiplication by 2 plus six more doublings), not (2x)^7. The derivative of y with respect to x is therefore the constant 2^7 = 128. Also, y.backward(gradients) does not evaluate the derivative at the point gradients; it computes a Jacobian-vector product, i.e. it multiplies the derivative by that vector, which is why x.grad = 128 * [0.1, 1.0, 0.0001] = [12.8, 128.0, 0.0128].
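A short verification sketch (the input values are arbitrary; x.grad does not depend on x because y is linear in x):

import torch

x = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)
y = x * 2
for _ in range(6):  # six more doublings, matching i == 6 in the question
    y = y * 2       # now y == x * 2**7
y.backward(torch.tensor([0.1, 1.0, 0.0001]))
print(x.grad)  # tensor([ 12.8000, 128.0000, 0.0128]), i.e. 2**7 times the passed vector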
https://stackoverflow.com/questions/52662197/
Problems about torch.nn.DataParallel
I am new to the deep learning area. I am reproducing a paper's code. Since they use several GPUs, there is a command torch.nn.DataParallel(model, device_ids=args.gpus).cuda() in the code. But I only have one GPU; what should I change this code to so that it matches my single GPU? Thank you!
DataParallel should work on a single GPU as well, but you should check if args.gpus only contains the id of the device that is to be used (should be 0) or None. Choosing None will make the module use all available devices. Also you could remove DataParallel as you do not need it and move the model to GPU only by calling model.cuda() or, as I prefer, model.to(device) where device is the device's name. Example: This example shows how to use a model on a single GPU, setting the device using .to() instead of .cuda(). from torch import nn import torch # Set device to cuda if cuda is available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Create model model = nn.Sequential( nn.Conv2d(1,20,5), nn.ReLU(), nn.Conv2d(20,64,5), nn.ReLU() ) # moving model to GPU model.to(device) If you want to use DataParallel you could do it like this # Optional DataParallel, not needed for single GPU usage model1 = torch.nn.DataParallel(model, device_ids=[0]).to(device) # Or, using default 'device_ids=None' model1 = torch.nn.DataParallel(model).to(device)
https://stackoverflow.com/questions/52663358/
ValueError: Don't know how to translate op Unsqueeze when running converted PyTorch Model
I'm running into problems trying to use a PyTorch model exported as an ONNX model with Caffe2. Here is my export code the_model = torchvision.models.densenet121(pretrained=True) garbage, model_inputs = preprocessing("test.jpg") torch_out = torch.onnx._export(the_model, model_inputs, "model_weights/chexnet-py.onnx", export_params=True) Now here is my testing code model = onnx.load("model_weights/chexnet-py.onnx") garbage, model_inputs = preprocessing("text.jpg") prepared_backend = onnx_caffe2.backend.prepare(model) W = {model.graph.input[0].name: model_inputs.numpy()} c2_out = prepared_backend.run(W)[0] This is returning the following error ValueError: Don't know how to translate op Unsqueeze when running converted PyTorch Model Additional information pytorch version 1.0.0a0+6f664d3 Caffe2 is latest version (attempted building from source, pip, and conda). All gave same result.
Try looking into this: you may have to edit the package called onnx-caffe2 to add the mapping between Unsqueeze and ExpandDims, see https://github.com/onnx/onnx/issues/1481. Look for this answer there: I found that the Caffe2 equivalent for Unsqueeze in ONNX is ExpandDims, and there is a special mapping in onnx_caffe2/backend.py around line 121 for those operators that differ only in their names and attribute names, but somehow Unsqueeze isn't present there (I have no idea why). So I manually added the mapping rules for it in the _renamed_operators and _per_op_renamed_attrs dicts, and the code would look like: _renamed_operators = { 'Caffe2ConvTranspose': 'ConvTranspose', 'GlobalMaxPool': 'MaxPool', 'GlobalAveragePool': 'AveragePool', 'Pad': 'PadImage', 'Neg': 'Negative', 'BatchNormalization': 'SpatialBN', 'InstanceNormalization': 'InstanceNorm', 'MatMul': 'BatchMatMul', 'Upsample': 'ResizeNearest', 'Equal': 'EQ', 'Unsqueeze': 'ExpandDims', # add this line } _global_renamed_attrs = {'kernel_shape': 'kernels'} _per_op_renamed_attrs = { 'Squeeze': {'axes': 'dims'}, 'Transpose': {'perm': 'axes'}, 'Upsample': {'mode': ''}, 'Unsqueeze': {'axes': 'dims'}, # add this line } And everything works as expected. I am not the OP; thanks to the OP though.
https://stackoverflow.com/questions/52684788/
Segmentation fault (Core dumped) on importing torch with python3.5 in virtualenv
I have installed torch 0.4.1 without CUDA in a virtualenv. I am using Python 3.5 on Ubuntu 16.04. Whenever I import torch in an interactive Python shell, it quits the program with Segmentation fault (core dumped). Surprisingly, I had earlier started a Jupyter notebook, tried importing torch there, and it ran fine. Can someone please help? I could not find a solution in the official PyTorch GitHub discussions.
I had the same issue; the solution was given on GitHub: import cv2 before torch: import cv2 # import cv2 first import torch
https://stackoverflow.com/questions/52689970/
spyder doesn't launch after installing pytorch
I installed pytorch but after that Spyder can no longer be launched. Here are the terminal info: conda install pytorch torchvision -c pytorch Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.5.10 latest version: 4.5.11 Please update conda by running $ conda update -n base -c defaults conda Package Plan environment location: /anaconda3/envs/base_py36 added / updated specs: - pytorch - torchvision The following packages will be downloaded: package | build ---------------------------|----------------- torchvision-0.2.1 | py36_1 123 KB pytorch scipy-1.1.0 | py36hf1f7d93_0 15.4 MB scikit-learn-0.20.0 | py36h4f467ca_1 5.4 MB numpy-base-1.15.2 | py36h8a80b8c_1 4.1 MB numpy-1.11.3 | py36heee0a97_5 3.4 MB ninja-1.8.2 | py36h04f5b5a_1 93 KB pytorch-0.4.1 |py36_cuda0.0_cudnn0.0_1 10.0 MB pytorch ------------------------------------------------------------ Total: 38.5 MB The following NEW packages will be INSTALLED: ninja: 1.8.2-py36h04f5b5a_1 pytorch: 0.4.1-py36_cuda0.0_cudnn0.0_1 pytorch torchvision: 0.2.1-py36_1 pytorch The following packages will be REMOVED: accelerate: 2.3.1-np111py36_0 The following packages will be UPDATED: mkl: 11.3.3-0 --> 2019.0-118 numexpr: 2.6.7-py36hde7755b_0 --> 2.6.8-py36h1dc9127_0 numpy: 1.11.3-py36_nomklh8ecaf62_5 --> 1.11.3-py36heee0a97_5 numpy-base: 1.15.0-py36he97cb71_0 --> 1.15.2-py36h8a80b8c_1 scikit-learn: 0.19.1-py36_nomklhde7755b_0 --> 0.20.0-py36h4f467ca_1 scipy: 1.1.0-py36_nomklh7cd7d8e_0 --> 1.1.0-py36hf1f7d93_0 The following packages will be DOWNGRADED: blas: 1.0-openblas --> 1.0-mkl Proceed ([y]/n)? y Downloading and Extracting Packages torchvision-0.2.1 | 123 KB | ############################### | 100% scipy-1.1.0 | 15.4 MB | ##################################### | 100% scikit-learn-0.20.0 | 5.4 MB | ############################### | 100% numpy-base-1.15.2 | 4.1 MB | ##################################### | 100% numpy-1.11.3 | 3.4 MB | ##################################### | 100% ninja-1.8.2 | 93 KB | ############################### | 100% pytorch-0.4.1 | 10.0 MB | ##################################### | 100% Preparing transaction: done
I don't know if you have solved your issue, but in case you have and someone else comes across this question, or you haven't and are still waiting: I came across your post as a result of having the same thing happen to me. It would seem that all I had to do was run conda update --all for Spyder to start working again!
https://stackoverflow.com/questions/52691611/
How to take and restore snapshots of model training on another VM in Google Colab?
There is a 12 hour time limit for training DL models on GPU, according to Google Colab. Other people have had similar questions in the past, but there has been no clear answer on how to save and load models halfway through training when the 12 hour limit is exceeded, including saving the number of epochs completed and other parameters. Is there an automated script for me to save the relevant parameters and resume operations on another VM? I am a complete noob; clear-cut answers will be much appreciated.
As far as I know, there is no way to automatically reconnect to another VM whenever you reach the 12 hours limit. So in any case, you have to manually reconnect when the time is up. As Bob Smith points out, you can mount Google Drive in Colab VM so that you can save and load data from there. In particular, you can periodically save model checkpoints so that you can load the most recent one whenever you connect to a new Colab VM. Mount Drive in your Colab VM: from google.colab import drive drive.mount('/content/gdrive') Create a saver in your graph: saver = tf.train.Saver() Periodically (e.g. every epoch) save a checkpoint in Drive: saver.save(session, CHECKPOINT_PATH) When you connect to a new Colab VM (because of the timeout), mount Drive again in your VM and restore the most recent checkpoint before the training phase: saver.restore(session, CHECKPOINT_PATH) ... # Start training with the restored model. Take a look at the documentation to read more about tf.train.Saver.
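If your training code is PyTorch rather than TensorFlow, a hedged equivalent sketch (model, optimizer, and epoch are assumed to exist in your training loop, and the checkpoint path is an assumption):

import torch

CHECKPOINT_PATH = '/content/gdrive/My Drive/checkpoint.pt'

# Periodically (e.g. every epoch):
torch.save({'epoch': epoch,
            'model_state': model.state_dict(),
            'optim_state': optimizer.state_dict()}, CHECKPOINT_PATH)

# After reconnecting to a fresh VM and mounting Drive again:
state = torch.load(CHECKPOINT_PATH)
model.load_state_dict(state['model_state'])
optimizer.load_state_dict(state['optim_state'])
start_epoch = state['epoch'] + 1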
https://stackoverflow.com/questions/52710348/
Get CUDA_HOME environment path PYTORCH
I have CUDA installed via Anaconda on my system, which has two GPUs, and it is recognized by my Python: import torch torch.cuda.is_available() true However, when I try to run a model via its C API, I get the following error: ~/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py in _join_cuda_home(*paths) 722 ''' 723 if CUDA_HOME is None: --> 724 raise EnvironmentError('CUDA_HOME environment variable is not set. ' 725 'Please set it to your CUDA install root.') 726 return os.path.join(CUDA_HOME, *paths) OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root. The page https://lfd.readthedocs.io/en/latest/install_gpu.html gives instructions to set the CUDA_HOME path if CUDA is installed via their method. Since I installed CUDA via Anaconda, I don't know which path to set. I tried the find command, but it returns too many paths for CUDA. Can somebody help me with the path for CUDA? Thanks in advance.
Solution to the above issue: the CUDA package installed through Anaconda is not the entire toolkit. Please install the CUDA toolkit and drivers manually from the NVIDIA website [ https://developer.nvidia.com/cuda-downloads ]. After installation of the drivers, PyTorch will be able to access the CUDA path. You can test the CUDA path using the sample code below. CHECK INSTALLATION: import os print(os.environ.get('CUDA_PATH')) OUTPUT: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
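If the toolkit is in fact installed and only the variable is missing, you can also set it yourself before importing torch.utils.cpp_extension (the paths below are assumptions; substitute your actual install root):

import os
# Windows example; on Linux this is typically something like '/usr/local/cuda'
os.environ['CUDA_HOME'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1'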
https://stackoverflow.com/questions/52731782/
How to pretrain/select CNN for Biomedical Video Analysis
Data: I am trying to train a model in the biomedical domain with a rather specialized task (flow prediction). My input consists of video clips and I would like to predict either a single image or a video. I have pixel-wise ground truth for every frame of the video, but the number of videos is very limited. I do however have a lot of unlabeled data of related scenes, which is why I considered transfer learning/pretraining of some sort. Architecture: Architecture-wise I am considering a CNN-RNN combination, where the CNN provides a representation of the input frames for the RNN to learn about the temporal relationship between input frames. Now my question is: What kind of CNN do I use and on what do I pretrain it? Since I am working with biomedical data, I would assume ImageNet as well as most other image datasets would not really help, as the image content is very different. Are there any datasets/tasks/networks I could use for this purpose?
Some people recommend shallow autoencoders to pretrain networks if there is not much labeled data, but lots of unlabeled data. A reference would be Géron's book "Hands-On Machine Learning with Scikit-Learn and TensorFlow", but there are also many articles and tutorials online.
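A hedged sketch of what such autoencoder pretraining could look like in PyTorch (channel counts, learning rate, and the random stand-in batch are placeholders, not from the answer; frames are assumed grayscale with even height and width):

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

frames = torch.randn(16, 1, 64, 64)  # stand-in for a batch of unlabeled frames
recon = decoder(encoder(frames))     # reconstruct the input
loss = nn.functional.mse_loss(recon, frames)
opt.zero_grad()
loss.backward()
opt.step()
# After pretraining on the unlabeled clips, reuse `encoder` to initialize
# the CNN that feeds the RNN.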
https://stackoverflow.com/questions/52742238/
Convolution Autoencoder Image Dimension Error
I have the following convolutional autoencoder setup: class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() self.encoder = nn.Sequential( nn.Conv2d(1, 16, 3, stride=3, padding=1), # b, 16, 10, 10 nn.ReLU(True), nn.MaxPool2d(2, stride=2), # b, 16, 5, 5 nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3 nn.ReLU(True), nn.MaxPool2d(2, stride=1) # b, 8, 2, 2 ) self.decoder = nn.Sequential( nn.ConvTranspose2d(8, 16, 3, stride=2), # b, 16, 5, 5 nn.ReLU(True), nn.ConvTranspose2d(16, 8, 5, stride=3, padding=1), # b, 8, 15, 15 nn.ReLU(True), nn.ConvTranspose2d(8, 1, 2, stride=2, padding=1), # b, 1, 28, 28 nn.Tanh() ) This is the main loop: for epoch in range(epochs): running_loss = 0 for data in (train_loader): image,_=data inputs = image.view(image.size(0),-1) optimizer.zero_grad() #image = np.expand_dims(img, axis=0) outputs = net(inputs) loss = criterion(outputs,inputs) loss.backward() optimizer.step() running_loss += loss.data[0] print('At Iteration : %d ; Mean-Squared Error : %f'%(epoch + 1,running_loss/(train_set.train_data.size(0)/batch_size))) This is the error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [16, 1, 3, 3], but got input of size [1000, 784] instead This has something to do with the flattening of the image, but I'm not exactly sure how to un-flatten it.
Why are you "flattening" your input image (2nd line of main loop): inputs = image.view(image.size(0),-1) This line turns your 4 dimensional image (batch - channels - height - width) to a two dimensional "flat" vector (batch - c * h * w). You autoencoder expects its inputs to be 4D and not "flat". just remove this line and you should be okay.
https://stackoverflow.com/questions/52747193/
How to install pytorch on Power 8 or PPC64 machine?
I am trying to install PyTorch using conda on a Power 8 IBM machine. Although I read articles from the IBM blog, I couldn't install it successfully because I got stuck compiling magma.
Assuming that conda is already installed, simply run the following commands: conda install -c engility pytorch conda install -c engility torchvision Note: 1. Go to this anaconda page 2. Search for pytorch 3. Scroll down to see which one has linux-ppc64le as its platform 4. Click into that specific package 5. You will get the command to install pytorch
https://stackoverflow.com/questions/52750622/
Fastai on google colab
Two days ago I ran my model using fastai 0.7.0 on Google Colab. Then I got busy for two days, and now when I try to run it, it throws an error on executing the line from fastai.transforms import *: AttributeError: module 'torch' has no attribute 'float32'.
The following should get you up and running with the 0.7.0 version of fast.ai (used by v2 of the course) on Google Colab: !pip install -q "fastai==0.7.0" Pillow==4.1.1 torchtext==0.2.3 !apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) # !apt update -q !apt install -y libsm6 libxext6 from os import path accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu' torch_whl = f"http://download.pytorch.org/whl/{accelerator}/torch-0.3.0.post4-{platform}-linux_x86_64.whl" !pip install -q {torch_whl} torchvision image
https://stackoverflow.com/questions/52769255/
What does the -1 mean in tensor.size(-1)?
I have seen something like this in the Pytorch documentation, import torch a = torch.tensor([1, 2]) a.size() # torch.Size([2]) a.size(-1) # 2 How does this work? I didn't find a description. Thanks,
a.size(-1) refers to the last dimension. For example, if the shape of x were (10, 20), then x.size(-1) would refer to the second dimension, i.e. 20. Take a look at the following example: import torch a = torch.zeros((2, 5)) # a is a matrix of 2 rows and 5 columns, all elements 0 # a.size() returns a torch.Size object, a tuple holding the size of each dimension a.size(-1) # the size of the last dimension, i.e. 5 This is equivalent to: a_size = a.size() a_size[-1] Hope this helps you.
https://stackoverflow.com/questions/52772534/
Un-normalizing PyTorch data
The code below: ux = torch.tensor(np.array([[255,1,255],[255,1,255]])).float() print(ux) ux = F.normalize(ux, p=2, dim=1) print(ux) prints: tensor([[ 255., 1., 255.], [ 255., 1., 255.]]) tensor([[ 0.7071, 0.0028, 0.7071], [ 0.7071, 0.0028, 0.7071]]) How can I un-normalize ux in order to return to the values tensor([[ 255., 1., 255.], [ 255., 1., 255.]]) from tensor([[ 0.7071, 0.0028, 0.7071], [ 0.7071, 0.0028, 0.7071]])? There are various resources that detail this process, such as https://discuss.pytorch.org/t/simple-way-to-inverse-normalize-a-batch-of-input-variable/12385/3, but they do not cover un-normalizing the result of F.normalize.
F.normalize simply divides by the norm according to the documentation, so you simply need to multiply it by its magnitude. This means you still need access to the magnitude of the original vector ux, otherwise, this is not possible, since the information about the magnitude cannot be recovered from the normalized vector. Here's how this can be done: # I modified the input to make it more interesting, but you can use any other value ux = torch.tensor(np.array([[255,1,255],[101,10,123]])).float() magnitude = ux.norm(p=2, dim=1, keepdim=True) # NEW ux = F.normalize(ux, p=2, dim=1) ux_orig = ux * magnitude # NEW print(ux_orig) # Outputs: # tensor([[255., 1., 255.], # [101., 10., 123.]])
https://stackoverflow.com/questions/52785599/
Pytorch adaptive_avg_pool2d algorithm
Does anyone know the algorithm behind PyTorch's adaptive_avg_pool2d, for example adaptive_avg_pool2d(image, [14, 14])? My question: I want to do the same in a Keras neural network; for any given input, I want to get a 14*14 output. Any suggestions?
I don't think this exists in Keras. You could get the dimension of your input and divide by 14 to get the desired pool_size. For example if your inputs are 28x28 you can use: keras.layers.AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)
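A hedged sketch of deriving the pool size from a known input resolution (assumes the spatial dimensions are multiples of 14):

import keras

h, w = 28, 28  # example input resolution
# pool_size (2, 2) here; strides default to pool_size, giving a 14x14 output
pool = keras.layers.AveragePooling2D(pool_size=(h // 14, w // 14))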
https://stackoverflow.com/questions/52808654/
How to get probabilities when using Pytorch's densenet?
I want to do a binary classification and I used the DenseNet from Pytorch. Here is my predict code: densenet = torch.load(model_path) densenet.eval() output = densenet(input) print(output) And here is the output: Variable containing: 54.4869 -54.3721 [torch.cuda.FloatTensor of size 1x2 (GPU 0)] I want to get the probabilities of each class. What should I do? I have noticed that torch.nn.Softmax() could be used when there are many categories, as discussed here.
import torch.nn as nn Add a softmax layer to the classifier. The typical setup: num_ftrs = model_ft.classifier.in_features model_ft.classifier = nn.Linear(num_ftrs, num_classes) updated: model_ft.classifier = nn.Sequential(nn.Linear(num_ftrs, num_classes), nn.Softmax(dim=1))
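Alternatively (not part of the original answer), you can leave the trained model untouched and apply the softmax to the logits at prediction time, using the densenet and input from the question:

import torch.nn.functional as F

output = densenet(input)          # raw logits, e.g. tensor([[ 54.4869, -54.3721]])
probs = F.softmax(output, dim=1)  # class probabilities summing to 1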
https://stackoverflow.com/questions/52828527/
PyTorch: Loss remains constant
I've written a code in PyTorch with my own implemented loss function focal_loss_fixed. But my loss value stays fixed after every epoch. Looks like weights are not being updated. Here is my code snippet: optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=0.0005) for epoch in T(range(20)): net.train() epoch_loss = 0 for n in range(len(x_train)//batch_size): (imgs, true_masks) = data_gen_small(x_train, y_train, iter_num=n, batch_size=batch_size) temp = [] for tt in true_masks: temp.append(tt.reshape(128, 128, 1)) true_masks = np.copy(np.array(temp)) del temp imgs = np.swapaxes(imgs, 1,3) imgs = torch.from_numpy(imgs).float().cuda() true_masks = torch.from_numpy(true_masks).float().cuda() masks_pred = net(imgs) masks_probs = F.sigmoid(masks_pred) masks_probs_flat = masks_probs.view(-1) true_masks_flat = true_masks.view(-1) print((focal_loss_fixed(tf.convert_to_tensor(true_masks_flat.data.cpu().numpy()), tf.convert_to_tensor(masks_probs_flat.data.cpu().numpy())))) loss = torch.from_numpy(np.array(focal_loss_fixed(tf.convert_to_tensor(true_masks_flat.data.cpu().numpy()), tf.convert_to_tensor(masks_probs_flat.data.cpu().numpy())))).float().cuda() loss = Variable(loss.data, requires_grad=True) epoch_loss *= (n/(n+1)) epoch_loss += loss.item()*(1/(n+1)) print('Step: {0:.2f}% --- loss: {1:.6f}'.format(n * batch_size* 100.0 / len(x_train), epoch_loss), end='\r') optimizer.zero_grad() loss.backward() optimizer.step() print('Epoch finished ! Loss: {}'.format(epoch_loss)) And this is my `focal_loss_fixed' function: def focal_loss_fixed(true_data, pred_data): gamma=2. alpha=.25 eps = 1e-7 # print(type(y_true), type(y_pred)) pred_data = K.clip(pred_data,eps,1-eps) pt_1 = tf.where(tf.equal(true_data, 1), pred_data, tf.ones_like(pred_data)) pt_0 = tf.where(tf.equal(true_data, 0), pred_data, tf.zeros_like(pred_data)) with tf.Session() as sess: return sess.run(-K.sum(alpha * K.pow(1. - pt_1, gamma) * K.log(pt_1))-K.sum((1-alpha) * K.pow( pt_0, gamma) * K.log(1. - pt_0))) After each epoch the loss value stays constant(5589.60328). What's wrong with it?
I think the problem lies in your heavy weight decay. Essentially, you are not reducing the weights by x, but rather multiplying them by x, which means that you are instantaneously only making very small increments, leading to a (seemingly) plateauing loss function. More explanation on this can be found in the PyTorch discussion forum (e.g., here, or here). Unfortunately, the source for SGD alone also does not tell you much about its implementation. Simply setting the weight decay to a larger value should result in better updates. You can start by leaving it out completely, and then iteratively reducing it (from 1.0) until you get more decent results.
https://stackoverflow.com/questions/52833049/