st80868
I’m not certain, but I use only the last output, so I think this may have a bad influence on backprop. I’ll check Keras again. Thank you.
st80869
Finally I found that I had misused the loss function torch.nn.CrossEntropyLoss. I changed the loss function to nn.NLLLoss (applied to log_softmax(output) and target), and now the loss decreases as expected.
st80870
Right. So now:

```python
class Net(nn.Module):
    ...
    def forward(self, x, hidden):
        x, hidden = self.rnn1(x, hidden)
        x = x.select(0, maxlen - 1).contiguous()
        x = x.view(-1, hidden_size)
        x = F.relu(self.dense1(x))
        x = F.log_softmax(self.dense2(x))
        return x, hidden

...
criterion = nn.NLLLoss()
...

def train():
    model.train()
    hidden = model.init_hidden()
    for epoch in range(len(sentences) // batch_size):
        X_batch = var(torch.FloatTensor(X[:, epoch*batch_size:(epoch+1)*batch_size, :]))
        y_batch = var(torch.LongTensor(y[epoch*batch_size:(epoch+1)*batch_size]))
        model.zero_grad()
        output, hidden = model(X_batch, var_pair(hidden))
        loss = criterion(output, y_batch)
        loss.backward()
        optimizer.step()
```
st80871
Yup, that looks good! Note that you can now pass hidden = None in the first iteration; the RNN will initialize a zero-filled hidden state for you. You might need to update PyTorch though.
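A minimal sketch of that first iteration, reusing the names from the train() loop above (model and X_batch are assumed to be defined as in the previous post):

```python
hidden = None  # no explicit initial state
output, hidden = model(X_batch, hidden)  # the RNN creates a zero-filled hidden state internally
```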
st80872
I have a question about the number of parameters in an RNN. I defined an RNN layer and got its parameters. I thought the number of parameters in an RNN layer should differ for different input lengths. However, when I use parameters() to get them, the number of parameters seems the same as for an RNN layer with only one time step. How should I understand this? Thank you!
st80873
Your model is going to be the same whatever the length of your input is. In Torch we used to clone the model as many times as there are time steps while sharing the parameters, because it is the same model, just unrolled over time. The number of parameters changes when your input dimensionality changes (the size of x[t], for a given t = 1, ..., T), not when T changes. If it is still not clear, you can go over my lectures on RNNs. And if it is still confusing, wait for the PyTorch video tutorials I’m currently working on.
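As a quick check of this, one can count the parameters of an nn.RNN directly; the count depends only on the input and hidden sizes, never on the sequence length fed in later (the sizes below are illustrative):

```python
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, num_layers=1)
n_params = sum(p.numel() for p in rnn.parameters())
# weight_ih: 16*8, weight_hh: 16*16, bias_ih: 16, bias_hh: 16
print(n_params)  # 416, regardless of how long the input sequences are
```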
st80874
Hi, sorry for reopening this topic. I also just moved to PyTorch from Keras, and I am super confused about how RNNs work. Specifically:

- What does ‘batch’ mean in the context of PyTorch?
- Since an RNN can accept variable-length sequences, can someone please give a small example of this?
- What is the difference between an RNN cell and an RNN? (http://pytorch.org/docs/nn.html#torch.nn.RNNCell vs. http://pytorch.org/docs/nn.html#rnn)
- For the RNN cell, why does the documentation say the input is (batch, input_size), while the example given in the documentation uses input = Variable(torch.randn(6, 3, 10))?

Thank you
st80875
Does it make sense that a stateless RNN had better performance than a stateful RNN?

```python
hidden = None
y_pred = []
for x_i in x.tolist():
    x_i = np.array([x_i])[:, np.newaxis]
    hidden = None  # Commented out in the stateful case.
    x_tensor = torch.Tensor(x_i).unsqueeze(0)
    prediction, hidden = rnn(x_tensor, hidden)
    hidden = hidden.data
    prediction = prediction.detach().numpy().flatten()
    y_pred.append(prediction)
```
st80876
Solved by Michael_D in post #4: Somehow I ran the experiments again and didn’t succeed in reproducing it. As expected, stateful had better results.
st80877
Could you explain what exactly you mean by a stateless RNN and what network topology you are using? Is rnn in your case a cell or a complete RNN? Do you pass a whole sequence or only one timestep of input? If the initial hidden state is not passed (None), internally a zero vector is used as the first hidden state. If conditioning on the initial hidden state is not beneficial, it is possible that the ‘performance’ of the model is better than using an additional context vector.
st80878
I suppose it’s a complete RNN. By stateless, I mean that in evaluation (prediction mode) I provide hidden = None on each iteration instead of preserving it from the output. Code for the RNN class:

```python
class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()
        self.hidden_dim = hidden_dim
        # define an RNN with specified parameters
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        # last, fully-connected layer
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x, hidden):
        batch_size = x.size(0)
        r_out, hidden = self.rnn(x, hidden)
        r_out = r_out.view(-1, self.hidden_dim)
        output = self.fc(r_out)
        return output, hidden
```
st80879
Somehow I ran the experiments again and didn’t succeed in reproducing it. As expected, stateful had better results.
st80880
I use a simple classification CNN for a multiple binary-classification task. My model’s output shape is like [10, 2]. How can I compute the loss?
st80881
Here, is 10 the number of binary classifications? If it is: since it is binary classification, you could instead produce an output shape of [10, 1] and use nn.BCEWithLogitsLoss. It combines Sigmoid + binary cross-entropy (so do not use a sigmoid activation just before it). For the multiple-classification aspect, I suppose your output shape is [batch_size, 10, 1] and your label shape is also [batch_size, 10, 1], filled with ones and zeros. By default the loss function will average over the 10 different tasks, which is probably what you want.
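A minimal sketch of that setup (the shapes and names are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

batch_size, n_tasks = 8, 10
logits = torch.randn(batch_size, n_tasks, 1)  # raw model outputs, no sigmoid applied
labels = torch.randint(0, 2, (batch_size, n_tasks, 1)).float()  # ones and zeros

criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy in one op
loss = criterion(logits, labels)    # averaged over the batch and the 10 tasks by default
```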
st80882
Yes, I’ve always heard that for classification it is better than MSE, but I have nothing to back it up.
st80883
If I’m not mistaken, the loss for a classification task using nn.MSELoss will be scaled down the closer the output gets to the target, which might slow down the training towards the end. @rasbt derives the formula here and you can find some additional information in this blog post (at the end).
st80884
Both are fine. Empirically, cross-entropy tends to result in better performance of the resulting models, though. Also, the derivative/gradient calculation plays nicely with sigmoid or softmax in the last layer, i.e., it simplifies to dL/dz = a - y (where a is the sigmoid activation, y is the true label, and z is the weighted input w^T x).
st80885
How can I set values of a tensor at a dynamic dimension? The shape of the tensor parameter can vary. For example, for tensor = torch.rand(size=(4, 5, 6)), how can I create a function like the one below?

```python
def ff(tensor, dim, pos, value):
    # this function should also work for tensors
    # with shapes like [7, 5, 6, 8], [11, 5, 7, 5, 6, 8], etc.
    if dim == 0:
        tensor[pos, :, :] = value
    elif dim == 1:
        tensor[:, pos, :] = value
    elif dim == 2:
        tensor[:, :, pos] = value
```
st80886
Solved by ptrblck in post #2 Depending on the operation, you could e.g. use tensor.index_copy_, which accepts a dim argument.
st80887
Depending on the operation, you could e.g. use tensor.index_copy_, which accepts a dim argument.
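A minimal sketch of what this could look like for the example tensor above; the shape handling is my own illustration, not from the original answer:

```python
import torch

tensor = torch.rand(4, 5, 6)
dim, pos, value = 1, 2, 7.0

# index_copy_ copies `src` into the slice at index `pos` along `dim`;
# `src` must match `tensor`'s shape except that dimension `dim` has size 1
src_shape = list(tensor.shape)
src_shape[dim] = 1
src = torch.full(src_shape, value)
tensor.index_copy_(dim, torch.tensor([pos]), src)

# alternatively, select() returns a view, so filling it modifies `tensor` in place
tensor.select(dim, pos).fill_(value)
```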
st80888
What is the best way to visualize my network (model)? Is it TensorBoard, or is there a better alternative? If TensorBoard is the best way, do I lose anything when TensorFlow is not installed? I get the message “TensorFlow installation not found - running with reduced feature set.” When I use TensorBoard, would visualizing the network (model) without TensorFlow be adversely affected?
st80889
Hi, I recently got into neural networks and machine learning and have a question. I built a small network which is supposed to take an input matrix (24x30) and an output matrix (24x30) and learn how to predict the output (basically 2D grayscale images). The idea is that the output matrix is a subsequent development of the input matrix (i.e. a temporal follow-up). My approach is to use a 2D convolution layer, ReLU, pooling, and a linear activation. Now my problem is that the prediction is a 1x30 vector, which makes sense due to the linear layer and the .view part of the network. Clearly I don’t understand the network well enough, so it would be nice if someone could help me out here. The code is as follows:

```python
class ConvNet(torch.nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=args.batchsize, out_channels=16, kernel_size=5)
        self.fc1 = torch.nn.Linear(in_features=2080, out_features=384)
        self.fc2 = torch.nn.Linear(in_features=384, out_features=90)
        self.fc3 = torch.nn.Linear(in_features=90, out_features=30)

    def forward(self, x):
        x = x.unsqueeze(0)
        x = torch.nn.functional.max_pool2d(torch.nn.functional.relu(self.conv1(x)), (2, 2))
        x = x.view(-1, self.num_flat_features(x))
        x = torch.nn.functional.relu(self.fc1(x))
        x = torch.nn.functional.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
```
st80890
Is there a good way to randomly set (or reset) some layers’ weights every time I train the model? (‘every time’ means at each training epoch.) In addition, what is the usual range of weights?
st80891
Solved by Prashant_Kalikotay in post #8

```python
def init_params(m):
    if type(m) == nn.Linear or type(m) == nn.Conv2d:
        m.weight.data = torch.randn(m.weight.size()) * .01  # random weight initialisation
        m.bias.data = torch.zeros(m.bias.size())

# for setting the weights you can use:
Model.apply(init_params)  # Model here is the model that you have created…
```
st80892
What do you mean by setting and resetting weights at different layers? You can look at different initialisations, like Xavier (Glorot) initialisation and some others. Initialisation of weights depends on what data you are working with.
st80893
I’m sorry, I did not express myself clearly. I mean, for specific layers of a model, is there a good way to randomly set (or reset) the layers’ weights on every training run?
st80894
I did not get your question well, but to my understanding: if you are building your own function, like a convolution or something, you can initialise your weights as torch.nn.Parameter. Otherwise you can use something like this:

```python
def init_params(m):
    if type(m) == nn.Linear or type(m) == nn.Conv2d:
        m.weight.data = torch.randn(m.weight.size()) * .01  # random weight initialisation
        m.bias.data = torch.zeros(m.bias.size())
```
st80895
I appreciate your timely help. What does the m in the function mean as a parameter? I guess it’s each layer in the model. If so, which function in PyTorch should I use to get the layers? If not, would you please tell me what it means?
st80896
m in my previous reply is a layer. A simple example of a layer, say a 2D convolutional layer, might be:

```python
import torch.nn as nn
layer = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, stride=1)
```

I think that is too basic a question, but never mind. You can check out deep learning with PyTorch courses online.
st80897
I’m sorry, again I didn’t express myself clearly. I mean, if m is a layer, how can I get and re-set the layer’s weights during training of the model? Is there a good way to do this?
st80898
```python
def init_params(m):
    if type(m) == nn.Linear or type(m) == nn.Conv2d:
        m.weight.data = torch.randn(m.weight.size()) * .01  # random weight initialisation
        m.bias.data = torch.zeros(m.bias.size())

# for setting the weights you can use:
Model.apply(init_params)  # Model here is the model that you have created
```
st80899
It’s the first time I’ve seen the use of the function model.apply(). It’s fantastic. Thank you very much for your help.
st80900
We can split a dataset by means of torch.utils.data.random_split. However, for reproducibility of the results, is it possible to save the split datasets to load them later?
st80901
You could use a seed for the random number generator (torch.manual_seed) and make sure the split is the same every time. Alternatively, you could split the sample indices, store each index tensor locally via torch.save, and use it in Subset.
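A minimal sketch of the second approach (variable names such as trainval_dataset and train_size are assumed from the question):

```python
import torch
from torch.utils.data import Subset

# split the indices once and store them
indices = torch.randperm(len(trainval_dataset))
train_idx, val_idx = indices[:train_size], indices[train_size:]
torch.save({'train': train_idx, 'val': val_idx}, 'split.pt')

# later: restore exactly the same split
split = torch.load('split.pt')
train_dataset = Subset(trainval_dataset, split['train'])
val_dataset = Subset(trainval_dataset, split['val'])
```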
st80902
Thank you ptrblck for a great answer, as always. It does work, but I stumbled upon a strange issue. I split my training set into training and validation sets using a deterministic seed, as mentioned:

```python
torch.manual_seed(0)
train_dataset, val_dataset = torch.utils.data.random_split(trainval_dataset, [train_size, val_size])
```

I then wanted to test the CNN on the validation set (using torchvision CIFAR10). When I test it on the test set, the accuracy is always the same, as expected. However, when I test it on the validation set, the accuracy changes. When delving into the code, I realized the problem is not with the validation set (whose targets are the same every time I run an instance of the script); rather, the network spits out different outputs. How is this possible? Especially since it appears when feeding the validation set but not the test set?
st80903
Yes, I am. I also just tried model.train() for the sake of completeness and the same erratic behavior happens.
st80904
To understand the issue completely: you are calling model.eval() and check the accuracy on the validation set. The validation set indices (passed to Subset) are definitely the same, but the accuracy changes for sequential runs?
st80905
Precisely. Here are examples of two different runs. I show the output of the same batch and print respectively: print(predicted.eq(targets).sum().item()) print(predicted.eq(targets)) print(predicted) print(targets) Targets are the same, but predicted values slightly vary. Run 1: 118 tensor([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device=‘cuda:0’, dtype=torch.uint8) tensor([4, 8, 1, 0, 2, 4, 3, 4, 5, 6, 4, 4, 5, 7, 3, 2, 4, 5, 1, 7, 9, 9, 7, 9, 2, 4, 1, 1, 4, 8, 2, 9, 7, 6, 9, 1, 2, 9, 1, 1, 5, 1, 7, 7, 9, 4, 3, 3, 4, 6, 0, 5, 5, 5, 7, 7, 0, 7, 0, 4, 7, 3, 6, 1, 4, 0, 4, 0, 3, 1, 4, 8, 7, 6, 3, 7, 0, 5, 2, 0, 8, 5, 0, 2, 9, 7, 2, 2, 2, 8, 9, 6, 1, 1, 9, 1, 4, 9, 4, 8, 7, 6, 4, 7, 7, 8, 0, 6, 7, 4, 7, 5, 8, 3, 1, 3, 9, 8, 5, 8, 4, 2, 3, 7, 7, 2, 5, 1], device=‘cuda:0’) tensor([4, 8, 1, 2, 2, 4, 3, 4, 5, 6, 4, 4, 5, 7, 5, 2, 4, 5, 1, 7, 1, 9, 7, 9, 2, 4, 1, 1, 4, 8, 2, 9, 7, 6, 9, 1, 2, 9, 1, 1, 5, 1, 7, 7, 9, 4, 3, 3, 4, 6, 0, 5, 5, 5, 7, 7, 0, 7, 0, 4, 7, 3, 6, 1, 5, 0, 4, 0, 3, 1, 4, 8, 7, 6, 3, 7, 2, 5, 3, 0, 8, 5, 0, 2, 9, 7, 2, 3, 2, 8, 9, 6, 1, 1, 9, 1, 4, 9, 4, 8, 7, 5, 4, 7, 7, 8, 0, 6, 7, 4, 7, 5, 8, 6, 1, 3, 1, 8, 5, 8, 4, 2, 3, 7, 7, 2, 5, 1], device=‘cuda:0’) Run 2 121 tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device=‘cuda:0’, dtype=torch.uint8) tensor([4, 8, 1, 2, 2, 4, 3, 4, 5, 6, 4, 4, 5, 7, 3, 2, 4, 5, 1, 7, 1, 9, 7, 9, 2, 4, 1, 1, 4, 8, 2, 9, 7, 6, 9, 1, 2, 9, 1, 1, 5, 1, 7, 7, 9, 4, 3, 3, 4, 6, 0, 5, 5, 3, 7, 7, 0, 7, 0, 4, 7, 3, 6, 1, 5, 0, 4, 0, 3, 1, 4, 8, 7, 6, 3, 7, 2, 5, 3, 0, 0, 5, 0, 2, 9, 7, 2, 2, 2, 8, 9, 6, 1, 1, 9, 1, 4, 9, 4, 8, 4, 5, 4, 7, 7, 8, 0, 6, 7, 4, 7, 5, 8, 3, 1, 4, 1, 8, 5, 8, 4, 2, 3, 7, 7, 2, 5, 1], device=‘cuda:0’) tensor([4, 8, 1, 2, 2, 4, 3, 4, 5, 6, 4, 4, 5, 7, 5, 2, 4, 5, 1, 7, 1, 9, 7, 9, 2, 4, 1, 1, 4, 8, 2, 9, 7, 6, 9, 1, 2, 9, 1, 1, 5, 1, 7, 7, 9, 4, 3, 3, 4, 6, 0, 5, 5, 5, 7, 7, 0, 7, 0, 4, 7, 3, 6, 1, 5, 0, 4, 0, 3, 1, 4, 8, 7, 6, 3, 7, 2, 5, 3, 0, 8, 5, 0, 2, 9, 7, 2, 3, 2, 8, 9, 6, 1, 1, 9, 1, 4, 9, 4, 8, 7, 5, 4, 7, 7, 8, 0, 6, 7, 4, 7, 5, 8, 6, 1, 3, 1, 8, 5, 8, 4, 2, 3, 7, 7, 2, 5, 1], device=‘cuda:0’)
st80906
Were you able to exactly reproduce the same model parameters for your runs? I.e., did you compare the state_dicts before running the validation loop? Even if you are seeding and getting the same data samples for your runs, the result might still differ, e.g. due to cudnn, as described in the Reproducibility docs.
st80907
I load the same pre-trained model in both cases. When running this model on the test set, the output is always the same. Do you think there is still room for such nondeterministic behavior?
st80908
Did you follow the advice from the reproducibility docs? Are you using any random transformations in your Dataset for the validation set?
st80909
```python
class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout, pad_idx):
        super().__init__()
        ...
        self.s = [torch.nn.Parameter(torch.arange(1, embedding_dim+1, dtype=torch.float), requires_grad=True)]
```

I can’t see self.s in

```python
for p in model.parameters():
    print(p)
```

but it does show up when it’s

```python
self.s = torch.nn.Parameter(torch.arange(1, embedding_dim+1, dtype=torch.float), requires_grad=True)
```

Is there another way to initialize multiple learnable parameters?
st80910
Solved by vainaijr in post #3: if you do

```python
self.s = nn.ParameterList([torch.nn.Parameter(torch.arange(1, embedding_dim+1, dtype=torch.float), requires_grad=True)])
```

then it would show up
st80911
Looks like you are putting it in a list in the case where it’s not working:

```python
self.s = [torch.nn.Parameter(torch.arange(1, embedding_dim+1, dtype=torch.float), requires_grad=True)]
# the square brackets create a list
```

That could be the problem: a list is a plain Python variable, and I think only tensors wrapped as parameters get registered.
st80912
If you do

```python
self.s = nn.ParameterList([torch.nn.Parameter(torch.arange(1, embedding_dim+1, dtype=torch.float), requires_grad=True)])
```

then it would show up
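A quick way to verify this, in a stripped-down module (the class here is a hypothetical stand-in for the CNN from the question):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self, embedding_dim=4):
        super().__init__()
        self.s = nn.ParameterList([
            nn.Parameter(torch.arange(1, embedding_dim + 1, dtype=torch.float))
        ])

m = M()
for p in m.parameters():
    print(p)  # the parameter inside the ParameterList now shows up
```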
st80913
Issue description: spectral_norm used on nn.Linear is okay, but when it’s used on nn.RNN there is a RuntimeError while running model = network().cuda():

```
Traceback (most recent call last):
  File "/DATA/119/yhe/soft/anaconda3/envs/env1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in
    runfile('/DATA/119/yhe/code/crit/mnist/example.py', wdir='/DATA/119/yhe/code/crit/mnist')
  File "/home/yhe/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/home/yhe/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/DATA/119/yhe/code/crit/mnist/example.py", line 17, in
    model = network().cuda()
  File "/DATA/119/yhe/soft/anaconda3/envs/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 265, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/DATA/119/yhe/soft/anaconda3/envs/env1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/DATA/119/yhe/soft/anaconda3/envs/env1/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "/DATA/119/yhe/soft/anaconda3/envs/env1/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: param_from.type() == param_to.type() ASSERT FAILED at /pytorch/aten/src/ATen/native/cudnn/RNN.cpp:541, please report a bug to PyTorch. parameter types mismatch
```

Code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class network(nn.Module):
    def __init__(self):
        super(network, self).__init__()
        self.sn_rnn = nn.utils.spectral_norm(nn.RNN(20, 10, 1), name='weight_hh_l0')
        # self.fc = nn.utils.spectral_norm(nn.Linear(20, 10), name='weight')

    def forward(self, x):
        # x = F.tanh(self.fc(x))
        out, _ = self.sn_rnn(x)
        return F.log_softmax(x, dim=1)

if __name__ == '__main__':
    x = torch.randn(2, 10, 20).cuda()
    model = network().cuda()
    out = model(x)
    print('end')
```

Environment:
- PyTorch version: 1.1.0
- Python version: 3.6.8
- CUDA/cuDNN version: cuda9.0
st80914
I tried to implement this loss using PyTorch, but as a beginner it is difficult for me… Could you translate this operation into PyTorch code? It would be very helpful for me. Thank you.
st80915
Question 1: If I have multiple separate output head layers of a network, with the one in use selected either by a network input or by a one-hot encoded matrix (I have done both implementations), is loss propagated according to the head? For example, if there were two heads, and head two were never used in the forward pass for a batch, would the weights exclusive to head two be unchanged? To give a more specific context: if they are both Q-value predictors for different tasks, can I simply sum the losses for each task together (calculated by a complex comparison to the relevant task rewards), and does torch automatically make weight changes proportionally? If head one’s values were perfect and head two’s values were awful, would summing the losses affect the heads equally or not?

Question 2: If I have a distribution constructed from a softmax output, such as torch.distributions.Categorical, but I sometimes want to be able to choose actions deterministically (like DDPG), how can I select the highest-probability output from the distribution rather than sampling randomly according to the probabilities?
st80916
To question 1: the computation graph is only created for the operations which were used in the forward pass. Autograd will thus only calculate the gradients for the parameters which were involved in these calculations. However, if you are using e.g. an optimizer with running estimates, all parameters with valid running stats will be updated. Example:

```python
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(1, 1)
        self.fc2 = nn.Linear(1, 1)

    def forward(self, x, idx):
        if idx == 0:
            x = self.fc1(x)
        elif idx == 1:
            x = self.fc2(x)
        return x

model = MyModel()
optimizer = optim.Adam(model.parameters(), lr=1.)

x = torch.randn(1, 1)
output = model(x, idx=0)
output.backward()
print('fc1.weight.grad ', model.fc1.weight.grad)
print('fc2.weight.grad', model.fc2.weight.grad)
print('Before 1st optimization')
print(model.fc1.weight)
print(model.fc2.weight)
optimizer.step()
optimizer.zero_grad()
print('After')
print(model.fc1.weight)
print(model.fc2.weight)

output = model(x, idx=1)
output.backward()
print('fc1.weight.grad ', model.fc1.weight.grad)
print('fc2.weight.grad ', model.fc2.weight.grad)
print('Before 2nd optimization')
print(model.fc1.weight)
print(model.fc2.weight)
optimizer.step()
print('After')
print(model.fc1.weight)
print(model.fc2.weight)
```

Output:

```
fc1.weight.grad  tensor([[-0.4842]])
fc2.weight.grad None
Before 1st optimization
Parameter containing: tensor([[-0.7616]], requires_grad=True)
Parameter containing: tensor([[-0.8436]], requires_grad=True)
After
Parameter containing: tensor([[0.2384]], requires_grad=True)
Parameter containing: tensor([[-0.8436]], requires_grad=True)
fc1.weight.grad  tensor([[0.]])
fc2.weight.grad  tensor([[-0.4842]])
Before 2nd optimization
Parameter containing: tensor([[0.2384]], requires_grad=True)
Parameter containing: tensor([[-0.8436]], requires_grad=True)
After
Parameter containing: tensor([[0.9085]], requires_grad=True)
Parameter containing: tensor([[0.1564]], requires_grad=True)
```

I’m not sure I understand the second question completely (I’m not really experienced in RL), but if you select the highest value e.g. via torch.max, only the max value will get a valid gradient:

```python
x = torch.randn(1, 10, requires_grad=True)
out, idx = torch.max(x, 1)
out.backward()
print(x.grad, idx)
# > tensor([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]]) tensor([5])
```

Would that work, or am I misunderstanding the question?
st80917
Firstly, thank you for the reply. For the first question, that is as I thought, but the example confuses me: in the second case, you feed through only fc2 and the gradient for fc1’s weights is 0, yet the weight for fc1 still changes? Is this a result of using Adam?

To put this another way, let’s say I have a rewards tensor of N x T, where N is the batch size and T is the number of tasks. I have some input of states of N x (irrelevant) and output a value for these states. The input is fed through some shared layers, and then separate network head layers feed out values. These values are generated by each network head, so I have T heads, each outputting an N x 1 tensor of values.

Case 1: I could concatenate this into an N x T tensor we’ll call QVals. If I now call critic_loss = nn.MSELoss(QVals, rewards), then critic_loss.backward(), and step the optimiser, does this act as I wish it to, training each part according to the relevant loss?

Case 2: If I instead do not concatenate these outputs, but output T N x 1 tensors and compare them to individual reward tensors (T N x 1 tensors) by MSE loss again, I now have T loss values. If I were to call backward() on each of these individually and then step() after all T backward calls, would that be equivalent to case 1?

Case 3: As case 2, but I call backward(), step(), and zero_grad() for each of the T losses. How does this differ from cases 1 and 2?

Case 4: I feed through the network completely separately for each task, outputting from only one head at a time (I modify my network to also take the head to use as an argument). The input data is exactly the same. I can calculate/backprop losses again as per case 2 or case 3. How does this differ?

I appreciate I’m asking some potentially difficult questions, so thank you a lot for your help. If there is any material you could link me to that may help me understand, that would be greatly appreciated.

For question 2 (this is unrelated to question 1): if I have a distribution dist, and normally I sample from dist with .sample(), how can I instead select the value with the highest probability from dist? If I have p(a) = 0.3, p(b) = 0.6 and p(c) = 0.1, sample() is going to output a, b, or c according to these probabilities, which is what I want for one model. But I also have a second model for which I want to reuse the same code, except that instead of calling sample() I wish to always output the entry with the highest probability from dist, in this case always b.
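For the deterministic selection being asked about, a minimal sketch (assuming a Categorical distribution; the probabilities are the ones from the example):

```python
import torch
from torch.distributions import Categorical

probs = torch.tensor([0.3, 0.6, 0.1])  # p(a), p(b), p(c)
dist = Categorical(probs)

stochastic_action = dist.sample()           # a, b, or c according to the probabilities
deterministic_action = dist.probs.argmax()  # always b, the highest-probability action
```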
st80918
I have an input of size BxCxHxW and a label of size BxHxW, where B is the batch size. We often compute the loss like this:

```python
criterion = nn.CrossEntropyLoss()
pred = model(input)
loss = criterion(pred, label)
```

If I want to compute the loss for each batch element, I would use:

```python
criterion = nn.CrossEntropyLoss()
pred = model(input)
loss = 0
for i in range(B):
    loss += criterion(pred[i:i+1, ...], label[i:i+1, ...])
```

Does the second approach give the same result as the first one? Thanks
st80919
It won’t produce the same loss, as the default reduction in nn.CrossEntropyLoss calculates the mean loss value for the batch. If you set reduction='sum', you should get the same loss. However, if you need the loss for each batch element, just disable the reduction via reduction='none' (related topic).
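A minimal sketch of the reduction='none' route for the shapes in the question (the sizes are illustrative):

```python
import torch
import torch.nn as nn

B, C, H, W = 4, 3, 8, 8
pred = torch.randn(B, C, H, W)          # model output
label = torch.randint(0, C, (B, H, W))  # class indices

criterion = nn.CrossEntropyLoss(reduction='none')
loss = criterion(pred, label)           # shape [B, H, W], no reduction applied
per_sample = loss.mean(dim=(1, 2))      # one loss value per batch element
```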
st80920
pip3 install torch===1.2.0 torchvision===0.4.0 -f https://download.pytorch.org/whl/torch_stable.html does not work anymore, since the links page no longer lists 1.2.0: https://download.pytorch.org/whl/torch_stable.html
st80921
Solved by ptrblck in post #2 Could you try to install the wheels again, as they might have been accidentally taken down as described here.
st80922
Could you try to install the wheels again, as they might have been accidentally taken down, as described here.
st80923
I trained a model with batch = 500. I saved the trained weights and want to load them back into the model for training with batch = 100, but I get an error from BatchNorm1d. Can I get around this error so as not to lose my stored weights during training?
st80924
It is difficult for me to ask the right question, but when people understand me, I get a good answer. I cannot continue exploring the network until I find the answer to my question. In the beginning, I trained the network on one sample per iteration. This took a very long time. Now I feed the network a batch of samples, and it works well. But I need to test the network by feeding one sample at a time. When I load the weights with load_state_dict and try to feed samples one by one, I get an error, since during training nn.BatchNorm1d saw the batch size, while during the test I feed a size of one. How do I solve this problem?
st80925
I used BatchNorm1d and it gave me an error. But I increased the dimension to 4D and used BatchNorm2d, and the error went away.
st80926
Call model.eval() before passing single samples to the model. This will make sure the running stats are used in the batch norm layers and should avoid this error. PS: don’t forget to add a batch dimension even for a single sample.
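A minimal sketch of that evaluation pattern (the model and sample shape are assumptions for illustration):

```python
import torch

model.eval()  # batch norm layers now use their running stats
with torch.no_grad():
    sample = torch.randn(64, 100)      # a single sample, e.g. [channels, length]
    out = model(sample.unsqueeze(0))   # add the batch dimension -> [1, 64, 100]
```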
st80927
I tried two options, but I don’t know if they behave the same way.

1. out = out.contiguous().view(1, batch, 1, wn1) -> self.bn1 = nn.BatchNorm2d(batch, affine=True)
2. out = out.contiguous().view(1, 1, batch, wn1) -> self.bn1 = nn.BatchNorm2d(batch, affine=True)

In the first case, BatchNorm2d takes batch channels and normalizes each example separately. In the second case, BatchNorm2d takes 1 channel and normalizes across all examples. The second case allows me to change the batch size without problems. Training behaves the same in both cases.
st80928
The second example should fail, since the specified channels in bn1 (batch) do not equal the passed channels (1). If you are dealing with a temporal signal in the shape [batch_size, channels, sequence_length], I would stick to nn.BatchNorm1d and just call model.eval() for the test/validation case. Reshaping your data such that the batch size is in dim1, is most likely not what you want.
st80929
slavavs: out = out.contiguous().view(1, 1, batch, wn1) -> self.bn1 = nn.BatchNorm2d(batch, affine=True)

Sorry, I meant: 2. out = out.contiguous().view(1, 1, batch, wn1) -> self.bn1 = nn.BatchNorm2d(1, affine=True)
st80930
That would be syntactically correct. However, I still think you shouldn’t reshape the tensor to get the batch size in dim1 in your current implementation. Is nn.BatchNorm1d and model.eval() not working? If so, could you post the stack trace so that we could have a look?
st80931
ptrblck: PS: don’t forget to add a batch dimension even for a single sample.

Help me understand this solution. How can I add a batch dimension if I have one sample? Do I need to use the batch size with which I trained the model, or set the size to 1?
st80932
For a single sample you should set the batch size to 1 using data = data.unsqueeze(0).
st80933
I was just trying to look things up in the PyTorch documentation, but my Google Chrome Helper (renderer) starts consuming at least 75ish% of my CPU, sometimes going over 104%, and sometimes even the GPU appears at the top of my Activity Monitor (something I had never seen before). What is going on? Why is the PyTorch documentation consuming my computer so violently?
st80934
I have a 4D tensor x, and a 2D index tensor i of shape (N, 3), where i[n] is an index over the first 3 dimensions of x. I would like to extract the x values at these indices. After some trial and error, I found that the following does what I want: result = x[i[:, 0], i[:, 1], i[:, 2]] I was wondering if there was a better way to do so. I looked at torch.gather and torch.index_select, but they seem to be for 1D indices. Any ideas?
st80935
Solved by ptrblck in post #2 Would splitting the index work? x[idx.split(1, 0)]
st80936
When I use

```python
x = nf.batch_norm(x, running_mean=self.bn1_mean, running_var=self.bn1_var,
                  weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)
```

in my net, I defined self.bn1_mean and self.bn1_var, but when running training I get RuntimeError: the derivative for ‘running_mean’ is not implemented. Why?
st80937
Hi @sfancc, this is related to this GitHub issue. running_mean and running_var are only initialized by those values (bn1_mean and bn1_var) but are not trainable parameters.
st80938
I have a tensor x of shape [100, 64, 256, 256]. x = [x0, x1, ..., x99], so each x_i is [64, 256, 256]. For each i, I want the maximum over all 100 x_i except x_i itself. E.g. for x0 I want torch.max((x1, x2, x3, ..., x99)), and for x1 I want torch.max((x0, x2, x3, ..., x99)). How can I write this in a memory-efficient way? My current approach is to first build an index tensor of shape [100, 99]:

```python
index = torch.tensor(
    [[1, 2, 3, 4, 5, ..., 99],
     [0, 2, 3, 4, 5, ..., 99],
     [0, 1, 3, 4, 5, ..., 99],
     ...
     [0, 1, 2, 3, 4, ..., 98]]
)
```

then

```python
out = torch.max(x[index], 1)
```

BUT this may cost too much memory; I think the problem is that x[index] is too big, especially since I need gradients for back-propagation. Is there a better way to do this?
st80939
Solved by andreaskoepf in post #2: You could run a loop over the 100 entries in the first dimension, e.g.:

```python
import torch
#x = torch.rand(100, 64, 256, 256)
x = torch.rand(100, 64, 3, 3)  # reduced for testing

def max_exclusive(x, i):
    a, b = x[:i], x[i+1:]
    if a.nelement() == 0:
        return torch.max(b, 0)[0]
    if b.neleme…
```
st80940
You could run a loop over the 100 entries in the first dimension, e.g.:

```python
import torch

#x = torch.rand(100, 64, 256, 256)
x = torch.rand(100, 64, 3, 3)  # reduced for testing

def max_exclusive(x, i):
    a, b = x[:i], x[i+1:]
    if a.nelement() == 0:
        return torch.max(b, 0)[0]
    if b.nelement() == 0:
        return torch.max(a, 0)[0]
    return torch.max(torch.max(a, 0)[0], torch.max(b, 0)[0])

#y = torch.cat([max_exclusive(x, i).unsqueeze(0) for i in range(x.size(0))])
# or manually if you do not want to store intermediate results:
y = x.new_empty(x.size())
for i in range(x.size(0)):
    y[i] = max_exclusive(x, i)

print(y.size())
```
st80941
How do I set the dtype for torch.nn.Conv2d? It seems that the default dtype for torch.nn.Conv2d is float. I want to explicitly set the dtype for torch.nn.Conv2d, as I set the dtype for a tensor like this: a = torch.tensor([1, 2, 3], dtype=torch.int8)
st80942
Solved by albanD in post #2: Hi, you can use your_mod.type(dtype) to change its type, or your_mod.double() to change to double precision. Be careful: the conv module is not implemented for every type.
st80943
Hi, you can use your_mod.type(dtype) to change its type, or your_mod.double() to change to double precision. Be careful: the conv module is not implemented for every type.
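A minimal sketch of both options (the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
conv = conv.double()  # equivalent to conv.type(torch.float64) for the parameters

x = torch.randn(1, 3, 8, 8, dtype=torch.float64)  # input dtype must match
out = conv(x)
print(out.dtype)  # torch.float64
```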
st80944
Can I know which version of PyTorch is compatible with ONNX version 1.3? Currently, I have a .pth model which was converted to .onnx using PyTorch v0.4.1.
st80945
I have a numpy array of tensor values, [tensor(1), …, tensor(n)]. How do I convert it to the numpy array [1, …, n]? Thanks, Robert
st80946
torch.cat(list_of_tensors).numpy() should work. Let me know, if you get any errors.
st80947
Thanks, no, I got this error: TypeError: cat(): argument 'tensors' (position 1) must be tuple of Tensors, not numpy.ndarray
st80948
It seems you are already dealing with numpy arrays, not PyTorch tensors. In that case you could try to use np.concatenate.
st80949
Yes, the array is numpy, but it contains tensor elements. I need to convert these tensor elements to non-tensor values.
st80950
ptrblck: concatenate

I fixed it like this:

```python
tensor_arr = [torch.tensor(1), …, torch.tensor(n)]
var = [e.item() for e in tensor_arr]
```
st80951
For example: my neural network model takes x as input and outputs a two-dimensional vector y = model(x), where y = (y1, y2), and I want to restrict the output so that 0 < y1 < y2 < 1. Restricting to the domain (0, 1) is easy, as I can just apply sigmoid() as the final activation, but how can I satisfy the ordering constraint nicely while still allowing backpropagation? I’m not sure whether sorting the output as y = sorted(y) in the last layer allows backpropagation at all.
st80952
Sorting is not differentiable. You can do the following trick:

```python
y1, y2 = model(x)
y2 = torch.logsumexp(torch.stack([y1, y2]), dim=0)
```

Since LogSumExp is a smooth maximum, y2 will also be greater than y1 (see https://en.wikipedia.org/wiki/Smooth_maximum). Then apply sigmoid or any other non-decreasing function that transforms the input to the (0, 1) range.
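Putting the whole constraint together, a minimal sketch (model and x are assumed; the batch handling is my own illustration):

```python
import torch

raw = model(x)  # shape [batch, 2], unconstrained outputs
y1_raw, y2_raw = raw[:, 0], raw[:, 1]

# smooth maximum: logsumexp(y1, y2) = log(e^y1 + e^y2) > y1
y2_smooth = torch.logsumexp(torch.stack([y1_raw, y2_raw]), dim=0)

y1 = torch.sigmoid(y1_raw)     # in (0, 1)
y2 = torch.sigmoid(y2_smooth)  # in (0, 1), and y2 > y1 since sigmoid is increasing
```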
st80953
I’m trying to implement a trainable hyperparameter as a member of an nn.Module inheritor, and I want it to be moved to the same device as the rest of the module with child_module.to(my_device). I have done this with

```python
self.log_alpha = torch.tensor(0., requires_grad=True)
```

then overridden child_module.to() to move it manually and call the parent method:

```python
def to(self, *args, **kwargs):
    super(Trainer, self).to(*args, **kwargs)
    device, _, _ = torch._C._nn._parse_to(*args, **kwargs)
    self.log_alpha.to(device)
    self.conf.trainer_device = device
```

This little hack works just fine, but I read that the correct way to do this was with register_buffer:

```python
self.register_buffer('log_alpha', torch.tensor(0., requires_grad=True))
```

However, doing it this way, log_alpha doesn’t get updated by its optimizer, whereas it does with my old hack. Am I using register_buffer() incorrectly, or is it not supposed to be used with trainable params?
st80954
Solved by ptrblck in post #2 If self.log_alpha is trainable (requires gradients), you should wrap it in an nn.Parameter, which will also make sure to move the tensor to the device.
st80955
If self.log_alpha is trainable (requires gradients), you should wrap it in an nn.Parameter, which will also make sure to move the tensor to the device.
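A minimal sketch of the nn.Parameter version (the Trainer class and optimizer choice are illustrative):

```python
import torch
import torch.nn as nn

class Trainer(nn.Module):
    def __init__(self):
        super().__init__()
        # registered as a parameter: trainable and moved by .to()/.cuda()
        self.log_alpha = nn.Parameter(torch.tensor(0.))

trainer = Trainer()
trainer.to('cuda' if torch.cuda.is_available() else 'cpu')  # log_alpha moves with the module
optimizer = torch.optim.Adam([trainer.log_alpha], lr=1e-3)
```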
st80956
I am using the MultiLabelMarginLoss loss function. The target values to the loss function are the correct class indices followed by -1. For example, if the correct label indices are 3 and 4, the target tensor looks something like this:

```python
target = torch.tensor([3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]).to(device)
```

Is this the correct format for the target?
st80957
Solved by nihalnayak in post #2 This is the correct tensor format for MultiLabelMarginLoss.
st80958
Hello PyTorch forum, I previously installed PyTorch 1.0 from source on my Mac OSX 10.12 with CUDA 9.0 and cuDNN 7.0; it runs fine with external GPU support connecting an NVIDIA GTX Titan X (compute capability 5.2). I cloned the source again and tried to upgrade to PyTorch 1.2 with python3 setup.py install (I do not use conda). It runs until it breaks with:

```
[ 39%] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/__/aten/src/THC/torch_generated_THCStorage.cu.o
nvcc fatal : Unsupported gpu architecture 'compute_70'
...
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '4']' returned non-zero exit status 2.
```

Does anyone know what could be wrong, please? Is it due to the compute capability of my GPU being below 7.0? Thanks
st80959
Are you accidentally using an older CUDA version? It looks like nvcc doesn’t support compute capability 7.0, which should be supported starting from CUDA9.0. Could you check the CUDA version via nvcc --version? PS: you could try to set the architecture specifically for your GPU via TORCH_CUDA_ARCH_LIST=6.1.
st80960
Thank you for your reply. For the CUDA version, here is what I have from nvcc:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_13:16:23_CDT_2017
Cuda compilation tools, release 9.0, V9.0.175
```

When I try running

```
export TORCH_CUDA_ARCH_LIST=6.1
python3 setup.py install
```

I get the same errors. (Screenshot of the full output attached.)
st80961
That’s strange. Could you run python setup.py clean and rerun both lines to reinstall it?
st80962
Thank you for following up. I ran python3 setup.py clean, then tried the compilation again from scratch as

MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

(or also just TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install). It still breaks at 39%, but the report now mentions fatal error: 'string.h' file not found instead of the previous nvcc fatal error. I wonder if I should clone again or if something else is missing. (Screenshot of the new report attached.) Thanks!
st80963
It seems to be an Xcode issue, as reported here. Could you run xcode-select --install and check if it helps?
st80964
Thank you. This one ran without issues: xcode-select --install. Then I tried again from scratch:

MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

This got me further, but as for some others in the thread you pointed to, it did not solve everything. Now it breaks at 75%:

```
make[1]: *** [caffe2/CMakeFiles/torch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 759, in
    build_deps()
  File "setup.py", line 321, in build_deps
    cmake=cmake)
```

I cannot see by myself where the problem could come from (but it seems PyTorch related). Do you have any more guesses, please? (Screenshot attached.)
st80965
Does your compiler support all C++11 standards? The const char* what_arg argument of out_of_range was added in C++11, as seen here. I’ve never used macOS, but I assume you are compiling with clang? If so, which version are you using?
st80966
Sorry for the delayed reply. Regarding the compiler I have:

```
clang --version
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
```

I tried updating with brew install llvm, which installs but is "keg-only". I am currently running again:

MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

If it doesn’t work, I will python3 setup.py clean and try running the compilation again with:

export LDFLAGS="-L/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"

I’ll keep you updated on whether that works or still raises errors. Thanks!
st80967
Update after trying brew install llvm: if I open python3 I still have the following:

```
Python 3.7.3 (default, Jun 6 2019, 12:03:32)
[Clang 8.0.0 (clang-800.0.42.1)] on darwin
```

I tried both of these, each from scratch after setup.py clean:

MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install
LDFLAGS="-L/usr/local/opt/llvm/lib" CPPFLAGS="-I/usr/local/opt/llvm/include" TORCH_CUDA_ARCH_LIST=6.1 python3 setup.py install

In every case the compilation breaks with the same error at 75%, which I cannot solve so far. According to http://clang.llvm.org/cxx_status.html, "Clang 3.3 and later implement all of the ISO C++ 2011 standard." And I currently have:

Xcode 8.2.1 (Build version 8C1002)
clang: Apple LLVM version 8.0.0 (clang-800.0.42.1)

However, I am not sure how to relate my clang version to 3.3 and later for checking C++11 support.