st83268
I’m not too sure that pytorch version >= 1.2.0 (nightly now) includes Transformer modules.
st83269
It should; see the release notes: https://github.com/pytorch/pytorch/releases/tag/v1.2.0. Which version are you running?
st83270
Why is that? If it’s not just a temporarily missing feature, what alternatives are out there? I assume torch.solve works in most cases, but that doesn’t exploit the positive definiteness of the matrix:
l = torch.cholesky(x)
x_inv_u = torch.cholesky_solve(u, l)
vs.
x_inv_u = torch.solve(u, x)
st83271
Have you found any workaround for this? I’m working with GPs, and for numerical stability reasons the consensus is that you need cholesky_solve… which isn’t supported for .backward calls, as you’ve pointed out.
st83272
No, I don’t currently need the derivatives, but I might in the future. It seems that this is an open issue; perhaps it will get prioritized if more people request it.
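For readers hitting the same limitation, one possible workaround (a sketch, not an official recommendation) is to express cholesky_solve as two triangular solves, which did have backward support at the time; x is assumed to be symmetric positive definite and u a matching right-hand side.

```python
import torch

a = torch.randn(5, 5)
x = (a @ a.t() + 5 * torch.eye(5)).requires_grad_(True)  # symmetric positive definite
u = torch.randn(5, 3)

L = torch.cholesky(x)                                        # x = L @ L.t()
z, _ = torch.triangular_solve(u, L, upper=False)             # solve L z = u
x_inv_u, _ = torch.triangular_solve(z, L.t(), upper=True)    # solve L.t() y = z

x_inv_u.sum().backward()   # gradients reach x, which cholesky_solve could not provide at the time
print(x.grad.shape)
```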
st83273
Hi, I am using PyTorch to build a CNN model, but I am facing a problem with data generation. Here are my data shapes:
X_train = (-1, 1, 169999, 200)
X_test = (-1, 1, 3000, 200)
y_train = (169999, 200)
y_test = (3000, 200)
train = torch.utils.data.TensorDataset(X_train, y_train)
test = torch.utils.data.TensorDataset(X_test, y_test)
But after that I am getting this error:
assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
AssertionError
st83274
Solved by albanD in post #6 You need to check that X_train.size(0) is the same as y_train.size(0).
st83275
I guess your problem is that the tensors for y_* are missing the batch dimension? Each training sample should have its corresponding label.
st83276
It depends on what your task is. PyTorch always assumes that the first dimension is the batch dimension. If you do regression, then X_train and y_train should have exactly the same size. If you are doing classification, y_train will be a 1D tensor of size X_train.size(0) (the batch size).
st83277
I am doing a classification task. I want my X to be four-dimensional and my y to be one-dimensional, but I get an error when I try to stack them together with train = torch.utils.data.TensorDataset(X_train, y_train); the error comes from this line.
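To make the shape requirement concrete, here is a minimal sketch (with made-up sizes) showing that TensorDataset only needs the first (batch) dimension of X and y to match; for classification, y is a 1-D tensor of class indices.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

n_train = 1000                                   # hypothetical number of samples
X_train = torch.randn(n_train, 1, 200, 200)      # 4-D inputs: (N, C, H, W)
y_train = torch.randint(0, 10, (n_train,))       # 1-D class indices, one per sample

assert X_train.size(0) == y_train.size(0)        # this is the check that fails in the question

train = TensorDataset(X_train, y_train)
loader = DataLoader(train, batch_size=32, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)                        # torch.Size([32, 1, 200, 200]) torch.Size([32])
```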
st83278
I built PyTorch 1.2.0 from source and it works, but it doesn’t register with pip, so when I try to install torchvision with pip, torchvision tries to download its own PyTorch. How can I get it registered with pip? I have checked, and calling python and pip from the command line both trigger the ones in my conda environment.
st83279
Try installing torchvision via pip install --no-deps torchvision. Alternatively, you could of course build torchvision from source.
st83280
Say I passed b1 through my net like this: y1 = my_net(b1) Then, I immediately pass another batch to get a different output, like this: y2=my_net(b2) Does that create two graphs? I’m computing something based on previous two results and then doing backprop some_function(y1, y2).backward() Is this ‘safe’?
st83281
Solved by Mazhar_Shaikh in post #2 Hi John_Deterious, This is safe. As long as y1 and y2 are retained in the memory, the graphs remain. It will be destroyed if you delete the tensors. Calling backward also destroys the graph, unless called with the “retain_graph=True” argument.
st83282
Hi John_Deterious, This is safe. As long as y1 and y2 are retained in the memory, the graphs remain. It will be destroyed if you delete the tensors. Calling backward also destroys the graph, unless called with the “retain_graph=True” argument.
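As a small illustration of that answer, a sketch with a toy stand-in for the network: two forward passes build two graphs that share parameters, and a single backward through a function of both outputs accumulates gradients from both.

```python
import torch
import torch.nn as nn

my_net = nn.Linear(4, 2)            # toy stand-in for the real network
b1 = torch.randn(8, 4)
b2 = torch.randn(8, 4)

y1 = my_net(b1)                     # builds graph 1
y2 = my_net(b2)                     # builds graph 2 (same parameters)

loss = (y1 - y2).pow(2).mean()      # some_function(y1, y2)
loss.backward()                     # gradients from both passes accumulate in my_net's parameters
print(my_net.weight.grad.shape)
```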
st83283
Hi, I’m about to embark on a project doing distributed variational inference on (Bayesian) deep learning. What are the existing resources for Bayesian deep learning in PyTorch – or more broadly in Python at the moment? (I’m only aware of ZhuSuan, https://github.com/thu-ml/zhusuan, which is designed on top of TensorFlow.)
st83284
Thanks for the reply. It looks to me that BoTorch primarily supports Bayesian Optimization on (frequentist) neural networks. Is it going to be expanded to include Bayesian inference for deep models?
st83285
What should I do if the input to my model consists not only of tensors but also of Python lists? That is: model = Net(); add_graph(model, (Tensor1, Tensor2, list)). In this case, I get errors. How can I pass a list as an input to add_graph?
st83286
Hi, According to the current implementation: for bags of constant length, nn.EmbeddingBag with mode=sum is equivalent to nn.Embedding followed by torch.sum(dim=1). In other words, the embeddings all have the same weight, i.e. 1. I wonder if it is possible to weight the embeddings before summing them up or if there is any efficient way to do so? Currently what I did is first use nn.Embedding to extract embeddings, then multiply with weights, and finally summing them up. This, however, is very inefficient since we need to instantiate the intermediate embeddings. Is it possible to add such feature? Or perhaps providing some hints on how to do this effectively. Thanks for the help!
st83287
Hi, you can reformulate this as a matrix (of stacked embedding vectors) multiplied with a (weight) vector. Then (possibly after sprinkling transpose on the weight vector and result) you can use torch.mv. For batch operation it might be easier to make the weight vector into an n×1 matrix and use torch.matmul. Best regards Thomas
st83288
Hi Thomas, thanks for your reply. I guess my question is not clear enough. Here is a more concrete example: say I have two sentences of different length. When computing the embedding of the two sentences, one naive way to do this is simply using nn.EmbeddingBag with mode='sum' or mode='mean'. However, what I want is a sort of attention on the embeddings, i.e. rather than w_1 + w_2 + … + w_n, I’m looking for a_1 * w_1 + … + a_n * w_n, where both the attention weights a and the embeddings w are learnable. So what I did is first use nn.Embedding to extract the vectors, then weight them. Finally, since multiple sentences may have different lengths, I used nn.EmbeddingBag to sum the corresponding word embeddings by providing the offsets. The solution that you suggest seems to me like a dense version, where there will be lots of zeros in the matrix. I believe it will consume way more memory. What I’m looking for is a memory-efficient (sparse) way to deal with this. Please correct me if I’m misunderstanding something.
st83289
Hi, indeed, you have me confused. I thought that you had embeddings “per seen word” from nn.Embedding (and can afford the memory for them) and wanted a weighted sum over them without an intermediate “multiply with weights” step. As far as I understand, EmbeddingBag avoids the “per seen word” memory allocation by (in its slow CPU version) using Tensor.index_add_. Indeed I am unaware of a way to do this and have the weights applied in the same step. Best regards Thomas
st83290
You could weight the embeddings first, then use F.embedding_bag to sum them efficiently.
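If a recent enough PyTorch is available, nn.EmbeddingBag also accepts per-sample weights directly (the per_sample_weights argument, added around version 1.1, only with mode='sum'), which avoids materializing the intermediate embeddings. A sketch with made-up sizes:

```python
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=64, mode='sum')

# two "sentences" of different length packed into one flat index tensor
tokens = torch.tensor([3, 17, 52, 9, 240, 11, 7])   # 4 tokens + 3 tokens
offsets = torch.tensor([0, 4])                      # start index of each bag
weights = torch.rand(7)                             # one attention weight per token (learnable in practice)

out = emb(tokens, offsets, per_sample_weights=weights)  # shape (2, 64): weighted sum per bag
print(out.shape)
```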
st83291
When I try to calcuate the loss value of the expected value and the label value,I got the exception, even though there are some simliar topic, but I still cannot found the solution. Can you help me? File “/usr/local/lib/python3.6/site-packages/torch/tensor.py”, line 71, in repr return torch._tensor_str._str(self) File “/usr/local/lib/python3.6/site-packages/torch/_tensor_str.py”, line 286, in _str tensor_str = _tensor_str(self, indent) File “/usr/local/lib/python3.6/site-packages/torch/_tensor_str.py”, line 201, in _tensor_str formatter = _Formatter(get_summarized_data(self) if summarize else self) File “/usr/local/lib/python3.6/site-packages/torch/_tensor_str.py”, line 87, in init nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)) RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/THCReduceAll.cuh:327 My code is for step,(x,y1) in enumerate(train_loader): x_data, y_model = x.to(device), y1.to(device) optimizer.zero_grad() output_model = model(x_data, h0) print("output_model.shape:", output_model.size()) print("y_model.shape:", y_model.size()) print("output_model:", output_model) print("y_model:", y_model) loss_model = loss_func(output_model, y_model) print("loss_model:", loss_model) # the error occured in this line. the printed result is: output_model.shape: torch.Size([64, 2]) y_model.shape: torch.Size([64]) output_model: tensor([[-0.9913, -0.4638], [-0.9701, -0.4765], [-1.0105, -0.4526], [-0.9497, -0.4891], [-1.0548, -0.4281], [-1.1764, -0.3687], [-0.9274, -0.5035], [-0.9197, -0.5086],… [-0.9032, -0.5197]], device=‘cuda:0’, grad_fn=)] y_model: tensor([3, 3, 2, 1, 0, 3, 2, 2, 1, 3, 1, 3, 1, 1, 1, 1, 0, 3, 1, 1, 0, 1, 2, 2, 2, 1, 0, 1, 0, 2, 2, 2, 0, 3, 1, 1, 1, 1, 3, 2, 2, 2, 1, 1, 1, 3, 3, 0, 2, 2, 2, 3, 3, 0, 0, 2, 3, 0, 3, 2, 2, 3, 3, 2], device=‘cuda:0’)
st83292
Solved by Oli in post #2 I don’t know why the crash is happening but your model outputs and labels aren’t the same shape. Are you trying to do binary classification? Then I suspect that you want to change the model to only have one output instead of two.
st83293
I don’t know why the crash is happening but your model outputs and labels aren’t the same shape. Are you trying to do binary classification? Then I suspect that you want to change the model to only have one output instead of two.
st83294
huihuiM: [-0.9032, -0.5197] Yes, the problem is binary classification. But I have not done one-hot encoding for the model output. The last operation of the model is F.log_softmax(pred_model). Is there some problem?
st83295
Thank you. I checked the model outputs and labels again, and the problem is that they are not the same shape. I have fixed the bug. Thanks.
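For readers hitting the same device-side assert: a common trigger with NLLLoss/CrossEntropyLoss is a target value outside the range [0, num_classes-1]; in the snippet above the model has 2 outputs while the printed labels go up to 3. A small standalone sketch of the check (class counts are made up):

```python
import torch
import torch.nn as nn

criterion = nn.NLLLoss()
output_model = torch.log_softmax(torch.randn(64, 2), dim=1)   # model with 2 outputs, as in the question
y_model = torch.randint(0, 4, (64,))                          # labels 0..3: more classes than outputs

num_classes = output_model.size(1)
print(y_model.min().item(), y_model.max().item(), num_classes)
if y_model.max() >= num_classes or y_model.min() < 0:
    print("targets out of range: this is what raises the device-side assert on the GPU")
else:
    print(criterion(output_model, y_model))
```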
st83296
PyTorch embedding or LSTM (I don’t know about other DNN libraries) cannot handle variable-length sequences by default. I am seeing various hacks to handle variable length. But my question is, why is this the case? I mean, sequences are almost never the same size/length, and an RNN/LSTM should loop through until the end of a sequence. If so, why is it sensitive to the varying lengths of the sequences in a minibatch? A PyTorch Embedding is a look-up table; I don’t see any reason for it to be sensitive to variable length. Shouldn’t the ideal case be that I can give a minibatch of sentences with a variable number of words? Like the following:
word_embedding = nn.Embedding(17, 5)
# each vector is a sentence
word_embeds = word_embedding(torch.tensor([[1,2,3,4,5],[4,5,6,7]]))
st83297
Your example won’t work, as you cannot create a tensor from inputs of different lengths. Internally each tensor holds the data as a blob, storing internal attributes such as the stride and shape of the tensor. This makes it possible, e.g., to apply batched operations. As you can see in the docs of nn.Embedding, this layer will take LongTensors of arbitrary shape as its input.
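The usual way to batch variable-length sentences is to pad them to a common length (and, for RNNs, optionally pack them afterwards). A sketch using pad_sequence, with a padding index reserved in the embedding (the vocabulary size is from the example above):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

word_embedding = nn.Embedding(17, 5, padding_idx=0)   # index 0 reserved for padding

sentences = [torch.tensor([1, 2, 3, 4, 5]),
             torch.tensor([4, 5, 6, 7])]

batch = pad_sequence(sentences, batch_first=True, padding_value=0)  # shape (2, 5)
print(batch)

word_embeds = word_embedding(batch)   # shape (2, 5, 5); padded positions map to the zero vector
print(word_embeds.shape)
```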
st83298
I have an optimizer to which I added three parameter groups. I computed some loss, and now I want to update only the first group of parameters. How can I do that? PS: in TensorFlow, I explicitly ask for the gradients of the loss with respect to those parameters only. But here in PyTorch, I just do loss.backward() and everything is included. What is the equivalent operation in PyTorch?
st83299
.register_forward_hook for a subclass of nn.Module does not work when __call__ is used instead of forward. This code works: class Identity(nn.Module): def __init__(self): super(Identity, self).__init__() def forward(self, x): return x def print_forward(self, x_in, x_out): print('forward hook') eye = Identity() eye.register_forward_hook(print_forward) x = eye(torch.rand(10)) # prints "forward hook" and this one doesn’t (just change forward to __call__) # this doesn't class Identity(nn.Module): def __init__(self): super(Identity, self).__init__() def __call__(self, x): return x def print_forward(self, x_in, x_out): print('forward hook') eye = Identity() eye.register_forward_hook(print_forward) x = eye(torch.rand(10)) # no print here Is this a design decision? It would be nice to have a warning message in that case.
st83300
It is a design decision, since the __call__ method is registering the hooks and calls forward as seen here 28. Since you’ve created your own __call__ method, this behavior is disabled.
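A runnable illustration of that design decision: the hook machinery lives in nn.Module.__call__, which then dispatches to forward, so bypassing or replacing __call__ skips the hooks.

```python
import torch
import torch.nn as nn

class Identity(nn.Module):
    def forward(self, x):
        return x

def print_forward(module, x_in, x_out):
    print('forward hook')

eye = Identity()
eye.register_forward_hook(print_forward)

x = torch.rand(10)
eye(x)           # goes through nn.Module.__call__, so the hook fires
eye.forward(x)   # bypasses __call__, so nothing is printed
```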
st83301
Hi, I’m new here and it’s the first time to use the LSTM network in reinforcement learning. The state which is also the input of the neural network is a matrix with size (64, 10000). The batchsize is 1, sequence length is 10000. So it is obviously a long and huge input. My goal is to get a (64,1) continuous output as the action. I use three Conv1d layers before LSTM to reduce the LSTM input to (64, 64). But the training is very slow and unstable, with a lot of fluctuation in the reward curve. Any advice?
st83302
I’ve trained a model that is to be used for text generation. I’m still kind of new to saving and loading models for inference so, for experience’s sake I coded two versions. One where I train the model, save it, and directly generate text, and another where I load the saved model and generate the text from that point. The first version generates text as I wanted it to, however the text generated from the loaded model is just gibberish. I noticed that the softmax results returned from the loaded model are all near equal as well… This is how I save the model: model_name = 'model.pth' checkpoint = {'n_hidden': net.n_hidden, 'n_layers': net.n_layers, 'state_dict': net.state_dict(), 'tokens': net.chars} with open(model_name, 'wb') as f: torch.save(checkpoint, f) And this is how it is loaded: model_path = 'model.pth' model = charRNN(chars, n_hidden, n_layers, lr, dropout) checkpoint = torch.load(model_path) model.load_state_dict(checkpoint['state_dict']) model.eval()
st83303
I want to predict how large a dataset will be before I create it. Is it possible to get those estimates from PyTorch internally? E.g., if I had a model or a CIFAR-10 image, how can I check in bytes how big it is?
st83304
Solved by Nikronic in post #2 Hi, Basically, The vital parameter here is the size of your current dataset. For instance, let say you have 20GB images, and you have about 16GB RAM or a little more. Even if you have a small model, it is still impossible to load whole dataset into RAM. But for small datasets like CIFAR-10, you can…
st83305
Hi, basically, the vital parameter here is the size of your current dataset. For instance, let’s say you have 20 GB of images and about 16 GB of RAM or a little more. Even if you have a small model, it is still impossible to load the whole dataset into RAM. But for small datasets like CIFAR-10, you can do some calculation here. You can do something like this: batch_size * channel_size * height * width = 28 * 3 * 32 * 32 = 86016, and let’s say we retain them as 64-bit numbers; then every input batch is about 688128 bytes = 688 KB. You can calculate the number of parameters in your model too, because they are all matrices like the batches. You also need some more memory for operations that need extra space, such as all non-inplace operations; autograd itself needs memory, which depends entirely on your model and its parameters. Finally, if you have any problem loading the whole dataset into memory, you can use the DataLoader and Dataset classes for lazy loading, which means the dataloader loads one batch at a time and, while the model is processing the current inputs (let’s assume on the GPU), prepares the next batch. You may have a little I/O overhead here (which only happens if loading data takes more time than processing it in the model), but the idea works well and, in practice, people working with large datasets like ImageNet (14 million) or Places365 (2 million) use lazy loading. By the way, I used DataLoader and it works great. Fast and reliable. Good luck
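As a concrete complement to the back-of-the-envelope math above, tensor and parameter sizes can also be read off directly in code (a sketch; CIFAR-10-like images are 3x32x32, stored as float32 here, so 4 bytes per element):

```python
import torch
from torchvision import models

# size of one batch of CIFAR-10-like images in bytes
batch = torch.randn(28, 3, 32, 32)                       # float32 by default
print(batch.element_size() * batch.nelement(), "bytes")  # 28*3*32*32*4 = 344064

# total parameter memory of a model, in megabytes
model = models.resnet18()
param_bytes = sum(p.element_size() * p.nelement() for p in model.parameters())
print(param_bytes / 2**20, "MB")
```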
st83306
I am trying to plot ROC curve for multi class classification.I would like to plot multiple lines in a single graph for each class.But I am getting error Error Traceback (most recent call last): File "/home/ex/Downloads/HW/DATAfile.py", line 385, in <module> plot_roc(actuals, class_probabilities,n_class) File "/home/ex/Downloads/HW/DATAfile.py", line 230, in plot_roc fpr[i], tpr[i], _ = roc_curve(actuals[:, i], probabilities[:, i]) TypeError: list indices must be integers or slices, not tuple I could not understand how I will get rid of this error. Here is my code that I used def test_class_probabilities(model, test_loader, n_class): model.eval() actuals = [] probabilities = [] with torch.no_grad(): for sample in test_loader: labels = Variable(sample['grade']) inputs = Variable(sample['image']) outputs = net(inputs).squeeze() prediction = outputs.argmax(dim=1, keepdim=True) actuals.extend(labels.view_as(prediction) == n_class) probabilities.extend(np.exp(outputs[:, n_class])) return actuals,probabilities #[i.item() for i in actuals], [i.item() for i in probabilities] def plot_roc( actuals, probabilities, n_classes): """ compute ROC curve and ROC area for each class in each fold """ fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(actuals[:, i], probabilities[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # plt.figure(figsize=(6,6)) for i in range(n_classes): plt.plot(fpr[i], tpr[i], label='ROC curve of class {0} (area = {1:0.2f})' ''.format(i, roc_auc[i])) # roc_auc_score plt.plot([0, 1], [0, 1], 'k--') # plt.grid() plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic to multi-class') plt.legend(loc="lower right") # plt.tight_layout() plt.show()
st83307
Solved by nrsyed in post #2 actuals is a list, but you’re trying to index into it with two values (:, i). Python lists are not arrays and can’t be indexed into with a comma-separated list of indices. Replace actuals[:, i] with actuals[i] and probabilities[:, i] with probabilities[i].
st83308
actuals is a list, but you’re trying to index into it with two values (:, i). Python lists are not arrays and can’t be indexed into with a comma-separated list of indices. Replace actuals[:, i] with actuals[i] and probabilities[:, i] with probabilities[i].
st83309
check my minimal example: import torch ​ x = torch.randn(4,2) y = torch.nn.ReLU(x) print(x) print(y) tensor([[-0.5674, -2.1012], [-0.4920, -0.3471], [-0.5981, 0.8400], [ 1.7876, -0.1611]]) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-be685ec3279f> in <module> 4 y = torch.nn.ReLU(x) 5 print(x) ----> 6 print(y) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __repr__(self) 1034 # We treat the extra repr like the sub-module, one item per line 1035 extra_lines = [] -> 1036 extra_repr = self.extra_repr() 1037 # empty string will be split into list [''] 1038 if extra_repr: ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/activation.py in extra_repr(self) 100 101 def extra_repr(self): --> 102 inplace_str = 'inplace' if self.inplace else '' 103 return inplace_str 104 RuntimeError: bool value of Tensor with more than one value is ambiguous is this expected? related posts: https://stackoverflow.com/questions/52946920/bool-value-of-tensor-with-more-than-one-value-is-ambiguous-in-pytorch 2 RuntimeError: bool value of Variable objects containing non-empty torch.LongTensor is ambiguous RuntimeError: bool value of Tensor with more than one value is ambiguous https://discuss.pytorch.org/t/why-cant-one-pass-data-through-a-torch-relu-module-directly
st83310
Solved by SimonW in post #2 It is a module, not a function. Use torch.relu or torch.nn.functional.relu or tensor.relu() for functions.
st83311
It is a module, not a function. Use torch.relu or torch.nn.functional.relu or tensor.relu() for functions.
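In other words, a minimal sketch of both spellings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 2)

y1 = torch.relu(x)          # function
y2 = F.relu(x)              # function
y3 = x.relu()               # tensor method
relu = nn.ReLU()            # module: construct it first...
y4 = relu(x)                # ...then call it on the tensor
print(torch.equal(y1, y4))  # True
```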
st83312
Based on what I’ve tried (discussed in this post 5 and this post 5 over on the NVidia devtalk forums), I’m pretty sure it’s not possible, and I’ve successfully managed to build PyTorch v0.3.1 without CUDA support, but was wondering if anyone here could offer insight on whether PyTorch with CUDA support can be built on the NVidia Jetson TK1. Jetson TK1 environment: Ubuntu 14.04 armv7l architecture (32-bit ARM) CUDA 6.5 (this is the maximum CUDA version supported on the TK1) Here’s what I’ve tried. First, I set the following environment variables: PATH=/usr/local/cuda/bin:$PATH NO_MKLDNN=1 (since MKL-DNN is incompatible with 32-bit) USE_NCCL=0 NO_DISTRIBUTED=1 TORCH_CUDA_ARCH_LIST=3.5 I found that PyTorch > 0.3.1 requires libnvrtc which is only part of CUDA >= 7.0, so attempted to build PyTorch 0.3.1, but encountered the following error: /tmp/tmp.i6b4rGpeXt/pytorch/torch/lib/THC/THCBlas.cu(495): error: identifier “cublasSgetrsBatched” is undefined /tmp/tmp.i6b4rGpeXt/pytorch/torch/lib/THC/THCBlas.cu(512): error: identifier “cublasDgetrsBatched” is undefined Turns out that the cuBLAS functions cublas<T>getrsBatched were introduced in CUDA 7.0 based on the CUDA 7.0 release notes. Did some digging and found that the earliest version of PyTorch that didn’t reference these functions was v0.1.10. However, attempting to build PyTorch v0.1.10 resulted in the following CMake error stating that CUDA >= 7.0 is required: [100%] Built target THCUNN Install the project… – Install configuration: “Release” – Installing: /tmp/tmp.CmqAOpt6SC/pytorch/torch/lib/tmp_install/lib/libTHCUNN.so.1 – Installing: /tmp/tmp.CmqAOpt6SC/pytorch/torch/lib/tmp_install/lib/libTHCUNN.so – Set runtime path of “/tmp/tmp.CmqAOpt6SC/pytorch/torch/lib/tmp_install/lib/libTHCUNN.so.1” to “” – Installing: /tmp/tmp.CmqAOpt6SC/pytorch/torch/lib/tmp_install/include/THCUNN/THCUNN.h – Installing: /tmp/tmp.CmqAOpt6SC/pytorch/torch/lib/tmp_install/include/THCUNN/generic/THCUNN.h – The C compiler identification is GNU 4.8.4 – The CXX compiler identification is GNU 4.8.4 – Check for working C compiler: /usr/bin/cc – Check for working C compiler: /usr/bin/cc – works – Detecting C compiler ABI info – Detecting C compiler ABI info - done – Detecting C compile features – Detecting C compile features - done – Check for working CXX compiler: /usr/bin/c++ – Check for working CXX compiler: /usr/bin/c++ – works – Detecting CXX compiler ABI info – Detecting CXX compiler ABI info - done – Detecting CXX compile features – Detecting CXX compile features - done CMake Error at /usr/local/share/cmake-3.5/Modules/FindPackageHandleStandardArgs.cmake:148 (message): Could NOT find CUDA: Found unsuitable version “6.5”, but required is at least “7.0” (found /usr/local/cuda) Call Stack (most recent call first): /usr/local/share/cmake-3.5/Modules/FindPackageHandleStandardArgs.cmake:386 (_FPHSA_FAILURE_MESSAGE) /tmp/tmp.CmqAOpt6SC/pytorch/cmake/FindCUDA/FindCUDA.cmake:1013 (find_package_handle_standard_args) CMakeLists.txt:5 (FIND_PACKAGE) Even the first tagged release of PyTorch (v0.1.1) seems to mention CUDA 7.0 in the CMake file pytorch/cmake/FindCUDA/FindCUDA.cmake, which leads me to think it’s impossible to build PyTorch with CUDA 6.5 support on the Jetson TK1. Am I correct or have I missed something?
st83313
I met the situation that the testing data accuracy is higher than the training data, I have checked the dataset and I am sure that the training data is the training data while the testing is testing… What is this indicate to and how to understand it? I used the following code to calculate the accuracy in certain epochs, cnn = cnn.eval() train_correct = 0 for _, (images, labels) in enumerate(train2_loader): images = Variable(images).cuda() labels = Variable(labels).long().cuda() #test outputs = cnn(images) train_loss = criterion(outputs, labels) train_pred = outputs.data.max(1, keepdim=True)[1] # get the index of the max log-probability train_correct += train_pred.eq(labels.data.view_as(train_pred)).long().sum().item() test_correct = 0 for _, (images, labels) in enumerate(test_loader): images = Variable(images).cuda() labels = Variable(labels).long().cuda() #test outputs = cnn(images) test_loss = criterion(outputs, labels) test_pred = outputs.data.max(1, keepdim=True)[1] # get the index of the max log-probability test_correct += test_pred.eq(labels.data.view_as(test_pred)).long().sum().item() cnn = cnn.train() while here the train2_loader is the subset of my train data and the test_loader is my test data. For some example, one of the output is: CNN Epoch [10/10], Iter [100/1563] Loss: 0.3904, Batch correct: 0.8516, Train Correct: 0.7488, Train Loss: 0.2358, Test Correct: 0.7503, Test Loss: 0.1708, At time: 4.08CNN Epoch [10/10], Iter [200/1563] Loss: 0.4498, Batch correct: 0.8125, Train Correct: 0.7690, Train Loss: 0.2809, Test Correct: 0.7677, Test Loss: 0.2149, At time: 4.11CNN Epoch [10/10], Iter [300/1563] Loss: 0.4276, Batch correct: 0.8125, Train Correct: 0.7468, Train Loss: 0.2255, Test Correct: 0.7510, Test Loss: 0.1658, At time: 4.14CNN Epoch [10/10], Iter [400/1563] Loss: 0.4731, Batch correct: 0.7852, Train Correct: 0.7264, Train Loss: 0.2473, Test Correct: 0.7294, Test Loss: 0.2133, At time: 4.17CNN Epoch [10/10], Iter [500/1563] Loss: 0.4603, Batch correct: 0.7891, Train Correct: 0.7352, Train Loss: 0.2176, Test Correct: 0.7389, Test Loss: 0.1578, At time: 4.20CNN Epoch [10/10], Iter [600/1563] Loss: 0.4361, Batch correct: 0.8047, Train Correct: 0.7471, Train Loss: 0.2297, Test Correct: 0.7493, Test Loss: 0.1737, At time: 4.23CNN Epoch [10/10], Iter [700/1563] Loss: 0.4625, Batch correct: 0.8125, Train Correct: 0.7709, Train Loss: 0.2767, Test Correct: 0.7704, Test Loss: 0.2082, At time: 4.26CNN Epoch [10/10], Iter [800/1563] Loss: 0.4046, Batch correct: 0.8359, Train Correct: 0.7140, Train Loss: 0.1969, Test Correct: 0.7171, Test Loss: 0.1459, At time: 4.29CNN Epoch [10/10], Iter [900/1563] Loss: 0.4715, Batch correct: 0.7812, Train Correct: 0.7494, Train Loss: 0.2575, Test Correct: 0.7533, Test Loss: 0.2058, At time: 4.32CNN Epoch [10/10], Iter [1000/1563] Loss: 0.3714, Batch correct: 0.8516, Train Correct: 0.7497, Train Loss: 0.2552, Test Correct: 0.7497, Test Loss: 0.1978, At time: 4.35CNN Epoch [10/10], Iter [1100/1563] Loss: 0.4417, Batch correct: 0.8125, Train Correct: 0.7710, Train Loss: 0.2845, Test Correct: 0.7725, Test Loss: 0.2007, At time: 4.38CNN Epoch [10/10], Iter [1200/1563] Loss: 0.4808, Batch correct: 0.7891, Train Correct: 0.7563, Train Loss: 0.2644, Test Correct: 0.7565, Test Loss: 0.2081, At time: 4.41 CNN Epoch [10/10], Iter [1300/1563] Loss: 0.4471, Batch correct: 0.8125, Train Correct: 0.7532, Train Loss: 0.2323, Test Correct: 0.7512, Test Loss: 0.1790, At time: 4.44 CNN Epoch [10/10], Iter [1400/1563] Loss: 0.4374, Batch correct: 0.8281, Train Correct: 
0.7438, Train Loss: 0.2278, Test Correct: 0.7448, Test Loss: 0.1688, At time: 4.46 CNN Epoch [10/10], Iter [1500/1563] Loss: 0.4210, Batch correct: 0.8125, Train Correct: 0.7587, Train Loss: 0.2712, Test Correct: 0.7592, Test Loss: 0.2311, At time: 4.49 Some of the “Train correct” are higher and some are lower than the “Test correct”, and the “Test Loss” is smaller than “Train Loss”. Does this mean there is potential to increase the neural network complexity?
st83314
Hi li0218, This means that your model may have generalized well. You could try increasing the network size to try to get a better fit. Precaution : Have a look at the confusion matrix of your model for the training and test data. And compare them with the ratio of positive and negative classes in your data. It may be happening that your model is mostly predicting one class, and the accuracy simply reflects the proportion of that class in your data. Hope this helps! Regards
st83315
Thank you Mazhar, good! I used equal class ratios in the data, so I think that’s not the problem. I will try increasing the layers…
st83316
Hello all, I have a custom loss that only works for 4D tensors such as NxBxHxW. However, I want to use the loss for my data, which is a 5D tensor such as NxBxDxHxW. I am wondering what the fastest way is to use the loss for my input, and how to use it. I hear that we can use the view() function. Thanks. This is an example of using the loss for a 4D tensor:
s = myloss()
a = torch.randint(0, 255, size=(20, 3, 256, 256), dtype=torch.float32).cuda() / 255.
b = a * 0.5
a.requires_grad = True
b.requires_grad = True
loss = s(a, b)
loss.backward()
st83317
Hi John1231983, Apologies. My intention was to help you optimize the loss function and generalize it to 5d tensors. In general, if you have a 5d tensor [N, C, H, W, D], you should utilize the information available due the fifth dimension. You could of course reshape your tensor to [N*C, H, W, D] using .view(N * C, H, W, D). However, it may give rise to unwanted effects. For example, in applying a weighted cross entropy loss for medical mri scans for brain tumor segmentation, per patient weight and per slice weight could give very different losses. Hope this helps!
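A small sketch of the reshape idea (dimension names are assumed; whether folding the extra dimension into the batch is appropriate depends on the loss, as noted above):

```python
import torch

N, C, D, H, W = 2, 3, 4, 16, 16
a = torch.rand(N, C, D, H, W, requires_grad=True)
b = a * 0.5

# fold the depth dimension into the batch so an existing 4-D (N*D, C, H, W) loss can be applied per slice
a4 = a.permute(0, 2, 1, 3, 4).reshape(N * D, C, H, W)
b4 = b.permute(0, 2, 1, 3, 4).reshape(N * D, C, H, W)
print(a4.shape)  # torch.Size([8, 3, 16, 16])
# loss = my_4d_loss(a4, b4); loss.backward()   # my_4d_loss stands in for the existing 4-D criterion
```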
st83318
I define a model:
import torch
from torchvision import models
model = models.resnet18()
It has all its layers set as trainable (requires_grad = True for all layers). Then I freeze the final fc layer:
for param in model.fc.parameters():
    param.requires_grad = False
Then I save the state_dict of the model:
torch.save(model.state_dict(), 'model.pth')
Now I want to load the weights again, so I define the model once more and load the saved weights into it:
model_new = models.resnet18()
model_new.load_state_dict(torch.load('model.pth'))
Now when I print the requires_grad of its fc layer, all the requires_grad flags are back to the original settings (True):
for param in model_new.fc.parameters():
    print(param.requires_grad)
It prints
True
True
So the question is: how is the requires_grad setting getting changed on loading the weights? Does saving model.state_dict() even save the requires_grad settings?
st83319
The state_dict contains the tensors (basically the data), not the nn.Parameters, which hold the requires_grad attribute. Since you are recreating the model and load the state_dict, all flags are re-initialized. model = models.resnet50() print(model.fc.weight) > Parameter containing: tensor([[-0.0046, -0.0034, -0.0171, ..., -0.0081, 0.0019, -0.0050], [-0.0167, -0.0093, 0.0100, ..., 0.0180, 0.0164, 0.0170], [ 0.0093, -0.0114, 0.0144, ..., 0.0003, 0.0202, 0.0163], ..., [-0.0188, -0.0040, -0.0019, ..., 0.0113, -0.0164, -0.0054], [ 0.0212, -0.0127, 0.0155, ..., -0.0200, 0.0092, 0.0188], [-0.0179, 0.0083, -0.0003, ..., -0.0012, -0.0048, -0.0127]], requires_grad=True) sd = model.state_dict() print(sd['fc.weight']) > tensor([[-0.0046, -0.0034, -0.0171, ..., -0.0081, 0.0019, -0.0050], [-0.0167, -0.0093, 0.0100, ..., 0.0180, 0.0164, 0.0170], [ 0.0093, -0.0114, 0.0144, ..., 0.0003, 0.0202, 0.0163], ..., [-0.0188, -0.0040, -0.0019, ..., 0.0113, -0.0164, -0.0054], [ 0.0212, -0.0127, 0.0155, ..., -0.0200, 0.0092, 0.0188], [-0.0179, 0.0083, -0.0003, ..., -0.0012, -0.0048, -0.0127]])
st83320
Thanks for the clarification @ptrblck! :D But what do you suggest I do if I want to save the requires_grad flag for all the layers too?
st83321
Yes, I tried torch.save(model, 'path_name.pth') and it seemed to work, thanks @John_Deterious. And what’s the best way to save the state of the optimizers? Why do we even need to save the optimizer’s state? Can you please explain this to me? Thanks in advance!
st83322
ptrblck: nn.Parameters , which hold the requires_grad attribute model = models.resnet50() sd = model.state_dict() sd[‘fc.weight’] has both .data and .requires_grad attribute So looking at the information that it holds it feels like saving the state_dict should be saving the requires_grad flag too… Can you pls help me on that…
st83323
The requires_grad attribute you are printing from sd['fc.weight'].requires_grad is the default one for torch.Tensors. n0obcoder: So looking at the information that it holds it feels like saving the state_dict should be saving the requires_grad flag too… I don’t think so, as the state_dict is supposed to store only the parameters, not the computation etc. @John_Deterious’s suggestion to store the complete model might work. n0obcoder: and whats the best way to save the state of the optimizers ? why do we even need to save optimizer’s state? I would follow the ImageNet example and store a custom dict containing all the data I would like to store. Some optimizers use internal buffers (e.g. running estimates etc.) that should also be stored/restored if you plan on continuing the training.
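Putting those suggestions together, a sketch of a checkpoint that stores the model and optimizer state plus the requires_grad flags explicitly (the key names are made up):

```python
import torch
from torchvision import models

model = models.resnet18()
for p in model.fc.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)

checkpoint = {
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'requires_grad': {name: p.requires_grad for name, p in model.named_parameters()},
}
torch.save(checkpoint, 'model.pth')

# restoring
model2 = models.resnet18()
ckpt = torch.load('model.pth')
model2.load_state_dict(ckpt['state_dict'])
for name, p in model2.named_parameters():
    p.requires_grad = ckpt['requires_grad'][name]
```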
st83324
Hi everyone, do you know a way to estimate the GPU memory capacity needed for the training or the testing of a model, depending on the model itself, the input size, and the batch size? Thank you.
st83325
Hi Pirate-Roberts, you may find this repository useful: link. It gives a theoretical estimate. The actual memory used depends a lot on the versions of CUDA, cuDNN and PyTorch being used.
st83326
Hi, I am trying out jit trace functions to automatically parse the model structure. Originally, I use model_output.grad_fn for parsing the structure, but the graph structure seems simpler. FYI, I am using PyTorch version 0.3.1 (installed with pip) in Python 2.7 Below is a part of the code I wrote: input_var = Variable(torch.FloatTensor( np.random.random((1, init_shape[0], init_shape[1],init_shape[2])).astype(np.float32))).cuda() trace, out = torch.jit.trace(model, (input_var,)) torch.onnx._optimize_trace(trace) graph = trace.graph() Once I get the graph, I notice that I can get access to the Nodes and Inputs (I assume they are Nodes containing weights, including the input tensor). I want to get access to the weights inside the nodes, but I cannot find any instructions related to get the weight values into python. One workaround can be parse the Nodes, and retrieve the weights from model.state_dict(), but I think there should be as simpler way.
st83327
@JCPARK, any luck with the issue? I am stuck in a similar problem. Need inbound and outbound tensors for the node. Did not find any documentation to get attributes of torch._C.Value
st83328
As far as I know, nn.CrossEntropyLoss() automatically applies LogSoftmax to the FC layer output. So then, how can I get the log-softmax/softmax output? Thank you.
st83329
Solved by Oli in post #6 Code says more than words probs = nn.Softmax(dim=1) # Or logsoftmax criterion = nn.CrossEntropyLoss() outputs = model(inputs) softmax_output = probs(outputs) loss = criterion(outputs , labels)
st83330
The outputs would be the featurized data, you could simply apply a softmax layer to the output of a forward pass. Something like: model = nn.Sequential(...) probs = nn.Softmax(dim=1) outputs = model(input) probs(outputs)
st83331
Yeah, that’s one way to get the softmax output. But there is a problem. I want to use this loss function:
criterion = nn.CrossEntropyLoss().cuda()
outputs = model(input)
softmax_output = probs(outputs)
loss = criterion(softmax_output, labels)  # ??
Then the loss is like nn.Softmax(nn.LogSoftmax(outputs)), right? Because nn.CrossEntropyLoss() will apply log-softmax itself. What should I do? Thank you.
st83332
From the official docs (https://pytorch.org/docs/stable/nn.html): nn.CrossEntropyLoss() combines nn.LogSoftmax() and nn.NLLLoss() together.
st83333
Thank you, but the link is dead. Can I separate out nn.CrossEntropyLoss()’s nn.LogSoftmax() or nn.NLLLoss()?
st83334
Code says more than words probs = nn.Softmax(dim=1) # Or logsoftmax criterion = nn.CrossEntropyLoss() outputs = model(inputs) softmax_output = probs(outputs) loss = criterion(outputs , labels)
st83335
I have my RNN/LSTM/GRU outputting a softmax distribution over tokens. How do I decide whether to pass the soft vector or the thresholded/sampled one-hot tensor version of it as input to the next cell? (Teacher forcing is not possible in my application.)
st83336
Hi pinocchio, You should pass the direct output of the rnn to the next cell. Sampling/threshold will cause a discontinuity in the graph and the gradients won’t back propagate through time.
st83337
We are doing continual learning, where after a while a new class is encountered and the amount of output units in the network’s last nn.Linear() classification layer are grown to reflect the new total amount of classes. The newly added slice is initialized, while the older class’ units are left as they are. To make sure that this is caught for optimization we also do the same with the gradient tensors and the respective tensors for a potential bias. (The optimizer is then re-instantiated.) These resizing (in-place) operations on e.g. weight.data.resize_(…) have worked fine in PyTorch 1.0 and we have checked numerically that the weight values are consistent and the updates change correctly. In the newer PyTorch version 1.1 these resizing operations seem to no longer be allowed. I have seen RuntimeError: set_sizes_contiguous is not allowed on Tensor created from .data or .detach(), in Pytorch 1.1.0 2 where the suggestion is to not use .data and use copy operations with torch.no_grad(). I am unsure if this is the ideal implementation for our case. Our code can be found here: https://github.com/MrtnMndt/OCDVAE_ContinualLearning/blob/master/lib/Models/architectures.py 2 in lines 8-52 and the corresponding issue is here: https://github.com/MrtnMndt/OCDVAE_ContinualLearning/issues/1 1 I went through the patch notes and couldn’t find any information with respect to resizing. I understand that operations on .data would be discouraged in general, but I would appreciate if someone could shed some more light into why it has been removed entirely as I believe there is cases (like dynamic architectures) where this functionality is pretty useful. We had been working on such dynamic architecture operations (in any layer) in earlier PyTorch versions like 0.3 and 0.4 and it always seemed to have been fine, like in this thread: Dynamically Expandable Networks - Resizing Weights during Training 6 Even more importantly, I would appreciate recommendations of how to change our code and adapt it to PyTorch 1.1 in the way it is meant to be (instead of coding some hacky solution). I suppose one of the more hacky solutions would be to copy all the old weight values, remove the last linear, create and add a new last layer of the correct shape, copy the weights back into the correct slice and create a new optimizer. Is this the only way of doing layer reshaping operations now or is there some more straightforward way like what we had done before? I will appreciate any answers, comments or pointers to patch notes etc…
st83338
Hi @Martin_Mundt, We are also doing continual learning. And we have been following the method as you have suggested at the end, i.e., redefining the Layer and assigning the weights values according to the old weights and the initialization for the new slice. So far, it has worked for us, through pytorch 0.4 to 1.1. Commenting here so that we can see if there is better way to do this than the method described above.
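For reference, a minimal sketch of that redefine-and-copy approach for growing the output units of the final classifier (sizes and the helper name are illustrative only):

```python
import torch
import torch.nn as nn

def grow_classifier(old_fc, num_new_classes):
    """Return a new Linear layer with extra output units, keeping the old class weights."""
    new_fc = nn.Linear(old_fc.in_features, old_fc.out_features + num_new_classes,
                       bias=old_fc.bias is not None)
    with torch.no_grad():
        new_fc.weight[:old_fc.out_features] = old_fc.weight   # copy old class weights
        if old_fc.bias is not None:
            new_fc.bias[:old_fc.out_features] = old_fc.bias
        # the newly added slice keeps the fresh initialization from nn.Linear
    return new_fc

fc = nn.Linear(128, 10)
fc = grow_classifier(fc, 2)        # now 12 output units; re-create the optimizer afterwards
print(fc.weight.shape)             # torch.Size([12, 128])
```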
st83339
Hello the following code snippet gives the error in the title. Snippet works on cpu but fails on 2 GPU environment, I haven’t checked with a single GPU. Code particularly fails on w_theta_sin line. def forward(self, x, a): with torch.no_grad(): self.W_hat.div_(torch.norm(self.W_hat, dim=5, keepdim=True)) self.W_theta.fmod_(math.pi) W_theta_sin = torch.sin(self.W_theta) + eps RuntimeError: diff_view_meta->output_nr_ == 0 ASSERT FAILED at /opt/conda/conda-bld/pytorch_1556653215914/work/torch/csrc/autograd/variable.cpp:209, please report a bug to PyTorch. Searched the same error, I know that there is an open issue, but I did not understand much of it tbh:) is there a workaround for this? Thanks in advance.
st83340
Are you using nn.DataParallel? If so, could you post the shapes you’ve used (parameters and inputs) to raise this error?
st83341
Yes I am using nn.DataParallel , sorry for sharing small snippets, it is a fairly large script and my research. self.W_theta = nn.Parameter(torch.rand(1, 1, 1, in_channels, out_channels, 1, 1, 1)) self.W_hat = nn.Parameter(torch.rand(1, 1, 1, in_channels, out_channels, 3, 1, 1)) Input is a custom dataset with 2 channels. Everything is fine except this section I have done many experiments before adding this part.
st83342
Were you able to isolate this issue further or do you have a reproducible code snippet or is this issue reproducible using just the provided information regarding W_theta and W_hat? Also, could you share the shapes of x and a?
st83343
I had to drop the reassignment in order to avoid the error in my further experiments. I am sure that error happens at the following lines. with torch.no_grad(): self.W_hat.div_(torch.norm(self.W_hat, dim=5, keepdim=True)) self.W_theta.fmod_(math.pi) I will try to write a small snippet for reproduction in coming weeks, I only have one computer with gpu and that is working atm Sharing a and x would not help you, there are several steps before I use w_theta_sin I need to simplify it to make it easier for you to trace it. I think this issue is also related, but I am using 1.1.0: github.com/pytorch/pytorch Issue: Assertion fails when using DataParallel with two nn.Embedding 23 opened by neocheema on 2018-11-26 closed by yf225 on 🐛 Bug I'm using the nightly build: 1.0.0.dev20181123. This issue is very similar to #13569 . When I instantiate two nn.Embedding, with... high priority module: autograd triaged Thanks for asking,
st83344
I am implementing the “Deconstructing Lottery Tickets” paper. I am training AlexNet several times. In the first epoch it has exactly the same weights as AlexNet. In the second epoch, I selected 20% of the weights (4M indexes in index_dict) and keep them at zero while training. To do this, I defined hooks for all layers and set grad_clone[some weights] = 0:
def my_hook4d_conv1(grad):
    grad_clone = grad.clone()
    for i in range(len(index_dict4d["conv1"])):
        a, b, c, d = index_dict4d["conv1"][i]
        grad_clone[a, b, c, d] = 0
    return grad_clone
and then registered the hooks for all layers before training starts for the second epoch. This process makes the training very slow (4M operations in the above for loop). Is there a way to speed this up?
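One common way to avoid the per-index Python loop (a sketch, assuming the frozen indices can be collected into a boolean/0-1 mask of the same shape as the weight) is to precompute the mask once and multiply the gradient by it inside the hook:

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, kernel_size=11)

# precompute a 0/1 mask once: 0 where the weight must stay frozen at zero, 1 elsewhere
mask = torch.ones_like(conv1.weight)
frozen_idx = torch.tensor([[0, 0, 0, 0],
                           [1, 2, 3, 4]])   # example rows of (a, b, c, d) indices
mask[frozen_idx[:, 0], frozen_idx[:, 1], frozen_idx[:, 2], frozen_idx[:, 3]] = 0

def my_hook_conv1(grad):
    return grad * mask                       # one vectorized op instead of millions of Python iterations

conv1.weight.register_hook(my_hook_conv1)
```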
st83345
How do you go about indexing a variable with another variable? For instance, it’s not clear how you could do a spatial transformer network, since the output of the transformer layer would be a Variable. Does calling idx.data work, or will that cause the graph to be disconnected? Example:
import torch
from torch.autograd import Variable
x = Variable(torch.randn(3,3))
idx = Variable(torch.LongTensor([0,1]), requires_grad=True)
# doesn't work
print(x[idx])
# works, but you can't call .backward?
print(x[idx.data])
t = torch.sum(x[idx.data])
t.backward()  # gives an error about no graph nodes requiring gradients
st83346
You can use index_select:
import torch
from torch.autograd import Variable
x = Variable(torch.randn(3,3), requires_grad=True)
idx = Variable(torch.LongTensor([0,1]))
print(x.index_select(0, idx))
Note that the index variable (idx) can’t have requires_grad set to True. The variable being indexed (x) can have requires_grad=True. http://pytorch.org/docs/tensors.html#torch.Tensor.index_select
st83347
Ok, thank you for that explanation… But then it seems that a STN isn’t possible under that condition. I know this is very broad, but is there any way to index or grab values from a variable tensor with another index-like variable that does have require_grad=True besides using detach()? torch.gather doesn’t work either… Is scatter an option? It seems straight-forward in TF to so-called “differentiate the index”… is this just a limitation of define-by-run? Here’s a fairly straight-forward STN gist 51 I made showing how it should work except for the error in the indexing. And here is 29 a tensorflow version of the transformer layer. No worries though - if anyone else stumbles upon this and has insight, let me know EDIT: eh, I guess I see in the TF example above and the defomable conv pytorch 37 example how they still propagate the gradient through other means than the index… still unclear if differentiating the index is an inherent limitation of pytorch or in general.
st83348
Your tensorflow link is to a private repository so I can’t view it. The index operation doesn’t have any gradients defined w.r.t. the index variable. That’s not a limitation of “define by run”, that’s a property of the operation: it has integer domain. You need a differentiable sampling operation for spatial transformer networks. You can implement STN in PyTorch in roughly the same ways as in Tensorflow: sample floor(idx) and floor(idx) + 1 61 and linearly interpolate between the two. Note that the sampling doesn’t produce a meaningful gradient, it’s the interpolation that produces a useful gradient: import torch from torch.autograd import Variable torch.manual_seed(0) x = Variable(torch.randn(3,3), requires_grad=True) idx = Variable(torch.FloatTensor([0,1]), requires_grad=True) i0 = idx.floor().detach() i1 = i0 + 1 y0 = x.index_select(0, i0.long()) y1 = x.index_select(0, i1.long()) Wa = (i1 - idx).unsqueeze(1).expand_as(y0) Wb = (idx - i0).unsqueeze(1).expand_as(y1) out = Wa * y0 + Wb * y1 print(out) out.sum().backward() print(idx.grad) (You probably want to use gather instead of index_select and will need to interpolate in two dimensions instead of just one)
st83349
Ok, that explanation really makes it click. I got it to work with interpolation. the TF repository I linked was just using bilinear interpolation. Thanks a billion for taking the time.
st83350
Is the need for explicit index_select a “feature” or a “bug”? Should the indexing operator call index_select automatically for the Variables?
st83351
Hello everyone, I have the problem about indexing… In this paper 13 section 3.3 We first select Y frames (i.e. keyframes) based on the prediction scores from the decoder. The decoder output is [2,320], which means non-keyframe score and key frame score of the 320 frames. We want to find a 0/1 vector according to the decoder output but the process of [2,320] -> 0/1 vector seems not differentiable… How to implement this in pytorch? Thank you very much.
st83352
Consider a network (i) -> (h) -> (o) where i, h, o, are input, hidden, and output layers, respectively. I would like to associate a loss Lh and a loss Lo to layers h and o, respectively. However, I wish to backpropagate Lh only from layer h, backward, and loss Lo from layer o backward. Could anyone please point me through the right direction to do so? Many thanks
st83353
Solved by ptrblck in post #2 Here is a small example: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.fc1 = nn.Linear(10, 10) self.fc2 = nn.Linear(10, 10) self.act = nn.ReLU() def forward(self, x): x1 = self.act(self.fc1(x)) x =…
st83354
Here is a small example: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.fc1 = nn.Linear(10, 10) self.fc2 = nn.Linear(10, 10) self.act = nn.ReLU() def forward(self, x): x1 = self.act(self.fc1(x)) x = self.fc2(x1) return x, x1 # Create model and execute forward pass criterion = nn.MSELoss() model = MyModel() x = torch.randn(1, 10) o, h = model(x) # Calculate losses loss_o = criterion(o, torch.rand_like(o)) loss_h = criterion(h, torch.rand_like(h)) # Backward loss_h and keep intermediate activations loss_h.backward(retain_graph=True) # Check that self.fc2 grads are empty for name, param in model.named_parameters(): print(name, param.grad) # Backward loss_o loss_o.backward() # Gradient are accumulated in self.fc1 and newly populated in self.fc2 grads2 = [] for name, param in model.named_parameters(): print(name, param.grad) grads2.append(param.grad.clone()) You would also get the same results if you sum both losses and call .backward() on the result tensor.
st83355
Hello, I’m using a pretrained resnet50 only to do inference and I noticed the cached memory explodes in the first pass through the network. I was wondering if someone could shed some light on what’s being stored and if there is a way to control it. Here is an example: import torch from torchvision import models def fmt_MB(alloc, cached): print(f'Alloc : {alloc:>8,.1f} MB\n' f'Cached: {cached:>8,.1f} MB') def mem_usage(): return torch.cuda.memory_allocated() / 2**20, torch.cuda.memory_cached() / 2**20 alloc0, cached0 = mem_usage() fmt_MB(alloc0, cached0) Alloc: 0.0 MB Cached: 0.0 MB model = models.resnet50(pretrained=True).cuda() alloc1, cached1 = mem_usage() fmt_MB(alloc1, cached1) Alloc: 97.7 MB Cached: 118.0 MB x = torch.rand((1024, 3, 224, 224)).cuda() x_size = x.numel() * 32 / (8 * 2**20) alloc2, cached2 = mem_usage() fmt_MB(alloc2, cached2) alloc2 - alloc1, cached2 - cached1, x_size Alloc: 685.7 MB Cached: 706.0 MB (588.0, 588.0, 588.0) model.eval() with torch.no_grad(): y = model(x) y_size = y.numel() * 32 / (8 * 2**20) alloc3, cached3 = mem_usage() fmt_MB(alloc3, cached3) alloc3 - alloc2, y_size Alloc: 689.6 MB Cached: 13,250.0 MB (3.90625, 3.90625) torch.cuda.empty_cache() alloc4, cached4 = mem_usage() fmt_MB(alloc4, cached4) Alloc: 689.6 MB Cached: 3,842.0 MB
st83356
I would assume the forward activations will use the memory and will be cleared after the forward pass, since you’ve used no_grad() and thus they are not needed anymore.
st83357
Hello! I was reading through the documentation and was wondering why there seems to only be a file for rnn.py in the torch.backends.cudnn module? The reason I ask is because while attempting to create a language model with torch.nn.GRU as one of the layers I received the following error: “RuntimeError: cuDNN Error: CUDNN_STATUS_EXECUTION_FAILED” but this error goes away when I run the same code, setting torch.backends.cudnn.enabled = False. But then a new error occurs: “RuntimeError: Input and parameters tensors are not at the same device, found input tensor at cuda:0 and parameters tensor at cpu”. I am using the following GPU setup: RTX 2080ti CUDA Version: ‘9.0.176’ (this is after running torch.version.cuda) cuDNN: 7501 (after checking using torch.backends.cudnn.version()) And using: torch 1.1.0 Ubuntu 18.04 The first part of the forward method’s code is the following: embed = self.embeddings(x) h0 = self.init_hidden(self.batch_size) embed = embed.permute(1, 0, 2) if self.cuda: embed = embed.cuda() h0 = h0.cuda() temp, hidden = self.gru(embed, h0) Thank you!
st83358
cudnn is also used for e.g. convolutions, as seen here. It was a good idea to disable cudnn, since it can potentially hide other errors. As you can see in your example, the actual error is a device mismatch. Make sure all parameters and inputs are on the same device. While embed and h0 seem to be on the default GPU, self.gru or some other layers might still be on the CPU. Try to run your code with CUDA_LAUNCH_BLOCKING=1 python script.py args, as this will point to the right line of code that causes this error.
st83359
Thank you, this helped greatly in debugging my issues. The issue was that my GRU layer wasn’t being loaded onto cuda:0 correctly, it remained on the cpu.
st83360
I am trying to install PyTorch 1.2 (conda, Python 3.7, Linux). However, using the instructions on pytorch.org, when specifying CUDA 9.0, it still installs PyTorch 1.1.
st83361
I want to find the number of non-zero elements in a tensor along a particular axis. Is there any PyTorch function which can do this? I tried to use the nonzero() method in PyTorch: torch.nonzero(losses).size(0). Here, losses is a tensor of shape 64 x 1. When I run the above statement, it gives me the following error: TypeError: Type Variable doesn't implement stateless method nonzero. But if I run torch.nonzero(losses.data).size(0), then it works fine. Any clue why this is happening?
st83362
You can only call torch.nonzero() on a simple tensor, not a variable. It makes sense: I doubt that counting non-zero elements would be differentiable. But you have sum(abs(x / (abs(x) + epsilon))), which approximates the number of non-zero elements and is differentiable.
st83363
Actually I was trying to take the average of all non-zero elements in a 1-d tensor which is actually the total loss for my model. I am doing the following. loss = losses.sum() / torch.nonzero(losses.data).size(0) It is working as expected and also backpropagation is not causing any problem, so I am assuming taking average of non-zero elements is differentiable. Do you have any thought about it?
st83364
Yes, of course your total loss L is (piecewise) differentiable. It can be more formally defined as: L = sum( Li ) / sum( 1{Li ≠ 0} ), where 1{c} is the indicator function (which is 1 when c is true and 0 otherwise). Clearly, the function f ( Li ) = 1{Li ≠ 0} has derivative equal to 0 everywhere, except at Li = 0, where the derivative does not exist. In practice, you may assume that it is 0 everywhere. Your biggest concern should be ensuring that you have no problem in using the tensor losses.data instead of the variable losses. This is because PyTorch will see torch.nonzero(losses.data).size(0) as a constant and not as a function of losses. Luckily, you may easily check that the derivative of L w.r.t. each of the losses Lj is the same whether you consider sum( 1{Li ≠ 0} ) as a function of Lj or not: dL / dLj = 1 / sum( 1{Li ≠ 0} )
st83365
@wasiahmad, what you are minimizing is just losses.sum(), and your gradient descent steps are multiplied by a weight that depends on the number of non-zeros element, different at each iteration. But nothing guarantees that it will minimize sum(x)/non-zeros(x) for all x, which is (I think) what you want to do.
st83366
noticed this too! seems nonzero() is super slow. It also varies a lot every time its called. We had a situation where the first time its called it runs very fast and then subsequent calls run 10x slower.
st83367
Have the same problem. For the same function with the exact input, first time runs 2.1s and second time 0.002s. Don’t know exactly what causes this problem… experience it in Pytorch 1.0