st104900
Basically, what happens in pseudo-ish code is:

time1
for items in dataset:
    do some stuff
    update parameters
    accumulate costs
time2

If I use .data[0] to accumulate costs, the total time is 11-12, and if I use .item() it is 16. Moreover, putting a sync just before time2 still gives a total time of 12, but putting a sync anywhere in the for loop increases the total time to 16.
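(For reference, a minimal sketch of the timing pattern described above, with hypothetical model/criterion/optimizer/loader names and assuming everything runs on the GPU; torch.cuda.synchronize() forces pending kernels to finish before a timestamp is taken.)

import time
import torch

torch.cuda.synchronize()
t1 = time.time()
total_cost = 0.0
for data, target in loader:            # hypothetical DataLoader
    optimizer.zero_grad()
    cost = criterion(model(data), target)
    cost.backward()
    optimizer.step()
    total_cost += cost.item()          # or cost.data[0] on 0.3.x
torch.cuda.synchronize()               # wait for all queued GPU work
t2 = time.time()
print('total time:', t2 - t1, 'total cost:', total_cost)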
st104901
Hmmm. That's kind of mysterious. You are printing the actual concrete cost, and that is being displayed, and that's included in the timing of .data and .item, and yet they give different times? Because, in order to print the value, the data needs to have been transferred from the GPU: the sync must have already happened. cost is a zero-dimensional tensor, right? Like, if you print cost.size(), it's []?
st104902
Yes, it is []. I am going to try to replicate this with simpler code so that people can read it and see if there is something I am missing.
st104903
So here is another test. I increased the number of workers for data loading from 1 to 2. If I use .item() the total time is 3.77, whereas if I use .data[0] the total time is 2.71. Previously the time difference was about 6, but now it is about 1. It seems the delay is somehow caused by the data loading stage, which can be reduced by increasing the number of workers…
st104904
I trained a model which, among others, had the following layer: final_layer.append(nn.Conv2d(64, 1, kernel_size=1)) and then saved it to a file with state_dict and torch.save. Then I wanted to load that model using load_state_dict, but by accident the same layer was set up as follows: final_layer.append(nn.Conv2d(64, 32, kernel_size=1)) Nevertheless, the model was loaded without error. It seemed that the weights were just duplicated 32 times, but I have not verified this. So my question is how this is consistent with the API documentation. I have not found a statement that says load_state_dict would somehow fix shape inconsistencies automatically. If this is true, then you have a huge documentation vs. reality mismatch here. This would actually qualify as a GitHub bug report, but I first wanted to ask if I missed anything before filing one. You might argue "what's wrong with this? It's a good thing that PyTorch corrects inconsistencies automatically", but I would say this is not good if you do serious research, because then the model might look different than you think it looks and your conclusions can become invalid. At least one should know what kind of magic correction happens inside each function. P.S. PyTorch release 0.4
st104905
Solved by ptrblck in post #2 Can reproduce this issue with: path = './test_model.pth' model = nn.Conv2d(64, 1, 3, 1, 1) torch.save(model.state_dict(), path) model = nn.Conv2d(64, 32, 3, 1, 1) model.load_state_dict(torch.load(path)) for w in model.weight[:, :, 0, 0]: print(w) Seems like an unwanted behavior to me.
st104906
Can reproduce this issue with:

path = './test_model.pth'
model = nn.Conv2d(64, 1, 3, 1, 1)
torch.save(model.state_dict(), path)

model = nn.Conv2d(64, 32, 3, 1, 1)
model.load_state_dict(torch.load(path))

for w in model.weight[:, :, 0, 0]:
    print(w)

Seems like an unwanted behavior to me.
st104907
Thank you for the fast answer. It is now a bug report on GitHub: https://github.com/pytorch/pytorch/issues/8282
st104908
This is strange. I am trying to use fft (pytorch_fft) and facing type issues. The function below from fft.py uses the Python built-in type() function to check the type of the tensor, which in pytorch 0.4 gives 'torch.Tensor' irrespective of the type of tensor (float, double, int).

def fft(X_re, X_im):
    if 'Float' in type(X_re).__name__:
        f = th_fft.th_Float_fft1
    elif 'Double' in type(X_re).__name__:
        f = th_fft.th_Double_fft1
    else:
        raise NotImplementedError
    return _fft(X_re, X_im, f, 1)

With pytorch 0.3.1:

a = torch.randn(3,4)
>>> type(a)
<class 'torch.FloatTensor'>
>>> a.type()
'torch.FloatTensor'

With pytorch 0.4:

>>> type(a)
<class 'torch.Tensor'>
>>> a.type()
'torch.FloatTensor'
st104909
This is expected behavior. Since Variables and tensors were merged, you should now use tensor.type() or isinstance() instead of type(). This is described in the Migration Guide 18.
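(As a small sketch of the checks that do work on 0.4, assuming x is a plain CPU float tensor:)

import torch

x = torch.randn(3, 4)
print(type(x))       # <class 'torch.Tensor'> on 0.4, regardless of dtype
print(x.type())      # 'torch.FloatTensor'
print(x.dtype)       # torch.float32
print(isinstance(x, torch.FloatTensor))  # True; isinstance checks dtype/device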
st104910
Ah OK, didn’t realize that. It seems the repo is abandoned. Maybe you could use the built-in method: torch.fft 9.
st104911
I’m trying to use the kl_divergence function with two multivariate normal distributions and I’m getting the following error:

RuntimeError: MAGMA potrf : A(4,4) is 0, A cannot be factorized

The code is this:

import torch
from torch.distributions.multivariate_normal import MultivariateNormal
from torch.distributions import kl_divergence

p = MultivariateNormal(torch.zeros(5).cuda(), torch.eye(5).cuda())
q = MultivariateNormal(torch.randn(1, 5).cuda(), torch.tril(torch.randn(5,5)).cuda())
kl_divergence(p,q)

Do you know why this is happening? Thank you
st104912
Note @gelarazi Please consider adding the appropriate imports to your code (I tried running it, to see what happens, and then had to ponder figuring out what all the relevant imports are…). Edit: oh, also, `#lower triangular 5x5 tensor#` is a variable, not some inline commenting style I didn't know about before. I think it would be easier to reproduce if you created some random tensor (including a seed) to demonstrate the issue with.
st104913
@hughperkins I just omitted the imports in the code snippet; of course I added them, otherwise the error would be import-related.
st104914
Sure, but I didn't add them, and I'm too lazy to go off and add them. You are reducing the pool of potential people who might reply.
st104915
Ok. So, as Simon pointed out, your covariance matrix needs to be positive semi-definite. As the doc at https://pytorch.org/docs/stable/distributions.html#multivariatenormal 5 points out, you can achieve that by multiplying your lower triangular matrix by its transpose: import torch from torch.distributions.multivariate_normal import MultivariateNormal from torch.distributions import kl_divergence # p = MultivariateNormal(torch.zeros(5).cuda(), torch.eye(5).cuda()) # q = MultivariateNormal(torch.randn(1, 5).cuda(), torch.tril(torch.randn(5,5)).cuda()) # kl_divergence(p,q) p = MultivariateNormal(torch.zeros(5), torch.eye(5)) print('p.sample()', p.sample()) if False: q_mean = torch.randn(1, 5) q_cov = torch.tril(torch.randn(5,5)) print('q_mean', q_mean) print('q_cov', q_cov) q = MultivariateNormal(q_mean, q_cov) print('q', q) q_mean = torch.randn(1, 5) L = torch.tril(torch.randn(5,5)) q_cov = L @ L.transpose(0, 1) print('q_mean', q_mean) print('q_cov', q_cov) q = MultivariateNormal(q_mean, q_cov) print('q.sample()', q.sample()) kl = kl_divergence(p,q) print('kl', kl) Output: p.sample() tensor([ 0.1836, -0.6165, 0.7646, -0.9500, -1.9736]) q_mean tensor([[ 0.7356, -1.4405, 0.4172, 0.2697, 1.2461]]) q_cov tensor([[ 3.7545, -0.4939, 0.2997, 0.3735, 0.6791], [-0.4939, 0.1115, 0.4387, -0.1818, 0.0683], [ 0.2997, 0.4387, 7.0361, -2.1879, 2.3761], [ 0.3735, -0.1818, -2.1879, 1.1052, -0.5114], [ 0.6791, 0.0683, 2.3761, -0.5114, 1.1063]]) q.sample() tensor([[ 4.9458, -1.7962, 1.3363, 0.2006, 1.5021]]) kl tensor([ 117.6113])
st104916
If the latter, you can use that too, but, as the doc says, the lower triangular matrix needs to have positive diagonal elements: import torch from torch.distributions.multivariate_normal import MultivariateNormal from torch.distributions import kl_divergence p = MultivariateNormal(torch.zeros(5), torch.eye(5)) print('p.sample()', p.sample()) q_mean = torch.randn(1, 5) print('q_mean', q_mean) L = torch.tril(torch.randn(5,5).abs() + 0.1) print('L', L) q = MultivariateNormal(q_mean, scale_tril=L) print('q.sample()', q.sample()) kl = kl_divergence(p,q) print('kl', kl) output: p.sample() tensor([ 0.6368, -0.0909, 0.0822, 0.1273, 1.1169]) q_mean tensor([[ 1.1417, -0.2070, -1.2527, 0.4430, -1.5522]]) L tensor([[ 0.6695, 0.0000, 0.0000, 0.0000, 0.0000], [ 1.4359, 0.7194, 0.0000, 0.0000, 0.0000], [ 1.4802, 0.7240, 0.3760, 0.0000, 0.0000], [ 1.2003, 0.3702, 0.6561, 1.0024, 0.0000], [ 1.4357, 0.4746, 0.1379, 1.3532, 1.9632]]) q.sample() tensor([[ 1.4201, -0.7672, -1.6811, 1.1081, -1.4706]]) kl tensor([ 30.6731]) (Yes, I’m learning as I go along too. That’s kind of the point of why answering questions is fun )
st104917
@hughperkins Thank you for the answer and for pointing that out. I thought that the internal _scale_tril function employed by the kl_divergence function does this job for us.
st104918
I am trying to implement batched seq2seq model in pytorch, after understanding and implementing the single batch one 76. However, I am not sure whether my implementation is correct as after few epochs of training all it outputs is the padding character. Specifically, these are the changes I made from the tutorial: Input is now a transposed matrix of sequence x batch_size, where the each column is a sequence. Encoder is changed to handle batch inputs like this: class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size,batch_size, n_layers=1): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size self.batch_size = batch_size self.embedding = nn.Embedding(input_size, hidden_size, padToken) self.gru = nn.GRU(hidden_size, hidden_size) def forward(self, input, hidden): embedded = self.embedding(input).view(1, self.batch_size, -1) # sequence length x batch size x hidden_size output = embedded for i in range(self.n_layers): output, hidden = self.gru(output, hidden) return output, hidden def initHidden(self): result = Variable(torch.zeros(1, self.batch_size, self.hidden_size)) if use_cuda: return result.cuda() else: return result I found transferring AttentionDecoder into batched mode to be tricky. After considerable tinkering with matrix sizes, this is what I came up with: class AttnDecoderRNN(nn.Module): def __init__(self, hidden_size, output_size, batch_size=5, n_layers=1, dropout_p=0.1, max_length=25): super(AttnDecoderRNN, self).__init__() self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.dropout_p = dropout_p self.max_length = max_length self.batch_size = batch_size self.embedding = nn.Embedding(self.output_size, self.hidden_size) self.attn = nn.Linear(self.hidden_size * 2, self.max_length) self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size) self.dropout = nn.Dropout(self.dropout_p) self.gru = nn.GRU(self.hidden_size, self.hidden_size) self.out = nn.Linear(self.hidden_size, self.output_size) def forward(self, input, hidden, encoder_outputs): embedded = self.embedding(input) embedded = self.dropout(embedded) # 1xbatch_size x hidden_size dt = torch.cat((embedded, hidden), 2) attn_weights = self.attn(dt[0]) attn_weights = F.softmax(attn_weights) encoder_outputs = encoder_outputs.transpose(0,1) attn_applied = torch.bmm(attn_weights.unsqueeze(1), # bmm - matmul encoder_outputs) # should be batch_size x 1 x hidden_size embedded = embedded.transpose(0,1) output = torch.cat((embedded, attn_applied), 2) # batch_size x 1 x hidden*2 output = self.attn_combine(output.squeeze(1)).unsqueeze(1) # batch_size x 1 x hidden output = output.transpose(0,1) for i in range(self.n_layers): output = F.relu(output) output, hidden = self.gru(output, hidden) output = F.log_softmax(self.out(output[0])) return output, hidden, attn_weights def initHidden(self): result = Variable(torch.zeros(1, self.batch_size, self.hidden_size)) if use_cuda: return result.cuda() else: return result I am trying to implement scheduled sampling while training (using teacher forcing with probability 0.5). Teacher forcing is similar to implement when we feed the next real output as input. 
But when we try to use the generated output as input, I do it like this: # Without teacher forcing: use its own predictions as the next input for di in range(max_length): decoder_output, decoder_hidden, decoder_attention = decoder( decoder_input, decoder_hidden, encoder_outputs) topv, topi = decoder_output.data.topk(1) nis = [] for nit in topi: ni = nit[0] nis.append(ni) decoder_input = Variable(torch.LongTensor([nis])) decoder_input = decoder_input.cuda() if use_cuda else decoder_input for ct in range(batch_size): loss += criterion(decoder_output[ct], target_variables[di][ct]) loss = loss / batch_size Is my implementation correct till now? Because while training I find the generated output quickly producing only pad tokens and nothing else. I have researched on related implementations but they have one or two limitations for me to follow them. @spro’s batched seq2seq tutorial 101 LuongAttnDecoderRNN is not batched, it runs one step at a time While the notebook mentions scheduled sampling, only teacher forcing is implemented and not the without part. The tutorial used PackedSequence (also this discussion), but I could not understand why does it run the encoding step once. Is passing one timestep of each sequence in batch to nn.gru for each row in sequences is same as passing full timesteps of each sequence once? It would be great if someone can help me understand this implementation. Also, if I would had used PackedSequence, how would I had modified the above?
st104919
@koustuvsinha have you figured out if this implementation is correct? I am also running into the same issue as you while attempting to implement the scheduled sampling method. If you've figured this out, do you have any good resources you refer to?
st104920
The implementation looks incorrect to me. The main problem seems to be that the loss treats the padded target sequence as always correct and tries to learn to predict the pad (which, in the best case, seems at least not very useful); this may be related to the behavior you have observed during testing. Personally, I think the problem can be tackled by optimizing a masked loss. PyTorch doesn't seem to have a native one yet, so it may require some workaround. This discussion seems related: How can I compute seq2seq loss using mask?
st104921
As far as how to obtain the mask, I'm not sure if this has been addressed, or if there is a better/easier way, but what I've been doing is something like:

cumsum = torch.Tensor(N, K).fill_(1).cumsum(-1) - 1
mask = cumsum < lens.view(N, 1).expand_as(cumsum)

Not sure if there is a better/easier/more standard way of creating the mask?
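(For what it's worth, a minimal masked-loss sketch along these lines; it assumes log-probabilities of shape (N, T, V), integer targets of shape (N, T) and a lens tensor of valid lengths, and all names are hypothetical.)

import torch

def masked_nll(log_probs, targets, lens):
    # log_probs: (N, T, V), targets: (N, T), lens: (N,)
    N, T, V = log_probs.size()
    # negative log-likelihood of the target token at every position
    nll = -log_probs.gather(2, targets.unsqueeze(2)).squeeze(2)   # (N, T)
    # zero out positions past each sequence's true length
    steps = torch.arange(T, device=log_probs.device).float()
    mask = (steps.unsqueeze(0) < lens.float().unsqueeze(1)).float()
    return (nll * mask).sum() / mask.sum()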
st104922
The following code, inside the forward fo a network module, results in a module that learns correctly: def forward(self, x, state_cell_tuple): ... xdot = x @ self.Wx xdot = xdot.view(batch_size, 4, self.embedding_size) .... i = F.tanh(xdot[:, 0] + hdot[:, 0] + self.bias[0]) ... (etc) ... However, transposing the view, and the accessor fails to learn correctly: xdot = x @ self.Wx xdot = xdot.view(4, batch_size, self.embedding_size) ... i = F.tanh(xdot[0] + hdot[0] + self.bias[0]) ... (etc) ... (from an LSTM cell implementation of course). So, the question is: why is this view transposition failing? a matrix multiplication is effectively a fully-connected layer, should not matter which way around I do the .view(), I think? What am I missing here?
st104923
Solved by albanD in post #4 Hi, The thing is that view is not transposition. It just looks at data differently. When you do .view(4, batch_size, ...), then the data from different samples in your batch are mixed (which most certainly confuses the learning a lot): >>> import torch >>> batch_size = 3 >>> seq_len = 2 >>> inp = …
st104924
As far as I know, if one permutes a matrix, that is used as a fully-connected layer, the model should learn identically, compared to any other permutation? ie if I do: self.w = nn.Parameter(torch.Tensor(batch_size, neurons)) self.w.data.uniform(-stdv, stdv) self.w.data = permute(self.w.data.view(-1)).view(batch_size,neurons) ... x = x @ self.w … the model will converge exactly the same, no matter what is the permutation inside permute? (edited, to correct a bit…)
st104925
Hi, The thing is that view is not transposition. It just looks at data differently. When you do .view(4, batch_size, ...), then the data from different samples in your batch are mixed (which most certainly confuses the learning a lot): >>> import torch >>> batch_size = 3 >>> seq_len = 2 >>> inp = torch.arange(batch_size).unsqueeze(-1).expand(batch_size, 4*seq_len).contiguous() >>> print(inp) tensor([[ 0, 0, 0, 0, 0, 0, 0, 0], [ 1, 1, 1, 1, 1, 1, 1, 1], [ 2, 2, 2, 2, 2, 2, 2, 2]]) >>> print(inp[0]) tensor([ 0, 0, 0, 0, 0, 0, 0, 0]) >>> print(inp.view(batch_size, 4, seq_len)) tensor([[[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 1, 1], [ 1, 1], [ 1, 1], [ 1, 1]], [[ 2, 2], [ 2, 2], [ 2, 2], [ 2, 2]]]) >>> print(inp.view(batch_size, 4, seq_len)[0]) tensor([[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]]) >>> print(inp.view(4, batch_size, seq_len)) tensor([[[ 0, 0], [ 0, 0], [ 0, 0]], [[ 0, 0], [ 1, 1], [ 1, 1]], [[ 1, 1], [ 1, 1], [ 2, 2]], [[ 2, 2], [ 2, 2], [ 2, 2]]]) >>> print(inp.view(4, batch_size, seq_len)[:, 0]) tensor([[ 0, 0], [ 0, 0], [ 1, 1], [ 2, 2]])
st104926
albanD: When you do .view(4, batch_size, ...) , then the data from different samples in your batch are mixed Oooo, right, good point
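(If the gate-major layout is really wanted, the data has to be moved rather than just re-viewed; a sketch using the names from the snippet above, which keeps each sample's data together:)

# view in the batch-major layout first, then transpose (which permutes data)
xdot = (x @ self.Wx).view(batch_size, 4, self.embedding_size).transpose(0, 1)
# xdot[0], xdot[1], ... now each have shape (batch_size, embedding_size)
i = F.tanh(xdot[0] + hdot[0] + self.bias[0])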
st104927
Hello, I’d like to know if there is a way to sample several variables from a Multivariate Normal distribution in a batch fashion ? For instance, given a tensor of size [n,3], I tried to stack n covariance matrices of size [3,3] but I get an error when i try to sample. I couldn’t figure out whether there’s a specific way to define the distribution or it is just not possible. Has anyone faced this yet ? Thanks!
st104928
Hi Mehdi, I just ran into the same problem with an error RuntimeError: Lapack Error in potrf : the leading minor of order 2 is not positive definite at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524589329605/work/aten/src/TH/generic/THTensorLapack.c:617 and managed to solve it by realizing the covariance matrices I used first were not positive definite. By using symmetric matrices as covariance matrices and stacking them w.r.t. axis 0, I got it to work. You can try the example below. import torch from torch.distributions.multivariate_normal import MultivariateNormal mean = torch.Tensor([[1, 2, 3], [4, 5, 6]]) cov1 = torch.eye(3) cov2 = torch.Tensor([[1, 1, 1], [1, 2, 2], [1, 2, 3]]) cov = torch.stack([cov1, cov2], 0) distrib = MultivariateNormal(loc=mean, covariance_matrix=cov) distrib.rsample() As suggested by the source code comments 161, since they just perform a Cholesky decomposition to an n-by-n covariance matrix to get a scale_tril tensor of size n, it might be a better practice to pass in a tensor of size n directly when possible.
st104929
I ran into a problem while setting up a transfer learning task. In my task, the source model and the destination model cannot run on a single graphics card, so is there any function or method to make the models run on different graphics cards?
st104930
If you have two different models, you can push them to different GPUs using an id:

model1 = ...
model2 = ...
model1 = model1.to('cuda:0')
model2 = model2.to('cuda:1')

Note that you have to push the outputs etc. also to the appropriate GPU:

output = model1(x)  # output is on GPU0
# output should be used in model2 now (which is on GPU1)
output = output.to('cuda:1')
output = model2(output)
st104931
Hi, I have a i7 quad-core cpu running at 4.2GHz, with 3 x NVIDIA TITAN V GPUs. While running my training loop for a U-Net model, with the following batch sizes, a performance profile shows that the code spends 62.8% of its time in method acquire of _thread.lock # training parameters train: batch_size: 20 num_workers: 4 # validation parameters valid: batch_size: 6 num_workers: 4 201806081050-train-unetv2-pp-acquire-thread-lock-62.png1589×237 25.6 KB Is this somehow PyTorch related, with respect to the DataLoader automatically spinning up worker threads and trying to allocate threads for processing? Each of my GPUs have 12GB of memory, but they are effectively operating at 1% utilization. If I reduce the batch size to 1 with 4 workers each, the acquire thread lock still take up most of the time at 50%: # training parameters train: batch_size: 1 num_workers: 4 # validation parameters valid: batch_size: 1 num_workers: 4 201806081100-train-unetv2-pp-acquire-thread-lock-58-b1-w4.png905×340 31.7 KB It gets weirder when I reduce both batch size and num of workers to 1, the acquire thread.lock method ends up consuming 71.1% of the total time. 201806081110-train-unetv2-pp-acquire-thread-lock-58-b1-w1.png911×339 32.1 KB I got better results I matched the number of threads with the number of samples in the batch with the number of GPUs available. The acquire _thread.lock method was at 47.1% with 5.08secs/iteration. # training parameters train: batch_size: 3 num_workers: 3 # validation parameters valid: batch_size: 2 num_workers: 2 201806081128-train-unetv2-pp-acquire-thread-lock-58-b3-w3.png911×500 46.9 KB
st104932
Hi, I am not 100% sure but I would say this is expected for the main thread. Indeed the autograd engine works by using worker threads for different GPUs during backward. So the main thread will mostly wait for them to finish. You can check the autograd profiler here 255 to see which part of your net is taking the most time.
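(A small sketch of using the autograd profiler mentioned above; model/input/target/criterion names are placeholders.)

import torch

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    output = model(input)
    loss = criterion(output, target)
    loss.backward()
print(prof)                    # per-op CPU/CUDA timings
# prof.key_averages() aggregates the entries by operator name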
st104933
As the topic above: how do I call nn.LayerNorm the same way as nn.BatchNorm? The normalization file in the official documentation has already added LayerNorm; however, I just can't use it like nn.BatchNorm in my script.
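(For reference, a minimal usage sketch: unlike nn.BatchNorm1d, which takes the number of channels, nn.LayerNorm takes the shape of the trailing dimensions it should normalize over.)

import torch
import torch.nn as nn

x = torch.randn(20, 5, 10)    # (batch, channels/steps, features)
ln = nn.LayerNorm(10)         # normalizes over the last dimension, per sample
print(ln(x).shape)            # torch.Size([20, 5, 10])

bn = nn.BatchNorm1d(5)        # normalizes each of the 5 channels over the batch
print(bn(x).shape)            # torch.Size([20, 5, 10])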
st104934
I am trying to run a pytorch code that was written in python 2.7. So I setup anaconda2 and created a new python env within it. I check the python version. python --version gives Python 2.7.15 :: Anaconda, Inc.. I then go to the pytorch page and generate the conda install command using linux, conda, Python 2.7 and CUDA 8 as parameters. The command is conda install pytorch torchvision -c pytorch. After executing the command, I see that the python version has changed to 3.6 and which python is now pointing to the python within the newly created env. TLDR; I can’t seem to have pytorch installed with python 2.7. Can some one help me navigate through this? An observation: changing between python version to generate the conda install command on the pytorch home page does not change anything in the command. Not sure if anaconda will automatically decipher the python version that it should use? In any case, that does not seem to be working in my case.
st104935
Could you try to activate your environment and run conda install --name myenv pytorch torchvision -c pytorch
st104936
@ptrblck, it says, “All packages are already installed” . I guess the problem is, the conda install commands always install PyTorch with Py3.6. What should I change in the conda install command so that I can install PyTorch with Py2.7 and not Py3.6?
st104937
I think your python2 environment shouldn’t even have python3. Could you activate the env, call python, and call: import torch print(torch.__file__) It should point to something like: .../envs/ENV_NAME/lib/python2.7/site-packages/torch.__init__.py
st104938
@ptrblck, sorry for the delayed response. Precisely ! I do a fresh login: python --version Python 2.7.15 :: Anaconda, Inc. which python /home/yash/anaconda2/bin/python Then I source the environment: source activate py2pytorch Then the status is: python --version Python 3.6.5 :: Anaconda, Inc. which python /home/yash/anaconda2/envs/py2pytorch/bin/python I also did as you asked: print(torch.__file__) /home/yash/anaconda2/envs/py2pytorch/lib/python3.6/site-packages/torch/__init__.py So the environment inside anaconda2 is python 3.6 ?! Thanks again !
st104939
Thanks for the update! It looks like your default anaconda installation ships with Python2.7. The environment you’ve created uses Python3.6. If you would like to install PyTorch for Python2.7, you could either install it in the default environment or create a new environment with Python2.7: conda create -n py27 python=2.7
st104940
I want to rescale (i.e. resize) the output of a convolution layer. I don’t mean to reshape the tensor, but actually make an n by n map and change it to an m by m for the entire minibatch. Is this possible in PyTorch?
st104941
Solved by sniklaus in post #4 What operation do you want to use, how do you want to map values from one domain into the other? I am also not sure why the following isn’t what you were looking for. torch.nn.Upsample(size=(50, 50), mode='bilinear')(torch.rand(1, 3, 64, 64))
st104942
You could have a look at the Upsample module. https://pytorch.org/docs/master/nn.html#torch.nn.Upsample 1.1k
st104943
That is helpful, but my goal is to downsample to an arbitrary size. Let’s say I have 12 x 64 x 64 feature map and want to change it to a 12 x 50 x 50. Using downsampling/padding doesn’t always work.
st104944
What operation do you want to use, how do you want to map values from one domain into the other? I am also not sure why the following isn’t what you were looking for. torch.nn.Upsample(size=(50, 50), mode='bilinear')(torch.rand(1, 3, 64, 64))
st104945
Oh, I can use Upsample to downsample! I guess the name confused me. Thank you Simon; this is exactly what I am looking for! I am loving PyTorch.
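(As a side note, adaptive pooling is another way to reach an arbitrary smaller size if interpolation is not wanted; a small sketch with a 12-channel feature map:)

import torch
import torch.nn as nn

x = torch.randn(1, 12, 64, 64)
pool = nn.AdaptiveAvgPool2d((50, 50))   # averages into a fixed 50x50 output grid
print(pool(x).shape)                    # torch.Size([1, 12, 50, 50])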
st104946
I am trying to train a YOLO object detector in PyTorch and implement the model on an FPGA. When I checked the value of running_var in a batchnorm layer, there were some negative values. I thought running_var was supposed to be positive. Did I miss something? Or could anyone tell me the formula to calculate the batchnorm output from the variables weight, bias, running_var, running_mean? Thanks, Min
st104947
Hi, I came across this problem too. This problem is caused by the 32/64 bit compiler problem. The metadata of Yolo’s weight file is composed of 3 int and 1 size_t, unfortunately the size_t depends on your compiler and machine. For the official weight file, size_t==32bit, for your own weight file, it varies. In brief, you may want to escape the first 5 int32 instead of 4 int32. e.g. github.com marvis/pytorch-yolo2/blob/6c7c1561b42804f4d50d34e0df220c913711f064/darknet.py#L247 5 loss.coord_scale = float(block['coord_scale']) out_filters.append(prev_filters) models.append(loss) else: print('unknown type %s' % (block['type'])) return models def load_weights(self, weightfile): fp = open(weightfile, 'rb') header = np.fromfile(fp, count=4, dtype=np.int32) self.header = torch.from_numpy(header) self.seen = self.header[3] buf = np.fromfile(fp, dtype = np.float32) fp.close() start = 0 ind = -2 for block in self.blocks: if start >= buf.size: break you may try count=5 in this case.
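(Regarding the formula part of the question: in eval mode a BatchNorm layer computes, per channel, weight * (x - running_mean) / sqrt(running_var + eps) + bias, so running_var must be non-negative, and negative values do point to the weight file being read with the wrong offset. A small sketch for a BatchNorm2d layer bn:)

import torch

def batchnorm2d_inference(x, bn):
    # x: (N, C, H, W); all BatchNorm2d parameters/buffers have shape (C,)
    w = bn.weight.view(1, -1, 1, 1)
    b = bn.bias.view(1, -1, 1, 1)
    mean = bn.running_mean.view(1, -1, 1, 1)
    var = bn.running_var.view(1, -1, 1, 1)
    return w * (x - mean) / torch.sqrt(var + bn.eps) + b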
st104948
Is it something like:

bias = torch.Tensor(batch_size)
bias.requires_grad = True
x = x + bias

?
st104949
bias = Variable(torch.FloatTensor([…]), requires_grad=True)
…
x = x + bias
…
err = ___
err.backward()
bias.data = bias.data - learning_rate * bias.grad.data
bias.grad.data.zero_()
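(On 0.4 the same idea is usually written with nn.Parameter and an optimizer instead of updating .data by hand; a minimal sketch:)

import torch
import torch.nn as nn

class AddBias(nn.Module):
    def __init__(self, num_features):
        super(AddBias, self).__init__()
        # registered as a parameter, so it receives gradients and is seen by optimizers
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        return x + self.bias

model = AddBias(10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
out = model(torch.randn(4, 10))
err = out.pow(2).mean()
optimizer.zero_grad()
err.backward()
optimizer.step()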
st104950
I have seen this 23 and tried to understand the documentation: This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. The batch size should be larger than the number of GPUs used locally. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples). Does that mean the entire batch of data is first loaded in rank 0, then chunks are sent to other ranks? or each rank will read only the part of the batch that is in its chunk portion? I do not know how that would be possible (for each worker to read only part of the batch) unless there is a requirement for data loader to subclass torch.utils.data.Dataset that has __getitem__. If for example data comes from a custom class, how would DataParallel know how to read the proper chunk? and if the entire batch is read each time, it could be inefficient to send huge amount of say images among nodes, where each node could just read the images from a shared directory.
st104951
Solved by andfoy in post #3 @dashesy You can use torch.utils.data.distributed.DistributedSampler (docs) to prevent input data broadcasting among nodes. You can see it in action in the distributed ImageNet example
st104952
Looking at the code 18 it does look like distributed scatters the input! this is very inefficient when inputs are images for example. It would be much more efficient for each process to read its portion from a shared directory/file. I think I would subclass DistributedDataParallel and do just that. How is the performance of distributed pytorch with large inputs on multi-gpu (say 32+ nodes)?
st104953
@dashesy You can use torch.utils.data.distributed.DistributedSampler (docs 33) to prevent input data broadcasting among nodes. You can see it in action in the distributed ImageNet example 58
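(A minimal sketch of wiring up the sampler mentioned above, assuming torch.distributed has already been initialized; dataset, num_epochs and train_step are placeholders. Each process then only loads its own shard of the data.)

import torch
import torch.utils.data
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(dataset)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False,
                                     sampler=sampler, num_workers=4, pin_memory=True)
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)          # so each epoch gets a different shuffle
    for input, target in loader:
        train_step(input, target)     # hypothetical training step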
st104954
A few more observations: I am trying to understand how DistributedSampler prevents scattering inputs among nodes when using DistributedDataParallel. According to this answer, imagenet example uses torch.utils.data.distributed.DistributedSampler to do that. This line: # compute output output = model(input) however, calls forward on DistributedDataParallel which does call scatter 9: def forward(self, *inputs, **kwargs): self.need_reduction = True inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) So even though DistributedSampler has the data already chunked up for each replica, the batches will be still scattered, but they will be scattered among local devices, not among nodes! It took me some time to realize that device_ids are all local devices for each node and DistributedDataParallel does not scatter among nodes.
st104955
I am trying to apply a different threshold to each input channel (e.g., similar to PReLU with \alpha set to nChannels), but the thresholds do not need to be learned. So far I have the following, but it is too slow due to the for loop. class ChannelwiseThreshold(nn.Module): def __init__(self): super(ChannelwiseThreshold, self).__init__() def forward(self, x, thresholds): h = [] for i in range(len(thresholds)): h.append(F.threshold(x[:, i], thresholds[i], 0)) return torch.stack(h, dim=1) Is there any more efficient way of achieving this? Thanks, Brad
st104956
Solved by ptrblck in post #5 I’m not sure what your shapes are, but could this work? a = torch.randn(3, 2) th = torch.randn(3, 1) print(a) > tensor([[ 1.1200, -0.1365], [-0.1328, 0.0842], [-0.5945, 0.6210]]) print(th) > tensor([[-0.7215], [ 1.1406], [-0.4683]]) idx = (a > th).float() print(a…
st104957
I think the stacking operation in your case is expensive. I checked the PRelu code and it seems to be done in C. But, I think doing the operation in place should make it fast enough. class ChannelwiseThreshold(nn.Module): def __init__(self): super(ChannelwiseThreshold, self).__init__() def forward(self, x, thresholds): for i in range(len(thresholds)): F.threshold_(x[:, i], thresholds[i], 0) return x
st104958
Sorry if I was unclear, but the thresholds do not need to be learned but x may be a Variable. If I try to use this code I get the following error: RuntimeError: one of the variables needed for the gradient computation has been modified by an inplace operation I assume this is due to repeated inplace operations on the Variable x.
st104959
Also, I tried removing the stack operation (just allocating a new Variable before entering the for loop) and it is still slow (from 45 seconds an epoch to 30 minutes an epoch). For larger layers, (e.g., C = 500), this translates to 500 calls to F.threshold for each forward pass in that layer.
st104960
I’m not sure what your shapes are, but could this work? a = torch.randn(3, 2) th = torch.randn(3, 1) print(a) > tensor([[ 1.1200, -0.1365], [-0.1328, 0.0842], [-0.5945, 0.6210]]) print(th) > tensor([[-0.7215], [ 1.1406], [-0.4683]]) idx = (a > th).float() print(a * idx) > tensor([[ 1.1200, -0.1365], [-0.0000, 0.0000], [-0.0000, 0.6210]])
st104961
Thanks! I think this could work. Do you know how to do this over say dim=1 in the case of 4 dimensions? a = torch.randn(3, 2, 4, 4) th = torch.randn(1, 2, 1, 1) idx = (a > th).float() #over dim=1 <idx would be shape (3,2,4,4)> I could always permute the data to make it two dimensions as in your example, but I think that would be a bit slower. Edit: Disregard this comment – I didn’t realize the semantics already work this way!
st104962
Hi, I’m using a set of transformers defined like this for the train_dataset: def train_transformer(): """ Train transformer. :return: a transformer """ transformer = transforms.Compose([ transforms.RandomCrop(size=(256, 256)), # randomly crop am image transforms.RandomRotation(degrees=5), # randomly rotate image transforms.RandomHorizontalFlip(), # randomly flip image vertically transforms.RandomVerticalFlip(), # randomly flip image horizontally transforms.ToTensor()]) # transform it into a torch tensor return transformer When I try to use it just before returning the sample (dict containing ‘image’ and ‘mask’), sample = {'image': image, 'mask': mask} if self.transform: sample = self.transform(sample) return sample I get the following error: AttributeError: Traceback (most recent call last): File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in <listcomp> samples = collate_fn([dataset[i] for i in batch_indices]) File "/project/geospatial/application/cs230-sifd/source/step/loader/sifd/dataset.py", line 375, in __getitem__ sample = self.transform(sample) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 49, in __call__ img = t(img) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 421, in __call__ i, j, h, w = self.get_params(img, self.size) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 394, in get_params w, h = img.size AttributeError: 'dict' object has no attribute 'size'
st104963
I assume self.transform is the transformer. You cannot apply the transformation on a dict. You should apply it on PIL.Images. So probably self.transform(sample['image']) will work. If you need the exact same transformation for your sample and mask, which seems to be the case, have a look at this post 118.
st104964
So, I need to define a transform method internal to my dataset class and use the transforms.Compose set of transforms to process the image and mask separately. What is the recommended/torch-framework-compatible way of returning images and labels from a dataset? Is it better to return them separately or to use a sample dict? The PyTorch tutorials use the sample dict approach: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html "Sample of our dataset will be a dict {'image': image, 'landmarks': landmarks}. Our dataset will take an optional argument transform so that any required processing can be applied on the sample." It's easy to miss the fact that torchvision transforms cannot handle both the image and mask in one go without going through the sources.
st104965
I think it depends on your coding style. Dicts could make it easier to pass data around. On the other side, as far as I know, no method handles dicts directly, so you would have to unpack them. I think the safest way is to include your transformations into the __getitem__ and make sure to apply the same transformation on the sample and mask.
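(A small sketch of the "same random transformation for image and mask" idea using the functional API; it assumes both inputs are PIL images and the names are hypothetical.)

import random
import torchvision.transforms.functional as TF
from torchvision import transforms

def joint_transform(image, mask):
    # sample the crop parameters once and apply them to both image and mask
    i, j, h, w = transforms.RandomCrop.get_params(image, output_size=(256, 256))
    image, mask = TF.crop(image, i, j, h, w), TF.crop(mask, i, j, h, w)
    if random.random() > 0.5:                     # same coin flip for both
        image, mask = TF.hflip(image), TF.hflip(mask)
    return TF.to_tensor(image), TF.to_tensor(mask)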
st104966
I added this method to my dataset class, which take the user-supplied transforms as an additional parameter: def _transform(self, image, mask, transform): """ Apply transforms to both the image and the mask. :param transform: A transform object. :return: transformed image and mask """ image = transform(image) mask = transform(mask) return image, mask I call it in my getitem() def __getitem__(self, index): <snip> """ Sample of our dataset will be dict {'image': image, 'mask': mask}. This dataset will take an optional argument transform so that any required processing can be applied on the sample. We will convert the scale and convert the sample to uint8 so that it can be viewed as a normal file. This will be useful for offline batch processing to generate a cached dataset. """ image = (image * 255).astype(np.uint8) mask = (mask * 255).astype(np.uint8) """ Apply user-specified transforms to image and mask. """ if self.transform: image, mask = self._transform(image, mask, self.transform) """ Sample of our dataset will be dict {'image': image, 'mask': mask}. This dataset will take an optional argument transform so that any required processing can be applied on the sample. """ sample = {'image': image, 'mask' : mask} return sample and when I try to use it, I get the following error: TypeError: Traceback (most recent call last): File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in <listcomp> samples = collate_fn([dataset[i] for i in batch_indices]) File "/project/geospatial/application/cs230-sifd/source/step/loader/sifd/dataset.py", line 388, in __getitem__ image, mask = self._transform(image, mask, self.transform) File "/project/geospatial/application/cs230-sifd/source/step/loader/sifd/dataset.py", line 189, in _transform image = transform(image) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 49, in __call__ img = t(img) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 421, in __call__ i, j, h, w = self.get_params(img, self.size) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py", line 394, in get_params w, h = img.size TypeError: 'int' object is not iterable
st104967
Try to convert your numpy image to a PIL.Image: import torchvision.transforms.functional as TF transform = transforms.RandomCrop(24) x = np.ones((3, 50, 50), dtype=np.uint8) x = x * 255 x = torch.from_numpy(x) x = TF.to_pil_image(x) transform(x) As a side note, I think your transformation won’t work correctly, since now transform is called sequentially on the image and mask, which will sample different random values for your random transformations. Your image might therefore be cropped using another position than the mask.
st104968
Oh, what should I do? I thought I’d resort to using this to save some time. I ran into some issues with PIL not being able to read a 10-ch binary mask from disk. I was planning on using the Augmenter library, which is really good at generating cached datasets. The image cropping works fine, but if the ground truths have more than I’m guessing 3 channels, PIL which is used internally by the Augmenter library fails. So, I thought I’d use PyTorch’s transforms routines to consistently crop, rotate and flip images from the dataset to feed the NN.
st104969
Well, torchvision’s transformation also build on top of PIL. I’m not sure if there is a workaround to use 10-channel images in PIL. Maybe you could load the 10-channel mask and slice it into 10 binary images? I’ve posted a link to an example using the functional API to transform the image and mask.
st104970
ptrblck: Maybe you could load the 10-channel mask and slice it into 10 binary images? This is my generate mask function. I was concatenating it and returning it to the dataset. def _generate_mask(self, sample_shapes, height, width, sample_grid): """ Generate ground truth. :param sample_shapes: :param height: :param width: :param sample_grid: :return: """ layers = [] # generate individual mask layers for class_id in range(1, self.classes, 1): layer = self._generate_class_truth_layer(shapes=sample_shapes[class_id], height=height, width=width, sample_grid=sample_grid) layers.append(np.expand_dims(layer, 2)) # concatenate individual mask layers mask = np.concatenate(layers, axis=2) # generate background mask if self.background_mask: background = np.ones((height, width, 1)) - np.expand_dims(np.logical_or.reduce((mask[:, :, :10] == 1), axis=2), axis=2) # first channel is the background mask mask = np.concatenate((background, mask), axis=2).astype(np.uint8) return mask This is the portion of my dataset that returns the image and mask towards the end. """ Mask generation """ mask = self.mask_generator.mask(id=id_, height=h, width=w) if mask is None: raise ValueError('Could not generate concatenated mask!') """ Sample of our dataset will be dict {'image': image, 'mask': mask}. This dataset will take an optional argument transform so that any required processing can be applied on the sample. We will convert the scale and convert the sample to uint8 so that it can be viewed as a normal file. This will be useful for offline batch processing to generate a cached dataset. """ image = (image * 255).astype(np.uint8) mask = (mask * 255).astype(np.uint8) image = torch.from_numpy(image) image = tvf.to_pil_image(image) mask = torch.from_numpy(mask) mask = tvf.to_pil_image(mask) """ Apply user-specified transforms to image and mask. """ if self.transform: image, mask = self._transform(image, mask, self.transform) """ Sample of our dataset will be dict {'image': image, 'mask': mask}. This dataset will take an optional argument transform so that any required processing can be applied on the sample. """ sample = {'image': image, 'mask' : mask} return sample So. we’re doing all this just because PIL can’t handle transformation of a 10-ch mask. If I do not use Torch transformers and write my own Numpy based transformers, it should work?
st104971
Yes it should work and also should be quite fast, since numpy conversions are nearly free, because they share the same data with the tensor. Also, have a look at @ncullen93’s gist 8. There might to be some useful transformations for you.
st104972
I tried this one, taken from the gist, def _transform(self, image, mask): """ Apply transforms to both the image and the mask. :param transform: A transform object. :return: transformed image and mask """ # image ordering img_row_axis = 0 img_col_axis = 1 channel_axis = 2 # random crop c_h = self.preprocessing_params['image']['crop']['height'] c_w = self.preprocessing_params['image']['crop']['width'] image, mask = random_crop(image, mask, size=(c_h, c_w)) if self.mode == 'train' or self.mode == 'dev': # random horizontal flip if np.random.random() > 0.5: image = np.asarray(image).swapaxes(img_col_axis, 0) image = image[::-1, ...] image = image.swapaxes(0, img_col_axis) mask = np.asarray(mask).swapaxes(img_col_axis, 0) mask = mask[::-1, ...] mask = mask.swapaxes(0, img_col_axis) # transform to tensor image = image_to_tensor(image) mask = binary_mask_to_tensor(mask, threshold=0.5) return image, mask but it throws an error: ValueError: Traceback (most recent call last): File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/tool/python/conda/env/gis36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in <listcomp> samples = collate_fn([dataset[i] for i in batch_indices]) File "/project/geospatial/application/cs230-sifd/source/step/loader/sifd/dataset.py", line 441, in __getitem__ image, mask = self._transform(image, mask) File "/project/geospatial/application/cs230-sifd/source/step/loader/sifd/dataset.py", line 236, in _transform image = image_to_tensor(image) File "/project/geospatial/application/cs230-sifd/source/step/preprocessing/image/tensor.py", line 48, in image_to_tensor tensor = torch.from_numpy(image).float() ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future releases. The random crop operation works fine. The random horizontal flip operation fails just before the tensor conversion operation. The tensor conversion functions are as follows: def image_to_tensor(image): """ Transform an numpy image to a torch tensor. We will have to swap the channel axis because numpy uses channel last ordering and torch uses channel first ordering. - numpy image: H x W x C - torch image: C X H X W :param image (np.ndarray): Input image. :return: tensor: A PyTorch tensor. """ image = image.transpose(2, 0, 1) tensor = torch.from_numpy(image).float() return tensor def binary_mask_to_tensor(mask, threshold): """ Transform a binary mask to a tensor. We will have to swap the channel axis because numpy uses channel last ordering and torch uses channel first ordering. - numpy image: H x W x C - torch image: C X H X W :param mask (np.ndarray): A binary mask array, usually of type uint8. :param threshold: The threshold used to consider if the mask is present. :return: tensor: A PyTorch tensor. """ mask = mask.transpose(2, 0, 1) mask = binarize(mask, threshold).astype(np.float32) tensor = torch.from_numpy(mask).float() return tensor
st104973
Try to add .copy() to the numpy arrays with negative strides, e.g. here: image = image[::-1, ...].copy() This will copy the data and make it contiguous again.
st104974
Can anyone help me in converting a vector of vectors to a Tensor format in the most efficient way? Using Accessor to write into the tensor is taking a lot of time.
st104975
Using negative indexes on a Tensor along anything but the first dimension seems to circularly shift the entries of the slice by one. For example: import torch import numpy as np A = np.arange(15).reshape(3,5) B = torch.Tensor(A) idx = [-1,0,1] Then taking slices along the first dimension gives the same thing as numpy A[idx,:] Out: array([[10, 11, 12, 13, 14], [ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9]]) B[idx,:] Out: tensor([[ 10., 11., 12., 13., 14.], [ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.]]) but if you take slices along the next dimension the slice with the negative index gets circularly shifted by one entry A[:,idx] Out: array([[ 4, 0, 1], [ 9, 5, 6], [14, 10, 11]]) B[:,idx] Out: tensor([[ 14., 0., 1.], [ 4., 5., 6.], [ 9., 10., 11.]]) Is this intentional? I couldn’t find much documentation of Tensor indexing and the 60-minute blitz claims things should work the same as numpy.
st104976
This was the warning that came when I called backward function File "/home/anurag/naman/pytorch3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward variables, grad_variables, retain_graph) RuntimeError: Expected object of type Variable[torch.FloatTensor] but found type Variable[torch.cuda.FloatTensor] for argument #1 'mat2' As I know this happens when model or input are non cuda but since forward worked so this is new to me. Can anyone help out?
st104977
Solved: I created a Variable inside my model with torch.zeros(…) without .cuda(), so one of the outputs was non-CUDA. This code worked in PyTorch 0.4!
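(For reference, a tiny sketch of the usual fix: allocate new tensors on the same device and dtype as the incoming data, so the module works on CPU and GPU alike.)

import torch

def make_state(x, hidden_size):
    # buffer follows the input's device/dtype instead of defaulting to CPU
    return torch.zeros(x.size(0), hidden_size, device=x.device, dtype=x.dtype)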
st104978
Hi! Is there any recommended way of storing preprocessed tensors? I found a tutorial where torch.save was used to store the tensors (the training data, not the model) and I did the same, but the resulting files are about 3x bigger than the corresponding tfrecord (22 MB vs. 7 MB). So, I would like to know if there is any other recommended way of storing preprocessed tensors.
st104979
I'm not sure, but how about np.save or np.savez (doc)? You know, we can easily convert torch.Tensor <-> numpy.ndarray.
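(A small sketch of both options side by side, with placeholder paths; numpy's compressed format is often noticeably smaller than a plain torch.save.)

import numpy as np
import torch

t = torch.randn(1000, 128)
torch.save(t, 'data.pt')                          # pickle-based, uncompressed
np.savez_compressed('data.npz', x=t.numpy())      # zlib-compressed archive
t_back = torch.from_numpy(np.load('data.npz')['x'])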
st104980
I have defined a model which performs convolutions over a batch of character-sequences. kernals = [3, 4, 5, 6] cnns = [] for k in kernals: seq = nn.Sequential( nn.Conv1d(char_embed_size, output_size // 4, k, padding=0), nn.Tanh(), nn.MaxPool1d(max_seq_length - k + 1) ) cnns.append(seq) self.cnns = nn.ModuleList(cnns) In my forward method I obtain a representation for each sequence using: def forward(self, char_emb): #char_emb has shape (batch_size, char_emb_size, max_seq_len) tmp = [cnn(char_emb).squeeze() for cnn in self.cnns] seq_representations = torch.cat(tmp, dim=1) return seq_representations Is there a way to avoid the synchronous loop [cnn(char_emb).squeeze() for cnn in self.cnns] and have all the cnns in self.cnns to parallelly perform convolutions over the input?
st104981
CUDA kernels are run asynchronously, so your list comprehension isn’t a synchronization point.
st104982
But I see a slow down per kernel added. Is that just overhead due to other factors? I don’t have exact timing numbers but 1 kernel is definitely lot faster than 4.
st104983
Oh right, you should run them on different CUDA streams: https://pytorch.org/docs/master/cuda.html#streams-and-events. But in general you should expect slowdowns when parallelizing tasks: there will always be overhead, and usually not all tasks can run at 100% speed.
st104984
Thanks for the quick reply! thanks for the pointer regarding streams and events. It’s not clear (I did a quick read through) Are there any working examples of streams you can refer me to?
st104985
Within a CUDA stream, kernels are run sequentially. But different streams are run in parallel. By default all ops are run on stream 0, so I suggest you can try running each forward pass in a separate stream.
st104986
Thanks again! I am now doing def forward(self, emb): #old way tmp = [cnn(emb).squeeze() for cnn in self.cnns] seq_representation = torch.cat(tmp, dim=1) #new way stream_tmp = [] streams = [(idx, torch.cuda.Stream()) for idx, cnn in enumerate(self.cnns)] for idx, s in streams: with torch.cuda.stream(s): cnn = self.cnns[idx] #<--- how to ensure idx is in sync with the idx in for loop? stream_tmp.append((idx, cnn(emb).squeeze())) stream_tmp = [t for idx, t in sorted(stream_tmp)] seq_representation_stream = torch.cat(stream_tmp, dim=1) #comparing the two diff = abs(seq_representation_stream - seq_representation).sum().data[0]) print(diff) assert diff == 0.0 return seq_representation In some random batches the assert fails (the diff is very large > 1000 so its not a rounding error). I am pretty sure it is because the idx in the for loop is not in sync with the idx inside the with torch.cuda.stream(s) block. Sorry this is more of a python question that a pytorch question – but from the documentation, it is not clear how to open multiple streams and concat their results.
st104987
You should synchronize all the streams after the for loop (torch.cuda.synchronize). Because the cat is run on default stream, when it is run, other streams may not have finished.
st104988
I added torch.cuda.synchronize() as you said (that could have been the problem some of the time) but the assertion still fails on some random batches. I suspect that idx is getting mixed up between the for loop and the with code block. And that will make my concat happen in the wrong order (since I’m using idx to sort the intermediate results) def forward(self, emb): #old way tmp = [cnn(emb).squeeze() for cnn in self.cnns] seq_representation = torch.cat(tmp, dim=1) #new way stream_tmp = [] streams = [(idx, torch.cuda.Stream()) for idx, cnn in enumerate(self.cnns)] for idx, s in streams: with torch.cuda.stream(s): cnn = self.cnns[idx] #<--- how to ensure idx is in sync with the idx in for loop? stream_tmp.append((idx, cnn(emb).squeeze())) torch.cuda.synchronize() # added synchronize stream_tmp = [t for idx, t in sorted(stream_tmp)] seq_representation_stream = torch.cat(stream_tmp, dim=1) #comparing the two diff = abs(seq_representation_stream - seq_representation).sum().data[0]) print(diff) assert diff == 0.0 return seq_representation
st104989
FileNotFoundError: [Errno 2] No such file or directory: ‘/food/data/public_datasets/Food/train_set/train_015286.jpg’ Exception ignored in: <bound method _DataLoaderIter.del of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fa2a5368ba8>> Traceback (most recent call last): File “/food/home/kunal/miniconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py”, line 349, in del File “/food/home/kunal/miniconda3/envs/deep-learning/lib/python3.6/site-packages/torch/utils/data/dataloader.py”, line 322, in _shutdown_workers File “/food/home/kunal/miniconda3/envs/deep-learning/lib/python3.6/threading.py”, line 521, in set File “/food/home/kunal/miniconda3/envs/deep-learning/lib/python3.6/threading.py”, line 364, in notify_all File “/food/home/kunal/miniconda3/envs/deep-learning/lib/python3.6/threading.py”, line 347, in notify TypeError: ‘NoneType’ object is not callable Can some one make out something from the error?
st104990
I am trying to vectorize some code, however it is proving to be a lot more difficult than I initially expected. Any help would be greatly appreciated. My first objective is to create a vector with the same dimensions of another vector, however add 1 more dimension. Right now I have this and it seems to generalize quite well to different dimension sizes: given some tensor a that has size = [a, b, c, … , n] (n dimensions), b = torch.zeros_like(a.unsqueeze(0)) b = b.numpy() b = np.repeat(b, 3, axis = 0) b = torch.from_numpy(b.squeeze()) now I end up with a tensor b with size = [3, a, b, …, n], an n+1 dimensional tensor. My issue arises with the following bit. If i have another tensor, say c = [0,1,2], how can I have it so that when I multiply c by b element wise, the multiplication results in something like this, b[0,:,:, … ,:] will be multiplied by all 0’s, b[1,:,:, … ,:] will be multiplied by all 1’s and so on? i.e i multiple each element of the 3 (initially identical) tensors in the 0th dimension by 0, 1 and 2 respectively? I’ve tried using the expand function, but what I’m doing doesn’t work well. Again, any help would be much appreciated!
st104991
This should do the trick:

b = torch.ones(3, 5, 5, 5)
c = torch.tensor([0., 1., 2.])
d = b * c.view(-1, 1, 1, 1)
st104992
ptrblck: b = torch.ones(3, 5, 5, 5) c = torch.tensor([0., 1., 2.]) d = b * c.view(-1, 1, 1, 1) Hi @ptrblck, Thank for the quick response! Much appreciated! Part of the reason why I’ve gone about this in such a confusing way is because I won’t know the dim of the initial tensor a. That’s where I run into troubles using view.
st104993
Pretty ugly, but should work: d = b * c.view([-1] + [1] * (b.ndimension() - 1)) I try to come up with a cleaner solution.
st104994
Describe the issue If you use torch.ByteTensor as a constructor and pass it a numpy array with dtype np.bool, the returned array contains the value of the underlying byte, not 0 and 1’s (as expected right?). Am I missing part of the picture? Reproduce the issue python 3.6, torch 0.3.1 In [7]: array = np.random.rand(30) > .5 In [8]: torch.ByteTensor(array) Out[8]: 0 0 0 0 0 0 0 16 103 209 153 179 255 7 0 80 2 0 78 55 251 127 0 0 237 204 134 124 63 63 [torch.ByteTensor of size 30] In [9]: array Out[9]: array([ True, False, True, True, True, True, True, False, True, True, True, True, True, False, False, True, True, False, True, True, True, True, False, True, False, True, True, False, False, True])
st104995
Hi, You should use torch.from_numpy() to create tensor from numpy arrays to avoid such issues.
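(A small sketch of that route; on 0.3/0.4 it is safest to cast np.bool arrays to uint8 first, since bool arrays are not directly supported.)

import numpy as np
import torch

array = np.random.rand(30) > .5
t = torch.from_numpy(array.astype(np.uint8))   # ByteTensor of 0s and 1s, values preserved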
st104996
I understand that this is a solution, but do we really want to keep such a behavior? It can never be intended, and it is unclear from the documentation that torch.ByteTensor should not be used this way. Instead of doing unsafe casting, should we just raise an error when passing a boolean array to torch.ByteTensor?
st104997
Hi, The problem is that numpy array behave like a sequence. So if your function supports things like torch.ByteTensor([0, 1, 2]), then the numpy array will be converted to look like a sequence and a Tensor will be created from it. Unfortunately, during this conversion, types can change as it’s going through python objects. I’m not sure we can easily change this behavior.
st104998
I’m trying to implement a random projection using the Fastfood algorithm (Le, Quoc, Tamás Sarlós, and Alex Smola, “Fastfood – approximating kernel expansions in loglinear time”). Essentially, what I am trying to do is implicitly multiply a vector v by a random square Gaussian matrix M, whose side is equal to a power of two. The matrix is factorized into multiple matrices: M = HGΠHB, where B is a random diagonal matrix with entries ±1 with equal probability, H is a Hadamard matrix, Π is a random permutation matrix, and G is a random diagonal matrix with independent standard normal entries. Multiplication by a Hadamard matrix can be done via the Fast Walsh-Hadamard Transform in O(d log d) time and takes no additional space. The other matrices have linear time and space complexities. It is crucial for my code that gradients propagate through this whole sequence of operations. Does somebody have an idea of how to implement this? I tried to look around, but could not find any PyTorch implementation of Fastfood or of the Fast Walsh-Hadamard Transform.
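(In case it helps, a differentiable Fast Walsh-Hadamard Transform can be sketched with plain tensor ops, so no custom autograd Function is needed; this version is unnormalized and requires the last dimension to be a power of two. The remaining B, G and Π factors are just elementwise multiplications and an index_select with a random permutation, which are differentiable as well.)

import torch

def fwht(x):
    # Fast Walsh-Hadamard Transform over the last dimension in O(d log d),
    # built only from views/stacks so gradients flow through it.
    d = x.size(-1)
    assert d & (d - 1) == 0, 'last dimension must be a power of two'
    orig_shape = x.shape
    x = x.contiguous().view(-1, d)
    h = 1
    while h < d:
        y = x.view(-1, d // (2 * h), 2, h)
        a, b = y[:, :, 0, :], y[:, :, 1, :]
        x = torch.stack((a + b, a - b), dim=2).view(-1, d)
        h *= 2
    return x.view(orig_shape)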
st104999
I have 2 dense linear net (model1 and model2) with 30 neurons as the output for each. With the following forward function the loss does not reduce but when I change it to single net with single output neuron and hardtanh activation then the loss starts decreasing. Is there anything wrong with my forward function? def forward(self, x1, x2): tens1 = self.model1(x1) tens2 = self.model2(x2) distance = torch.sqrt(torch.sum(( tens1 - tens2) ** 2, 1)) return nn.Hardtanh(0.001, 1)(distance) Does the way I have defined my forward function make any issue with the gradient back propagation ? Thanks P.S. I could narrow my problem down to the fact that it as something to do with the numerical calculations because I get the same issue with distance = torch.sqrt(torch.sum(tens1 ** 2, 1)) return nn.Hardtanh(0.001, 1)(distance) or distance = torch.sqrt(torch.sum(torch.abs(tens1), 1)) With the power operation I could get it to work with modifying learning rate, but with abs function no matter what rate I choose it doesn’t decrease! Can anyone tell me what I am doing wrong please? I think I figured it out myself. It is all about the range of values and learning rate. Playing with those parameters and ranges will fix the issue.
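(One more note on the numerical side, as a hedged sketch using the variable names from the post above: torch.sqrt has an unbounded gradient at exactly zero, so if the two embeddings ever coincide the backward pass can blow up; adding a small epsilon under the square root is a common workaround.)

distance = torch.sqrt(torch.sum((tens1 - tens2) ** 2, 1) + 1e-8)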