st81068
nn.Transformer was added in PyTorch 1.2.0, so you should update to the latest stable version. Have a look at the install instructions for the binaries.
st81069
I’d like to create some conv filters with orientation so that they can capture directional information. The vertical and horizontal filters are easy to implement by changing the aspect ratio. But how can I create directional filters, such as diagonal ones?
st81070
Solved by phan_phan in post #7 That is why (in the diagonal case), you can create a parameter w with only the size: [out_channels, in_channels, kernel_size] (for example : [16, 32, 3]). This parameter will only contain the useful diagonal information, without the useless zeros. So at the optimization step, only these floats wil…
st81071
Yes. An alternative solution is rotating the feature map manually. I wonder if there are other simple solutions?
st81072
You can create an nn.Parameter w with a size [out_channels, in_channels, kernel_size]. Then torch.diag_embed(w) will have a size [out_channels, in_channels, kernel_size, kernel_size], in which each square kernel is diagonal. So you can use nn.functional.conv2d(x, torch.diag_embed(w)).
st81073
Thanks for your reply; I have another question. In the forward pass, the filter does collect the directional information. But the backward pass still computes gradients that change the zeros in the original filter. So this solution cannot keep the shape of the filter?
st81074
That is why (in the diagonal case) you can create a parameter w with only the size [out_channels, in_channels, kernel_size] (for example [16, 32, 3]). This parameter will only contain the useful diagonal information, without the useless zeros, so at the optimization step only these floats will be updated. It is just the tensor torch.diag_embed(w) which also contains the zeros; this one has the size [out_channels, in_channels, kernel_size, kernel_size] (for example [16, 32, 3, 3]). And in the general case, I’m sure you can always find a way to create a parameter with only the useful floats (that need to be updated), and then transform it into the form you need.
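A minimal sketch of this idea (the module name, initialization, and padding choice are illustrative, not from the original post):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagonalConv2d(nn.Module):
    # 2D convolution whose kernels are constrained to be diagonal
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # only the diagonal entries are learnable parameters
        self.w = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # expand to full [out, in, k, k] kernels; the off-diagonal zeros are
        # created on the fly and are never parameters, so they stay zero
        weight = torch.diag_embed(self.w)
        return F.conv2d(x, weight, self.bias, padding=weight.shape[-1] // 2)

conv = DiagonalConv2d(32, 16, 3)
out = conv(torch.randn(8, 32, 28, 28))  # -> [8, 16, 28, 28]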
st81075
At the beginning of a training session, the Adam optimizer takes quite some time to find a good learning rate. I would like to accelerate my training by starting a training session with the learning rate Adam adapted to within the last training session. Therefore, I would like to print out the current learning rate that PyTorch’s Adam optimizer adapts to during a training session. Thanks for your help.
st81076
for param_group in optimizer.param_groups:
    print(param_group['lr'])

should do the job
st81077
There is a paper that proposes tuning the learning rate using the gradient of the update rule with respect to the learning rate itself. Basically, it dynamically learns the learning rate during training. If you are interested, here is the paper https://arxiv.org/abs/1703.04782 and my implementation https://github.com/jpeg729/pytorch_bits/blob/master/optim/adam_hd.py#L67
st81078
Thanks for the quick response, but unfortunately this command just prints out the initial learning rate and not the current adapted one. It would really surprise me if it weren’t possible to get the adapted learning rate somehow…
st81079
At the moment I am just using two input images for my training. When I start a training session with the network pretrained by me, the error increases by some orders of magnitude (from a few hundred to 10,000 up to 40,000) and then comes back to the level where it was at the end of the last session. Through all this, the learning rate printed out on the console is always the same initial one, which makes no sense to me. I don’t know what else could be the reason for this big temporary fluctuation of the error.
st81080
Did you save the optimizer state with the model? Different optimizers tend to find different solutions, so changing optimizers or resetting their state can perturb training. That is why it can be important to not only save the model parameters but also the optimizer state. Of course, in this case, that might have nothing to do with the fluctuation of the error that you are seeing.
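A minimal sketch of saving and restoring both states (the model, optimizer, and file name here are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# ... train for a while ...

# save model and optimizer state together
torch.save({'model': model.state_dict(),
            'optimizer': optimizer.state_dict()}, 'checkpoint.pth')

# later: restore both before resuming training
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])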
st81081
Adam has a separate learning rate for each parameter. The param_group['lr'] is a kind of base learning rate that does not change. There is no variable in the PyTorch Adam implementation that stores the dynamic learning rates. One could save the optimizer state, as mentioned in the thread "Saving and loading a model in Pytorch?". The PyTorch implementation of Adam can be found here: https://pytorch.org/docs/stable/_modules/torch/optim/adam.html The line for p in group['params']: iterates over all the parameters and calculates the learning rate for each parameter. I found this helpful: http://ruder.io/optimizing-gradient-descent/index.html#adam
st81082
Jonathan_R_Williford: dynamic learning rates Sorry guys, but these comments seem very misleading! Optimizers have a fixed learning rate for all parameters. param_group['lr'] would allow you to set a different LR for each layer of the network, but it’s generally not used very often, and most people have 1 single LR for the whole nn. What Adam does is to save a running average of the gradients for each parameter (not a LR!). The learning rate is still the same throughout the whole training, unless you use a lr_scheduler like CosineAnnealingLR etc.
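If you do want to inspect the adaptive scaling Adam applies, you can reconstruct an approximate per-parameter effective step size from the optimizer state. The sketch below assumes the stock Adam state keys exp_avg_sq and step and the default hyperparameter names; it is illustrative, not part of the original discussion.

import math
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one dummy step so the optimizer state is populated
model(torch.randn(8, 3)).pow(2).mean().backward()
optimizer.step()

for group in optimizer.param_groups:
    lr, (beta1, beta2), eps = group['lr'], group['betas'], group['eps']
    for p in group['params']:
        state = optimizer.state[p]
        step = int(state['step'])
        bias_correction1 = 1 - beta1 ** step
        bias_correction2 = 1 - beta2 ** step
        denom = state['exp_avg_sq'].sqrt() / math.sqrt(bias_correction2) + eps
        # per-element step size actually applied to this parameter
        effective_step = lr / bias_correction1 / denom
        print(p.shape, effective_step.mean().item())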
st81083
Hi, I am training a 7-layer fully connected network with 48 neurons in each hidden layer (which gives 14,353 learnable parameters). My data has 3 input features and 1 output, and the data size is around 51,230. I am using a DataLoader with 20 batches. However, the improvement from CPU to GPU is only a 30-40% reduction in training time. After experimentation, I have noticed that the GPU only gives a significant time improvement if the total number of learnable parameters is significantly increased, say to the order of millions (then the training time is reduced around 7 times). Can we not achieve a significant benefit from the GPU for an NN model with 14,353 parameters? Overall, if I train for 200 epochs, these are the time comparisons:
For 14k model parameters: CPU: 5.7 min, GPU: 4.1 min
For 3.54 million model parameters: CPU: 31.0 min, GPU: 4.27 min
Is there any other way I can reduce my training time for the model with around 14k parameters?
st81084
If you are working with a small dataset, you could preload the whole dataset and push it to the GPU before training, to avoid loading times. Also, make sure not to create any unnecessary synchronization points in your training loop, e.g. printing the loss often.
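For example, a sketch of preloading a small dataset onto the GPU once and slicing batches from it directly (tensor names and sizes are placeholders):

import torch

device = torch.device('cuda')

# create / load the full dataset once and move it to the GPU up front
dataX = torch.randn(51230, 3, device=device)
dataY = torch.randn(51230, 1, device=device)

batch_size = 2566
for start in range(0, dataX.size(0), batch_size):
    # slicing GPU tensors avoids the per-sample CPU work of a Dataset/DataLoader
    features = dataX[start:start + batch_size]
    target = dataY[start:start + batch_size]
    # forward / backward / step ...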
st81085
Since the whole data has been transferred to the GPU before creating the dataset and train loader, I think it’s not having any loading time issues. pin_memory wouldn’t work for data already on the GPU, if that’s what you’re suggesting. Also, I have tried to minimize loss printing. Here are the major parts of the code for clarity.

class Net(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(Net, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, H)
        self.linear3 = torch.nn.Linear(H, H)
        self.linear4 = torch.nn.Linear(H, H)
        self.linear5 = torch.nn.Linear(H, H)
        self.linear6 = torch.nn.Linear(H, H)
        self.linear7 = torch.nn.Linear(H, H)
        self.linear8 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        out = F.relu(self.linear1(x))
        out = F.relu(self.linear2(out))
        out = F.relu(self.linear3(out))
        out = F.relu(self.linear4(out))
        out = F.relu(self.linear5(out))
        out = F.relu(self.linear6(out))
        out = F.relu(self.linear7(out))
        y_pred = self.linear8(out)
        return y_pred

D_in, H, D_out = 3, 768, 1
model = Net(D_in, H, D_out)
criterion = nn.MSELoss(reduction='sum')
optimizer = Adam(model.parameters(), lr=5e-4)

device = torch.device('cuda')
model.to(device)
dataX = dataX.to(device)
dataY = dataY.to(device)

dataset = TensorDataset(dataX, dataY)
training_batches = 20
batch_size_train = int(len(dataX)/training_batches) + 1
train_loader = DataLoader(dataset, batch_size=batch_size_train, shuffle=True)

start_time = time.time()
for epoch in range(201):
    running_loss = 0.0
    for i, data in enumerate(train_loader):
        features, target = data
        optimizer.zero_grad()
        forward = model(features)
        loss = criterion(forward, target)
        if epoch % 100 == 0:
            running_loss += loss.item()
        loss.backward()
        optimizer.step()
    if epoch % 100 == 0:
        print('Epoch: {}, Training Loss: {:.2e}'.format(epoch, running_loss/training_batches))
elapsed = time.time() - start_time
print('GPU Time: {:.2f} min'.format(elapsed/60))
st81086
I’ve added torch.cuda.synchronize() before starting and stopping the timer and used some dummy data:

dataX = torch.randn(1000, 3).to(device)
dataY = torch.randn(1000, 1).to(device)

CPU: 0.44854 seconds/epoch
GPU (TitanV): 0.0877 seconds/epoch

Based on your estimate (4.1 minutes for 200 epochs), it seems that each epoch takes (4.1*60)/200 = 1.22 seconds on your GPU. Which GPU, CUDA, cudnn, and PyTorch versions are you using?
st81087
I have tried the same dummy data with 1000 samples, and the GPU I am using does it in 0.117 seconds/epoch.
GPU: Tesla P100-SXM2-16GB
CUDA driver version: 10010
cudnn: 7.5.1_10.1
PyTorch: 1.1.0
Could the issue be with the data I am loading from files? I am converting it to a FloatTensor from a pandas dataframe using the command
dataX = torch.tensor(dataX.values).to(dtype=torch.float)
The data is available in a .csv text file. Numbers are stored in this format:
2.489500679193377281e-03,0.000000000000000000e+00,3.944573057764559962e+02,1.833216381585000068e-02
Thanks.
st81088
This shouldn’t make a speed difference, but could you try to use torch.from_numpy instead of torch.tensor to create dataX?
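For example (assuming dataX is still the pandas DataFrame loaded from the CSV; the small frame below just stands in for it):

import pandas as pd
import torch

dataX = pd.DataFrame([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
dataX = torch.from_numpy(dataX.values).to(dtype=torch.float)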
st81089
If I understand the issue correctly at the moment, you are seeing a time of 0.117 s/epoch using random data on the GPU and 1.22 s/epoch if you use your real data? We are aware of denormal values, which might slow down the execution on the CPU, but this shouldn’t be the case on the GPU. Could you nevertheless set torch.set_flush_denormal(True) and time it again?
st81090
0.117 s/epoch was the time for 1000 samples of random dummy data, while my real data size is 51320 and 1.22 s/epoch corresponds to that. My concern is with the comparison of CPU & GPU time for this real data of size 51320, in which I am not getting a significant reduction in training time with the GPU. (Also, if I increase the size of the random dummy data to 51320, the times are almost the same.)
For 14k model parameters: CPU: 5.7 min, GPU: 4.1 min
I am not sure if I am doing everything I can to get the maximum benefit from the GPU. Isn’t the GPU benefit significant for a model with only 14k parameters?
st81091
If you change the data size to 51320, the batch size will increase to 2566, as you are calculating it dynamically using:

training_batches = 20
batch_size_train = int(len(dataX)/training_batches)

Running the script with this batch size gives:
GPU: 0.44 s/epoch
CPU: 3.27 s/epoch
st81092
Thanks a lot @ptrblck_de for your help throughout the discussion. Here’s an enigma. Although the GPU (Tesla P100) provides better times for a data size of 1000, the training time increases rapidly as the data size increases; the corresponding times are:
1k: 0.117 s/epoch
11k: 0.327 s/epoch
21k: 0.546 s/epoch
31k: 0.762 s/epoch
41k: 0.981 s/epoch
51k: 1.206 s/epoch
Do you understand why there is so much difference in the time increase for two different GPUs? Also, I tried code from a research paper written with TensorFlow. Having made no changes to it, training time on the Tesla P100 was 6.3 min, while it’s mentioned in the paper that an NVIDIA Titan X did it in 1 min. I can’t figure out any reason, since I think the P100 is more powerful.
st81093
I can rerun these tests tomorrow on a few GPUs and report some numbers. In the meantime, could you update PyTorch to the latest stable release (1.2.0) so that we get comparable numbers?
st81094
I am unable to update it to version 1.2.0. I have tried updating it using these commands:

conda update pytorch
conda install pytorch=1.2.0 -c pytorch

I am running it on the university’s computing resources, where I access the GPU remotely, and I think I am not allowed to update packages freely.
st81095
If you can run docker, you could try to run the code inside it. Here are my results:

P100, 16GB
N = 1000   GPU Time: 0.076056 s/epoch
N = 11000  GPU Time: 0.148172 s/epoch
N = 21000  GPU Time: 0.220223 s/epoch
N = 31000  GPU Time: 0.299151 s/epoch
N = 41000  GPU Time: 0.368989 s/epoch
N = 51000  GPU Time: 0.448879 s/epoch

V100, 16GB
N = 1000   GPU Time: 0.081930 s/epoch
N = 11000  GPU Time: 0.147144 s/epoch
N = 21000  GPU Time: 0.254728 s/epoch
N = 31000  GPU Time: 0.282772 s/epoch
N = 41000  GPU Time: 0.382091 s/epoch
N = 51000  GPU Time: 0.440113 s/epoch

Note that I just executed the code once for each config.
st81096
So that’s a concern. Your results are as expected. I am not sure why they aren’t like that for me. What’s the clock speed on your GPUs?
st81097
On an unrelated note, I think it is worth mentioning that this many linear layers in sequence is not advisable in most cases. You will probably be better off using two linear layers (in total) and playing with the number of neurons in the first linear layer.
st81098
If you want more sensible comparisons, I’d fix the batch size at the same value regardless of the dataset size. I’m not sure why you’d want a fixed number of batches per epoch; not doing this will result in changing GPU utilization with different dataset sizes. I wouldn’t be surprised if the performance, even on the GPU, is CPU-bound due to Dataset/DataLoader Python overhead. Calling __getitem__ per batch element, many thousands of times, will chew up a lot of CPU time when done in Python. I’d try building a random index and building batches from the X, Y tensors manually.
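A sketch of that idea, assuming dataX and dataY already live on the GPU (names and sizes are placeholders):

import torch

device = torch.device('cuda')
dataX = torch.randn(51320, 3, device=device)
dataY = torch.randn(51320, 1, device=device)

batch_size = 2566
for epoch in range(200):
    # one random permutation per epoch replaces the DataLoader's shuffling
    perm = torch.randperm(dataX.size(0), device=device)
    for start in range(0, dataX.size(0), batch_size):
        idx = perm[start:start + batch_size]
        features, target = dataX[idx], dataY[idx]
        # forward / backward / step ...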
st81099
So I have worked with the administrators of the computing resources, and it turns out that the P100 GPU I was using wasn’t performing optimally for some reason. The same GPU on another system gave performance comparable to what you mentioned. Many thanks for your time and support.
st81100
It doesn’t make much difference, except that increasing the number of batches for the same data size increases the computing time. Also, the whole dataset is already on the GPU before training starts.
st81101
Hi, I have a 3D MRI image of size (128,128,128) as input to my model. When it enters the model it has the shape (8, 4, 128, 128, 128), which is (Batch, Channels, H, W, D). I would like to separate the channels and perform a convolution on blocks of (32,32,32) for this (128,128,128) input. Then I wish to take the conv weights, multiply them with the input values to the conv, and remap them to a (128,128,128) block. My current inefficient solution (using many for loops and scikit-image) is below; however, it takes too long and requires too much memory. What’s the best way to do this?

from skimage.util.shape import view_as_blocks

class LFBlock(nn.Module):
    def __init__(self, input_shape=(128,128,128), kernel_size=(1,1,1), blk_div=4):
        super(LFBlock, self).__init__()
        # Divides the (128,128,128)//4 -> (32,32,32)
        self.block_shape = (input_shape[0]//blk_div, input_shape[1]//blk_div, input_shape[2]//blk_div)
        self.num_blocks = (input_shape[0]//self.block_shape[0])*(input_shape[0]//self.block_shape[0])*\
            (input_shape[0]//self.block_shape[0])

        conv_list = []
        for n in range(self.num_blocks):
            conv_list.append(nn.Conv3d(1, 1, kernel_size=kernel_size, stride=1, padding=0, bias=True))
        self.conv1x1s = nn.ModuleList(conv_list)

    def forward(self, lf_in):
        # Batch
        for i in range(lf_in.shape[0]):
            # Modality
            for ch in range(lf_in.shape[1]):
                x_lf = lf_in[i, ch, :]
                lf_blocks = view_as_blocks(x_lf.cpu().numpy(), block_shape=self.block_shape)

                # Do Conv3d on each block
                for x in range(len(lf_blocks)):
                    for y in range(len(lf_blocks)):
                        for z in range(len(lf_blocks)):
                            conv_idx = x*len(lf_blocks) + y*len(lf_blocks) + z
                            # Convolve the block, then multiply with the weight of the block.
                            tensor_img = torch.from_numpy(lf_blocks[x, y, z])[None, None, :]
                            conv = self.conv1x1s[conv_idx](tensor_img.cuda())
                            # w * x. view_as_blocks returns a view so modifications are done in-place
                            lf_blocks[x, y, z] = tensor_img.cpu()*self.conv1x1s[conv_idx].weight.data.cpu()

        # Linearly sum the modalities together
        # out = w0*x0 + w1*x1 + w2*x2 + w3*x3
        out = (lf_in[:,0] + lf_in[:,1] + lf_in[:,2] + lf_in[:,3])[:,None]
        return out

Any help is appreciated. Thank you!
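Since each per-block conv here is 1x1x1 with a single channel, one possible vectorized alternative (a sketch, not the poster’s code; the module name and shapes are made up) is to hold one scalar weight per block and broadcast it over a reshaped view of the volume, so everything stays on the GPU with no Python loops or numpy round-trips:

import torch
import torch.nn as nn

class BlockScale3d(nn.Module):
    # per-block scalar weight over non-overlapping (32,32,32) blocks of a (128,128,128) volume
    def __init__(self, input_size=128, block=32):
        super().__init__()
        self.block = block
        n = input_size // block
        self.weight = nn.Parameter(torch.ones(n, n, n))

    def forward(self, x):
        # x: (batch, channels, 128, 128, 128)
        b, c, d, h, w = x.shape
        k, n = self.block, d // self.block
        # split every spatial dim into (n_blocks, block_size)
        x = x.view(b, c, n, k, n, k, n, k)
        # broadcast one scalar weight per (block_x, block_y, block_z)
        x = x * self.weight[None, None, :, None, :, None, :, None]
        return x.view(b, c, d, h, w)

blockscale = BlockScale3d()
out = blockscale(torch.randn(2, 4, 128, 128, 128))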
st81102
k = torch.tensor([2, 1])
print(k.size())
k = k.contiguous().view(1, 2)  # 1
# or
k = k.reshape(1, 2)  # 2
st81103
Solved by albanD in post #2 The one you prefer .reshape() has been added recently to help users more used to numpy.
st81104
The one you prefer .reshape() has been added recently to help users more used to numpy.
st81105
Solved by albanD in post #2 I think this is mostly historical as .view() is so cheap anyway that you don’t really need to do it inplace.
st81106
I think this is mostly historical as .view() is so cheap anyway that you don’t really need to do it inplace.
st81107
Hello all, I have two variables a, b with requires_grad=True. loss1 will take a and b as inputs, while loss2 will take the normalization of a and b as input. So, should I use a clone in this case? This is my implementation:

loss1 = loss1(a, b)
a_norm = a / 255
b_norm = b / 255
loss2 = loss2(a_norm, b_norm)
loss_total = loss1 + loss2
st81108
Solved by albanD in post #4 When you do a_norm = a / 255, you do not modify a in any way.
st81109
@albanD: Thanks. But do you think loss1 will receive a normalized value after the gradient update, because we use a_norm = a / 255 in the forward pass?
st81110
I have copied a program from GitHub and tried to run the code, and the above error message appears. Please help me resolve the issue.
st81111
Hi, The problem is that you provide a Tensor of type Long to a function that expects a Tensor of type Float. You will need to look at the stack trace to know which function it is. Then you can most likely fix this by just converting the LongTensor to a FloatTensor using .float().
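For example (the tensor here is just a stand-in):

import torch

labels = torch.tensor([0, 1, 2])   # integer values create a LongTensor by default
labels = labels.float()            # now a FloatTensor
print(labels.dtype)                # torch.float32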
st81112
I am currently profiling the memory consumption of my model. My code is essentially:

log_probs = model.log_prob(inp)
loss = (-1) * log_probs.mean()
loss.backward()

I have ~4 GB of GPU memory in use (3980 MB) before calling backward and ~8 GB (8882 MB) after, which is surprising to me. PyTorch maintains the whole computational graph before calling backward, and it should be able to free it while computing the backward pass. I have 2,281,048 parameters, and a gradient is computed for each parameter. I am not sure why I now occupy ~8 GB of GPU memory. If I am correct, this means (8 gigabytes) / (4 bytes) = 2,000,000,000 floats, which is a lot of floats and more than I would expect. If this is not expected, is there any way I could figure out what’s going on? For example, get the memory consumption per module?
EDIT: The reason for my investigation is that I get an OOM when scaling to a higher-dimensional dataset, and I am currently looking into reducing the memory requirement.
st81113
Solved by albanD in post #2 Hi, How do you measure the gpu memory usage? You should be using torch.cuda.memory_allocated to get the memory actually used by Tensors. Note that this number only counts Tensors, and not the memory required by the cuda driver at initialization. If you check via nvidia-smi, you will see a larger n…
st81114
Hi, How do you measure the GPU memory usage? You should be using torch.cuda.memory_allocated to get the memory actually used by Tensors. Note that this number only counts Tensors, and not the memory required by the cuda driver at initialization. If you check via nvidia-smi, you will see a larger number, because we have a special allocator for speed reasons that does not return the memory back to the driver when Tensors are freed.
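For example, a minimal way to compare the two numbers (memory_cached was the name of the caching-allocator counter in this PyTorch version; it was later renamed memory_reserved):

import torch

x = torch.randn(1024, 1024, device='cuda')
print(torch.cuda.memory_allocated() / 1024**2, 'MB used by tensors')
print(torch.cuda.memory_cached() / 1024**2, 'MB held by the caching allocator')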
st81115
Ok, thanks, I had not looked at torch.cuda.memory_allocated. This is off-topic, but do you by chance know what happens when cublas fails to allocate memory (or whether it uses PyTorch’s allocator)? I suspect that the caching allocator consumes all the memory and then cublas fails when I use it in optimizer.step(), because the memory requirement after backward() is indeed very small.
st81116
I don’t think cublas manages any memory; it takes pointers to inputs. Are you thinking of a particular function in there?
st81117
Hmm, I unfortunately didn’t save the error. It might have been magma? It was an assert error and not a PyTorch OOM error. I will come back or open a new issue if I manage to reproduce it (or if checking torch.cuda.memory_allocated before optimizer.step fixes the issue).
st81118
Hi, Recently I have encountered a problem with data loading. When I use a DataLoader with more than 1 worker to load an HDF5 file with h5py, the retrieved data often appears random-like. I have turned off shuffling and use the default sampler and collate_fn. The problem may come from h5py. I wonder if there is any solution to this problem. Best, Yikang
st81119
I am trying to use tensorboard with pytorch (1.2.0+cu92). When calling writer = SummaryWriter(), I get the following error:

AttributeError: module 'tensorflow.io' has no attribute 'gfile'

How can I fix this? The following few lines reproduce the error:

import torch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
st81120
I am using torch.manual_seed to fix the generator, but it gives different values at different calls unless I manually declare it before every call. One example run is:

torch.manual_seed(10)
d = torch.randn(4)
e = torch.randn(4)
print(d, e)

torch.manual_seed(10)
e = torch.randn(4)
print(d, e)

The output is:

tensor([-0.6014, -1.0122, -0.3023, -1.2277]) tensor([ 0.9198, -0.3485, -0.8692, -0.9582])
tensor([-0.6014, -1.0122, -0.3023, -1.2277]) tensor([-0.6014, -1.0122, -0.3023, -1.2277])
st81121
A seed in a random number generator makes sure to return the same sequence of random numbers. As you can see in your example, the second e gets the same values as d after resetting the seed. I’m not sure I understand the issue correctly, but would you like to get the exactly same values for each torch.randn call? This would be an unusual use case.
st81122
ptrblck: "would you like to get the exactly same values for each torch.randn call?" I believe that was the intention, inferring from the context of the thread. Additionally, if we replace the above example with numpy we get the same behaviour:

import random
import numpy as np

random.seed(2019)
np.random.seed(2019)
rng = np.random.RandomState(2019)
d = rng.randn(4)
e = rng.randn(4)
c = rng.randn(4)
print(d, e, c)

Every call to randn will produce a different result, but running the above code in a script over and over again will yield three tensors with different random values (due to the separate calls to randn), while those values will be consistent across different runs of the same script.
st81123
Yeah, my use case was to produce the same random numbers at two different parts of a script. I had a wrong understanding of how manual_seed works. Thank you for clearing it up!
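For completeness, if someone does want identical random numbers at two different points of a script, a dedicated generator (or saving and restoring the RNG state) avoids reseeding the global RNG; a small sketch:

import torch

g = torch.Generator().manual_seed(10)
d = torch.randn(4, generator=g)

g.manual_seed(10)                 # reset only this generator
e = torch.randn(4, generator=g)
print(torch.equal(d, e))          # True

# alternative: snapshot and restore the global RNG state
state = torch.get_rng_state()
a = torch.randn(4)
torch.set_rng_state(state)
b = torch.randn(4)
print(torch.equal(a, b))          # True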
st81124
Hi, Is there an idiomatic way to do a batched shuffle of feature vectors? I want to generate K negative samples per batch of data, i.e. given a tensor with shape (Batch x Features), I want to generate noise with the shape (K x Batch x Features), where the features in the noise are shuffled versions of the original data. E.g. given a batch like

[ [ a b c ]
  [ a d e ]
  [ b d e ] ]

I want to generate two negative examples per datum like so:

a b c      a c b | c b a
a d e  ->  a e d | d e a
b d e      b d e | e b d

I’ve tried doing this using randperm, but it is prohibitively slow, especially on the GPU, and requires a lot of logistics and scaffolding.
st81125
I’m not sure, how exactly you are shuffling your input. Is there any specific logic or are you just randomly sampling shuffle indices? As far as I understand, randperm seems to work but is too slow?
st81126
I want to generate shuffles on the feature level over a batch. So given a datum like [a b c], I want to generate K shuffles of it, e.g. two shuffled versions: [ [ a c b ] [ c b a ] ], with this generalized over batched data. The issue is that a call to randperm only generates one permutation, so I need to make (K x Batch) calls to randperm (which is very slow) and then index out the shuffles from the original data. Now that I think about it, it wouldn’t be a problem if randperm could return multiple permutations.
st81127
I recently wanted to do the same thing. In case anyone is interested, the following is my solution:

import torch

n_batch = 8
n_feat = 11
rand = torch.rand(n_batch, n_feat)
batch_rand_perm = rand.argsort(dim=1)
print(batch_rand_perm)

tensor([[ 1,  2,  4,  6,  0,  9,  3,  7,  8,  5, 10],
        [10,  6,  9,  2,  7,  4,  3,  5,  1,  0,  8],
        [ 8,  7,  4,  6,  0,  1,  9, 10,  2,  5,  3],
        [ 1,  3,  5,  4,  6,  8,  2,  9,  0,  7, 10],
        [ 3, 10,  7,  1,  4,  5,  0,  8,  9,  6,  2],
        [ 2, 10,  0,  1,  6,  8,  7,  9,  5,  3,  4],
        [ 5,  9,  8, 10,  0,  1,  6,  7,  4,  2,  3],
        [ 0,  9,  8,  4,  1,  3,  2,  7,  5, 10,  6]])
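To actually apply those per-row permutations to a data tensor, gather along the feature dimension works; a small follow-up sketch:

import torch

n_batch, n_feat = 8, 11
data = torch.randn(n_batch, n_feat)
batch_rand_perm = torch.rand(n_batch, n_feat).argsort(dim=1)
shuffled = data.gather(1, batch_rand_perm)   # each row shuffled independently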
st81128
I am suspecting NaN values in my script, so I would like to use the anomaly detector of PyTorch. However, I am confused as to how exactly to use it. My dataset class uses h5py, so I am wondering where I have to include the context manager for it to work.
st81129
Solved by albanD in post #4 I meant the backward invocation in your training loop. At the moment, the anomaly detection mode only detect nans that appear during the .backward() call as it is hard for the end user to poke around in what happens there. I won’t detect nans that appear outside of the backward invocation. If na…
st81130
Hi, You need to wrap your forward and backward pass with it, so wherever you do that. If you want it for the whole script, you can also add torch.autograd.set_detect_anomaly(True) at the beginning of your script and it will stay on !
st81131
I don’t have an explicit backward pass, only the forward method of my model; or do you mean the backward invocation in my training loop? Will the inclusion of this line at the beginning of my file then also apply to the crucial part in which I load hdf5 files and convert them into tensors? Because this is where I suspect the NaN values occur…
st81132
I meant the backward invocation in your training loop. At the moment, the anomaly detection mode only detects nans that appear during the .backward() call, as it is hard for the end user to poke around in what happens there. It won’t detect nans that appear outside of the backward invocation. If nans appear in your own code, you can add checks at a few places in your code to find where they appear. To check if a tensor contains nans, you can use:

if your_tensor.ne(your_tensor).any():
st81133
How expensive is detect_anomaly to run during training? Insignificant, right? :D
st81134
Actually, very significant. It does a lot of bookkeeping and checks the return values of each low-level function for nans. As mentioned above, you should only use it for debugging.
st81135
Great, thanks. I’ll keep to checking the loss then:

assert not torch.isnan(loss), 'nan err msg'
st81136
I tried to normalize the input during the forward pass of the model doing this:

class Model(nn.Module):
    def __init__(self):
        super().__init__()  # needed before register_buffer can be called
        mean = torch.as_tensor([0.485, 0.456, 0.406])[None, :, None, None]
        std = torch.as_tensor([0.229, 0.224, 0.225])[None, :, None, None]
        self.register_buffer('mean', mean)
        self.register_buffer('std', std)
        ...

    def forward(self, inputs):
        # Input size [batch, channel, width, height]
        # Normalize inside the model
        inputs = inputs.sub(self.mean).div(self.std)
        ...
        return output

During training everything is fine and working, but when I switch to eval() mode the model starts to give random outputs. Disabling eval() helps to get meaningful outputs during validation, but I need eval() mode since I use dropout and batchnorm in the model. Any idea what causes this weird behavior?
st81137
Do your training and validation images have the same distributions or are you processing them differently? Are you seeing the bad results also after calling eval() and using your training images? If so, this problem might be unrelated to the processing inside the model, but might come from a small batch size and thus skewed running estimates in the batch norm layers.
st81138
They have the same distribution and processing is identical for both sets. When I disable normalization inside the model and perform it using torchvision library things get back to normal. I suppose the weird performance drop is purely from normalization inside the model.
st81139
Hi, I am trying to get a feature map from the VGG11 conv5 layer as (14 x 14 x 512) and also getting another feature map from a different layer as (1 x 1 x 512). I want to multiply these maps, get the output as (14 x 14 x 512), and pass it to an FC layer. Is there any layer that can do multiplication like this?
st81140
Solved by InnovArul in post #2 just torch.mul(x1, x2) will do, where x1, x2 are those two features.
st81141
def visfeaturemap(feature_maps, filename='default.png', save=False):
    # plot all 64 maps in an 8x8 square
    square = 8
    ix = 1
    for _ in range(square):
        for _ in range(square):
            # specify subplot and turn off axis
            ax = plt.subplot(square, square, ix)
            ax.set_xticks([])
            ax.set_yticks([])
            # plot filter channel in grayscale
            plt.title(filename)
            plt.imshow(feature_maps[0, ix - 1, :, :].cpu().numpy(), cmap='gray')
            ix += 1
    plt.title(filename)
    plt.show()
    if save:
        plt.savefig(filename)

I want just one overall figure title. Please help me. Thank you.
st81142
Solved by Swarchal in post #2 I think you might be looking for plt.suptitle()
st81143
If you just want one title, why don’t you remove the plt.title(filename) calls within your loops?
st81144
Hi! I have 2 networks: an Encoder (already trained) and a Decoder. I’d like to train both nets end to end, but I’d like to keep the Decoder’s parameters frozen (no updates while training). I’ve tried setting all the parameters of the Decoder with requires_grad = False and calling decoder.eval() before training. Also, I only optimize over the Encoder’s parameters, so I’ve put only them inside my optimizer object. In the train / test process I set encoder.train() and encoder.eval() respectively. When I compare the Decoder’s parameters before and after training, I found that the values were changed. Please help me understand what might have gone wrong. Thanks, Or Rimoch
st81145
Hi @Or_Rimoch, Or_Rimoch: "I have 2 networks: Encoder (already trained) + Decoder." I assume that you mean: Encoder + Decoder (already trained). If you set the Decoder’s parameters’ requires_grad to False and you don’t pass them to the optimizer, there shouldn’t be any way for them to get modified. Do you have a link to the code or a simple repro?
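A minimal sketch of that setup (the two Linear modules below are just placeholders for the real networks):

import torch
import torch.nn as nn

encoder = nn.Linear(10, 4)   # stands in for the Encoder
decoder = nn.Linear(4, 10)   # stands in for the Decoder to be frozen

for p in decoder.parameters():
    p.requires_grad = False
decoder.eval()

# only the encoder's parameters are handed to the optimizer
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

out = decoder(encoder(torch.randn(2, 10)))
out.pow(2).mean().backward()
optimizer.step()   # the decoder's weights stay untouched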
st81146
I’m using PyTorch to build a model for binary classification.

class XceptionLike(nn.Module):
    def __init__(self):
        super().__init__()
        # CNN part
        ...
        # final
        self.fc_last_1 = nn.Linear(384, 64)
        self.fc_last_2 = nn.Linear(64, 1)

    def forward(self, input_16, input_32, input_48):
        # CNN part
        ...
        out = torch.cat([out_16, out_32, out_48], dim=1)
        out = F.relu(self.fc_last_1(out))
        out = F.relu(self.fc_last_2(out))
        return out

# the model output part
output = model(batch_z_16, batch_z_32, batch_z_48)
# model prediction
output = torch.sigmoid(output)
loss = criterion(output, batch_label)

What confuses me is: can this model really be used for binary classification? In my understanding, for binary classification an output of the model in [0, 0.5] means a prediction for one class, and an output in [0.5, 1] means a prediction for the other one. But the ReLU function returns [0, positive infinity], and when the sigmoid function gets the output of the model, it returns [0.5, 1], so the model can’t return [0, 0.5], which means it can’t predict the class belonging to [0, 0.5]. What is wrong with my understanding? How can I deal with it?
st81147
Solved by dejanbatanjac in post #3 Yes, you should use sigmoid function. def sigmoid(x): return 1/(1 + (-x).exp()) It will convert the space of [-inf, inf] into a probability [0,1]. Note this sigmoid works on a tensor. So it will do that for all your activations. What ever goes to the sigmoid you can call “logit”, even though thi…
st81148
At the last layer, I should not use out = F.relu(out) but out = torch.sigmoid(out); then the model can output [0, 1] (so it can predict classes in [0, 0.5] and [0.5, 1]).

def forward(self, input_16, input_32, input_48):
    # CNN part
    ...
    out = torch.cat([out_16, out_32, out_48], dim=1)
    out = F.relu(self.fc_last_1(out))
    out = self.fc_last_2(out)
    out = torch.sigmoid(out)
    return out

# the model output part
output = model(batch_z_16, batch_z_32, batch_z_48)
# model prediction
loss = criterion(output, batch_label)

Is this the correct answer?
st81149
Yes, you should use the sigmoid function.

def sigmoid(x):
    return 1 / (1 + (-x).exp())

It will convert the space of [-inf, inf] into a probability [0, 1]. Note this sigmoid works on a tensor, so it will do that for all your activations. Whatever goes into the sigmoid you can call a “logit”, even though this is not the mathematical logit function. After that you will use BCE, which works on a batch:

def binary_cross_entropy(p, y):
    return -(p.log()*y + (1-y)*(1-p).log()).mean()

Note that the sigmoid used the exponential function to obtain the probabilities, and now we use the log of these probabilities.
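As a side note (not part of the original answer), PyTorch provides these as built-in losses, and the logits variant folds the sigmoid into the loss in a numerically safer way:

import torch
import torch.nn as nn

logits = torch.randn(4, 1)              # raw model output, no sigmoid applied
target = torch.empty(4, 1).random_(2)   # 0/1 labels

loss = nn.BCEWithLogitsLoss()(logits, target)              # sigmoid + BCE fused
loss_manual = nn.BCELoss()(torch.sigmoid(logits), target)  # equivalent two-step version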
st81150
Below is an example of matrix multiplication that incurs precision loss in float32. Any idea why the result is different when I run it on Mac vs Linux?

import torch
x = torch.tensor([[11041., 13359, 15023, 18177],
                  [13359, 16165, 18177, 21995],
                  [15023, 18177, 20453, 24747],
                  [18177, 21995, 24747, 29945]])
y = torch.tensor([[29945., -24747, -21995, 18177],
                  [-24747, 20453, 18177, -15023],
                  [-21995, 18177, 16165, -13359],
                  [18177, -15023, -13359, 11041]])
print(x @ y)

On Linux I’m seeing the following result, both GPU and CPU (colab):

[[ 33.  17.   1.   1.]
 [ 11.  27.  -5.  -5.]
 [ 11.  -5.  27. -21.]
 [ -7.   9.   9.  25.]]

Whereas on my Mac laptop, I see the following:

[[28.,  0., -4.,  0.],
 [ 0., 28.,  0., -4.],
 [20.,  0., 20.,  0.],
 [ 0., 20.,  0., 20.]]
st81151
Solved by albanD in post #6 You can use the get_env_info() function from the torch.utils.collect_env module to get some informations. To be sure what is used, the best way is to open python. import torch. Copy the path to the main c library with torch._C.__file__. Then in your terminal, run ldd path_to_that_library. It will l…
st81152
Hi, The expected result is 16 * Identity right? I guess using different blas libraries that potentially use different algorithms for mm will lead to different results. Running on different gpus / different number of cpu cores (for larger matrices) would give different results as well.
st81153
Yes, 16*Identity is the correct answer. The curious part was that CPU result is identical to GPU result when running on Volta, so it must be using the same algorithm
st81154
Actually on my machine, a binary install gives me the same thing as your mac while a source install gives the same thing as your linux (with everything else being the same). Do you use OpenBLAS on your linux and mkl on your mac by any chance?
st81155
For both Linux and Mac, I installed PyTorch using the latest official conda instructions for PyTorch 1.2. Is there a way to check if it’s using MKL? BTW, this came up when trying to debug the Hessian calculation of f(X) = sum(Y*Y) with Y = AXB, where all three matrices are 2x2 and initialized with entries 1, 2, 3, …, 12. It looks like numerical cancellation can be an issue even for tiny examples.
st81156
You can use the get_env_info() function from the torch.utils.collect_env module to get some information. To be sure what is used, the best way is to open python, import torch, and copy the path to the main C library with torch._C.__file__. Then in your terminal, run ldd path_to_that_library. It will list the shared libraries it links to. In my case, for the source install, there is the line libopenblas.so.0 => MY_OPENBLAS_INSTALL_PATH/lib/libopenblas.so.0, so this one definitely uses my own OpenBLAS install. For the binary install no such line exists, but if you check libtorch.so, it jumps from 95MB in a source install to 230MB for a binary one. That’s because the MKL library is statically linked into it. You can see this by first getting the path to libtorch.so from the ldd command above, then running nm -gC path_to_libtorch.so | grep mkl to see all the symbols associated with MKL (there are a lot of them; this will heavily spam your terminal). Note that running that same command on the libtorch.so from the source install does not show any MKL symbols (only mkldnn ones, which are unrelated). Now if both your computers are using the MKL library, maybe there is different handling of Mac machines in MKL?
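For example, from Python (collect_env can also be run directly with python -m torch.utils.collect_env):

import torch
from torch.utils.collect_env import get_env_info

print(torch._C.__file__)   # path of the C extension, for the ldd check above
print(get_env_info())      # summary of the PyTorch / CUDA / OS / compiler setup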
st81157
Thanks for the tip! It looks like the conda install brings in MKL, whereas the default AWS AMI images don’t include MKL. I observed a couple of other results for this problem (curiously, always integer-valued) on different configurations, so it looks like the real issue is the numerics of the underlying example.
st81158
Hello, I am a beginner in Deep Learning and doing research comparing Keras (TensorFlow backend) and PyTorch. I just tried writing a model in PyTorch, and I succeeded in printing the weights. Is it possible to save those weights to a csv file? For reference, this is my code:

class MultiLayerPerceptron(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(MultiLayerPerceptron, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.Sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.Sigmoid(out)
        out = self.fc2(out)
        return out

model = MultiLayerPerceptron(input_size, hidden_size, num_classes)
weights = model.fc1.weight
print(weights)
st81159
Solved by Nathan_Drake in post #2 import numpy as np import pandas as pd … weights = weights.detach().numpy() pd.DataFrame(weights).to_csv( ’ weights.csv ’ )
st81160
import numpy as np
import pandas as pd
...
weights = weights.detach().numpy()
pd.DataFrame(weights).to_csv('weights.csv')
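If you want every layer’s weights rather than just fc1, one way (an extension of the answer above; the model and file names are just examples) is to loop over the state_dict and write one CSV per parameter:

import pandas as pd
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 2))

for name, param in model.state_dict().items():
    array = param.detach().cpu().numpy()
    # e.g. "0.weight.csv", "0.bias.csv", ...
    pd.DataFrame(array.reshape(array.shape[0], -1)).to_csv(name + '.csv')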
st81161
Hi all, I’ve been having some trouble with a U-Net style implementation in a multi-GPU setup. I’ve mostly traced the central problem to how latent tensors are handled across skip connections (saving layers in the down-convolution pathway and concatenating them into the up-convolution pathway). The network performs fine in a single-GPU setup. The issue is not with the nn.DataParallel setup, as I tested the same data on a simple model with only convolution layers; the issue comes up when trying to save/concat intermediate tensors during the forward() pass. This isn’t an issue with the actual convolution layers it seems, as the down pathway works, and when I turn off any latent/skip connections the upward pathway works too. It’s actually kind of confusing, because I also tried scoping these tensors to the class via self, but that also didn’t work, and yet x is completely free and is handled fine… which I don’t get. Is there a workaround or fix? Perhaps some combination of proper scoping will work? Any help would be greatly appreciated, as being able to train on multiple GPUs would be great given we have them. Thanks! Andy

Here is the relevant code (the lines marked L323 and L80 are the ones that appear in the traceback below):

        if self.first_time:
            print(" +++ Building Encoding Pathway +++ ")
        # encoder pathway, save outputs for merging
        for i, module in enumerate(self.down_convs):
            if self.first_time:
                print("Layer {0} :".format(i) + str(x.shape))
            x, self.before_pool = module(x)
            self.encoder_outs.append(self.before_pool)

        if self.first_time:
            print("Bottlenecking Skip Layers")
        for i, skip_layer in enumerate(self.encoder_outs):
            skip_layer = self.skip_conv_ins[i](skip_layer)   # L323
            if self.first_time:
                print("Layer {0}: ".format(i) + str(skip_layer.shape))
            skip_layer = self.skip_conv_outs[i](skip_layer)
            self.encoder_outs[i] = skip_layer

# --- Encoders.py, L66 ---
class BottleConv(nn.Module):
    """Collapses or expands (w.r.t. channels) latent layers.
    Composed of a single conv1x1 layer."""
    def __init__(self, in_channels, out_channels, pooling=True):
        super(BottleConv, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.pooling = pooling
        self.conv1 = conv1x1(self.in_channels, self.out_channels)
        self.bn = nn.BatchNorm2d(self.out_channels)

    def forward(self, x):
        x = self.bn(F.relu(self.conv1(x)))   # L80
        return x

Here is the traceback; the frames corresponding to the code above (lines 323 and 80) are the ones in Encoders.py:

Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/andrew/Desktop/deep-learning-ihc/Encoders.py", line 323, in forward
    skip_layer = self.skip_conv_ins[i](skip_layer)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/andrew/Desktop/deep-learning-ihc/Encoders.py", line 80, in forward
    x = self.bn(F.relu(self.conv1(x)))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 343, in forward
    return self.conv2d_forward(input, self.weight)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution)
st81162
I’m not sure I understand the code completely, but it looks like you are storing some activations in self.encoder_outs. Are these activations for debugging only or is this list part of the model? In the latter case, how did you define self.encoder_outs and how would you like to reduce it? I guess each replica will initialize its own self.encoder_outs, so that you end up having N lists.
st81163
Hi, I am confused about how to use torch.nn.NLLLoss. Below is a simple session in the Python REPL. I am expecting the result to be 0.35667494393873245, but I am getting -0.7000. I’d greatly appreciate it if someone could steer me in the right direction on this. Thanks!

>>> import torch
>>> import torch.nn as nn
>>> input = torch.tensor([[0.70, 0.26, 0.04]])
>>> loss = nn.NLLLoss()
>>> target = torch.tensor([0])
>>> output = loss(input, target)
>>> output
tensor(-0.7000)
>>> import math
>>> -math.log(0.70)
0.35667494393873245
st81164
I think I can see what’s happening now. I was a bit confused about how NLLLoss works. The calculation below shows that applying the negative log likelihood to an input processed through softmax produces the same result as running the input through log_softmax first and then just multiplying by -1. It also shows that applying CrossEntropyLoss to the raw input is the same as applying NLLLoss to log_softmax(input). I’m guessing that the log_softmax approach may be more numerically stable than using softmax first and calculating the log of the result separately.

>>> raw_input = torch.tensor([[0.7, 0.26, 0.04]])
>>> softmax_input = torch.softmax(raw_input, dim=1)
>>> softmax_input
tensor([[0.4628, 0.2980, 0.2392]])
>>> -torch.log(softmax_input)
tensor([[0.7705, 1.2105, 1.4305]])
>>> log_softmax_input = torch.log_softmax(raw_input, dim=1)
>>> log_softmax_input * -1
tensor([[0.7705, 1.2105, 1.4305]])
>>> loss = nn.NLLLoss()
>>> loss(log_softmax_input, torch.tensor([0]))
tensor(0.7705)
>>> cross_entropy_loss = nn.CrossEntropyLoss()
>>> cross_entropy_loss(raw_input, torch.tensor([0]))
tensor(0.7705)
st81165
I understand from your experiment that nn.NLLLoss does not expect the likelihood as input, but the log likelihood (log softmax); do you agree with this assessment? If that is so, I guess this is a source of lots of errors, because I believe most people would guess (wrongly) that you should pass the likelihood to nn.NLLLoss. Right?
st81166
Yes, indeed. The documentation does state this, though, so I either didn’t read it properly at the time or misinterpreted it. https://pytorch.org/docs/stable/nn.html#nllloss : "The input given through a forward call is expected to contain log-probabilities of each class."
st81167
Hello, PyTorch users. As the title says, can a torch.Tensor have different precision element-wise? For example:

my_tensor = torch.Tensor([1, 1.0, 1.000])
# element types respectively: torch.int8, torch.half, torch.float ... etc.

If it is impossible (maybe PyTorch doesn’t support this semantic), is there another way to implement something like this? I’d like to reduce memory usage as much as I can… Any answers will be helpful for me. Thanks
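A tensor has a single dtype shared by all of its elements, so mixed per-element precision is not supported; the usual way to reduce memory (a sketch, not part of the original thread) is to cast whole tensors to a lower-precision dtype:

import torch

x = torch.randn(1000, 1000)                          # float32, 4 bytes per element
x_half = x.half()                                    # float16, 2 bytes per element
x_int8 = (x * 127).clamp(-128, 127).to(torch.int8)   # 1 byte, with manual scaling

print(x.element_size(), x_half.element_size(), x_int8.element_size())  # 4 2 1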