st80768 | Hi,
I think the version of the roi_pooling you’re using is made for an older version of pytorch.
In particular, with 0.4+, self._backend = type2backend[type(input)] should be replaced with self._backend = type2backend[input.type()]. |
st80769 | The same error occurred. Fixed it by your method, but a new error occurred.
Traceback (most recent call last):
  File "demo.py", line 275, in <module>
    fps_pred, y_pred = model_conv(x)
  File "/home/fan/anaconda3/envs/tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/fan/anaconda3/envs/tf/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/fan/anaconda3/envs/tf/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "demo.py", line 230, in forward
    roi1 = roi_pooling_ims(_x1, boxNew.mm(p1), size=(16, 8))
  File "/home/fan/CCPD-master/rpnet/roi_pooling.py", line 74, in roi_pooling_ims
    output.append(adaptive_max_pool(im, size))
  File "/home/fan/CCPD-master/rpnet/roi_pooling.py", line 36, in adaptive_max_pool
    return AdaptiveMaxPool2d(size[0], size[1])(input)
  File "/home/fan/CCPD-master/rpnet/roi_pooling.py", line 20, in forward
    self._backend.SpatialAdaptiveMaxPooling_updateOutput(
  File "/home/fan/anaconda3/envs/tf/lib/python3.7/site-packages/torch/_thnn/utils.py", line 27, in __getattr__
    raise NotImplementedError
NotImplementedError
How to solve the problem? |
st80770 | Is there no support for type2backend in the new Pytorch? type2backend.SpatialConvolutionMM_updateOutput throws a NotImplementedError, but only on the new Pytorch cuda10 build. |
st80771 | Hi,
I’m not sure, but I think convolution was moved out of THNN and so does not use this anymore. You can check the implementation of nn.ConvNd for more details. |
st80772 | I found:
import torch
import gc
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
    except:
        pass
Are the objects returned by gc.get_objects() ready to be collected, or will they not be collected unless we set them to None inside the program?
Then I found:
# prints currently alive Tensors and Variables
import torch
import gc
for obj in gc.get_objects():
    if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
        del obj
torch.cuda.empty_cache()
What will this code do?
Is del obj the same as obj = None?
What is the difference between torch.cuda.empty_cache() and gc.collect()? |
st80773 | I train my model with batch size 128; however, if I don’t use batches in the evaluation phase, the network output is wrong.
If the network’s input is in batches:
crit = nn.MSELoss(reduction='mean')
target = []
netout = []
model.eval() # To handle drop out layers and batch norm
for A, M, label in dataloaders['val']:
    with tr.set_grad_enabled(False):  # we don't need gradient computation in eval mode (speed up)
        out = model(A, M)
    target.extend(label.data.cpu().numpy())
    netout.extend(out.data.cpu().numpy())
print(tr.Tensor(target).shape, tr.Tensor(netout).shape)
print(f'Loss: {crit(tr.Tensor(target), tr.Tensor(netout))}')
Result:
torch.Size([849]) torch.Size([849, 1])
Loss: 1.135872483253479
If I iterate through the single elements, I obtain a different result:
crit = nn.MSELoss(reduction='mean')
target = []
netout = []
model.eval() # To handle drop out layers and batch norm
for A, M, label in testset:
    with tr.set_grad_enabled(False):  # we don't need gradient computation in eval mode (speed up)
        out = model(tr.unsqueeze(A, 0), tr.unsqueeze(M, 0))
    target.append(label.item())
    netout.append(out.item())
print(tr.Tensor(target).shape, tr.Tensor(netout).shape)
print(f'Loss: {crit(tr.Tensor(target), tr.Tensor(netout))}')
Result
torch.Size([849]) torch.Size([849])
Loss: 87.88565063476562
But the result should be exactly the same, because the output is deterministic!
What could be the problem? I can overfit the model for a specific batch size, but if I evaluate on single elements, then the result is wrong. |
st80774 | Solved by Terbe_Daniel in post #10
This was the problem! Now working with this:
B, _, H, W = A.shape
norm = 2*tr.norm(mask1, p=1, dim=(1,2,3))
norm = norm.reshape(B, 1, 1, 1)
mask1 = tr.div(mask1*H*W, norm)
Thank you for your help! |
st80775 | Could you print the shape of tr.Tensor(target) and tr.Tensor(netout) before passing them to the criterion?
Also, which criterion are you using at the moment? |
st80776 | crit = nn.MSELoss(reduction='mean')
For batch input:
torch.Size([849]) torch.Size([849, 1])
For single input:
torch.Size([849]) torch.Size([849])
I edited the post to contain this information. |
st80777 | It seems you might be accidentally broadcasting the inputs to your criterion.
In the latest PyTorch version (1.2.0) you should get a warning.
Make sure to pass the input and target as [batch_size, 1] or [batch_size] (not mixed).
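A minimal sketch of the accidental broadcasting with this thread's shapes: with a [N] target and a [N, 1] output, MSELoss broadcasts both to [N, N] before averaging, so every output is compared against every target.
import torch
import torch.nn as nn

crit = nn.MSELoss(reduction='mean')
target = torch.randn(849)       # [849]
netout = torch.randn(849, 1)    # [849, 1]

print(crit(netout, target))             # broadcast to [849, 849]: wrong loss
print(crit(netout.squeeze(1), target))  # both [849]: correct loss |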
st80778 | Yes, but if I correct this (with squeeze), I get a correct loss:
Loss: 0.8491315245628357
But the issue remains: the network output differs between batched and single inputs. |
st80779 | That shouldn’t be the case.
Could you post the model architecture so that we could have a look? |
st80780 | I have already spent a lot of time on this problem. My criterion is CrossEntropyLoss. The input is [batch, num_class], the target is [batch]. Everything works well with batch training and testing, but it does not work on a single sample. I think the NN sees the data inside the batch. We would have to train the model with a batch size of 1. |
st80781 | I think, I found the bug:
_, _, H, W = A.shape
mask2 = tr.div(mask2*H*W, 2*tr.norm(mask2, p=1))
Here I calculate the norm over the whole batch, which is wrong!
I’ll try to correct this: calculate the norm of the mask for each sample in the batch… |
st80782 | This was the problem! Now working with this:
B, _, H, W = A.shape
norm = 2*tr.norm(mask1, p=1, dim=(1,2,3))
norm = norm.reshape(B, 1, 1, 1)
mask1 = tr.div(mask1*H*W, norm)
Thank you for your help! |
st80783 | I have a trained model and I want to do some surgery on it, like merging layers into fewer ones. For that, I need the following information for every layer in the model:
layer class name - like nn.Conv2d, nn.BatchNorm2d or my custom classes.
layer definition information - like what stride, what padding, bias true or false etc.
layer parameters - like weights, bias, running mean & variance etc.
I have not been able to find a mechanism to get this consolidated information.
When I parse through the state dictionary, I only get a key and associated parameters. There is no way to find layer class/definition information from the key name. And when I use named_modules(), I get the layer information, but don’t get the key information to retrieve appropriate parameters from the state dictionary.
Appreciate your inputs, thanks! |
st80784 | Hi,
When you go through named_modules, you have access to the parameters of each module. Why do you need to map it back to the state_dict?
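A minimal sketch of that idea (the model here is a made-up example): named_modules() yields (name, module) pairs, each module carries both its definition (class, stride, padding, ...) and its parameters, and the state_dict key is just the module name joined with the parameter name.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(16),
)

for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # leaf modules only
        print(name, module)  # prints the class and its definition arguments
        for pname, param in module.named_parameters(recurse=False):
            print("  state_dict key:", name + "." + pname, tuple(param.shape)) |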
st80785 | Why, when changing the batch size, do the weights of the neurons change? If I test new data with a batch size equal to the size with which I trained the NN, the results are good. If I change the batch size, the results are bad.
class MyResNet(ResNet):
    def __init__(self):
        super(MyResNet, self).__init__(BasicBlock, [2, 2, 2, 2], num_classes=3)
        self.conv1 = torch.nn.Conv2d(1, 64,
                                     kernel_size=(7, 7),
                                     stride=(2, 2),
                                     padding=(3, 3), bias=False)
...
model.load_state_dict(torch.load('save.pth'))
criterion = nn.CrossEntropyLoss(reduction='sum')
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
...
outputs = model(x)
loss1 = criterion(outputs, y)
optimizer.zero_grad()
loss1.backward()
optimizer.step() |
st80786 | slavavs:
Why, when changing batch size, do the weights of neurons change?
The weights do not change based on the batch size or the forward pass alone.
Could you explain this issue a bit more?
slavavs:
If I test new data with a batch size equal to the size with which I trained NN, then the results are good. If you change the batch size, the results are bad.
Make sure to call model.eval() before evaluating your model, as otherwise e.g. the running estimates of batchnorm layers will be updated, which depends on the used batch size.
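A minimal sketch of the suggested evaluation pattern (the model and data here are stand-ins):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
x_test = torch.randn(4, 3, 32, 32)

model.eval()             # batchnorm uses running estimates, dropout is disabled
with torch.no_grad():    # no gradient bookkeeping needed for evaluation
    predictions = model(x_test)
model.train()            # switch back before further training |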
st80787 | I don’t know how, but my net seems to learn the samples within the batch.
I did not shuffle.
After training, if I change the batch size, the network can no longer predict correctly.
If I call model.eval() the result is very bad |
st80788 | This might be due to skewed running estimates in your batch norm layer.
Try to use a higher batch size during training or adapt the momentum.
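For reference, a sketch of the momentum suggestion (the value 0.05 is only an illustration): PyTorch updates the running estimate as running = (1 - momentum) * running + momentum * batch_stat, so a smaller momentum makes the estimates less sensitive to any single skewed batch.
import torch.nn as nn

bn = nn.BatchNorm2d(64, momentum=0.05)  # default momentum is 0.1 |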
st80789 | I want to use resnet18. According to the documentation, it receives an image of size 224x224x3 at the input. My sample has a size of 224x3x3. I want to add a linear (3x224) layer at the resnet18 input to get a sample of size (224x224). Help me change def forward.
class MyResNet(ResNet):
    def __init__(self):
        super(MyResNet, self).__init__(BasicBlock, [2, 2, 2, 2], num_classes=3)
        self.myfc1 = nn.Linear(3, 224)
        nn.init.xavier_uniform_(self.myfc1.weight)
        self.conv1 = torch.nn.Conv2d(1, 64,
                                     kernel_size=(7, 7),
                                     stride=(2, 2),
                                     padding=(3, 3), bias=False) |
st80790 | Interestingly, if I feed the resnet a sample of size 224x3, there is no error. Why is that? Does the resnet network itself convert my samples? |
st80791 | I ran this command conda install pytorch torchvision cudatoolkit=10.0 -c pytorch but it was 0.3.0.post4 that got installed. Then I ran conda update pytorch torchvision but got the following message below. How do I update to the latest version of PyTorch? I have a GTX 1080 and the Cuda driver version is 10.1.
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.7.10
latest version: 4.7.12
Please update conda by running
$ conda update -n base -c defaults conda
Package Plan
environment location: /home/kong/anaconda3/envs/ffp
added / updated specs:
- pytorch
- torchvision
The following packages will be downloaded:
package | build
---------------------------|-----------------
libgcc-7.2.0 | h69d50b8_2 269 KB
pytorch-0.2.0 |py35h01b9ba7_4cu75 311.5 MB soumith
torchvision-0.1.9 | py35h72e4c6f_1 87 KB soumith
------------------------------------------------------------
Total: 311.8 MB
The following NEW packages will be INSTALLED:
libgcc pkgs/main/linux-64::libgcc-7.2.0-h69d50b8_2
The following packages will be SUPERSEDED by a higher-priority channel:
pytorch pytorch::pytorch-0.3.0-py35_cuda8.0.6~ --> soumith::pytorch-0.2.0-py35h01b9ba7_4cu75
torchvision pytorch::torchvision-0.2.0-py35heaa39~ --> soumith::torchvision-0.1.9-py35h72e4c6f_1
Running nvcc -V gives me
nvcc: NVIDIA ® Cuda compiler driver
Copyright © 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243 |
st80792 | Could you update conda as given by the command in your output and try to install PyTorch again? |
st80793 | Hi, I am new to Pytorch and to using DL for image segmentation. However, the train loss does not noticeably decrease and keeps fluctuating, and the test Dice coefficient does not noticeably increase.
I wonder whether the problem is in the programming itself, the network structure, the data set, or the hyperparameters. Could anyone give some advice and share experiences?
(screenshot omitted) |
st80794 | (GPU memory usage screenshots omitted)
As you can see, when the batch size is 40 the GPU memory usage is about 9.0GB; when I increase the batch size to 50, the GPU memory usage decreases to 7.7GB; and when I increase the batch size further to 60, it rises to 9.2GB. Why was the GPU memory usage so high? By common sense, it should be lower than 7.7GB. |
st80795 | The displayed memory usage should show the CUDA context + the actual memory used to store tensors + cached memory + other applications.
Try to check the memory using torch.cuda.memory_allocated() and torch.cuda.memory_cached(). |
st80796 | I added the following snippet to my code:
if (iteration + 1) % 10 == 0:
    stop = time.time()
    print("epoch: [%d/%d]" % (epoch, EPOCHS), "iteration: [%d/%d]" % (iteration + 1, len(train_dataset)//BATCH_SIZE), "loss:%.4f" % loss.item(),
          'time:%.4f' % (stop - start))
    print("torch.cuda.memory_allocated: %fGB" % (torch.cuda.memory_allocated()/1024/1024/1024))
    print("torch.cuda.memory_cached: %fGB" % (torch.cuda.memory_cached()/1024/1024/1024))
    start = time.time()
And the output in the terminal as follow:
(pt1.2) D:\code\rnn>python train_new.py -m test -b 40
epoch: [0/3000] iteration: [10/92] loss:0.1948 time:28.8928
torch.cuda.memory_allocated: 0.141236GB
torch.cuda.memory_cached: 8.539062GB
epoch: [0/3000] iteration: [20/92] loss:0.0986 time:6.5122
torch.cuda.memory_allocated: 0.141236GB
torch.cuda.memory_cached: 8.539062GB
(pt1.2) D:\code\rnn>python train_new.py -m test -b 50
epoch: [0/3000] iteration: [10/73] loss:0.1436 time:29.8940
torch.cuda.memory_allocated: 0.144663GB
torch.cuda.memory_cached: 7.197266GB
epoch: [0/3000] iteration: [20/73] loss:0.0644 time:7.6573
torch.cuda.memory_allocated: 0.144663GB
torch.cuda.memory_cached: 7.197266GB
(pt1.2) D:\code\rnn>python train_new.py -m test -b 60
epoch: [0/3000] iteration: [10/61] loss:0.1918 time:31.1637
torch.cuda.memory_allocated: 0.151530GB
torch.cuda.memory_cached: 8.666016GB
epoch: [0/3000] iteration: [20/61] loss:0.0936 time:8.8493
torch.cuda.memory_allocated: 0.151408GB
torch.cuda.memory_cached: 8.666016GB |
st80797 | Yes, but I wonder why the cached memory with batch size 40 is larger than with batch size 50. That makes it difficult for me to set the proper batch size. |
st80798 | The cache size might vary e.g. due to cudnn benchmarking and shouldn’t yield an out of memory error, if I’m not mistaken. Are you running out of memory using a smaller batch size? |
st80799 | IndexError: list index out of range
I can’t understand why this error is being called.
import numpy as np
import random

wn1 = 2
epochs = 10
st_s1 = np.random.rand(10, 9)
st_t1 = np.random.rand(10, 1)
len_st = len(st_s1)
b = len_st - wn1
for epoch in range(epochs):
    shuffle_list = list(range(1, b))
    random.shuffle(shuffle_list)
    for wn_start in range(b):
        wn_all2 = []
        los_l = []
        shuffle_temp = shuffle_list[wn_start]
        los_l.append(st_t1[shuffle_temp+wn1])
        wn_all2.append(st_s1[shuffle_temp:shuffle_temp+wn1, 3:6]) |
st80800 | Solved by spanev in post #2
Hi @slavavs,
Here you declare a list of length 7:
shuffle_list = list(range(1, b))
So in the inner loop you should iterate over range(b-1) and not range(b). |
st80801 | Hi @slavavs,
Here you declare a list of length 7:
shuffle_list = list(range(1, b))
So in the inner loop you should iterate over range(b-1) and not range(b).
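Applied to the snippet above, the fix is just:
for wn_start in range(b - 1):  # shuffle_list has b-1 entries (indices 0..b-2)
    shuffle_temp = shuffle_list[wn_start] |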
st80802 | What is the canonical way to assert that a given tensor has the correct shape, i.e. if it is known beforehand what shape it should have? Currently, I use assertions in the following way, which adds a lot of clutter to the code:
assert x.shape == torch.Size([dim1, dim2])
A similar question about IDE based tensor-shape checking has been asked here, but has not received an answer. |
st80803 | torch.onnx.export(model,
x,
model_path,
verbose=False,
opset_version=10,
export_params=True)
# verbose=False,
# opset_version=10,
# strip_doc_string=True,
# do_constant_folding=True)
print('onnx model exported.')
I export a maskrcnn-benchmark Mask R-CNN model to ONNX, but it keeps giving me huge logging messages,
even with verbose=False.
How can I disable it?
Does anyone know? |
st80804 | I’m training a vanilla RNN on MNIST (sequential, len=784, one scalar value at each time step), and I would like to visualize the hidden states at every time step, not just the final time step. How can I access the hidden states prior to the one at the final time step? It seems that torch.nn.RNN api only allows me to access the hidden state at the final time step.
I can probably accomplish this by using torch.nn.RNNCell instead of torch.nn.RNN: looping through the input sequence manually and saving all the hidden states; however, I’m a bit concerned about the performance drop if I implement an RNN using RNNCell instead of RNN, both in terms of speed (see here) and classification accuracy (see here).
Any suggestions/advice would be greatly appreciated! |
st80805 | Solved by John_Smith in post #2
When you run an RNN over a sequence of data, the output has two tensors: (output, h_n). While h_n is the hidden state from the last timestep, the tensor output actually has the hidden states for every timestep in the sequence.
rnn_net = nn.RNN(4, 3)
X = torch.rand((6,5,4))
out, hidden = rnn_n… |
st80806 | When you run an RNN over a sequence of data, the output has two tensors: (output, h_n). While h_n is the hidden state from the last timestep, the tensor output actually has the hidden states for every timestep in the sequence.
rnn_net = nn.RNN(4, 3)
X = torch.rand((6,5,4))
out, hidden = rnn_net(X)
out.shape
hidden.shape
You will see that the shape of out is torch.Size([6, 5, 3]), which represents (seq_len, batch, hidden_size) |
st80807 | Thanks for pointing that out! Looks like I have been very sloppy when reading documentation… |
st80808 | But output contains the hidden states from only the last LSTM layer. What do we do if we want the hidden states from all the layers at each time step?
Looping makes it a lot slower!
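One possible workaround (my sketch, not a confirmed answer from this thread): stack single-layer RNNs manually, so each layer's full per-timestep output is available, trading some of the fused multi-layer speed for access to all hidden states.
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.RNN(4, 3), nn.RNN(3, 3)])  # two manually stacked layers
x = torch.rand(6, 5, 4)  # (seq_len, batch, input_size)

all_states = []
out = x
for rnn in layers:
    out, _ = rnn(out)       # (seq_len, batch, hidden_size) for this layer
    all_states.append(out)  # hidden states of this layer at every timestep |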
st80809 | all_hyp, all_scores, all_attn = [], [], []
n_best = self.opt.n_best
all_lengths = []
for b in range(batch_size):
    scores, ks = beam[b].sortBest()
    all_scores += [scores[:n_best]]
    hyps, attn, length = zip(*[beam[b].getHyp(k) for k in ks[:n_best]])
    all_hyp += [hyps]
    print('all_hyp========', all_hyp)  #### WORKS FINE
    all_lengths += [length]
    print('hyps========', hyps)
    print('all_hyp========', all_hyp)
    # if(src_data.data.dim() == 3):
    if self.opt.encoder_type == 'audio':
        valid_attn = decoder_states[0].original_src.narrow(2, 0, 1).squeeze(2)[:, b].ne(onmt.Constants.PAD) \
            .nonzero().squeeze(1)
        print('valid_attn.shape', valid_attn.shape)
    else:
        valid_attn = decoder_states[0].original_src[:, b].ne(onmt.Constants.PAD) \
            .nonzero().squeeze(1)
    attn = [a.index_select(1, valid_attn) for a in attn]
    all_attn += [attn]
Everything works fine inside the for loop for “all_hyp”, however, when I add one more line to return all_hyp, it gives me the “RuntimeError: CUDA error: device-side assert triggered” message.
FYI, inside the for loop, all_hyp is
all_hyp======== [([tensor(10, device='cuda:0'), tensor(9, device='cuda:0'), tensor(8, device='cuda:0'), tensor(4, device='cuda:0'), tensor(7, device='cuda:0'), tensor(22, device='cuda:0'), tensor(11, device='cuda:0'), tensor(20, device='cuda:0'), tensor(4, device='cuda:0'), tensor(5, device='cuda:0'), tensor(7, device='cuda:0'), tensor(9, device='cuda:0'), tensor(19, device='cuda:0'), tensor(15, device='cuda:0'), tensor(6, device='cuda:0'), tensor(6, device='cuda:0'), tensor(16, device='cuda:0'), tensor(7, device='cuda:0'), tensor(13, device='cuda:0'), tensor(20, device='cuda:0'), tensor(7, device='cuda:0'), tensor(14, device='cuda:0'), tensor(15, device='cuda:0'), tensor(13, device='cuda:0'), tensor(20, device='cuda:0'), tensor(21, device='cuda:0'), tensor(13, device='cuda:0'), tensor(14, device='cuda:0'), tensor(10, device='cuda:0'), tensor(6, device='cuda:0'), tensor(7, device='cuda:0'), tensor(4, device='cuda:0'), tensor(11, device='cuda:0'), tensor(7, device='cuda:0'), tensor(8, device='cuda:0'), tensor(6, device='cuda:0'), tensor(10, device='cuda:0'), tensor(10, device='cuda:0'), tensor(7, device='cuda:0'), tensor(13, device='cuda:0'), tensor(4, device='cuda:0'), tensor(8, device='cuda:0'), tensor(7, device='cuda:0'), tensor(13, device='cuda:0'), tensor(20, device='cuda:0'), tensor(7, device='cuda:0'), tensor(10, device='cuda:0'), tensor(11, device='cuda:0'), tensor(20, device='cuda:0'), tensor(16, device='cuda:0'), tensor(25, device='cuda:0'), tensor(9, device='cuda:0'), tensor(4, device='cuda:0'), tensor(6, device='cuda:0'), tensor(15, device='cuda:0'), tensor(7, device='cuda:0'), tensor(4, device='cuda:0'), tensor(15, device='cuda:0'), tensor(9, device='cuda:0'), tensor(20, device='cuda:0'), tensor(8, device='cuda:0'), tensor(14, device='cuda:0'), tensor(11, device='cuda:0'), tensor(15, device='cuda:0'), tensor(4, device='cuda:0'), tensor(9, device='cuda:0'), tensor(4, device='cuda:0'), tensor(13, device='cuda:0'), tensor(11, device='cuda:0'), tensor(20, device='cuda:0'), tensor(7, device='cuda:0'), tensor(8, device='cuda:0'), tensor(4, device='cuda:0'), tensor(6, device='cuda:0'), tensor(28, device='cuda:0'), tensor(6, device='cuda:0'), tensor(7, device='cuda:0'), tensor(9, device='cuda:0'), tensor(7, device='cuda:0'), tensor(16, device='cuda:0'), tensor(11, device='cuda:0'), tensor(11, device='cuda:0'), tensor(15, device='cuda:0'), tensor(13, device='cuda:0'), tensor(20, device='cuda:0'), tensor(19, device='cuda:0'), tensor(7, device='cuda:0'), tensor(9, device='cuda:0'), tensor(20, device='cuda:0'), tensor(16, device='cuda:0'), tensor(7, device='cuda:0'), tensor(14, device='cuda:0'), tensor(13, device='cuda:0'), tensor(14, device='cuda:0'), tensor(6, device='cuda:0'), tensor(7, device='cuda:0'), tensor(12, device='cuda:0'), tensor(9, device='cuda:0'), tensor(23, device='cuda:0'), tensor(15, device='cuda:0'), tensor(13, device='cuda:0'), tensor(21, device='cuda:0'), tensor(9, device='cuda:0'), tensor(4, device='cuda:0'), tensor(13, device='cuda:0'), tensor(11, device='cuda:0'), tensor(20, device='cuda:0'), tensor(7, device='cuda:0'), tensor(23, device='cuda:0'), tensor(24, device='cuda:0'), tensor(8, device='cuda:0'), tensor(13, device='cuda:0'), tensor(20, device='cuda:0'), tensor(6, device='cuda:0'), tensor(8, device='cuda:0'), tensor(8, device='cuda:0'), tensor(6, device='cuda:0'), tensor(8, device='cuda:0'), tensor(7, device='cuda:0'), tensor(12, device='cuda:0'), tensor(11, device='cuda:0'), tensor(15, device='cuda:0'), tensor(7, device='cuda:0'), tensor(9, 
device='cuda:0'), tensor(20, device='cuda:0'), tensor(7, device='cuda:0'), tensor(24, device='cuda:0'), tensor(20, device='cuda:0'), tensor(16, device='cuda:0'), tensor(13, device='cuda:0'), tensor(8, device='cuda:0'), tensor(21, device='cuda:0'), tensor(10, device='cuda:0'), tensor(11, device='cuda:0'), tensor(8, device='cuda:0'), tensor(6, device='cuda:0'), tensor(16, device='cuda:0'), tensor(7, device='cuda:0'), tensor(8, device='cuda:0'), tensor(24, device='cuda:0'), tensor(22, device='cuda:0'), tensor(3, device='cuda:0')],)]
Before returning, I printed out some info regarding the list:
print('all_hyp=', all_hyp)                  ==> RuntimeError: CUDA error: device-side assert triggered
print('len(all_hyp)=', len(all_hyp))        ==> 1
print('all_hyp[0]=', all_hyp[0])            ==> RuntimeError: CUDA error: device-side assert triggered
print('len(all_hyp[0])=', len(all_hyp[0]))  ==> 1
Appreciate if somebody can help on this one. Thanks |
st80810 | Could you rerun your script using
CUDA_LAUNCH_BLOCKING=1 python script.py args
and post the stack trace here? |
st80811 | Thanks for the hint. By running on CPU, I have located the bug. It is completely unrelated to the list. |
st80812 | I have trained an LSTM in PyTorch on financial data where a series of 14 values predicts the 15th. I split the data into Train, Test, and Validation sets. I trained the model until the loss stabilized. Everything looked good to me when using the model to predict on the Validation data.
When I was writing up my research to explain to my manager, I happened to get different predicted values each time I ran the model (prediction only) on the same input values. This is not what I expected, so I read some literature but was not able to explain my results. Intuitively, my results indicate there is some random variable, node, or gate that influences the prediction, but I cannot figure out where this is or if/how this can be configured.
Here is my model definition:
class TimeSeriesNNModel(nn.Module):
    def __init__(self):
        super(TimeSeriesNNModel, self).__init__()
        self.lstm1 = nn.LSTM(input_size=14, hidden_size=50, num_layers=1)
        self.lstm2 = nn.LSTM(input_size=50, hidden_size=25, num_layers=1)
        self.linear = nn.Linear(in_features=25, out_features=1)
        self.h_t1 = None
        self.c_t1 = None
        self.h_t2 = None
        self.c_t2 = None

    def initialize_model(self):
        self.h_t1 = torch.rand(1, 1, 50, dtype=torch.double)
        self.c_t1 = torch.rand(1, 1, 50, dtype=torch.double)
        self.h_t2 = torch.rand(1, 1, 25, dtype=torch.double)
        self.c_t2 = torch.rand(1, 1, 25, dtype=torch.double)

    def forward(self, input_data, future=0):
        outputs = []
        self.initialize_model()
        output = None
        for i, input_t in enumerate(input_data.chunk(input_data.size(1), dim=1)):
            self.h_t1, self.c_t1 = self.lstm1(input_t, (self.h_t1, self.c_t1))
            self.h_t2, self.c_t2 = self.lstm2(self.h_t1, (self.h_t2, self.c_t2))
            output = self.linear(self.h_t2)
            outputs += [output]
        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs
If anyone can point out what is wrong with my model or my understanding, I’d be really grateful. |
st80813 | An RNN does give the same result once its parameters are fixed.
I ran into the same problem yesterday.
The seemingly random behavior may be caused by:
dropout: When training, a dropout layer randomly chooses some units to block, which may cause random behavior; see https://pytorch.org/docs/stable/nn.html#dropout-layers for more information. In the predicting phase, you should put the model into evaluation mode with model.eval(), which disables dropout and rescales the weights over all layer units.
random initialization: You should not only reconstruct your network, but also load your parameters. (This was my case.)
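Worth noting for the model posted above: its initialize_model() draws fresh random initial hidden states with torch.rand on every forward pass, which by itself explains different predictions for the same input. A sketch of a deterministic alternative (an assumption, not a fix confirmed in this thread):
def initialize_model(self):
    # fixed zero initial states: repeated forward passes on the same
    # input now produce the same output
    self.h_t1 = torch.zeros(1, 1, 50, dtype=torch.double)
    self.c_t1 = torch.zeros(1, 1, 50, dtype=torch.double)
    self.h_t2 = torch.zeros(1, 1, 25, dtype=torch.double)
    self.c_t2 = torch.zeros(1, 1, 25, dtype=torch.double) |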
st80814 | This might be useful: https://pytorch.org/docs/stable/notes/randomness.html
It talks about reproducibility. |
st80815 | import torch
import torch.nn as nn
import time
import numpy as np
import os
from Cifar10 import dataloader
import torchvision.models as models
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
test_loader = dataloader.get_test_loader('../../../../data/cifar10/',
batch_size=125,
)
device = 'cuda'
vgg16 = models.vgg16(pretrained=True).to(device)
criterion = nn.CrossEntropyLoss().cuda()
def test():
    vgg16.eval()
    test_loss = 0
    correct = 0
    total = 0
    global total_acc_test_max
    acc_test_max = []
    with torch.no_grad():
        for batch_idx, (images, labels) in enumerate(test_loader):
            images, labels = images.to(device), labels.to(device)
            outputs = vgg16(images)
            loss = criterion(outputs, labels)
            test_loss += loss.data
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += predicted.eq(labels.data).cpu().sum()
            if batch_idx % 10 == 9:
                acc_test_max.append(100. * correct.item() / total)
                total_acc_test_max.append(100. * correct.item() / total)
                print('[TEST] : Loss: ({:.4f}) | Acc: ({:.2f}%) ({}/{})'
                      .format(test_loss / (batch_idx + 1), 100. * correct.item() / total, correct, total))
    print('*************** Val Mean Accuracy : ({:.2f}%) ***************'.format(np.mean(acc_test_max)))
    print('************ Total Val Mean Accuracy : ({:.2f}%) ************'.format(np.mean(total_acc_test_max)))
    print()

start_time = time.time()
total_acc_test_max = []
for epoch in range(0, 5):
    print("*************** {} ***************".format(epoch + 1))
    test()
    print("--- {:.0f} minutes {:.2f} seconds ---".format((time.time() - start_time) // 60, (time.time() - start_time) % 60)) |
I load the vgg16 pretrained weights and test it.
But the accuracy is very low.
What is the problem? |
st80816 | How did you define test_loader?
Make sure to use the same preprocessing as was applied during training. |
st80817 | @ptrblck
After load pretrained weight, it doesn’t need to train again right?
So I just tested it to check accuracy, didn’t retrain it.
But the result is like that.
I don’t understand which is problem yet. |
st80818 | You can get decent results without retraining. But, make sure your input and output formats are as expected (for VGG16). |
st80819 | @Abhilash_Srivastava @ptrblck
Was the pytorch pretrained model trained on ImageNet?
So, after loading the pretrained model and weights, should I test it only on ImageNet?
I tested it with Cifar10. |
st80820 | I found that there is some insignificant numerical error between the result obtained from batch mode computation and those obtained from iterating over the instances in the batch. For example:
batch_size = 32
input_size = 128
output_size = 256
linear = torch.nn.Linear(input_size,output_size)
x = torch.rand(batch_size, input_size)
output = linear(x)
output1 = linear(x[:1])
output2 = linear(x[:(batch_size//2)])
print(10e6*torch.max(torch.abs(output[:1] - output1)))
print(10e6*torch.max(torch.abs(output[:(batch_size//2)] - output2)))
The above code may print out something like:
tensor(1.4901, grad_fn=<MulBackward>) # a non-zero value, i.e., there is some numerical error
tensor(0., grad_fn=<MulBackward>) # a zero value, i.e, there is no error
Where does the above error come from? Any help is highly appreciated. |
st80821 | Solved by albanD in post #2
Hi,
The precision of a floating point number is around 1e-6. So anything smaller is going to be noise.
Also floating point operations are not really commutative or associative and will create very small error if you do them in a different order.
So this is expected behavior I’m afraid. You can us… |
st80822 | Hi,
The precision of a floating point number is around 1e-6. So anything smaller is going to be noise.
Also floating point operations are not really commutative or associative and will create very small error if you do them in a different order.
So this is expected behavior I’m afraid. You can use double precision numbers if you require more precision.
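A minimal demonstration of that non-associativity in 32-bit floats:
import torch

a, b, c = torch.tensor([1e20, -1e20, 1.0])
print((a + b) + c)  # tensor(1.)
print(a + (b + c))  # tensor(0.): -1e20 + 1 rounds back to -1e20 |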
st80823 | Hi Tuan Anh (and Alban)!
smutahoang:
I found that there is some insignificant numerical error between the result obtained from batch mode computation and those obtained from iterating over the instances in the batch.
…
Where does the above error come from? Any help is highly appreciated.
I do find this a bit odd.
It is true, as Alban points out, you can’t really complain and call
this an error. The results of the two versions of the computation
do agree to within 32-bit floating-point round-off error.
But floating-point computations don’t just give you random
differences (and they’re not allowed to). Again, as Alban notes,
doing a floating-point computation in a different (but mathematically
equivalent) order can lead to a round-off error difference in the result.
(A minor quibble: Floating-point operations are not associative,
but are commutative – at least in any sensible world, e.g., IEEE.)
My problem is that I can’t cook up any good reason the two versions
of the computation should be being done in different orders.
For a little more fun, I tweaked your sample code to run on my decrepit
pytorch 0.3.0 installation, and added to it a few more tests.
The two highlights
When done with 64-bit doubles, no difference arises.
When done with 32-bit floats, a difference only arises when batches
of size 1, 2, and 3 are passed into linear.
(When run on the gpu instead of the cpu I get very similar results,
except that the non-zero difference only shows up for a batch of
size 1.)
My complete script and output for these tests appear below.
Now it is true that our complaint will not stand up in a court of
floating-point law, but I am curious where this behavior comes
from and if there is really any good reason for it. Perhaps some
experts could chime in with more insight.
There must be a sensible reason, no?
Surely pytorch is not – dare I say it? – random …
Best.
K. Frank
Script:
import torch

print(torch.__version__)

torch.manual_seed(2019)

gpu = False
# gpu = True
print('gpu =', gpu)

batch_size = 32
input_size = 128
output_size = 256
linear = torch.nn.Linear(input_size, output_size)
x = torch.autograd.Variable(torch.rand(batch_size, input_size))
if gpu:
    linear.cuda()
    x = x.cuda()

output = linear(x)
output1 = linear(x[:1])
output2 = linear(x[:(batch_size//2)])
print(10e6*torch.max(torch.abs(output[:1] - output1)))
print(10e6*torch.max(torch.abs(output[:(batch_size//2)] - output2)))

for n in range(1, 5):
    outputn = linear(x[:n])
    print('n =', n, ', diff =', 10e6*torch.max(torch.abs(output[:1] - outputn[:1])))
    print('n =', n, ', diff =', 10e6*torch.max(torch.abs(output1 - outputn[:1])))

diffcount = 0
for n in range(1, batch_size):
    outputn = linear(x[:n])
    maxdiff = torch.max(torch.abs(output[:1] - outputn[:1])).data[0]
    if maxdiff != 0.0:
        print('n =', n, ', maxdiff =', maxdiff)
        diffcount += 1
print('diffcount =', diffcount)

dlinear = torch.nn.Linear(input_size, output_size)
dlinear.weight = torch.nn.parameter.Parameter(linear.weight.data.double())
dlinear.bias = torch.nn.parameter.Parameter(linear.bias.data.double())
dx = x.double()
doutput = dlinear(dx)
doutput1 = dlinear(dx[:1])
doutput2 = dlinear(dx[:(batch_size//2)])
print(10e6*torch.max(torch.abs(doutput[:1] - doutput1)))
print(10e6*torch.max(torch.abs(doutput[:(batch_size//2)] - doutput2)))

ddiffcount = 0
for n in range(1, batch_size):
    doutputn = dlinear(dx[:n])
    maxdiff = torch.max(torch.abs(doutput[:1] - doutputn[:1])).data[0]
    if maxdiff != 0.0:
        print('n =', n, ', maxdiff =', maxdiff)
        ddiffcount += 1
print('ddiffcount =', ddiffcount)
Output (set to run on cpu):
0.3.0b0+591e73e
gpu = False
Variable containing:
2.9802
[torch.FloatTensor of size 1]
Variable containing:
0
[torch.FloatTensor of size 1]
n = 1 , diff = Variable containing:
2.9802
[torch.FloatTensor of size 1]
n = 1 , diff = Variable containing:
0
[torch.FloatTensor of size 1]
n = 2 , diff = Variable containing:
2.6822
[torch.FloatTensor of size 1]
n = 2 , diff = Variable containing:
2.3842
[torch.FloatTensor of size 1]
n = 3 , diff = Variable containing:
2.3842
[torch.FloatTensor of size 1]
n = 3 , diff = Variable containing:
2.0117
[torch.FloatTensor of size 1]
n = 4 , diff = Variable containing:
0
[torch.FloatTensor of size 1]
n = 4 , diff = Variable containing:
2.9802
[torch.FloatTensor of size 1]
n = 1 , maxdiff = 2.980232238769531e-07
n = 2 , maxdiff = 2.682209014892578e-07
n = 3 , maxdiff = 2.384185791015625e-07
diffcount = 3
Variable containing:
0
[torch.DoubleTensor of size 1]
Variable containing:
0
[torch.DoubleTensor of size 1]
ddiffcount = 0 |
st80824 | My explanation, based on the original code, is that depending on the amount of computation to be done, different algorithms can be used.
If we focus only on the CPU, we have flags like this one that make the choice between using a single thread or OpenMP for multithreaded computations.
I think this is why you see a difference in the original code between the full operation and the one with one sample (single-core algorithm used), versus the runs where you use half the batch or the whole batch (multi-core algorithm used).
Of course these different algorithms will give rise to different rounding.
A similar argument can be made on the GPU, where the grid/block sizes are decided based on the input size.
Is that a more satisfying explanation?
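A hedged way to probe this (a diagnostic sketch, not a guarantee): pin the CPU backend to a single thread and re-run the comparison; if the discrepancy comes from a single- vs multi-threaded code path, it should shrink or vanish.
import torch

torch.set_num_threads(1)  # force the single-threaded code path

linear = torch.nn.Linear(128, 256)
x = torch.rand(32, 128)
print(torch.max(torch.abs(linear(x)[:1] - linear(x[:1])))) |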
st80825 | Hello Alban!
albanD:
My explanation based on the original code is that depending on the amount of computation to be done, different algorithms can be used.
If we focus only on cpu, we have flags like this one that make the choice between using a single thread or OpenMP for multithreaded computations.
…
Is that a more satisfying explanation?
Yes, I’ll buy that. Certainly bumping over to an OpenMP algorithm
(or some other size-dependent change of algorithm) would be
expected to change the details of the floating-point result.
(One specific detail doesn’t seem to add up: Your quoted
#define OMP_THRESHOLD 100000 seems to be too large to trigger
OpenMP for Tuan Ahn’s example, at least for my naive estimates of
what totalElements might be.)
Thanks.
K. Frank |
st80826 | Yes, this is most likely not even used in this example. This was just an example of a place where the underlying implementation depends on the input size.
Also the underlying libraries we use like MKL and OpenBLAS will have their own thresholds. And the cuda libraries will have their own set of threshold (you can actually play with cudnn algorithm selection for conv by setting torch.backends.cudnn.benchmark and torch.backends.cudnn.deterministic).
I’m afraid it’s beyond my knowledge which thresholds are hit in this particular case.
If you really want to know, you can trace down the call stack to see exactly which function is used, and what the conditions on that function are. I would be interested to know the answer if you do that!
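For reference, the cudnn knobs mentioned above, as they are usually set when reproducible conv algorithm selection matters more than speed:
import torch

torch.backends.cudnn.benchmark = False     # disable algorithm auto-tuning
torch.backends.cudnn.deterministic = True  # pick deterministic algorithms |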
st80827 | Hello!
I want to train a GAN that is able to output pairs of the form (x, x²), simple as that.
First, in the following we define the discriminator, the generator, and some true samples of the form (x, x²), where x varies from -0.5 to 0.5.
import torch
import numpy as np
import matplotlib.pyplot as plt
class SampleGenerator:
    def __init__(self, function, value_range):
        self.function = function
        self.range = value_range

    def generate(self, n_samples):
        x = (self.range[1] - self.range[0])*np.random.rand(n_samples) + self.range[0]
        x = x.reshape(-1, 1)
        return (np.hstack([x, self.function(x)]), np.array(n_samples*[1]).reshape(-1, 1))

class Discriminator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.design = torch.nn.Sequential(
            torch.nn.Linear(2, 10),
            torch.nn.PReLU(),
            torch.nn.Linear(10, 1),
            torch.nn.Sigmoid()
        )

    def forward(self, x):
        return self.design(x)

class Generator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.design = torch.nn.Sequential(
            torch.nn.Linear(5, 20),
            torch.nn.PReLU(),
            torch.nn.Linear(20, 20),
            torch.nn.PReLU(),
            torch.nn.Linear(20, 2),
        )

    def forward(self, x):
        return self.design(x)
s = SampleGenerator(lambda x: x**2, (-0.5, 0.5))
X_true, y_true = s.generate(100)
d = Discriminator()
doptimizer = torch.optim.Adam(d.parameters(), lr=0.0002, betas=(0.5, 0.999))
g = Generator()
goptimizer = torch.optim.Adam(g.parameters(), lr=0.0002, betas=(0.5, 0.999))
criterion = torch.nn.BCELoss()
X_true = torch.FloatTensor(X_true)
y_true = torch.FloatTensor(y_true)
y_fake = torch.FloatTensor(len(y_true)*[0]).view(-1,1)
# Now the crucial training step.
for epoch in range(10000):
    d.train()
    g.train()

    # Train discriminator
    # ... with real examples
    doptimizer.zero_grad()
    output = d(X_true)
    dloss_real = criterion(output, y_true)
    dloss_real.backward()
    d_x = output.mean().item()

    # ... with fake examples
    seed = torch.randn(100, 5)
    X_fake = g(seed)
    output = d(X_fake)
    dloss_fake = criterion(output, y_fake)
    dloss_fake.backward()
    dloss = dloss_real + dloss_fake
    d_gz = output.mean().item()
    doptimizer.step()

    # Train Generator
    goptimizer.zero_grad()
    output = d(g(seed))
    gloss = criterion(output, y_true)
    gloss.backward()
    goptimizer.step()

    if epoch % 10 == 0:
        print(d_x, d_gz)

# I let it run and run and run and in the end, the generated sample output looks quite bad.
seed = torch.rand(100, 5)
plt.scatter(g(seed).detach().numpy()[:, 0], g(seed).detach().numpy()[:, 1])
plt.scatter(X_true[:, 0], X_true[:, 1])
Is there anything obviously wrong with what I’ve done? I also noticed that the errors I output are always around 0.5. I have understood it such that d_x should start at 1 and decrease towards 0.5, while d_gz should start at 0 and increase towards 0.5 during training. But here both start off at the goal. Why?
Thank you very much!
Sincerely
Garve |
st80828 | Why do I get this error? I pass CIFAR10 images of size [3, 32, 32] to this model:
def conv_block(in_channels, out_channels, k):
    # set_trace()
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, k, padding=0),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2)
    )

from IPython.core.debugger import set_trace

class Top(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(3, 16, 3)
        self.lin = nn.Linear(20, 10)
        self.childone = Second()
        self.childtwo = Second()

    def forward(self, x):
        # set_trace()
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('top', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

class Second(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(16, 32, 3)
        self.lin = nn.Linear(20, 10)
        self.childone = Middle()
        self.childtwo = Middle()

    def forward(self, x):
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('middle', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

class Middle(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(32, 64, 1)
        self.lin = nn.Linear(20, 10)
        self.childone = Bottom()
        self.childtwo = Bottom()

    def forward(self, x):
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('middle', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

class Bottom(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(64, 128, 1)
        self.lin_one = nn.Linear(128, 10)

    def forward(self, x):
        # print('bottom', x.shape)
        out = self.encoder(x)
        return (self.lin_one(out.view(out.size(0), -1)))

model = Top()
# inp = [None, train_dataset[0][0]]
model.to('cuda') |
st80829 | Hi,
I guess x is not the right shape at the beginning of the forward of Second?
Can you add prints there to check that?
Also do you have an exact stack trace where this error comes from? |
st80830 | This error comes when running:
for i, (data, target) in enumerate(train_loader):
    data, target = data.to('cuda'), target.to('cuda')
    optimizer.zero_grad()
    set_trace()
    output = model(data)
The error appears after model(data).
After using set_trace() in Top():
ipdb> n
> <ipython-input-4-c93b1c3acece>(82)forward()
80 def forward(self, x):
81 set_trace()
---> 82 a = self.childone(self.encoder(x))
83 b = self.childtwo(self.encoder(x))
84 # print('top', a.shape, b.shape)
ipdb> x.shape
torch.Size([100, 3, 32, 32])
ipdb> n
RuntimeError: The expanded size of the tensor (3) must match the existing size (16) at non-singleton dimension 0. Target sizes: [3, 15, 15]. Tensor sizes: [16, 15, 15]
> <ipython-input-4-c93b1c3acece>(82)forward()
80 def forward(self, x):
81 set_trace()
---> 82 a = self.childone(self.encoder(x))
83 b = self.childtwo(self.encoder(x))
84 # print('top', a.shape, b.shape)
when I use
torchsummary.summary(model, (3, 32, 32), batch_size=100)
then it prints out the model, but I get error during training |
st80831 | But where is this error raised inside the execution of this line of Top()? Is it during Second? Or Middle? Or Bottom? |
st80832 | I used set_trace() in the forward of every class; it does not reach the set_trace() of any other class. Only the set_trace() of Top() is called, and it gives the error:
RuntimeError: The expanded size of the tensor (3) must match the existing size (16) at non-singleton dimension 0. Target sizes: [3, 15, 15]. Tensor sizes: [16, 15, 15] |
st80833 | When the error is raised, you should have a full traceback of where the error occurred no?
This is weird as at the top level, nothing has size 15x15… |
st80834 | OK, I got it: I used torchvision's make_grid by registering a hook, and that is why it was giving the error. Thanks. |
st80835 | I recently asked on the pytorch beginner forum if it was good practice to wrap the data with Variable each step or pre-wrap the data before training starts. It seems that its better (for some unknown reason to me) to wrap each step rather than before the training starts. Do people know why thats true?
Context, saw the practice here:
github.com
vinhkhuc/PyTorch-Mini-Tutorials/blob/master/2_logistic_regression.py#L20
def build_model(input_dim, output_dim):
    # We don't need the softmax layer here since CrossEntropyLoss already
    # uses it internally.
    model = torch.nn.Sequential()
    model.add_module("linear",
                     torch.nn.Linear(input_dim, output_dim, bias=False))
    return model

def train(model, loss, optimizer, x_val, y_val):
    x = Variable(x_val, requires_grad=False)
    y = Variable(y_val, requires_grad=False)

    # Reset gradient
    optimizer.zero_grad()

    # Forward
    fx = model.forward(x)
    output = loss.forward(fx, y)

    # Backward |
st80836 | Hi,
Few points:
As soon as you wrap a Tensor in a Variable, it will start saving all computation history for it. So each operation is slightly more costly (less and less true with recent changes). Moreover, when you call backward, it will go through all the history, and so the bigger it is, the longer it’s gonna take.
It is quite easy to do an operation on your Variable outside of the training loop (moving it to the GPU, for example). In that case, you will end up with an error at the second iteration saying that you are trying to backpropagate through the graph a second time.
That being said, Variables are going to disappear soon as they are merged with Tensors, so don’t overthink this |
st80837 | Sorry to resurrect this topic. I’m new to PyTorch, switching from TF, and trying to learn it.
So is this still the case? Something like
inputs = Variable(inputs, requires_grad=False) is still considered best practice?
Cheers! |
st80838 | Hi,
Not at all
Variable doesn’t exist anymore (it’s a noop).
What do you want to do?
If you want a new Tensor that does not share history with the current one, you can use inputs = inputs.detach().
Also feel free to open a new thread if you have question about how to write specific stuff or if you want to double check with us that you’re doing things the right way! |
st80839 | I have a dataset A that contains high-resolution and low-resolution flower images, labeled with 6 classes. During training, I randomly select 16 high-resolution and 16 low-resolution images and feed them to the network (in total we feed 32 images, batch size = 32). A cross-entropy loss is used as the loss function to train the classifier.
I want to assign a higher weight to training high-resolution images and a lower weight to training low-resolution images. Is this possible in pytorch? And how to do it? The dataloader returns something like
high_res_image, class_high_res, low_res_image, class_low_res = data
If I use the loss like
criterion = torch.nn.CrossEntropyLoss()
high_res_image, class_high_res, low_res_image, class_low_res = data
pred_high = network(high_res_image)
pred_low = network(low_res_image)
loss_total = 0.7 * criterion(pred_high, class_high_res) + 0.3 * criterion(pred_low, class_low_res)
then with the code above we effectively use a batch size of 16 instead of 32, because we feed the data separately. Any good suggestion to use the full 32 images and assign different weights in the loss? Thanks. |
st80840 | Solved by ptrblck in post #2
You could avoid the reduction in your criterion by using:
criterion = nn.CrossEntropyLoss(reduction='none')
This will give you a loss values for each sample in the batch.
After weighting each element, you could take the average (or normalize, sum etc.). |
st80841 | You could avoid the reduction in your criterion by using:
criterion = nn.CrossEntropyLoss(reduction='none')
This will give you a loss values for each sample in the batch.
After weighting each element, you could take the average (or normalize, sum etc.).
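A minimal sketch of that recipe with this thread's 0.7/0.3 weighting for high- vs low-resolution samples (the model and batch shapes below are stand-ins, not the poster's real code):
import torch
import torch.nn as nn

network = nn.Linear(128, 6)             # stand-in for the real classifier
high_res_image = torch.randn(16, 128)   # stand-ins for the two half-batches
low_res_image = torch.randn(16, 128)
class_high_res = torch.randint(0, 6, (16,))
class_low_res = torch.randint(0, 6, (16,))

criterion = nn.CrossEntropyLoss(reduction='none')

images = torch.cat([high_res_image, low_res_image], dim=0)   # full batch of 32
targets = torch.cat([class_high_res, class_low_res], dim=0)
weights = torch.cat([torch.full((16,), 0.7), torch.full((16,), 0.3)])

losses = criterion(network(images), targets)   # shape [32], one loss per sample
loss_total = (losses * weights).mean()
loss_total.backward() |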
st80842 | @ptrblck: So, if we use reduction='none', is the loss a scalar or a vector? If it is a scalar, how can I weight the loss among the samples in the batch?
Updated: OK, it returned a vector. Thanks |
st80843 | @ptrblck: It worked. However, the loss returned a matrix (assume I am working on semantic segmentation with 2 classes). The weights are assigned per batch sample as 1.0, 2.0, 3.0 and 4.0. The total loss takes the average of the loss after multiplying by the weights. I have tried loss_total = torch.mean(loss * weights) but it did not work (weights has size [4]), so I have to use a for loop. Do you have any suggestion to vectorize the loop?
This is my code
import numpy as np

num_class = 2
b, h, w = 4, 8, 8
input = torch.randn((b, 1, h, w), requires_grad=True)
target = torch.empty((b, h, w), dtype=torch.long).random_(num_class)
pred = torch.rand((b, num_class, h, w), dtype=torch.float)
criterion = nn.CrossEntropyLoss(reduction='none')
loss = criterion(pred, target)
weights = torch.from_numpy(np.asarray([1.0, 2.0, 3.0, 4.0]))
#loss_total = torch.mean(loss * weights)
loss_total = 0
for i in range(b):
    loss_total += loss[i] * weights[i]
loss_total = torch.mean(loss_total / b)
print(loss_total)
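For what it's worth, a [b, 1, 1] view of the weights (my suggestion, not from the thread) broadcasts against the [b, h, w] loss and reproduces the loop exactly:
weights = weights.view(-1, 1, 1).to(loss.dtype)  # [4] -> [4, 1, 1]
loss_total = torch.mean(loss * weights)          # same value as the loop above |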
st80844 | Hi,
Can you please suggest a way to extract the semantic interaction between two feature maps with different numbers of channels? e.g. (512,7,7) and (128,7,7)
Thanks |
st80845 | Pytorch 1.2, tacotron gst model.
I want to try to mask out the gst outputs.
I have two 3d arrays with shape [32, 177, 512].
I just want to have zeros in the first tensor in the same places as in the second.
Taking the mask and expanding its shape:
gst_mask = ~get_mask_from_lengths(text_lengths)
gst_mask = gst_mask.unsqueeze(-1).expand_as(gst_outputs)
Seems OK.
Then I do
gst_outputs.data.masked_fill_(gst_mask, 0.0)
But I get unexpected output: |
st80846 | Solved by albanD in post #4
Yes it will
As I was suspecting, gst_outputs is created with an expand. This means that this is not backed by actual memory and writing to one place of it will have effect on other places.
Using clone will force it to be backed by real memory and remove this problem ! |
st80847 | Hi,
How is gst_outputs created? Can your add a gst_outputs = gst_outputs.clone() before doing the masking? |
st80848 | Basically it is like this
github.com
mozilla/TTS/blob/master/models/tacotrongst.py#L70 1
def inference(self, characters, speaker_ids=None, style_mel=None):
B = characters.size(0)
inputs = self.embedding(characters)
encoder_outputs = self.encoder(inputs)
encoder_outputs = self._add_speaker_embedding(encoder_outputs,
speaker_ids)
if style_mel is not None:
gst_outputs = self.gst(style_mel)
gst_outputs = gst_outputs.expand(-1, encoder_outputs.size(1), -1)
encoder_outputs = encoder_outputs + gst_outputs
mel_outputs, alignments, stop_tokens = self.decoder.inference(
encoder_outputs)
mel_outputs = mel_outputs.view(B, -1, self.mel_dim)
linear_outputs = self.postnet(mel_outputs)
linear_outputs = self.last_linear(linear_outputs)
return mel_outputs, linear_outputs, alignments, stop_tokens
def _add_speaker_embedding(self, encoder_outputs, speaker_ids):
if hasattr(self, "speaker_embedding") and speaker_ids is not None:
speaker_embeddings = self.speaker_embedding(speaker_ids)
will the gradient propagate if i copy tensor? |
st80849 | Yes it will
As I was suspecting, gst_outputs is created with an expand. This means that it is not backed by actual memory, and writing to one place of it will have an effect on other places.
Using clone will force it to be backed by real memory and remove this problem!
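A minimal illustration of the difference:
import torch

t = torch.zeros(2, 1)
e = t.expand(2, 3)
print(e.storage().size())  # 2: the 2x3 view is backed by only 2 real elements

c = e.clone()
print(c.storage().size())  # 6: clone() gives every element its own memory |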
st80850 | If I want to normalize a time series from 3 to 7, can I just use the np.interp(data, [3, 7], [0, 1]) function? Or should I do it another way? I ask because my method does not work in eval() mode. The model trains well, but as I understand it, the Batchnorm layers do not work correctly in eval() mode. |
st80851 | I’m trying to incorporate this tutorial: https://adversarial-ml-tutorial.org/adversarial_examples/
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import os
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = models.googlenet(num_classes=5)
num_ftrs = model.aux1.fc2.in_features
model.aux1.fc2 = nn.Linear(num_ftrs, 5)
num_ftrs = model.aux2.fc2.in_features
model.aux2.fc2 = nn.Linear(num_ftrs, 5)
# Handle the primary net
#num_ftrs = model_ft.fc.in_features
#model_ft.fc = nn.Linear(num_ftrs,num_classes)
model.load_state_dict(torch.load("standard_googlenet.pth"))
#phase_transforms = transforms.Compose(transform)
vtype_train = datasets.ImageFolder(os.path.join("data/train"), transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))
vtype_test = datasets.ImageFolder(os.path.join("data/val"), transforms.Compose([transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))
train_loader = DataLoader(vtype_train, batch_size = 96, shuffle=True)
test_loader = DataLoader(vtype_test, batch_size = 16, shuffle=False)
def fgsm(model, X, y, epsilon):
    """ Construct FGSM adversarial examples on the examples X"""
    delta = torch.zeros_like(X, requires_grad=True)
    loss = nn.CrossEntropyLoss()(model(X + delta), y)
    loss.backward()
    return epsilon * delta.grad.detach().sign()

for X, y in test_loader:
    X, y = X.to(device), y.to(device)
    break

def plot_images(X, y, yp, M, N):
    f, ax = plt.subplots(M, N, sharex=True, sharey=True, figsize=(N, M*1.3))
    for i in range(M):
        for j in range(N):
            ax[i][j].imshow(1 - X[i*N+j][0].cpu().numpy())
            title = ax[i][j].set_title("Pred: {}".format(yp[i*N+j].max(dim=0)[1]))
            plt.setp(title, color=('g' if yp[i*N+j].max(dim=0)[1] == y[i*N+j] else 'r'))
            ax[i][j].set_axis_off()
    plt.tight_layout()

yp = model(X)
#yp = model_dnn_2(X)
plot_images(X, y, yp, 3, 6)

def epoch_adversarial(model, loader, attack, *args):
    total_loss, total_err = 0., 0.
    for X, y in loader:
        X, y = X.to(device), y.to(device)
        delta = attack(model, X, y, *args)
        yp = model(X + delta)
        loss = nn.CrossEntropyLoss()(yp, y)
        total_err += (yp.max(dim=1)[1] != y).sum().item()
        total_loss += loss.item() * X.shape[0]
    return total_err / len(loader.dataset), total_loss / len(loader.dataset)

#print(" GoogLeNet CNN:", epoch_adversarial(model, test_loader, fgsm, 0)[0])
Got this error while I’m trying to plot:
Traceback (most recent call last):
File "pgd.py", line 55, in <module>
plot_images(X, y, yp, 3, 6)
File "pgd.py", line 48, in plot_images
plt.setp(title, color=('g' if yp[i*N+j].max(dim=0)[1] == y[i*N+j] else 'r'))
RuntimeError: bool value of Tensor with more than one value is ambiguous
And when I comment out the plot and proceed to epoch_adversarial, I get one more error:
Traceback (most recent call last):
File "pgd.py", line 67, in <module>
print(" GoogLeNet CNN:", epoch_adversarial(model, test_loader, fgsm, 0)[0])
File "pgd.py", line 60, in epoch_adversarial
delta = attack(model, X, y, *args)
File "pgd.py", line 35, in fgsm
loss = nn.CrossEntropyLoss()(model(X + delta), y)
File "/home/yolo/Documents/raja/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/yolo/Documents/raja/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 904, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/yolo/Documents/raja/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1970, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/yolo/Documents/raja/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1295, in log_softmax
ret = input.log_softmax(dim)
AttributeError: 'GoogLeNetOuputs' object has no attribute 'log_softmax'
corresponding GoogLeNet: https://github.com/pytorch/vision/blob/master/torchvision/models/googlenet.py
Any thoughts on how to solve this? |
st80852 | related posts:
https://stackoverflow.com/questions/52946920/bool-value-of-tensor-with-more-than-one-value-is-ambiguous-in-pytorch
RuntimeError: bool value of Variable objects containing non-empty torch.LongTensor is ambiguous
RuntimeError: bool value of Tensor with more than one value is ambiguous
https://discuss.pytorch.org/t/why-cant-one-pass-data-through-a-torch-relu-module-directly
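For the GoogLeNetOuputs error specifically, one common workaround (my suggestion, not taken from the linked posts): torchvision's GoogLeNet returns a namedtuple with main and auxiliary logits when aux_logits is enabled in training mode, so unpack it before computing the loss.
out = model(X + delta)
logits = out.logits if hasattr(out, "logits") else out  # namedtuple vs plain tensor
loss = nn.CrossEntropyLoss()(logits, y) |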
st80853 | I am using a saved model for inference. When I used it to predict values, the values converge very quickly and the majority are predicted as the same value. This is even though my input dataset has significant variability. The loss decreases when training.
I am not sure why this is happening, any suggestions would be much appreciated!
Here is my code and some sample input data and output data:
model_state = torch.load(model_path)
model = LSTMModel(model_state['input_dim'], model_state['hidden_dim'],
                  model_state['num_layers'], model_state['output_dim'])
model.load_state_dict(model_state['state_dict'])
for parameter in model.parameters():
    parameter.requires_grad = False
model.eval()

num_features = (len(data.columns)-1)
actual_data = torch.tensor(data[target_col].values).float()
prediction_data = torch.tensor(data.drop(columns=target_col).values).float()
actual_data = actual_data.view(-1, 1, 1)
prediction_data = prediction_data.view(-1, 1, num_features)
prediction = model(prediction_data)
prediction = prediction.data.numpy()
Input data: (screenshot omitted)
Output predictions: (screenshot omitted) |
st80854 | Have you seen the same effect during validation?
Do you handle the validation and test set in the same manner, i.e. regarding the initial hidden state etc.?
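One common way the handling can differ (a hedged guess; the scaler and scaler_path below are hypothetical names, not taken from your code): feature scaling fit on the training set has to be reapplied at inference, otherwise an LSTM can collapse to near-constant outputs. A minimal sketch:
import joblib  # hypothetical: assumes a sklearn scaler was fit and saved during training

scaler = joblib.load(scaler_path)  # 'scaler_path' is a placeholder name
scaled = scaler.transform(data.drop(columns=target_col).values)
prediction_data = torch.tensor(scaled).float().view(-1, 1, num_features) |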
st80855 | Yes I am seeing the same effect in validation and I am handling both sets in the same manner.
For clarity, my LSTM has 10 neurons per layer, one hidden layer, a learning rate of 0.001, and 20000 training epochs. Might it be that my network is not deep enough?
Attached is a figure showing the output I am receiving.
The orange line is the actual data and the blue line is my forecast. |
st80856 | Hey guys,
I have a general question about running nn.Modules in for loops. Are memory leaks, slow gradients, or prohibitive memory usage things I should be concerned about? Is there a limit to the size of an nn.Module I can iterate over in a for loop? I have heard varying things from PyTorch users, and I feel like the question could be well addressed here.
Let's say, for example:
1. I have an encoded question of size (batch, seq_length, question_dim).
2. I then want to apply a time-dependent linear transformation to each time step of the question.
3. I want to concatenate my result from (2) to another vector called “context_previous” of size (batch, 1, question_dim) and linearly transform that to yield “context_present” of size (batch, 1, question_dim).
4. “context_present” then becomes “context_previous” in the next time step.
The only way that I see for this kind of computation to be possible is via a for loop. However, I feel like iterating repeatedly over the same nn.Module causes issues.
Here is some example code from a project I have been working on:
def forward(self, q, q_lens, v=None, n_objs=None):
    '''controller processes all time steps at once'''
    ### get question embedding ###
    bs = q.size(0)
    embeddings = self.embedding(q)
    if self.lstm:
        q, (o, h), q_lens = self.encoder(embeddings, q_lens)
    else:
        q, o, q_lens = self.encoder(embeddings, q_lens)
    q_mask = generate_mask(q, q_lens, reverse=True, device=self.device).squeeze(1)
    q = q.masked_fill(q_mask, 0)
    ### get context and module probs for each timestep ###
    probs = []
    c_list = []
    obj_probs = []
    c_prev = torch.zeros(bs, self.h_dim).to(self.device)
    v_prev = v if self.v_dim else None
    ### get contexts, module probs, and obj_probs for each timestep ###
    for timestep in range(self.t_controller):
        w1_t = self.w1[timestep](o).to(self.device)
        u = self.w2(torch.cat((w1_t, c_prev), dim=1))
        ### get module scores ###
        if self.n_modules is not None:
            module_weights = self.mlp(u)
            module_scores = self.softmax(module_weights)
            probs.append(module_scores)
        ### question attention for context ###
        elem_prod = u.unsqueeze(1).repeat(1, q.size(1), 1) * q
        q_weights = self.w3(elem_prod.masked_fill(q_mask, 0))
        q_weights[q_weights == 0] = float("-inf")
        q_att = self.softmax_context(q_weights)
        ### context ###
        c_prev = q * q_att
        c_prev = torch.sum(c_prev, dim=1)
        ### get obj probs ###
        if self.v_dim is not None:
            c_logits = self.contx_v_att(c_prev, v, n_objs)
            v_prev = c_logits * v_prev
            obj_probs.append(c_logits)
        ### append context and module probs to list ###
        c_list.append(c_prev)
    contexts = torch.stack(c_list, dim=1)
    module_probs = torch.stack(probs, dim=1) if probs else None
    obj_probs = torch.stack(obj_probs, dim=1) if obj_probs else None
    return contexts, q, o, module_probs, obj_probs
Any insights would be appreciated! Thank you!!! |
st80857 | For loops hurt performance (this is not specific to PyTorch; it is true of NumPy in general). If you can vectorize some of the operations (which may be a little difficult in this case), you can expect drastic performance gains.
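As a quick illustration of the gap (a minimal self-contained sketch; exact timings depend on hardware):
import time
import torch

x = torch.randn(1000, 256)
w = torch.randn(256, 256)

# Looped version: one small matmul per row
start = time.time()
out_loop = torch.stack([x[i] @ w for i in range(x.size(0))])
t_loop = time.time() - start

# Vectorized version: a single large matmul
start = time.time()
out_vec = x @ w
t_vec = time.time() - start

assert torch.allclose(out_loop, out_vec, atol=1e-5)
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.4f}s") |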
st80858 | Hmm, interesting. Do you have any idea or experience as to why that is the case? I would have tried to vectorize the entire process and eliminate any loops, but in the case I explained above I think that would be impossible, so I am unsure how to get around this issue.
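For what it's worth, one partial vectorization does look possible in code like the above (a hedged sketch; T, bs, h, w1, w2 are made-up stand-ins mirroring the shapes in the question): the time-dependent projections of o don't depend on the recurrent state, so they can be hoisted out of the loop, leaving only the true recurrence over c_prev sequential:
import torch
import torch.nn as nn

T, bs, h = 4, 32, 64
o = torch.randn(bs, h)
w1 = nn.ModuleList([nn.Linear(h, h) for _ in range(T)])  # time-dependent transforms
w2 = nn.Linear(2 * h, h)

# These projections only depend on o, so compute them all up front:
w1_all = torch.stack([w1[t](o) for t in range(T)], dim=0)  # (T, bs, h)

# Only the dependence on c_prev is inherently sequential:
c_prev = torch.zeros(bs, h)
contexts = []
for t in range(T):
    c_prev = torch.tanh(w2(torch.cat((w1_all[t], c_prev), dim=1)))
    contexts.append(c_prev)
contexts = torch.stack(contexts, dim=1)  # (bs, T, h) |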
st80859 | Hello,
Previously I used Keras for CNNs, so I am a newbie to both PyTorch and RNNs. In Keras you can write an RNN for sequence prediction like this:
in_out_neurons = 1
hidden_neurons = 300
model = Sequential()
model.add(LSTM(hidden_neurons, batch_input_shape=(None, length_of_sequences, in_out_neurons), return_sequences=False))
model.add(Dense(in_out_neurons))
model.add(Activation("linear"))
but when it comes to PyTorch I don't know how to implement it. I directly translated the code above into the code below, but it doesn't work.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.rnn1 = nn.GRU(input_size=seq_len,
                           hidden_size=128,
                           num_layers=1)
        self.dense1 = nn.Linear(128, 1)

    def forward(self, x, hidden):
        x, hidden = self.rnn1(x, hidden)
        x = self.dense1(x)
        return x, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        return Variable(weight.new(128, batch_size, 1).zero_())
How can I implement something like the Keras code? Thank you. |
st80860 | The input_size argument to any RNN says how many features there will be at each step of a sequence, not what its length is going to be. Keras uses static graphs, so it needs to know the sequence length up front; PyTorch has dynamic autodifferentiation, so it doesn't care about the sequence length and you can use a different one every iteration.
See the GRU docs for more details on the arguments.
Apart from this, your module looks good to me!
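A quick sketch of that point (the sizes are made up, not from your code):
import torch
import torch.nn as nn

gru = nn.GRU(input_size=5, hidden_size=8, num_layers=1)

# The same module accepts a different sequence length on every call:
for seq_len in (3, 7, 12):
    x = torch.randn(seq_len, 4, 5)  # (seq_len, batch, input_size)
    out, h = gru(x)
    print(out.shape)                # (seq_len, 4, 8) |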
st80861 | Thank you for your quick response, but the word “features” in the context of RNNs is still unclear to me. The GRU doc says,
input : A (seq_len x batch x input_size) tensor containing the features of the input sequence.
and
input_size – The number of expected features in the input x
For example, if you input a sequence
[[[ 0.1, 0.2]],
[[ 0.1, 0.2]],
[[ 0.3, 0.1]]]
, then seq_len is 3, batch is 1, and input_size (i.e. features) is 2?
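Checking that reading with a quick sketch (hidden_size=4 is an arbitrary choice):
import torch
import torch.nn as nn

x = torch.tensor([[[0.1, 0.2]],
                  [[0.1, 0.2]],
                  [[0.3, 0.1]]])  # (seq_len=3, batch=1, input_size=2)
gru = nn.GRU(input_size=2, hidden_size=4)
out, h = gru(x)
print(out.shape)  # torch.Size([3, 1, 4]) |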
st80862 | Thanks a lot for your help; the code below finally works:
import torch
import torch.nn as nn
from torch.autograd import Variable

features = 1
seq_len = 10
hidden_size = 128
batch_size = 32

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.rnn1 = nn.GRU(input_size=features,
                           hidden_size=hidden_size,
                           num_layers=1)
        self.dense1 = nn.Linear(hidden_size, 1)

    def forward(self, x, hidden):
        x, hidden = self.rnn1(x, hidden)
        x = x.select(1, seq_len-1).contiguous()
        x = x.view(-1, hidden_size)
        x = self.dense1(x)
        return x, hidden

    def init_hidden(self):
        weight = next(self.parameters()).data
        return Variable(weight.new(1, batch_size, hidden_size).zero_())

model = Net()
model.cuda()
hidden = model.init_hidden()

X_train_1 = X_train[0:batch_size].reshape(seq_len, batch_size, features)
y_train_1 = y_train[0:batch_size]

model.zero_grad()
T = torch.Tensor
X_train_1, y_train_1 = T(X_train_1), T(y_train_1)
X_train_1, y_train_1 = Variable(X_train_1).cuda(), Variable(y_train_1).cuda()
output, hidden = model(X_train_1, Variable(hidden.data)) |
st80863 | Thanks for your help. Like I wrote above, the script runs without errors, but the loss doesn't decrease over the epochs, so please give me some advice. I think the relevant parts are:
class Net(nn.Module):
    def __init__(self, features, cls_size):
        super(Net, self).__init__()
        self.rnn1 = nn.GRU(input_size=features,
                           hidden_size=hidden_size,
                           num_layers=1)
        self.dense1 = nn.Linear(hidden_size, cls_size)

    def forward(self, x, hidden):
        x, hidden = self.rnn1(x, hidden)
        x = x.select(0, maxlen-1).contiguous()
        x = x.view(-1, hidden_size)
        x = F.softmax(self.dense1(x))
        return x, hidden

    def init_hidden(self, batch_size=batch_size):
        weight = next(self.parameters()).data
        return Variable(weight.new(1, batch_size, hidden_size).zero_())

def var(x):
    x = Variable(x)
    if cuda:
        return x.cuda()
    else:
        return x

model = Net(features=features, cls_size=len(chars))
if cuda:
    model.cuda()

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)

def train():
    model.train()
    hidden = model.init_hidden()
    for epoch in range(len(sentences) // batch_size):
        X_batch = var(torch.FloatTensor(X[:, epoch*batch_size: (epoch+1)*batch_size, :]))
        y_batch = var(torch.LongTensor(y[epoch*batch_size: (epoch+1)*batch_size]))
        model.zero_grad()
        output, hidden = model(X_batch, var(hidden.data))
        loss = criterion(output, y_batch)
        loss.backward()
        optimizer.step()

for epoch in range(nb_epochs):
    train()
The input is a one-hot vector, and I tried changing the learning rate, but the result is the same. |
st80864 | I’m not sure, it’s hard to spot bugs in code that you can’t run. Why do you do this:
x = x.select(0, maxlen-1).contiguous()
Don’t you want to return predictions for the whole sequence? It seems to me that you’re only taking the last output.
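Also, one thing that stands out (worth checking, though I can't run your code): nn.CrossEntropyLoss already applies log_softmax internally, so the extra F.softmax in forward squashes the logits and can stall training. A minimal sketch of that change:
def forward(self, x, hidden):
    x, hidden = self.rnn1(x, hidden)
    x = x.select(0, maxlen-1).contiguous()
    x = x.view(-1, hidden_size)
    x = self.dense1(x)  # return raw logits; CrossEntropyLoss does the log_softmax
    return x, hidden |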
st80865 | In fact I’m trying to (re)implement Keras’s text generation example with PyTorch. In Keras’s recurrent layers, there is
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
and in the example, this is false, so I think taking only the last output is needed. |
st80866 | I’m not sure, I don’t know Keras. I’m just pointing it out (it might be easier to do x[-1] to achieve the same thing).
If you have the full code available somewhere I can take a look. |
st80867 | OK, thanks. Does
x = x[-1] i.e. x = x.select(0, maxlen-1).contiguous()
interfere with backpropagation?
I uploaded my code here.
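A quick self-contained check suggests it doesn't: indexing/select is an ordinary differentiable op, so gradients flow into the selected slice and are zero elsewhere.
import torch

x = torch.randn(5, 3, requires_grad=True)
y = x[-1].sum()  # same as x.select(0, 4).sum()
y.backward()
print(x.grad)    # ones in the last row, zeros everywhere else |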