id
stringlengths 3
8
| text
stringlengths 1
115k
|
---|---|
st83368 | perhaps a sigmoid could also work (since counting is like a step function and sigmoids are like step functions).
(I think u can even use a relu…) |
st83369 | There is one old and dusty article in here: https://distill.pub/2017/momentum/ 1
And I think the alpha and beta are lr and momentum that corresponds to PyTorch:
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
Now, if I print the optimizer I will get:
SGD (
Parameter Group 0
dampening: 0
lr: 0.05
momentum: 0.9
nesterov: False
weight_decay: 0
)
But what I would like is to get the effective learning rate (ELR) inside the forward.
For instance if batchnumer=0 is the first batch, I assume ELR is 0.05.
As the batches counter increases ELR may change. See how in that article step size (ELR) will vary in size.
Can you give me the idea how to do that?
I assume this is one step (ELR): |
st83370 | I was looking at the paper titled “Attention Is All You Need” (https://arxiv.org/pdf/1706.03762.pdf 1) which introduces the Transformer, an encoder-decoder sequence model based solely on attention mechanisms.
Here’s a snippet of the code from a PyTorch implementation (https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/master/train.py 1):
# prepare data
src_seq, src_pos, tgt_seq, tgt_pos = map(lambda x: x.to(device), batch)
gold = tgt_seq[:, 1:]
# forward
optimizer.zero_grad()
pred = model(src_seq, src_pos, tgt_seq, tgt_pos)
# backward
loss, n_correct = cal_performance(pred, gold, smoothing=smoothing)
loss.backward()
According to this, tgt_seq is one of the inputs to the decoder, while gold is the target of the decoder. How is it fair that one of the inputs to the model is tgt_seq, where gold, the variable used to compute the loss, is simply a slice of tgt_seq (an input to the model )?
Any ideas will be much appreciated, thanks in advance! |
st83371 | I think that they are using “teacher forcing” (the concept of using the real target outputs as each next input, instead of using the decoder’s guess as the next input) which is why they pass in the target output as input to the decoder. |
st83372 | What I really want is to seed the dataset and dataloader. I am adapting code from:
gist.github.com
https://gist.github.com/kevinzakka/d33bf8d6c7f06a9d8c76d97a7879f5cb 6
data_loader.py
"""
Create train, valid, test iterators for CIFAR-10 [1].
Easily extended to MNIST, CIFAR-100 and Imagenet.
[1]: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252/4
"""
import torch
import numpy as np
This file has been truncated. show original
utils.py
import matplotlib.pyplot as plt
label_names = [
'airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
This file has been truncated. show original
Anyone know how to seed this properly? What are the best practices for seeding things in Pytorch.
Honestly, I have no idea if there is an algorithm specific way for GPU vs CPU. I care mostly about general pytorch and make sure my code is “truly random”. Specially when it uses GPU I guess… |
st83373 | related:
Best practices for seeding random numbers on gpu? 27
The Random Seed 18
Best practices for generating a random seed to seed Pytorch?
stackoverflow.com
Best practices for generating a random seeds to seed Pytorch? 42
pytorch
asked by
Pinocchio
on 04:20PM - 08 Aug 19 UTC |
st83374 | The random numbers I get on the gpu are different than on the cpu:
In [17]: torch.manual_seed(123); torch.rand(device='cuda', size=(2,3))
Out[17]:
tensor([[ 0.0474, 0.1951, 0.2235],
[ 0.5574, 0.7839, 0.9306]], device='cuda:0')
In [18]: torch.manual_seed(123); torch.rand(device='cuda', size=(2,3))
Out[18]:
tensor([[ 0.0474, 0.1951, 0.2235],
[ 0.5574, 0.7839, 0.9306]], device='cuda:0')
In [19]: torch.manual_seed(123); torch.rand(device='cpu', size=(2,3))
Out[19]:
tensor([[ 0.2961, 0.5166, 0.2517],
[ 0.6886, 0.0740, 0.8665]])
In [20]: torch.manual_seed(123); torch.rand(device='cpu', size=(2,3))
Out[20]:
tensor([[ 0.2961, 0.5166, 0.2517],
[ 0.6886, 0.0740, 0.8665]])
One option could be to always generate on the cpu, then copy to the gpu. But presumably this characteristic is going to apply to eg dropout masks too? Another option would be to choose that seed consistency is only guaranteed conditioned on choice of using gpu or cpu.
What are best practices that are generally used to ensure reproducibility given a random seed? |
st83375 | Did you find an answer?
related:
Best practices for seeding random numbers on gpu?
The Random Seed 20
Best practices for generating a random seed to seed Pytorch? 24
stackoverflow.com
Best practices for generating a random seeds to seed Pytorch? 30
pytorch
asked by
Pinocchio
on 04:20PM - 08 Aug 19 UTC |
st83376 | Today i use such method to set random seed
The args.seed is default to 1
torch.manual_seed(args.seed)
if args.cuda:
torch.cuda.manual_seed(args.seed)
But everytime the result is slightly different, why?
I use PyTorch0.4 |
st83377 | Are you using the GPU with cuDNN?
If so, you could set the cuDNN behavior to be deterministic, which would unfortunately trade performance for determinism.
torch.backends.cudnn.deterministic = True
Also, are you using some other libraries sampling random numbers?
If so, you should also seed them. |
st83378 | No, i use none of them.
If i haven’t use
import cudnn
means no use of the module of cuDNN,i only use torch and torchvision
Does Xavier Initialization matter?I use such way to init my network. |
st83379 | The initializations should yield the same random numbers, if the seed was set.
To see, if you are using cuDNN, use print(torch.backends.cudnn.enabled). |
st83380 | Why can’t you write meaningful titles? It makes everyone’s life harder. Please write meaningful titles.
related:
Best practices for seeding random numbers on gpu? 4
The Random Seed
Best practices for generating a random seed to seed Pytorch? 6
stackoverflow.com
Best practices for generating a random seeds to seed Pytorch? 4
pytorch
asked by
Pinocchio
on 04:20PM - 08 Aug 19 UTC |
st83381 | I wish to have a multi-task model for two tasks in Pytorch. I coded it as follows:
x_trains=[]
y_trains=[]
num_tasks=2
for i in range(num_tasks):
x_trains.append(torch.from_numpy(np.random.rand(100,1,50)).float())
y_trains.append(torch.from_numpy(np.array([np.random.randint(10) for i in range(100)])).long())
nb_classes=10
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.all_task_layers=[]
for i in range(num_tasks):
self.all_task_layers.append(nn.Conv1d(1, 128, 8))
self.all_task_layers.append(nn.BatchNorm1d(128))
self.all_task_layers.append(nn.Conv1d(128, 256, 5))
self.all_task_layers.append(nn.BatchNorm1d(256))
self.all_task_layers.append(nn.Conv1d(256, 128, 3))
self.all_task_layers.append(nn.BatchNorm1d(128))
self.all_task_layers.append(nn.Linear(128, nb_classes))
#self.dict_layers_for_tasks[i][1]
self.all_b1s=[]
self.all_b2s=[]
self.all_b3s=[]
self.all_dense1s=[]
def forward(self, x_num_tasks):
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b1s.append(F.relu(self.all_task_layers[i+1](self.all_task_layers[i+0](x_num_tasks[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b2s.append(F.relu(self.all_task_layers[i+3](self.all_task_layers[i+2](self.all_b1s[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b3s.append(F.relu(self.all_task_layers[i+5](self.all_task_layers[i+4](self.all_b2s[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_dense1s.append(self.all_task_layers[i+6](self.all_b3s[i]))
return self.all_dense1s
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
losses=[]
for t in range(50):
y_preds= model(x_trains)
optimizer.zero_grad()
for i in range(num_tasks):
loss=criterion(y_preds[i], y_trains[i])
losses.append(loss)
print(t,i,loss.item())
loss.backward(retain_graph=True)
optimizer.step()
But it gives me error for initializing the model.parameters(). Specifically I get an empty list. The same model if I try to initialize the layers for the two tasks with two sets of Conv->BatchNorm->Conv->BatchNorm->Conv->BatchNorm->GlobalAveragePooling->Linear then it would work. But if I have let’s say 5 tasks, then it can get pretty messy, and that is why I created a list of layers, and indexed them. For example, the first 7 layers are for task 1 and then the last 7 for task 2. But then the model.parameters() is giving me an empty list. How else could I do it? Or is there a simple fix, I am overlooking? |
st83382 | Try to use nn.ModuleList instead of Python lists, as this will properly register the modules internally. |
st83383 | I was trying to implement using
criterion = torch.nn.MSELoss()
lr = 1e-4
weight_decay = 0 # for torch.optim.SGD
lmbd = 0.9 # for custom L2 regularization
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
for data in test_loader:
images, labels = data
test = Variable(images.view(tuple(images.shape)))
outputs = model(test)
true_y = labels.max(1)[1]
predicted_y = outputs.max(1)[1]
# Compute and print loss.
loss = criterion(predicted_y, true_y)
optimizer.zero_grad()
reg_loss = None
for param in model.parameters():
if reg_loss is None:
reg_loss = 0.5 * torch.sum(param**2)
else:
reg_loss = reg_loss + 0.5 * param.norm(2)**2
loss += lmbd * reg_loss
print('Loss: ', loss)
but getting below error
RuntimeError: _thnn_mse_loss_forward not supported on CPUType for Long |
st83384 | Solved by Nikronic in post #2
Hi,
You can only pass float tensors to calculate gradient using MSELoss. Try to add float() at the end of predicted_y and true_y tensors like below:
The reason is when you use .max() it returns Long or simply integer not float numbers. So you have to cast them before passing to the loss function. … |
st83385 | Hi,
You can only pass float tensors to calculate gradient using MSELoss. Try to add float() at the end of predicted_y and true_y tensors like below:
Py_Buddy:
loss = criterion(predicted_y.float(), true_y.float())
The reason is when you use .max() it returns Long or simply integer not float numbers. So you have to cast them before passing to the loss function.
Good luck
Nik |
st83386 | Thanks for your help
but need only loss value(single value only) , but getting something below
image.png1112×566 36.1 KB |
st83387 | Oh, I forgot to say which is weird that you are not getting any error.
torch.max(tensor, dim) returns a tuple of values and corresponding indices. So change your code in this way:
Py_Buddy:
_, true_y = labels.max(1)[1]
_, predicted_y = outputs.max(1)[1]
Finally, if you want to get loss values and aggregate them or anything that only related to its value as you said, you must use loss.item().
Py_Buddy:
print('Loss: ', loss.item()) |
st83388 | Anything, If you want to just print the loss value and do not change it in anyway, use .item() and it will return the corresponding value. In your case, just .item() to the print function.
But if you want to change the loss itself, for instance, merging two different losses by weighted sum, something like loss = 10*loss1 + 5*loss2, you should not use .item() because you will lose grad_fn which is your backward function. |
st83389 | Hi,
I’m trying to implement this paper 3. Basically, at test time, for every test image x_i, I want to fetch images relevant to that test image x_i, finetune the model on the retrieved images, make a prediction on x_i, and then discard the finetuned weights.
Right now, everything is sequential, which basically means that my test batch size is 1, which is pretty slow. Is it possible to have multiple (independent) models on the same GPU that are updated in parallel ?
Thanks,
Lucas |
st83390 | My Code is actually really simple just for testing but I get this error and I can’t find anything similar online
class customDatasetflowGraph(Dataset):
def __init__(self,path):
self.x = np.load(path+"/"+"dataX.npy")
self.infoSuperp = np.load(path+"/"+"infoSuperp.npy")
self.data_len = self.infoSuperp.shape[0]
def __getitem__(self,index):
print(index)
print(self.x.shape)
vid = self.x
return vid
def __len__(self): # This need to exist from the documentation
return self.data_len
And this is the main
train_flow = aDataLoaderClass.customDatasetflowGraph('../flowData')
# train_loader_flow = DataLoader(train_flow, batch_size=64, shuffle=True)
train_loader_flow = DataLoader(train_flow, batch_size=1, shuffle = True)#, num_workers=1)
for vid in train_loader_flow:
print(vid.shape)
ink = input("next ")
I am stack for days now if anyone has any ideas I will be more than happy to descuss thank you for your time |
st83391 | Hi,
Can your add the stacktrace of error and the output of the print function you have added? |
st83392 | Hi everyone !
If I have a single GPU, are the following functions equivalent ?
my_model = model.cuda()
my_model = DataParallel(model, devices_ids = [torch.cuda.current_device()])
my_model = DataParallel(model, devices_ids = [torch.cuda.current_device()]).cuda()
Thank you! |
st83393 | If you are using a single GPU, these approaches should yield the same results.
However, wrapping the model in an nn.DataParallel will also use the .module attribute for the original model:
model = nn.Linear(1, 1)
print(model)
> Linear(in_features=1, out_features=1, bias=True)
model = nn.DataParallel(model, device_ids=[0])
print(model)
> DataParallel(
(module): Linear(in_features=1, out_features=1, bias=True)
)
Do you see any unexpected behavior? |
st83394 | Thanks for the response.
If I do my_model = model.cuda() I have to do model_input = model_input.cuda(), otherwise I get the following message:
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'
But if I do:
my_model = DataParallel(model, devices_ids = [torch.cuda.current_device()])
or
my_model = DataParallel(model, devices_ids = [torch.cuda.current_device()]).cuda()
I don’t need to do model_input = model_input.cuda(), it works as well with model_input = model_input.cpu(). |
st83395 | However for both model.cuda() and DataParallel(model, ...) I don’t have to specify the map_location in torch.load(path, map_location = …), it can either be 'cpu' or 'cuda'.
I use torch.load() to load the weights of a trained model from a .pt file. |
st83396 | Hi.
I’m trying to use GPU with my code. This is the code about cuda.
model = model.cuda()
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count)))
But I get RuntimeError: module mush have its paraeters and buffers on device cuda:0 (device_ids[0]) but found one of them on device:cpu this error.
My Pytorch already know that I have GPU on my computer. I checked it with this code.
>>> import torch
>>> torch.cuda.device_cound()
1 |
st83397 | It looks like you are passing the function torch.cuda.device_count to device_ids (note the missing parentheses).
Try to call this method to get the device count: device_ids=torch.cuda.device_count(). |
st83398 | Thanks for your advice but sadly it was just my mistake.
My actual code is device_ids=torch.cuda.device_count().
I think I need another adivce. |
st83399 | Could you try to call DataParallel first, then push the model to the device?
I don’t think it’ll get rid of the error, but it’s the recommended way, if I’m not mistaken.
If that doesn’t help, could you check, if you have to(device) or .cuda() calls inside your model?
If so, these will mess up the DataParallel wrapper, as they force this particular tensor to be pushed to the specified device. |
st83400 | I have a model with a custom RoiPooling operator that inherits from torch.autograd.Function. When trying to convert the pytorch model to caffe2, I’m getting the error:
File "/home/thomasbalestri/PycharmProjects/pytorch-faster_rcnn/tools/../lib/nets/network.py", line 110, in _roi_pool_layer
return RoIPoolFunction(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1. / 16.)(bottom, rois)
RuntimeError: Tracing of legacy functions is not supported
I see that there’s a list of supported operators here 32. Does this mean that there’s no way to export pytorch models with custom operators to caffe2? |
st83401 | Hi @Feynman27, right now, we don’t support neither custom operators nor legacy operators.
For custom operators and legacy operators, later we may expose APIs to allows users to define the translation rules. |
st83402 | Is there any progress on this since Sep 2017? If I have a custom function with forward and backward passes, how can I export it to onnx? |
st83403 | Hi Guys,
I have GLIBC version issue.
Current linux system GLIBC is 2.12 and pytorch need 2.14.
I tried two ways:
Install from the source - No success!
Installed GLIBC_2.14 in home and linked pytorch with it. I could import pytorch, but when I run gives segmentataion fault.
I see similar GLIBC version issue with tensorflow as well when I install using pip. But no issues if installed using conda. I suspect that tensorflow is picking glibc from anaconda.
But, no luck with pytorch either ways (pip or conda).
Any commments ?
Thanks, |
st83404 | I found the wrong description on Pytorch Docs.
So I wanna revise them.
How can I fix it? |
st83405 | Solved by Nikronic in post #4
It is odd. Because you can access the GitHub page exactly at the same place the docs are. If it does not bother you, can you share the doc page you are considering? Maybe I can help you find the source code. |
st83406 | Hi,
The best way is to fork the documentation on GitHub, then make changes and send a pull request. Developers will reach you there for sure.
Good luck |
st83407 | Thnaks to your reply.
I forked the pytorch Github, but I didn’t find the where description is. |
st83408 | It is odd. Because you can access the GitHub page exactly at the same place the docs are. If it does not bother you, can you share the doc page you are considering? Maybe I can help you find the source code. |
st83409 | Sure
I want to revise this page
https://pytorch.org/docs/stable/nn.html 1
Oh, I see.
The URL of the docs page is corresponding to the directory in Github!
I didn’t know that…
Thank you! |
st83410 | I am setting up FLOPs counter for models using register_forward_hook to register a counter at each module. However, I found forward hooks can only be applied on modules defined in the __init__ function of the model after some coding experiments, meaning we cannot register hooks on APIs like nn.functional.interpolate because they need to confirm the size of output which can be various if the input size is not fixed.
A post from Calculating flops of a given pytorch model 8 also mentioned this problem.
Are there any workarounds for registering a forward hook for those functions? Or are there any workarounds for counting FLOPs of interpolate together with other modules? |
st83411 | if nn.functional.interpolate is used for upsampling, then one can register forward hooks on nn.Upsample.
However, one can always wrap nn.functional.interpolate into a nn.Module inherited class to define a module containing nn.functional.interpolate. Then instantiate the defined module in the __init__ function of the model, where the defined modules can be registered forward hooks. |
st83412 | I trained LSTM-MDN model using Adam, the training loss decreased firstly, but after serveral hundreds epoch, it increased and higher than Initial value. Then I retrained from the point where the loss was lowest and reduced learning rate 10 times(from 1e-3 to 1e-4). The training loss alse decreased firstly and increased later.I initially thought there was some bugs in the code, but I didn’t find any bug. Then I replaced adam with SGD(momentum=0), training loss didn’t increased, but it converged to a relatively large value which was higher than the loss from adam, so I thougtht there was something wrong in adam.
I never found the reason, I hope someone can help me find the reason.Thansk!
loss (adam)
image.png2610×1096 106 KB |
st83413 | Do you zero out the gradients after each optimizer step?
I’ve seen similar behavior when the gradients were accidentally accumulated. |
st83414 | ptrblck:
zero out
I really appreciate you replying to me.Here is my code.Is there a wrong order between computing the loss and calling zero_grad? Thank you very much~~
image.png1750×448 32.5 KB |
st83415 | When I replaced adam with SGD(momentum=0), training loss didn’t increase, but it converged to a relatively large value which was higher than the loss from adam.
image.png2384×946 77 KB |
st83416 | The order of your calls looks alright.
I’m unfortunately not really familiar with your use case and the loss function you are using.
Also, what is ensure_shared_grad doing? Is it just copying the current gradients to another model? |
st83417 | Yes, it is. Thanks for your replying.I’m still looking for the reason.Maybe there is something wrong with adam? |
st83418 | I also encounter this issue.
Is there any suggestion when we use adam optimizer? |
st83419 | Hello,
I am training a network with 3D MR-CT images.
right now I have the epoch wise loss being printed like:
image.png1019×533 8.67 KB
The overall train and validation loss looks so far:
train and validation losses in last 15 epochs:
Overall validation loss:
The model saving criteria is minimum validation loss, so the model was last saved in epoch 30.
As the network is still training and will be so untill epoch 120. What can I interpret from these data about the network being trained |
st83420 | I am having a trouble with understanding the workings of .backward() and .grad() in PyTorch. Particularly, w.grad seems to become None type in the following code. Could you help to clarify what’s going on here. I am using PyTorch version 1.0.0. Thanks.
Screenshot 2019-02-23 at 23.06.17.png1140×623 68.1 KB |
st83421 | Not sure what’s going on there, but your issue is probably fixed if you change x_data to
x_data = torch.tensor([1., 2., 3.])
and the same for y_data. |
st83422 | Hi,
I’m trying to implement a custom loss function that is a regularization of the standard cross entropy loss function. Essentially I’m adding a penalty parameter. I was wondering the best way to implement this loss function - as a nn.Module, or just a def myloss() type. I can provide specifications for the function if necessary.
Thanks in advance! |
st83423 | If your custom loss function uses internal parameters or some other arguments, I would use the nn.Module approach as it will properly encapsulate all attributes.
On the other hand you could just define a method, if you are using a pure functional approach. |
st83424 | Thanks for the input! I think I’ll go with the nn.Module approach. If I do this, I shouldn’t need to define a backward function, only a forward, correct? |
st83425 | If you are using PyTorch methods only (no numpy etc.) you can just define forward, as Autograd will automatically create the backward. |
st83426 | Before implementing the RCE (adding penalty parameter), I wanted to make sure my model works with the regular cross entropy function (the RCE required for my model only guarantees smoothness in probability distribution - when penalty parameter is set to 0 in my custom loss, it is the normal CE loss function). However, I don’t believe I have the data in correct format. When calling:
nn.CrossEntropyLoss()(out, target.squeeze())
I get a run time error:
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
A little background:
My data looks like this:
1.0, 72
0.9741266547569558, 69
0.8379395394315619, 68
0.6586391650305975, 77
0.8268797607858336, 55
0.1315648101238786, 69
0.016174165344821745, 74
0.09840399297894489, 87
0.6690187519456767, 65
0.38906138226255316, 68
The data is representative of: y,C(dy), where y is the output of a defined mathematical function, and C(dy) is the class to which each point should belong. (With this specific data set, I have 144 classes.)
dy is generated by taking y[i+1] - y[i]. The number of classes for the entire data set is:
int(np.floor(abs(dy.amax - dy.amin) / 0.04)).
The 0.04 is just a specified classification ‘bin’ width.
The line before I call the loss function, my out is a tensor of size [20,100], and my target is a tensor of size [20,100] (after a target.squeeze().
Any ideas?
Edit: Fix formatting. |
st83427 | After further investigation, I did not have the correct class. I ended up removing everything relating to numpy and used pure pytorch methods. I fixed the loss using CEL by calling:
loss = nn.CrossEntropyLoss()(out, torch.max(target,1)[1])
However, I’m not quite sure what changing the targets argument actually did. |
st83428 | nn.Linear(num_in_features, num_out_features) ?
I have to make a network where the initial hidden layer bias is constant, how can I do that ? |
st83429 | If you don’t want to update the bias parameter, you could set the requires_grad attribute of the bias to False and don’t pass it to the optimizer:
lin = nn.Linear(1, 1)
lin.bias.requires_grad = False
optimizer = torch.optim.Adam([lin.weight], lr=1.)
output = lin(torch.randn(1, 1))
output.backward()
lin.bias.grad
>
lin.weight.grad
> tensor([[-0.0095]]) |
st83430 | These methods should work:
lin = nn.Linear(1, 1)
with torch.no_grad():
lin.bias.fill_(1.)
# or
lin.bias = nn.Parameter(torch.randn(1)) |
st83431 | I want to implement a decoder but I don’t need its input. Do I just pass a torch.zeros for its input?
e.g.
def main_test_zero_input_to_lstm():
## model params
n_layers, nb_directions = 1, 1
hidden_size = 64
input_size = 3
decoder = nn.LSTM(input_size=input_size,hidden_size=hidden_size).to(device)
print(lstm.hidden_size)
## initialize the hidden state(s)
batch_size = 1
h_n = torch.randn(n_layers*nb_directions, batch_size, hidden_size)
c_n = torch.randn(n_layers*nb_directions, batch_size, hidden_size)
hidden = (h_n, c_n)
## pass through fake data
seq_len, embedding_dim = 1, input_size
zeros_fake_data = torch.zeros(seq_len, batch_size, embedding_dim)
#zeros_fake_data = torch.zeros(seq_len, embedding_dim) # this throws an error, we need the batch dimension!!!
## do a decoder step
out, hidden = decoder(zeros_fake_data, hidden)
##
print(f'out size equals hidden state size: {out.size() == hidden[0].size()}')
print(f'out.size() = {out.size()}')
print(out)
if __name__ == '__main__':
print('start')
main_test_zero_input_to_lstm()
print('DONE \a')
or is this bad practice? Perhaps I do need its input from previous step…? |
st83432 | When trying to work out bottlenecks in my code I found that the _batch_mahalanobis function from torch.distributions.multivariate_normal was quite slow.
I decided to compare the most recent release of the function (from master branch 8) with a previous version 3, and the latest version is notably slower (on my machine almost 4x slower).
Here are the results of my script that compares the runtime and results of the two methods:
[New] Time Taken: 0.677909s
[Old] Time Taken: 0.159045s
Average Relative Error: 7.209917385342379e-11
Comparison Script
import time
import torch
import numpy as np
def _batch_trtrs_lower(bb, bA):
"""
Applies `torch.trtrs` for batches of matrices. `bb` and `bA` should have
the same batch shape.
"""
flat_b = bb.reshape((-1,) + bb.shape[-2:])
flat_A = bA.reshape((-1,) + bA.shape[-2:])
flat_X = torch.stack([torch.trtrs(b, A, upper=False)[0] for b, A in zip(flat_b, flat_A)])
return flat_X.reshape(bb.shape)
def _batch_mahalanobis_old(bL, bx):
r"""
Computes the squared Mahalanobis distance :math:`\mathbf{x}^\top\mathbf{M}^{-1}\mathbf{x}`
for a factored :math:`\mathbf{M} = \mathbf{L}\mathbf{L}^\top`.
Accepts batches for both bL and bx. They are not necessarily assumed to have the same batch
shape, but `bL` one should be able to broadcasted to `bx` one.
"""
n = bx.size(-1)
bL = bL.expand(bx.shape[bx.dim() - bL.dim() + 1:] + (n,))
flat_L = bL.reshape(-1, n, n) # shape = b x n x n
flat_x = bx.reshape(-1, flat_L.size(0), n) # shape = c x b x n
flat_x_swap = flat_x.permute(1, 2, 0) # shape = b x n x c
M_swap = _batch_trtrs_lower(flat_x_swap, flat_L).pow(2).sum(-2) # shape = b x c
return M_swap.t().reshape(bx.shape[:-1])
def _batch_mv(bmat, bvec):
r"""
Performs a batched matrix-vector product, with compatible but different batch shapes.
This function takes as input `bmat`, containing :math:`n \times n` matrices, and
`bvec`, containing length :math:`n` vectors.
Both `bmat` and `bvec` may have any number of leading dimensions, which correspond
to a batch shape. They are not necessarily assumed to have the same batch shape,
just ones which can be broadcasted.
"""
return torch.matmul(bmat, bvec.unsqueeze(-1)).squeeze(-1)
def _batch_mahalanobis(bL, bx):
r"""
Computes the squared Mahalanobis distance :math:`\mathbf{x}^\top\mathbf{M}^{-1}\mathbf{x}`
for a factored :math:`\mathbf{M} = \mathbf{L}\mathbf{L}^\top`.
Accepts batches for both bL and bx. They are not necessarily assumed to have the same batch
shape, but `bL` one should be able to broadcasted to `bx` one.
"""
n = bx.size(-1)
bx_batch_shape = bx.shape[:-1]
# Assume that bL.shape = (i, 1, n, n), bx.shape = (..., i, j, n),
# we are going to make bx have shape (..., 1, j, i, 1, n) to apply batched tri.solve
bx_batch_dims = len(bx_batch_shape)
bL_batch_dims = bL.dim() - 2
outer_batch_dims = bx_batch_dims - bL_batch_dims
old_batch_dims = outer_batch_dims + bL_batch_dims
new_batch_dims = outer_batch_dims + 2 * bL_batch_dims
# Reshape bx with the shape (..., 1, i, j, 1, n)
bx_new_shape = bx.shape[:outer_batch_dims]
for (sL, sx) in zip(bL.shape[:-2], bx.shape[outer_batch_dims:-1]):
bx_new_shape += (sx // sL, sL)
bx_new_shape += (n,)
bx = bx.reshape(bx_new_shape)
# Permute bx to make it have shape (..., 1, j, i, 1, n)
permute_dims = (list(range(outer_batch_dims)) +
list(range(outer_batch_dims, new_batch_dims, 2)) +
list(range(outer_batch_dims + 1, new_batch_dims, 2)) +
[new_batch_dims])
bx = bx.permute(permute_dims)
flat_L = bL.reshape(-1, n, n) # shape = b x n x n
flat_x = bx.reshape(-1, flat_L.size(0), n) # shape = c x b x n
flat_x_swap = flat_x.permute(1, 2, 0) # shape = b x n x c
M_swap = torch.triangular_solve(flat_x_swap, flat_L, upper=False)[0].pow(2).sum(-2) # shape = b x c
M = M_swap.t() # shape = c x b
# Now we revert the above reshape and permute operators.
permuted_M = M.reshape(bx.shape[:-1]) # shape = (..., 1, j, i, 1)
permute_inv_dims = list(range(outer_batch_dims))
for i in range(bL_batch_dims):
permute_inv_dims += [outer_batch_dims + i, old_batch_dims + i]
reshaped_M = permuted_M.permute(permute_inv_dims) # shape = (..., 1, i, j, 1)
return reshaped_M.reshape(bx_batch_shape)
input_1 = torch.from_numpy(np.linalg.cholesky(np.diag(np.random.rand(2).astype(np.float32)))).cuda()
input_2 = torch.from_numpy(np.random.rand(13440, 2).astype(np.float32)).cuda()
runs = 1000
total_time_new, total_time_old = 0, 0
relative_error_cum = 0
for _ in range(runs):
torch.cuda.synchronize()
st = time.perf_counter()
m_new = _batch_mahalanobis(input_1, input_2)
torch.cuda.synchronize()
total_time_new += time.perf_counter() - st
st = time.perf_counter()
m_old = _batch_mahalanobis_old(input_1, input_2)
torch.cuda.synchronize()
total_time_old += time.perf_counter() - st
relative_error_cum = torch.norm(m_new-m_old) / torch.norm(m_new)
print(f'[New] Time Taken: {total_time_new:.6f}s')
print(f'[Old] Time Taken: {total_time_old:.6f}s')
print(f'Average Relative Error: {relative_error_cum/runs}')
I was wondering if anyone could provide any insight into why this might be the case?
The difference between the results of the two methods seems to be negligible (as seen in the relative error), so I can’t see any advantage of using the most recent version right now. |
st83433 | Thanks for raising this issue!
I’ll just forward some information:
@fehiepsi found the root cause of this issue and @vishwakftw will patch it soon. |
st83434 | Hello, I am trying to implement some kind of importance sampling from a pseudo in the following code,
inp = torch.rand([32, 20, 10, 16])
inp_norms = torch.norm(inp, dim=3, keepdim=True, p=2)
print(inp.size())
sums = torch.sum(inp_norms, dim=1, keepdim=True)
probs = inp_norms/sums
t = torch.split(probs, dim=2, split_size_or_sections=1)
sampls = torch.multinomial(inp_norms, num_samples=15, replacement=True)
samples = torch.stack([torch.multinomial(tt, num_samples=15, replacement=True) for tt in t], dim=2)
print(samples.size())
I would like to get 15 samples from dim=1 of inp for each element in dim=2. samples contain the indices of the samples but I can not get the samples from inp. (I do not want to use 2 for loops one for batch dimension and second for dim=2 of inp.) Is there any other way that I can follow, I would be glad if someone can at least give me keywords to search:D
index_select and gather did not do good to me or at least I could not understand them enough to convert to my problem.
Best and thanks in advance!
Barış |
st83435 | Hello everyone,
I would like to remove some images from my Image directory, and I was wonder if the PyTorch Dataloader is able to train on custom dataset by reading image path from csv then if the image is not found, juste pass to another image.
Is the PyTorch Dataloader can handle this or I have to remove these images from the csv file ?
Thank you all,
Medhy |
st83436 | If you are fine with some repeated images, you could check in the __getitem__ method of your Dataset, if the current file exists, and if not just sample a new random index.
This approach will repeat as many valid images as there are missing ones.
Would that work for you? |
st83437 | Thanks @ptrblck, interesting but unfortunately I prefer not to have repeated images. Is there a way to juste pass to the next iteration if the image is not present ? |
st83438 | Solved by pinocchio in post #2
check if it has flag requires_grad to true:
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.randn(3,4)
>>> x
tensor([[ 0.5905, 0.7622, 0.5487, 0.4738],
… |
st83439 | check if it has flag requires_grad to true:
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.randn(3,4)
>>> x
tensor([[ 0.5905, 0.7622, 0.5487, 0.4738],
[-0.2488, 0.1633, 2.3185, -0.6563],
[ 1.0734, -0.3375, -0.5587, 0.3377]])
>>> x.grad
>>> x.requires_grad
False
>>> x.requires_grad = True
>>> x
tensor([[ 0.5905, 0.7622, 0.5487, 0.4738],
[-0.2488, 0.1633, 2.3185, -0.6563],
[ 1.0734, -0.3375, -0.5587, 0.3377]], requires_grad=True)
>>>
source: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html 5 |
st83440 | Yes a Tensor will have it’s .grad field populated if requires_grad=True and is_leaf=True. |
st83441 | I am trying to train a 1-D ConvNet for time series classification as shown in this paper (refer to FCN om Fig. 1b) https://arxiv.org/pdf/1611.06455.pdf 6
The Keras implementation is giving me vastly superior performance. Could someone explain why is that the case?
The code for Pytorch is as follow:
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv1d(x_train.shape[1], 128, 8)
self.bnorm1 = nn.BatchNorm1d(128)
self.conv2 = nn.Conv1d(128, 256, 5)
self.bnorm2 = nn.BatchNorm1d(256)
self.conv3 = nn.Conv1d(256, 128, 3)
self.bnorm3 = nn.BatchNorm1d(128)
self.dense = nn.Linear(128, nb_classes)
def forward(self, x):
c1=F.relu(self.conv1(x))
b1 = F.relu(self.bnorm1(c1))
c2=F.relu(self.conv2(b1))
b2 = F.relu(self.bnorm2(c2))
c3=F.relu(self.conv3(b2))
b3 = F.relu(self.bnorm3(c3))
output = torch.mean(b3, 2)
dense1=self.dense(output)
return F.softmax(dense1)
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.99)
losses=[]
for t in range(1000):
y_pred_1= model(x_train.float())
loss_1 = criterion(y_pred_1, y_train.long())
print(t, loss_1.item())
optimizer.zero_grad()
loss_1.backward()
optimizer.step()
For comparison, I use the following code for Keras:
x = keras.layers.Input(x_train.shape[1:])
conv1 = keras.layers.Conv1D(128, 8, padding='valid')(x)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.Activation('relu')(conv1)
conv2 = keras.layers.Conv1D(256, 5, padding='valid')(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.Activation('relu')(conv2)
conv3 = keras.layers.Conv1D(128, 3, padding='valid')(conv2)
conv3 = keras.layers.BatchNormalization()(conv3)
conv3 = keras.layers.Activation('relu')(conv3)
full = keras.layers.GlobalAveragePooling1D()(conv3)
out = keras.layers.Dense(nb_classes, activation='softmax')(full)
model = keras.models.Model(inputs=x, outputs=out)
optimizer = keras.optimizers.SGD(lr=0.5, decay=0.0, momentum=0.99)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
hist = model.fit(x_train, Y_train, batch_size=x_train.shape[0], nb_epoch=2000)
The only difference I see between the two is the initialization but however, the results are just vastly different. For reference, I use the same preprocessing as follows for both the datasets, with a subtle difference in input shapes, for Pytorch (Batch_Size, Channels, Length) and for Keras: (Batch_Size, Length, Channels). |
st83442 | Here are a few differences skimming through the code:
Keras uses Conv1D-BN-ReLU, while your PyTorch model uses Conv1d-ReLU-BN1d-ReLU
I’m not sure what GlobalAveragePooling1D is exactly doing, but I assume you made sure torch.mean(..., 2) is doing the same
nn.CrossEntropyLoss expects raw logits, so you should remove the F.softmax call
Could you fix these issues and try it again? |
st83443 | ptrblck:
Conv1d-ReLU-BN1d-ReLU
It has no sense to use Conv1d-ReLU-BN1d-ReLU.
Have you meant to say: Conv1d-ReLU-BN1d. |
st83444 | No, I’m pointing the user to the wrong implementation in the PyTorch model.
c1=F.relu(self.conv1(x))
b1 = F.relu(self.bnorm1(c1)) |
st83445 | Aha, but there is no much need to use
b1 = F.relu(self.bnorm1(c1))
in my understanding. If the paper claims so, I would say this has no sense. |
st83446 | Maybe there is a misunderstanding, but I’m trying to say the same.
The PyTorch model should be fixed and adapted to the Keras one, which uses Conv-BN-ReLU. |
st83447 | No problemo, Keras model uses Conv-BN-ReLU
conv1 = keras.layers.Conv1D(128, 8, padding='valid')(x)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.Activation('relu')(conv1)
which is so-so, and I would use what you suggested Conv-ReLU-BN for all models. This is a good suggestion. |
st83448 | Hi guys! thank you for pointing out the mistakes, I corrected it, seems to actually learn something now, but I don’t get why is it that CrossEntropyLoss requires raw logits? Why should I not have a softmax there? And as a side note, when I changed it to “Log softmax” the model started performing much better. |
st83449 | If you investigate on CrossEntropyLoss actually you will find it assumes LogSoftmax and NLLLoss together. |
st83450 | But now do I give it Logsoftmax or just give it the last dense layer output without any activation? what would make sense for Pytorch’s CrossEntropyLoss? |
st83451 | Just give it the last dense layer output. This is the activation. It will predict
nb_classes. What is your nb_classes value.? Corresponds to what? You may need to use argmax to grab the class value. |
st83452 | i have written a torch.autograd.Function for quantization as:
class Quantize(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
return input.round()
@staticmethod
def backward(ctx, grad_output):
return grad_output #pass through
fake_uniform_quantize = Quantize.apply
how can i apply to a weight parameter? , E.g
self.kernel = nn.Parameter(torch.Tensor(out_channel, in_channel, kernel_size, kernel_size))
self.kernel = fake_uniform_quantize(self.kernel)
But, I end up with the error:
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'kernel' (torch.nn.Parameter or None expected) |
st83453 | i solved the problem by
self.kernel = nn.Parameter(torch.Tensor(out_channel, in_channel, kernel_size, kernel_size))
kernel = fake_uniform_quantize(self.kernel)
then ...
x = F.conv2d(x, kernel, None, stride=self.stride, padding=0, dilation=self.dilation, groups=self.group) |
st83454 | I am trying to use multiple GPUs for a deep neural network. When I restrict to a neural network with forward and backward functions, torch nn dataparallel works fine. However, I would like to include one more function in the neural network like this
class MyNN(nn.module):
def __init__(self):
super().__init__()
self.layer = nn.Linear(10, 20)
def forward(self, x):
x = self.block(x)
return x
def evaluate(self,x):
return some_scalar_value(x,self)
The function evaluate returns some scalar value. I am wondering how to average over the values obtained by multiple GPUs? If I had to simply do forward and backward pass, I could simply use
model = MyNN()
model = torch.nn.Dataparallel(model,device_ids) |
st83455 | eval_model = MyNN()
train_model = torch.nn.Dataparallel(eval_model,device_ids)
Use eval_model for evaluation and train_model for training. The parameters are shared. |
st83456 | Morning,
I think i am getting myself into a pickle. I have an input tensor [-1,1,x] which i then split into 2 tensors of size [-1,1,x/2]. I would like to then pass these 2 tensors over the same CNN and then concatanate back before going into the FC layers. I have used torch.split and this gives me tuple which is then allocated to 2 input tensors (data_a and data_b).
do need to pass these using model.encoder(data_a) and model.encoder(data_b)?
If i do need to do that then do i need to then copy the the statements in forward so that i end up with a list of entries for data_a and data_b? This strarts to get very messy very quickly, is there a neater way? For example is it possible to use use tuples all the way through the forward function???
def forward(self, data_a,data_b):
data_a=self.conv1(data_a)
data_a=self.bn1a(data_a)
data_a=self.DP1(data_a)
data_a=self.in1(data_a) #this is the inception modulue for strand data_a
#REPEAT ABOVE FOR DATA_B
data_b=self.conv1(data_b)
data_b=self.bn1a(data_b)
data_b=self.DP1(data_b)
data_b=self.in1(data_b) #this is the inception modulue for strand data_b
#CONCACTANTE DATA STREAM A & B
data= torch.cat((data_a,dat_b),1)
data=self.conv1e(data)
data=self.bn1e(data)
data=self.DP1e(data)
data=self.HT(self.fc1(data))
data=self.HT(self.fc1a(data))
data=self.HT(self.fc2(data))
data=self.fc3(data)
#THIS IS WHERE I GET COFUSED BECAUSE I NEED TO RETURN Z_LOC & Z_SCALE TO MODEL.ENCODER, MODEL.ENCODER CAN ONLY HAVE A SINGLE INPUT (DATA_A OR _DATA_B, AND HAVING 2 ENCODERS WOULD GIVE 2 Z_LOC & Z_SCALE, CAN I SUM THE 2 AND GET THE SAME ANSWER?
z_loc=self.fc31(data)
z_scale=self.fc32(data)
return z_loc, z_scale |
st83457 | I have tried using:
data_s=torch.split(data,512,dim=2)
for split in data_s:
data = data.cuda()
z_loc, z_scale = model.Encoder(data)
z = model.reparam(z_loc, z_scale)
out = model.Decoder(z)
loss = loss_fn(out, data, z_loc, z_scale)
optimizer.zero_grad()
loss.backward(retain_graph=True)
optimizer.step()
But i am not seeing anything which suggests that its passing each of the data blocks through as seperate fields, how can i check this? |
st83458 | It looks like you are not using split, but data as the input to your model.
The splitting and loop should generally work, if you pass split to your model and concatenate the output afterwards. |
st83459 | hi PtrBlck,
apologies for the delay thanks for spotting the deliberate mistake …
cheers,
chaslie |
st83460 | Hy guys, how can I evaluate my model by one only element?
Is it possible without data loader?
I would be interested in evaluating my element by element model.
Sorry for my bad English |
st83461 | If you would like to pass a single sample to your model, you could load/create the sample and add the batch dimension manually to it:
# Create model
model = models.resnet50()
# Load your sample here (or create it somehow)
x = torch.randn(3, 224, 224)
# Add batch dim
x = x.unsqueeze(0)
# Forward pass
output = model(x)
You could also call model.eval() and wrap the forward pass in a with torch.no_grad() block, if you just want to run the inference. |
st83462 | Hy man, there is a problem (maybe). The cnn doing me the same inference for every sample.
Here the procedure of training :
def train_regression(model, train_loader, test_loader,exp_name='train_regressor', lr=0.0001, epochs=1000, momentum=0.90, weight_decay = 0.001):
criterion = nn.MSELoss()
optimizer = SGD(model.parameters(), lr, momentum=momentum, weight_decay=weight_decay)
# meters
loss1_meter = AverageValueMeter()
loss2_meter = AverageValueMeter()
total_loss_meter = AverageValueMeter()
# plotters
loss1_logger = VisdomPlotLogger('line', env=exp_name, opts={'title': 'Loss1', 'legend':['train','test']})
loss2_logger = VisdomPlotLogger('line', env=exp_name, opts={'title': 'Loss2', 'legend':['train','test']})
total_loss_logger = VisdomPlotLogger('line', env=exp_name, opts={'title': 'Total Loss', 'legend': ['train', 'test']})
visdom_saver = VisdomSaver(envs=[exp_name])
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
loader = {'train': train_loader, 'test': test_loader}
for e in range(epochs):
for mode in ['train', 'test']:
loss1_meter.reset()
loss2_meter.reset()
total_loss_meter.reset()
model.train() if mode == 'train' else model.eval()
with torch.set_grad_enabled(mode == 'train'): # abilitiamo i gradienti solo in training
for i, batch in enumerate(loader[mode]):
x = batch['image'].to(device)
dxdz = batch['movement'][:, :2].float().to(device)
dudv = batch['movement'][:, 2:4].float().to(device)
output = model(x)
out1, out2 = output[:,:2], output[:,2:4]
l1 = criterion(out1, dxdz)
l2 = criterion(out2, dudv)
l=l1+l2
if mode == 'train':
l.backward()
optimizer.step()
optimizer.zero_grad()
n = x.shape[0]
loss1_meter.add(l1.item()*n,n)
loss2_meter.add(l2.item()*n,n)
total_loss_meter.add(l.item() * n, n)
if mode == 'train':
loss1_logger.log(e + (i + 1) / len(loader[mode]), loss1_meter.value()[0], name=mode)
loss2_logger.log(e + (i + 1) / len(loader[mode]), loss2_meter.value()[0], name=mode)
#loss3_logger.log(e + (i + 1) / len(loader[mode]), loss3_meter.value()[0], name=mode)
total_loss_logger.log(e + (i + 1) / len(loader[mode]), total_loss_meter.value()[0], name = mode)
loss1_logger.log(e + (i + 1) / len(loader[mode]), loss1_meter.value()[0], name=mode)
loss2_logger.log(e + (i + 1) / len(loader[mode]), loss2_meter.value()[0], name=mode)
total_loss_logger.log(e + (i + 1) / len(loader[mode]), total_loss_meter.value()[0], name=mode)
visdom_saver.save()
torch.save(model.state_dict(), '%s.pth' % exp_name)
return model
And here the model:
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.feature_extractor= nn.Sequential(
nn.Conv2d(9, 32, kernel_size=3), # input 9x224x224 output 32x222x222
nn.MaxPool2d(kernel_size=2), # input 32x222x222 output 32x111x111
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3), # input 32x111x111 output 32x109x109
nn.MaxPool2d(kernel_size=2), # input 32x109x109 output 32x54x54
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3), #input 32x54x54 output 64x52x52
nn.MaxPool2d(kernel_size=2), #input 64x52x52 output 64x26x26
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3), # input 64x26x26 output 64x24x24
nn.MaxPool2d(kernel_size=2), # input 64x24x24 output 64x12x12
nn.ReLU()
)
self.classifier = nn.Sequential(
nn.Linear(9216, 64), #input 64x12x12
nn.ReLU(),
nn.Linear(64, 32),
nn.ReLU(),
nn.Linear(32,4)
)
def forward(self, x):
x = self.feature_extractor(x)
x = self.classifier(x.view(x.shape[0], -1))
return x
I open the topic for inference of sample by sample, because I think the code for inference maybe was wrong. But now I don’t see why my model inference the same result for every sample. |
st83463 | I’d like to build a stateful LSTM but I receive the runtime error “Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.”
This is to do with not being able to perform backprop due to results being cleared to save memory. This topic is covered in these two threads and albanD describes what is happening clearly.
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time
I keep running into this error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Can someone explain what this means? Independent of the context of the problem, I don’t understand what the buffers are and what it means for them to be “freed”.
Context: In my case, it happens the second time I call loss.backward(), in a function: where the model’s function execute one step …
Training Stateful LSTM in Pytorch cause runtime error
It might interest you to know that I’ve been trying to do something similar myself: Confusion regarding PyTorch LSTMs compared to Keras stateful LSTM
Although I’m not sure if just wrapping the previous hidden data in a torch.Variable ensures that stateful training works
The two solutions are retaining the computational graph (which I don’t want to do) and detaching the hidden layer between batches.
class stateful_LSTM(nn.Module):
"""A Long Short Term Memory network
model"""
def __init__(self, num_features, hidden_dim, output_dim,
batch_size, series_length, device,
dropout=0.1, num_layers=2, debug=True):
super(stateful_LSTM, self).__init__()
# Number of features
self.num_features = num_features
# Hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.num_layers = num_layers
# The output dimensions
self.output_dim = output_dim
# Batch Size
self.batch_size = batch_size
# Length of sequence
self.series_length = series_length
# Dropout Probability
self.dropout = dropout
# CPU or GPU
self.device = device
# Define the LSTM layer
self.lstm = nn.LSTM(
input_size = self.num_features,
hidden_size =self.hidden_dim,
dropout = self.dropout ,
num_layers =self.num_layers)
# Fully Connected Layer
self.fc1 = nn.Linear(in_features=self.hidden_dim,
out_features=self.hidden_dim)
# Activation function
self.act = nn.ReLU()
# Output layer
self.out = nn.Linear(in_features=self.hidden_dim,
out_features=self.output_dim)
self.hidden = self.init_hidden()
def init_hidden(self):
"""Initialised the hidden state to be zeros"""
return (torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).to(self.device),
torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).to(self.device))
def forward(self, x):
"""Forward pass through the neural network"""
# Adjust to a variable batch size
batch_size = x.size()[0]
series_length = x.size()[1]
# Keeps the dimensions constant regardless of batchsize
x = x.contiguous().view(series_length, batch_size, -1)
# Pass through through lstm layer
x, self.hidden = self.lstm(x, self.hidden)
x = x[-1]
# Fully connected hidden layer
x = self.act(self.fc1(x))
return self.out(x)
I am slightly confused as to when this detach is meant to occur. I’ve tried detaching it straight after the self.LSTM layer but that doesn’t work. Could someone please explain when (and why then) you should detach the hidden layer?
Thanks! |
st83464 | Dropout and Batch-Norm already have this kind of behavior. In other words what a Dropout or Batch-Norm module outputs when it is in eval mode is different from when it is in train mode. How can I achieve something similar with my custom module? |
st83465 | Solved by ptrblck in post #2
You could use the internal self.training attribute.
Here is a dummy example:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.fc = nn.Linear(10, 10)
def forward(self, x):
x = self.fc(x)
if self.training:
… |
st83466 | You could use the internal self.training attribute.
Here is a dummy example:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.fc = nn.Linear(10, 10)
def forward(self, x):
x = self.fc(x)
if self.training:
x = x * 1000
return x
model = MyModel()
x = torch.randn(1, 10)
output = model(x)
print(output)
> tensor([[ -151.6117, 20.6451, -589.1161, -120.6478, 395.1652, -950.3046,
-1062.1073, 973.9295, 61.4954, -412.5521]],
grad_fn=<MulBackward0>)
model.eval()
output = model(x)
print(output)
> tensor([[-0.1516, 0.0206, -0.5891, -0.1206, 0.3952, -0.9503, -1.0621, 0.9739,
0.0615, -0.4126]], grad_fn=<AddmmBackward>) |
st83467 | I am faced with memory problem when I use pytorch to implement certain network. To locate the problem, I check the memory of a very simple single LSTM network and found the memory for each epoch would increase for early turns, while for the same LSTM network in TensorFlow, the memory keeps the same (and the similar size of the first epoch’s memory by LSTM).
Moreover, the problem would become more serious with the grow of hidden size.
Could someone help me find out what happened? Thanks a lot!
The assigned pictures shows the detail.
By the way, no worry for the difference in loss, cuz I didn’t average for all batches in the TensorFlow implementation.
Thanks for your attention again!
Code for training step:
for epoch in range(num_epochs):
start = time.time()
i = 0
loss_sum = 0
total_round = 0
while (i < self.train_size):
self.rnn.train()
self.optim.zero_grad()
batch_end = i + batch_size
if (batch_end >= self.train_size):
batch_end = self.train_size
var_x = self.to_variable(x_train[i: batch_end])
var_y = self.to_variable(y_train[i: batch_end])
var_y_seq = self.to_variable(y_seq_train[i: batch_end])
if var_x.dim() == 2:
var_x = var_x.unsqueeze(2)
y_res = self.rnn(var_x)
var_y = var_y.view(-1,1)
loss = self.loss_func(y_res, var_y)
loss.backward()
self.optim.step()
loss_sum += loss.detach().numpy()
i = batch_end
total_round += 1
end = time.time()
print('epoch [{}] finished, the average train loss is {}, with time: {}'.format(epoch, loss_sum/total_round, end-start))
memory_usage()
train_loss_list.append(loss_sum/total_round)
start = time.time()
self.rnn.eval()
test_y_res = self.rnn(var_x_test)
test_loss = self.loss_func(test_y_res, var_y_test)
test_loss_list.append(test_loss.detach().numpy())
end = time.time()
print('the average test loss is {}, with time: {}'.format(float(test_loss), end-start))
Code for my rnn model:
class my_rnn(nn.Module):
def __init__(self,input_size,hidden_size,time_step):
super(my_rnn,self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.T = time_step
self.lstm = nn.LSTM(input_size=input_size,hidden_size=hidden_size,num_layers=1,batch_first = True)
self.linear = nn.Linear(hidden_size,1)
def forward(self,driving_x):
lstm_out, hidden = self.lstm(driving_x)
output = self.linear(lstm_out[:,-1,:])
return output
IMG_6284.JPG1000×2282 855 KB |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.