st102200 | I wonder that, what’s the difference between them? According to 1, I also can get the correct result. |
st102201 | linyu:
I wonder that, what’s the difference between them? According to 1, I also can get the correct result. |
st102202 | The main problem is that all CUDA operations are asynchronous.
This means that if you don’t explicitly synchronize, the time that you measure could actually correspond to the time it took to perform other operations that were asked for before you started your timing.
What 2 does that 1 does not is that it makes sure that nothing is still running on the GPU before you start running your own code. And then it makes sure that all the operations you asked for are finished before you end your timer. |
st102203 | Thank you for your reply.
Actually, when I use method 1, it takes 5 ms, while when I use method 2, it takes 300 ms. I wonder what the extra 295 ms does? I feel that within 5 ms the operation should have finished. |
st102204 | No, a VGG takes on the order of hundreds of ms to run a forward pass.
5 ms is how long it takes to run the Python code + queue up all the CUDA operations + start the execution. |
st102205 | However, if I do not use torch.cuda.synchronize(), after 5 ms my code ends and returns the result; this should prove that I have got what I want.
Here is my code:
import time
import torch
def do_it(X, kh, kw):
    if not X.is_cuda:
        raise Exception("X is not on the GPU!")
    if not len(X.shape) == 3:
        raise Exception("The shape of X is wrong!")
    C, H, W = X.shape
    X_unfold = X.unfold(1, kh, 1).unfold(2, kw, 1)
    X_unfold = X_unfold.permute(1, 2, 0, 3, 4)
    shape_ = X_unfold.shape
    sample_num = shape_[0] * shape_[1]
    sample_dim = shape_[2] * shape_[3] * shape_[4]
    Y = X_unfold.reshape(sample_num, sample_dim)
    K = torch.mm(Y, Y.t())
    return Y, K

if __name__ == '__main__':
    X = torch.rand(512, 60, 60).cuda()
    Y, K = do_it(X, 15, 15)
    torch.cuda.synchronize()
    start = time.time()
    for i in range(10):
        _, K = do_it(X, 15, 15)
    torch.cuda.synchronize()
    end = time.time()
    print(end - start) |
st102206 | It returns, but these tensors are not readily available. This is all done transparently by PyTorch, but if, for example, you try to print one of the values of the output, it will take the 300 ms, because to get the value you need to wait for it to be computed.
What PyTorch does is pass around tensors without knowing their content, and you only wait for the content when you don’t have any other choice, which is mostly when the value is required on the CPU side. |
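For illustration, a minimal sketch of this effect (the tensor names and sizes here are made up, not from the thread):

import time
import torch

x = torch.rand(512, 60, 60, device='cuda')
y = x.view(512, -1)

torch.cuda.synchronize()
start = time.time()
k = torch.mm(y, y.t())                      # queued asynchronously, returns right away
print('after queuing: %.4f s' % (time.time() - start))
v = k[0, 0].item()                          # .item() needs the value on the CPU, so it blocks
print('after reading one value: %.4f s' % (time.time() - start))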
st102207 | How can I find class weights for a pixel-wise loss for image segmentation? I am working with the CamVid dataset with 12 classes. |
st102208 | You could calculate your mask and multiply it with the unreduced loss.
I’ve created a small gist a while ago, which might be a good starter.
Let me know, if this works for you! |
st102209 | That code is giving errors when I tried to run it. Can you explain the approach a bit, so that I can implement it? |
st102210 | I’ve checked the code again and it’s running on my machine.
Could you post the error message, so that I can have a look?
I think I’ve misunderstood your question. I thought you would like to weight pixel-wise, not class-wise.
For a class weighting you could use the weight argument in nn.NLLLoss or nn.CrossEntropyLoss.
In my example I create a weight mask to weight the edges of the targets more, but that’s apparently not your use case.
Let me know, if the class weight works for you! |
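For what it's worth, a minimal sketch of class (not pixel) weighting via inverse frequency; the pixel counts below are placeholders you would compute from your own targets:

import torch
import torch.nn as nn

num_classes = 12
# pixel_counts: pixels per class over the training set (placeholder numbers here)
pixel_counts = torch.tensor([1e6, 5e5, 2e5, 1e5, 5e4, 4e4, 3e4, 2e4, 1e4, 8e3, 5e3, 2e3])
weight = 1.0 / pixel_counts
weight = weight / weight.sum() * num_classes   # optional: keep the mean weight around 1
criterion = nn.CrossEntropyLoss(weight=weight)

output = torch.randn(2, num_classes, 8, 8)          # [batch, classes, H, W]
target = torch.randint(0, num_classes, (2, 8, 8))   # [batch, H, W]
loss = criterion(output, target)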
st102211 | The errors are:
‘module’ object has no attribute ‘long’
‘module’ object has no attribute ‘where’
These errors are mostly because of the PyTorch version.
Yes, I tried to find class weights for each class by their frequency, but without much improvement.
So what’s a good approach to find class weights? |
st102212 | How does your confusion matrix look?
Is your model overfitting on the majority class? |
st102213 | I think it’s over-fitting for some classes: the overall IoU is good, but for some classes which make up less than 1-2% of the dataset (column-pole, sign-symbol, etc.) the results are not good. I used IoU as the metric. |
st102214 | Did the weighting change your stats?
If a certain class is underperforming you could weight it more and trade the other accuracies for this specific class accuracy.
I’m not sure there are other approaches besides manually balancing your class accuracies. |
st102215 | Using grid_sample in ‘border’ padding_mode shows unexpected behaviour when sampling far off the regular grid [-1,1].
For example,
import torch
import torch.nn.functional as F

img = torch.FloatTensor([0.3304, 0.9959, 0.9013, 0.4473, 0.6256, 0.4969, 0.6143, 0.4897, 0.6461, 0.6996, 0.8513, 0.8399, 0.3916, 0.6773, 0.6800, 0.9299]).view(1,1,4,4)
grid_x = torch.FloatTensor([-9.3808e+07, 9.3455e+07, -3.2075e+07, -9.7417e+07, 9.3614e+07, 3.0895e+06, 3.5309e+07, -4.3387e+07, -4.5572e+07, 1.1426e+07, 1.0985e+07, -3.3500e+07, 6.3854e+06, 5.8119e+07, 6.1684e+07, 5.5792e+07]).view(1,4,4,1)
grid_y = torch.FloatTensor([ -9.1669e+07, 3.0808e+07, -3.5392e+07, 9.4505e+07, 3.8825e+07, -2.4498e+07, -3.2516e+07, 8.8106e+07, 2.6877e+07, 9.6970e+06, -5.1589e+06, -2.8973e+07, 1.6749e+07, -2.4837e+07,-5.2990e+07, -7.7938e+07]).view(1,4,4,1)
grid = torch.cat((grid_x,grid_y),dim=3)
p_img = F.grid_sample(img, grid, padding_mode='border')
Following is a sampled image:
Variable containing:
(0 ,0 ,.,.) =
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 1.8598 0.4473 0.0000
1.8598 0.0000 0.0000 0.0000
[torch.FloatTensor of size 1x1x4x4]
One can see that among all the non-zero entries none should be higher than 1.0, as these values are generated by interpolation from ‘border’-padded pixels.
In my project I occasionally end up getting arbitrarily high grid co-ordinates, and I assumed that grid_sample would be able to handle them gracefully.
However, it appears that on some rare occasions grid_sample misbehaves.
Can someone suggest the reason behind this behaviour? |
st102216 | Hi all,
We have a question:
In order to train the model, we use two datasets, dataset_a and dataset_b.
We feed the model with only one batch of data at a time, either from dataset_a or from dataset_b.
How can we realize this using a DataLoader in PyTorch?
The code is shown as follows:
for step in range(10):
    if (step + 1) % 2 == 0:
        batch_data = dataset_a[step]
    else:
        batch_data = dataset_b[step]
    output = model(batch_data)
Thank you very much. |
st102217 | Solved by albanD in post #6
I guess a simple solution would be to sample one batch from each dataset using zip at each step and then randomly discard 2 of them.
Otherwise, you have a for loop that counts indices, and manually manage the iterators for each of the 4 datasets, calling next() only on the two that you selected. But… |
st102218 | Hi,
Thanks.
Our model needs two batches of data in one forward and backward step:
one batch from dataset_a and one batch from dataset_b.
How can we get two batches when using ConcatDataset? |
st102219 | Sorry, in your original question you said that you need a batch from either dataset, which you can do with ConcatDataset.
If you need a batch from each dataset, then you can just do that in your for loop: for sample_a, sample_b in zip(loader_a, loader_b):. |
st102220 | I am very sorry I did not express my question clearly.
In fact, we have four datasets. In a forward and backward step, the model needs two batches, which come from two of the four datasets. Moreover, the model goes through all four datasets in an epoch. Batches are not repeated within an epoch.
The pseudo code is as follows:
For step in range(100):
    Randomly sample two datasets (like dset A and dset B);
    Load two batches (like batch a and batch b) from these two datasets;  # In the next round, if we sample dset A and dset B again, batch a’ and batch b’ are not the same as batch a and batch b. Finally, we need to go through all samples in the four datasets.
    Feed them into the model;
End |
st102221 | I guess a simple solution would be to sample one batch from each dataset using zip at each step and then randomly discard 2 of them, as in the sketch below.
Otherwise, you have a for loop that counts indices, and manually manage the iterators for each of the 4 datasets, calling next() only on the two that you selected. But I am not sure how you can handle the fact that you are going to “finish” some datasets before others. |
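For illustration, a minimal sketch of the zip-based variant (the loader and model names are assumptions; note that zip stops at the shortest loader):

import random

loaders = [loader_a, loader_b, loader_c, loader_d]  # your four DataLoaders
for batches in zip(*loaders):
    batch_x, batch_y = random.sample(batches, 2)    # keep two batches, discard the other two
    output = model(batch_x, batch_y)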
st102222 | How about providing a default when calling next(), then using it to check for the “finish”?
https://docs.python.org/3.6/library/functions.html#next |
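For example, a small sketch of that pattern (loader_a is an assumption):

it_a = iter(loader_a)
batch = next(it_a, None)   # returns None instead of raising StopIteration
if batch is None:
    # dataset_a is exhausted for this epoch; restart or skip it
    it_a = iter(loader_a)
    batch = next(it_a)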
st102223 | Sorry if this sounds ignorant, but I am really new to PyTorch. I am trying to use DistributedDataParallel but this error appears: ValueError: Invalid backend, only NCCL and GLOO backends are supported by DistributedDataParallel
I have already installed NCCL but the error persists. Thanks a lot for your help in advance.
autoencoder = nn.parallel.DistributedDataParallel(autoencoder.cuda(), device_ids=[0, 1, 2, 3]) |
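(An assumption worth checking, since the rest of the setup isn't shown: the process group has to be initialized with one of the two supported backends before wrapping the model, e.g.:)

import torch.distributed as dist
dist.init_process_group(backend='nccl', init_method='env://', world_size=1, rank=0)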
st102224 | Hi, I am fairly new to using multiple GPUs, so this might be a stupid question, but I couldn’t find an answer suitable for my particular situation, given that the two GPUs are exactly the same yet have drastically different memory uptake throughout the whole training. As you can see, GPU 0 has more than twice the amount of memory in use compared to GPU 1. Observing the training now, the difference is even larger, with the first GPU taking up about 12 GB. The volatile GPU utilization is also almost always significantly higher for GPU 0.
I hope someone can help me out here in understanding this situation, and please let me know if you need more information to answer my question. Thanks! |
st102225 | github.com/pytorch/pytorch
Issue: GPU usage extremely in-balance for segmentation task 2
opened by zhanghang1989
on 2017-06-23
closed by zhanghang1989
on 2017-06-25
The usage of the first GPU is much larger than the others. I think it is because by default, the DataParallel...
I found this link, but is there a better solution that is part of the stable pytorch release now that more than a year has passed? |
st102226 | Hi,
What this post says is that to have even usage, you should put everything in the DataParallel block.
In your case, if you have some part of your net that is only on one GPU (which is gpu0 by default), then it is expected that gpu0 is used more, no? |
st102227 | Importing caffe first:
>>> import caffe
>>> import torch
File "<stdin>", line 1, in <module>
File "/home/name/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: dlopen: cannot load any more object with static TLS
Importing torch first:
>>> import torch
>>> import caffe
Segmentation fault (core dumped)
My pytorch version is 0.2.0, which is old but I must use this version because I want to convert a pytorch model to caffemodel. Anyone knows how to solve this problem? |
st102228 | The first error just means that both libraries are really big and together exhaust the static TLS slots available for loading; you can look up the error message on Google. That also happens quite often with MATLAB, for example, when you try to load big libraries.
The second one is a problem with caffe, I guess, or with PyTorch loading some library that caffe does not expect, which makes it crash. |
st102229 | Thank you. I have solved the second problem simply by switching python from 3.6 to 2.7. |
st102230 | I just finished a linear algebra/differential equations class, so I am familiar with how machine learning works under the hood. I need some help with the terminology and how the numbers are dealt with. If I draw comparisons to what I learned in calculus and LA/Diff Eq that are wrong or bad, let me know. I am trying to learn by association, as it’s the most effective way for me.
If I have, lets say a furnace, and it has a pressure sensor, a temperature sensor and an outside skin sensor(easiest for me to picture as all the variables are directly related.) this would produce a vector of 3 variables, let’s say [100, 200, 150].
If that furnace had 3 zones of sensors, it would produce a matrix [100,200,150; 120, 220, 160; 110,240,220] (using semicolon between rows because I took a Matlab class last semester).
How would I enter this in a machine learning model? Would the matrix be considered 1 batch? Are the measurements (matrix elements) what are called features on this forum? If I lost the temp sensor, that would become my y variable and the other 2 would be x_1 and x_2, so how do I solve for a single y given 2 x values as input?
I saw where the input dimensions are batch, sequence length and feature. Is the batch the whole matrix at time t, or is it a group of matrices, say from t-5 to t, i.e. a 3D matrix with the third dimension as time? Is the sequence length the maximum length of any one row of the matrix, the whole matrix, or the second dimension (the number of columns)? From what I read, the third input is the number of features; I believe this is the number of elements in the matrix, is that right?
Any help would be very appreciated. Also, I have used some example code and tried to plug multiple inputs (x’s) into the model and get one y out, and it complains about the input and output being different sizes. I have seen some examples of multi-input to single-output, but I don’t know what’s going on and am trying to understand it.
Thank you |
st102231 | I found this Keras explanation. Is it the same for PyTorch or different?
https://machinelearningmastery.com/reshape-input-data-long-short-term-memory-networks-keras/ |
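For reference, PyTorch’s nn.LSTM works with the same three quantities; a minimal sketch with made-up sizes:

import torch
import torch.nn as nn

batch, seq_len, features = 8, 5, 3   # 8 windows of 5 time steps, 3 sensors per step
lstm = nn.LSTM(input_size=features, hidden_size=16, batch_first=True)
x = torch.randn(batch, seq_len, features)   # one row per time step, one column per sensor
out, (h_n, c_n) = lstm(x)                   # out: [batch, seq_len, hidden_size]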
st102232 | Hi, I have a custom loss function, and I wish to update the weights of the centers.
The other problem is that when I write loss, centers = center_loss() I get an error that the loss function is not iterable.
See the following code:
class center_loss(torch.nn.Module):
    def __init__(self, num_class=10, num_dim=2, num_batch=100):
        super(center_loss, self).__init__()
        self.num_class = num_class
        self.num_dim = num_dim
        self.num_batch = num_batch
        self.centers = torch.nn.Parameter(torch.randn(num_class, num_dim).cuda())

    def forward(self, features, labels):
        index = labels.type(torch.cuda.LongTensor)  # casting the labels to integer indices
        features = features.type(torch.cuda.FloatTensor)
        # calculating new centers
        var4 = self.centers.unsqueeze(1)
        var5 = var4.transpose(0, 1)
        var6 = var5.repeat(self.num_batch, 1, 1)
        labels = one_hot_embedding(index, 10)  # one_hot_embedding is defined elsewhere
        embed2 = labels.unsqueeze(1).view(self.num_batch, 10, 1)
        embed3 = embed2.repeat(1, 1, 2)
        embed3 = embed3.type(torch.FloatTensor)
        var6 = var6.type(torch.FloatTensor)
        a = torch.mm(var6[:, :, 0], embed3[:, :, 0].t())
        b = torch.mm(var6[:, :, 1], embed3[:, :, 1].t())
        aa = a.t()
        bb = b.t()
        c = torch.zeros(self.num_batch, 2)
        c[:, 0] = aa[:, 1]
        c[:, 1] = bb[:, 1]
        diff = c - features.type(torch.FloatTensor)
        imm = torch.zeros(self.num_class, 2)  # one row per class
        imm.index_add_(0, index.cpu(), diff)
        unique, counts = np.unique(index.cpu().numpy(), return_counts=True)
        s = torch.Tensor(counts).view(self.num_class, 1)  # samples per class
        centers_update = self.centers - (imm / s) * 0.5
        # calculating the loss
        x_2 = torch.pow(features, 2)
        c_2 = torch.pow(self.centers, 2)
        x_2_s = torch.sum(x_2, 1)
        x_2_s = x_2_s.view(self.num_batch, 1)
        c_2_s = torch.sum(c_2, 1)
        c_2_s = c_2_s.view(self.num_class, 1)
        x_2_s_e = x_2_s.repeat(1, self.num_class)
        c_2_s_e = c_2_s.t().repeat(self.num_batch, 1)
        xc = 2 * torch.mm(features, self.centers.t())
        # we want only positive values
        dis = x_2_s_e + c_2_s_e - xc
        di = dis.type(torch.FloatTensor)
        di = torch.sqrt(torch.clamp(di, min=0))
        # since center loss focuses on intra-class distances, we are not concerned about the
        # distances to the other centers; the other centers can be used to increase the inter-class loss
        bl = labels.type(torch.ByteTensor)
        dii = torch.masked_select(di, bl)
        center_loss = dii / self.num_batch
        return center_loss, self.centers |
I want the self.centers weights to be updated with the centers_update value. |
st102233 | Hi,
To use the Module, you need to create one instance and then use it.
For the parameters in a Module to be learnt, you just need to pass them to your optimizer:
your_model = Model()
your_loss = center_loss()
your_optimizer = optim.SGD(list(your_model.parameters()) + list(your_loss.parameters()), ...)
# In your training:
features = your_model(inputs)
loss = your_loss(features, labels)
your_optimizer.zero_grad()
loss.backward()
your_optimizer.step() |
st102234 | Nilesh_Pandey:
centers_update = self.centers - (imm/s)*0.5
This is how I am calculating the weight of the center:
centers_update = self.centers - (imm/s)*0.5
I need to update self.centers by the values of centers_update |
st102235 | How should your centers be updated? Is this centers_update a surrogate for the gradient? Or is it the new value they should take? |
st102236 | centers_update is a new value that I wish to assign to the variable self.centers; no, it is not a gradient. |
st102237 | Then self.centers does not need to be a nn.Parameter() and can be defined as self.centers = torch.randn(num_class,num_dim).cuda().
You can then update it by just doing one of the two below
# works only if self.centers.requires_grad == False
self.centers.copy_(centers_update.detach())
# Will work in all cases
with torch.no_grad():
    self.centers.copy_(centers_update) |
st102238 | I tried, but it throws an error:
# criterion_xent = nn.CrossEntropyLoss()
criterion_cent = center_loss()
# optimizer_model = torch.optim.Adam(cnn.parameters(), lr=0.001)
optimizer_centloss = torch.optim.Adam(criterion_cent, lr=0.01)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      2 criterion_cent = center_loss()
      3 # optimizer_model = torch.optim.Adam(cnn.parameters(), lr=0.001)
----> 4 optimizer_centloss = torch.optim.Adam(criterion_cent, lr=0.01)

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/optim/adam.py in __init__(self, params, lr, betas, eps, weight_decay, amsgrad)
     39         defaults = dict(lr=lr, betas=betas, eps=eps,
     40                         weight_decay=weight_decay, amsgrad=amsgrad)
---> 41         super(Adam, self).__init__(params, defaults)
     42
     43     def __setstate__(self, state):

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/optim/optimizer.py in __init__(self, params, defaults)
     34         self.param_groups = []
     35
---> 36         param_groups = list(params)
     37         if len(param_groups) == 0:
     38             raise ValueError("optimizer got an empty parameter list")

TypeError: 'center_loss' object is not iterable |
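(For what it's worth, the traceback shows the module itself was passed to Adam; an optimizer expects an iterable of parameters, i.e.:)

optimizer_centloss = torch.optim.Adam(criterion_cent.parameters(), lr=0.01)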
st102239 | Hi,
I created a loss function, which is the weighted sum of two losses:
Loss = a * loss1 + b * loss2
in which loss1 is a CTC loss, loss2 is a KL-divergence loss, and a, b are adjustable values. To verify the correctness of the loss, I first removed loss2, so in this case Loss = loss1, and trained my network. After that, I set a = 1 and b = 0, so Loss = 1 * loss1 + 0 * loss2, and I expected the same result as in the previous case. However, I got a very different result. Does anyone have suggestions as to where the difference comes from?
Thank you so much for your help.
Mike |
st102240 | Can you post some sample code?
When you remove loss2, loss = a * loss1; what’s the value of ‘a’ there? |
st102241 | I have two models, an SI (speaker independent) model, which is already trained, and an SD (speaker dependent) model (to be learned). At the beginning, they are the same. I want to adapt SI to a new speaker by minimizing CTC loss on SD data, starting from SI model. But since I do not want to overfit, I add a weighted KLD loss to the CTC, to prevent the adapted model to go too far from the SI model. The weighting factor is self.mu in the loss. The loss code looks like this:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.rnn as rnn_utils
from warp_ctc_pytorch import CTCLoss
from torch.nn import KLDivLoss as KLD

class CTC_KLD(nn.Module):
    def __init__(self, mu):
        super(CTC_KLD, self).__init__()
        self.mu = mu
        self.ctc_loss = CTCLoss(length_average=True)
        self.KLD = KLD(size_average=False)

    def forward(self, SI_logits, SD_logits, SD_targets, SD_target_sizes, input_sizes, input_sizes_list):
        SD_logits_ctc = torch.transpose(SD_logits, 0, 1).contiguous()  # SD_logits_ctc size: T, N, D
        CTC_loss = self.ctc_loss(SD_logits_ctc, SD_targets, input_sizes, SD_target_sizes).type(torch.cuda.FloatTensor)  # ctc loss
        SI_logits = rnn_utils.pack_padded_sequence(SI_logits, input_sizes_list, batch_first=True).data
        SD_logits_KL = rnn_utils.pack_padded_sequence(SD_logits, input_sizes_list, batch_first=True).data
        batch_size = SI_logits.size(0)
        log_probs_SD = F.log_softmax(SD_logits_KL, dim=1)
        probs_SI = F.softmax(SI_logits, dim=1)
        KLD_loss = self.KLD(log_probs_SD, probs_SI) / batch_size
        loss = (1.0 - self.mu) * CTC_loss + self.mu * KLD_loss
        return loss
In the main script, for an input variable x of size N, T, D containing N sequences from the new speaker, x first goes through both the SI and SD models to obtain SD_logits and SI_logits; then I detach SI_logits from the graph using SI_logits = SI_logits.detach(), since the SI model should not be updated. It only provides the targets for the KLD loss. Then I pass SI_logits and SD_logits through the loss function.
In the above loss code, if I write loss = CTC_loss, the training works fine. But when I write loss = 1.0 * CTC_loss + 0.0 * KLD_loss (in which self.mu = 0), the result (measured by word error rate in speech recognition) becomes very different from simply writing loss = CTC_loss, even though they should be the same loss function (with only CTC_loss). Does anyone have any ideas why they differ so much? |
st102242 | My code is as follows. When alpha is set to 0 in the first function and I train the network, I expect to get behavior similar to using the second function for training. But I get totally different results! Setting alpha to 0 leads to wrong results. This bothers me a lot.
def loss_fn_kd(outputs, labels, teacher_outputs, alpha, T):
    """
    Compute the knowledge-distillation (KD) loss given outputs, labels.
    "Hyperparameters": temperature and alpha
    """
    loss1 = nn.KLDivLoss(size_average=False)(F.log_softmax(outputs / T, dim=1),
                                             F.softmax(teacher_outputs / T, dim=1)) * (alpha * T * T)
    loss2 = F.cross_entropy(outputs, labels, size_average=False) * (1. - alpha)
    KD_loss = loss1 + loss2
    return KD_loss / outputs.size(0)

def loss_fn_kd(outputs, labels, teacher_outputs, alpha, T):
    """
    Compute the knowledge-distillation (KD) loss given outputs, labels.
    "Hyperparameters": temperature and alpha
    """
    KD_loss = F.cross_entropy(outputs, labels, size_average=False) * (1. - alpha)
    return KD_loss / outputs.size(0) |
st102243 | Continuing the discussion from Weighted sum of two losses can not reduce to the loss of one component:
Hi,
I threw away the built-in KLDivLoss and wrote my own KLD instead. I think in knowledge distillation you can use the part of the KLD loss which depends on your student model only, and throw away the other part.
The original KLD loss is:
P_teacher * log(P_teacher / P_student)
I use:
P_teacher * log(1 / P_student)
since this is the term related to your student model. In fact, this modified KLD is what most papers use.
You can implement this loss in a module, just like how you define a network. I did not see your problem with the modified KLD. |
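A minimal sketch of that modified term, assuming raw logits from both models:

import torch.nn.functional as F

def modified_kld(student_logits, teacher_logits, T=1.0):
    # P_teacher * log(1 / P_student): cross-entropy against the soft teacher targets;
    # the teacher-entropy term P_teacher * log(P_teacher) is a constant and is dropped
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return -(p_teacher * log_p_student).sum(dim=1).mean()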
st102244 | Hi guys!
I’ve tried to make a 2-layer NN learn a simple linear interpolation of a discrete function. I’ve tried lots of different learning rates as well as different activation functions, and it seems like nothing is being learned!
I’ve literally spent the last 6 hours trying to debug the following code, but it seems like there’s no bug! So I’m wondering, is there an explanation for this?
from torch.utils.data import Dataset
import os
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import random

LOW_X = 255
MID_X = 40000
HIGH_X = 200000
LOW_Y = torch.Tensor([0, 0, 1])
MID_Y = torch.Tensor([0.2, 0.5, 0.3])
HIGH_Y = torch.Tensor([1, 0, 0])
BATCH_SIZE = 4

def x_to_tensor(x):
    if x <= MID_X:
        return LOW_Y + (x - LOW_X) * (MID_Y - LOW_Y) / (MID_X - LOW_X)
    if x <= HIGH_X:
        return MID_Y + (x - MID_X) * (HIGH_Y - MID_Y) / (HIGH_X - MID_X)
    return HIGH_Y

class XYDataset(Dataset):
    LENGTH = 10000

    def __len__(self):
        return self.LENGTH

    def __getitem__(self, idx):
        x = random.randint(LOW_X, HIGH_X)
        y = x_to_tensor(x)
        return x, y

class Interpolate(nn.Module):
    def __init__(self, planes, hidden_size=10):
        super(Interpolate, self).__init__()
        self.hidden_size = hidden_size
        self.x_to_hidden = nn.Linear(1, hidden_size)
        self.hidden_to_out = nn.Linear(hidden_size, planes)
        self.activation = nn.Tanh()  # I have tried Sigmoid and ReLU activations as well
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        out = self.x_to_hidden(x)
        out = self.activation(out)
        out = self.hidden_to_out(out)
        out = self.softmax(out)
        return out

dataset = XYDataset()
trainloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE,
                                          shuffle=True, num_workers=4)
criterion = nn.MSELoss()

def train_net(net, epochs=10, lr=5.137871216190041e-05, l2_regularization=2.181622809797563e-12):
    optimizer = optim.Adam(net.parameters(), lr=lr, weight_decay=l2_regularization)
    net.train(True)
    running_loss = 0.0
    for epoch in range(epochs):
        for i, data in enumerate(trainloader):
            inputs, targets = data
            inputs, targets = torch.FloatTensor(inputs.float()).view(-1, 1), torch.FloatTensor(targets.float())
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if (len(trainloader) * epoch + i) % 200 == 199:
                running_loss = running_loss / (200 * BATCH_SIZE)
                print('[%d,%5d] loss: %.6f ' % (epoch + 1, i + 1, running_loss))
                running_loss = 0.0

for i in range(-11, 3):
    net = Interpolate(3)
    train_net(net, lr=10**i, epochs=1)
    print('for learning rate {} net output on low x is {}'.format(i, net(torch.Tensor([255]).view(-1, 1)))) |
st102245 | Huh, just normalizing the inputs is the right answer!
A moderator may feel free to delete this post |
st102246 | The inputs were integer numbers from 255 to 200000. I’ve normalized the input to be in the range [0,1] |
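(For completeness, that normalization is a one-liner with the constants from the code above:)

x = (x - LOW_X) / (HIGH_X - LOW_X)   # maps [255, 200000] onto [0, 1]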
st102247 | When I read the docs about loading data using a callable here, storage and loc mysteriously appear in the code examples (pasted below).
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
Since I can execute the code snippets without problems, I am wondering if storage or loc are something pre-defined that I happened to miss.
Could someone help me? Thank you in advance. |
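(They are nothing pre-defined; they are just the names of the lambda's two parameters. torch.load calls the map_location callable once per serialized storage, passing the deserialized storage and a string tag for the device it was saved on, e.g. 'cuda:0'. Returning the storage unchanged keeps it on the CPU. A spelled-out equivalent:)

def my_map_location(storage, loc):
    print(loc)        # e.g. 'cuda:0' - the device the tensor was saved from
    return storage    # keep the data where it is (CPU)

torch.load('tensors.pt', map_location=my_map_location)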
st102248 | It’s my first time training on the ImageNet dataset with VGG16.
As far as I know, the training dataset that I can download from image-net.org is for both classification and localization.
What I want to do is train VGG16 only for classification, but I don’t think this training dataset is appropriate for classification training.
However, I saw several papers in which researchers used the ILSVRC2012 dataset for classification training.
Also, trying to train using ILSVRC2012 fails (low accuracy) with the official PyTorch example code from GitHub:
GitHub: pytorch/examples (a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.)
What is the problem?
[screenshot: image.png, 1124×364] |
st102249 | You’ll want something like this:
data = Variable(data).cuda()
label = Variable(label).cuda()
out = net(data)
out will also be a Variable object on the GPU in this case. |
st102250 | Hello, I only have a model file. How can I load and inspect it in code? Could you give complete code? Sorry, I have just started learning. |
st102251 | If you have a saved .pth file you’ll first want to torch.load(...some file path...).
Though, many models that are distributed (incl. the torchvision ones, I believe) use a state_dict, so you may need to instantiate an instance of the model class and then use model.load_state_dict(torch.load(...some file path...)).
If you upload your code, it will be easier to help. |
st102252 | Is the output of one layer in PyTorch automatically the input of the next layer? Can the outputs of the first two layers be the input of the next layer, etc.? How do we connect layer to layer, and how can we get the relations between them? Something similar to the top and bottom of Caffe. |
st102253 | model = nn.Sequential(
nn.Linear(100, 50),
nn.Linear(50, 20),
nn.Linear(20, 10)
)
This is the easiest way. The nn.Sequential object will collect all the modules that you pass to it, and when you call its .forward() method it will feed each layer’s output into the next. If you need to define a more complicated forward pass you can override the forward of an inherited nn.Module:
class CoolModel(nn.Module):
    def __init__(self, *args, **kwargs):
        super(CoolModel, self).__init__()
        self.layer1 = nn.Linear(100, 50)
        self.layer2 = nn.Linear(50, 20)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        return out
Hope that helps! |
st102254 | Hi,
Is it possible to get topological relationships between layers in PyTorch?
I understand that we can do forward and backprop by calling forward(). What if I really need the model relationships? For example, I want to do network pruning: when I delete one conv layer, I need to update all the following layers connected to it accordingly. |
st102255 | I am trying out a couple of variants of networks. Depending on the variant, I want to use certain specific layers and losses - different variants make use of different set of layers and losses.
Would it be safe to use ‘if…else’ conditional statements to select layers and losses depending on the variant I am trying out, so that I don’t have to create separate files/projects for different variants? (The network structure remains constant during the experimentation with a variant.) |
st102256 | Solved by SimonW in post #2
Yes, it would work, thanks to the beauty of the dynamic computation graph. |
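For illustration, a minimal sketch of such a variant switch (the variant flag and the layer choices are made up):

import torch.nn as nn

class VariantNet(nn.Module):
    def __init__(self, variant='a'):
        super(VariantNet, self).__init__()
        self.variant = variant
        self.head_a = nn.Linear(10, 2)
        self.head_b = nn.Linear(10, 5)

    def forward(self, x):
        # the graph is built on the fly, so branching per variant is fine
        if self.variant == 'a':
            return self.head_a(x)
        return self.head_b(x)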
st102257 | I am using only a part of a pretrained net. If I keep the unused parameters, then I am getting the following results:
Output of method 1:
Epoch: 0 Iteration: 0 Loss: tensor(18.7229, device='cuda:0')
However, if I get rid of the unused parameters, then I am getting the following results:
Output of method 2:
Epoch: 0 Iteration: 0 Loss: tensor(30.2404, device='cuda:0')
Why would the unused parameters affect the outcome of the forward pass? |
st102258 | Solved by Paritosh in post #4
I tried with a minimal example, and I don’t see unused parameters affecting the forward pass. But I am still facing the problem in my main code, so there should be some mistake that I am not able to identify. |
st102259 | They shouldn’t. Are you sure you don’t use them? Are you fine-tuning the network / continue training ? |
st102260 | @justusschock I tried with both eval() and train() modes. I am getting different results in both cases. For more clarification I have attached my code below (first the pretrained net and then my added layers):
The output of the pretrained net goes into my added layers.
*************************** Pretrained net part (keeping the unused layers):
class C3D(nn.Module):
    def __init__(self):
        super(C3D, self).__init__()
        self.conv1 = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))
        self.conv2 = nn.Conv3d(64, 128, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool2 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv3a = nn.Conv3d(128, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv3b = nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool3 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv4a = nn.Conv3d(256, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv4b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool4 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv5a = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv5b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool5 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=(0, 1, 1))
        self.fc6 = nn.Linear(8192, 4096)
        self.fc7 = nn.Linear(4096, 4096)
        self.fc8 = nn.Linear(4096, 487)
        self.dropout = nn.Dropout(p=0.5)
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        h = self.relu(self.conv1(x))
        h = self.pool1(h)
        h = self.relu(self.conv2(h))
        h = self.pool2(h)
        h = self.relu(self.conv3a(h))
        h = self.relu(self.conv3b(h))
        h = self.pool3(h)
        h = self.relu(self.conv4a(h))
        h = self.relu(self.conv4b(h))
        h = self.pool4(h)
        h = self.relu(self.conv5a(h))
        h = self.relu(self.conv5b(h))
        h = self.pool5(h)
        h = h.view(-1, 8192)
        h = self.relu(self.fc6(h))
        # h = self.dropout(h)
        # h = self.relu(self.fc7(h))
        # h = self.dropout(h)
        # logits = self.fc8(h)
        # probs = self.softmax(logits)
        return h |
********************* Pretrained net part (not loading the unused parameters):
class C3D_altered(nn.Module):
    def __init__(self):
        super(C3D_altered, self).__init__()
        self.conv1 = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))
        self.conv2 = nn.Conv3d(64, 128, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool2 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv3a = nn.Conv3d(128, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv3b = nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool3 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv4a = nn.Conv3d(256, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv4b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool4 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.conv5a = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.conv5b = nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool5 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2), padding=(0, 1, 1))
        self.fc6 = nn.Linear(8192, 4096)
        # self.fc7 = nn.Linear(4096, 4096)
        # self.fc8 = nn.Linear(4096, 487)
        self.dropout = nn.Dropout(p=0.5)
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        h = self.relu(self.conv1(x))
        h = self.pool1(h)
        h = self.relu(self.conv2(h))
        h = self.pool2(h)
        h = self.relu(self.conv3a(h))
        h = self.relu(self.conv3b(h))
        h = self.pool3(h)
        h = self.relu(self.conv4a(h))
        h = self.relu(self.conv4b(h))
        h = self.pool4(h)
        h = self.relu(self.conv5a(h))
        h = self.relu(self.conv5b(h))
        h = self.pool5(h)
        h = h.view(-1, 8192)
        h = self.relu(self.fc6(h))
        # h = self.dropout(h)
        # h = self.relu(self.fc7(h))
        # h = self.dropout(h)
        # logits = self.fc8(h)
        # probs = self.softmax(logits)
        return h |
My newly added layers on top of the pretrained net part:
class my_fc(nn.Module):
    def __init__(self):
        super(my_fc, self).__init__()
        self.fc_1 = nn.Linear(4096, 1)
        self.fc_2 = nn.Linear(4096, 3)
        self.fc_3 = nn.Linear(4096, 2)
        self.fc_4 = nn.Linear(4096, 4)
        self.fc_5 = nn.Linear(4096, 10)
        self.fc_6 = nn.Linear(4096, 8)

    def forward(self, x):
        op1 = self.fc_1(x)
        op2 = self.fc_2(x)
        op3 = self.fc_3(x)
        op4 = self.fc_4(x)
        op5 = self.fc_5(x)
        op6 = self.fc_6(x)
        return op1, op2, op3, op4, op5, op6 |
st102261 | I tried with a minimal example, and I don’t see unused parameters affecting the forward pass. But I am still facing the problem in my main code, so there should be some mistake that I am not able to identify. |
st102262 | I am working on an LSTM model and trying to use a DataLoader to provide the data. I am using stock price data and my dataset consists of:
Date (string)
Closing Price (float)
Price Change (float)
Right now I am just looking for a good example of LSTM using similar data so I can configure my DataSet and DataLoader correctly.
To test my DataLoader I have the following code:
for i, d in enumerate(dataloader):
print(i, d)
Using the following definition of dataloader the test works.
dataloader = DataLoader(pricedata,
batch_size=30,
shuffle=False,
num_workers=4)
This gives me x batches of size 30, which makes sense.
However, I need to use a sliding window of size n, so, assuming there are k instances in the dataset, I would like k-n batches with n instances in each batch.
So I redefined dataloader as:
dataloader = DataLoader(pricedata,
batch_sampler=torch.utils.data.sampler.SequentialSampler(pricedata),
shuffle=False,
num_workers=4)
With this change I get the following error message:
TypeError: 'int' object is not iterable
when the code hits the test:
for i, d in enumerate(dataloader):
Based on this I have two questions:
1. Is torch.utils.data.sampler.SequentialSampler the appropriate sampler to use for a sliding window?
2. Can anyone point me to a good example of configuring an LSTM using a DataLoader to load numeric data? All the examples I have seen are for NLP.
Thanks |
st102263 | Solved by ptrblck in post #2
The SequantialSampler samples your data sequentially in the same order.
To use a sliding window, I would create an own Dataset and use the __getitem__ method to get the sliding window.
Here is a small example (untested):
class MyDataset(Dataset):
def __init__(self, data, window):
self… |
st102264 | The SequentialSampler samples your data sequentially, in the same order.
To use a sliding window, I would create an own Dataset and use the __getitem__ method to get the sliding window.
Here is a small example (untested):
class MyDataset(Dataset):
    def __init__(self, data, window):
        self.data = data
        self.window = window

    def __getitem__(self, index):
        x = self.data[index:index+self.window]
        return x

    def __len__(self):
        return len(self.data) - self.window |
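Usage would then look something like this (a sketch; pricedata and the window size are assumptions):

dataset = MyDataset(pricedata, window=30)   # yields k - n windows for k instances
dataloader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=4)
for i, window in enumerate(dataloader):
    print(i, window)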
st102265 | Hi everyone,
So I need an Anaconda environment with Python 2.7. When I install PyTorch as the official site says, Anaconda keeps upgrading my Python version, which completely confuses me.
At first, I thought it was because I was using Anaconda on Windows. But after I changed to Linux, the same thing happened.
Does anyone know how to install PyTorch using Anaconda without upgrading the Python version?
Any suggestion would help a lot. |
st102266 | Solved by viraat in post #2
PyTorch does not support python 2.7 on Windows
[image]
I am on MacOS and was able to install PyTorch using Conda for Python 2.7 without any issues. What are the steps you are taking? Could you provide some example output?
The steps I followed are below. I created a new conda environment name py2… |
st102267 | PyTorch does not support python 2.7 on Windows
I am on MacOS and was able to install PyTorch using Conda for Python 2.7 without any issues. What are the steps you are taking? Could you provide some example output?
The steps I followed are below. I created a new conda environment name py2.7 and installed PyTorch.
$ conda create -n py2.7 python=2.7
$ source activate py2.7
$ conda install pytorch torchvision -c pytorch
$ python
Python 2.7.15 |Anaconda, Inc.| (default, May 1 2018, 18:37:05)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'0.4.1' |
st102268 | Hi @viraat,
I am using Anaconda2 and created the environment without ‘python=2.7’. The Python version in the created environment is 2.7, but it just does not work, whereas creating the environment with ‘python=2.7’ makes everything work fine. I cannot figure out why.
Anyway, problem solved. Thank you so much! |
st102269 | I am implementing a sparse random projection, using a random sparse matrix (Li, Ping, Trevor J. Hastie, and Kenneth W. Church. “Very sparse random projections.”).
This operation should in principle be much faster than a dense random projection, but experimentally I observe the converse. In my example, I am multiplying a 5k vector by a 10k×5k matrix. With a dense matrix, torch.mm(dense_mat, vec) executes in around 500 microseconds, while with a sparse matrix, torch.mm(sparse_mat, vec) takes 40 ms.
If I further increase the size, the dense multiplication is still much faster than the sparse one.
Does somebody have some insight into why this happens? |
st102270 | image.png1674×684 60.1 KB
xType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [97,0,0], thread: [61,0,0] Assertion srcIndex < srcSelectDimSize failed.
/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorIndex.cu:321: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [97,0,0], thread: [62,0,0] Assertion srcIndex < srcSelectDimSize failed.
/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorIndex.cu:321: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [97,0,0], thread: [63,0,0] Assertion srcIndex < srcSelectDimSize failed.
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu line=226 error=59 : device-side assert triggered
Traceback (most recent call last):
File "dialog_train.py", line 537, in <module>
main()
File "dialog_train.py", line 533, in main
trainModel(model, trainData, validData, testData, dataset, optim, criterion)
File "dialog_train.py", line 376, in trainModel
train_loss, train_acc, train_loss_ppl = trainEpoch(epoch)
File "dialog_train.py", line 333, in trainEpoch
outputs, topic_dist = model(batch[0], targets)
File "/home/zeng/envs/pytorch_0.1.10_py27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/home/zeng/parlAI/conversation/OpenNMT-Dialog/Models.py", line 243, in forward
out, dec_hidden, _attn = self.decoder(target_embedding, hidden_n, context, init_output)
File "/home/zeng/envs/pytorch_0.1.10_py27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/home/zeng/parlAI/conversation/OpenNMT-Dialog/Models.py", line 189, in forward
emb_t = torch.cat([emb_t, output], 2)
File "/home/zeng/envs/pytorch_0.1.10_py27/lib/python2.7/site-packages/torch/autograd/variable.py", line 840, in cat
return Concat(dim)(*iterable)
File "/home/zeng/envs/pytorch_0.1.10_py27/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 305, in forward
return torch.cat(inputs, self.dim)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:226 |
st102271 | this means that you are indexing out-of-bounds indices in your code. For example x[10] when x size is only 4 |
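A quick way to locate the bad index, as a sketch (device-side asserts are reported asynchronously on the GPU, so the Python line in the trace can be misleading; the tensor names here are assumptions):

# rerun the failing batch on the CPU to get an exact stack trace
out = model.cpu()(batch.cpu())

# or check every index tensor against the table it indexes, e.g. for an embedding:
assert indices.max() < embedding.num_embeddings
assert indices.min() >= 0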
st102272 | The weird thing is that I output all the intermediate results, especially each input of the embedding layer, and did not find any index bigger than the tensor size. |
st102273 | github.com/pytorch/pytorch
GPU torch.multinomial produces an out-of-bounds index 266
opened
Feb 27, 2017
closed
Mar 1, 2017
ankitkv
torch.multinomial on the GPU can produce indices that are out of bounds.
Consider the following code:
from __future__ import print_function
import torch
i = 0
while...
high priority
Do i have the error like this? |
st102274 | I’m trying to get predictions from my model using multiple GPUs, for which I have:
model = retinanet
model = torch.nn.DataParallel(model, device_ids=[0, 2])
But when I run this I get:
AssertionError: Invalid device id
Both those device ids are valid; I’m not sure what’s going on here. Any suggestions would be helpful. Thanks in advance. |
st102275 | You can try the suggestion of setting CUDA_VISIBLE_DEVICES as given in this post. |
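In short, you hide all but the wanted physical GPUs and then address them by their remapped ids; a sketch:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,2'   # must be set before CUDA is initialized
# inside the process, the two visible GPUs are now cuda:0 and cuda:1
model = torch.nn.DataParallel(model, device_ids=[0, 1])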
st102276 | Hi, I have a problem while implementing distributed training.
The dataset is an M×N matrix and the input is a vector.
The dataset is loaded as:
class ReadDataset(data.Dataset):
    def __init__(self, filename):
        self._filename = filename
        self._total_data = 0
        with open(filename, 'r') as f:
            self._total_data = len(f.readlines()) - 1

    def __getitem__(self, idx):
        line = linecache.getline(self._filename, idx + 1)
        return line

    def __len__(self):
        return self._total_data
Then, read it
dataset = ReadDataset(training_filename)
dataLoader = data.DataLoader(dataset)
the model is:
model = nn.Sequential( nn.Linear(2000, 150), nn.Linear(150, 2000) )
In distributed training:
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
dataLoader = data.DataLoader(dataset, num_workers=args['world-size'], sampler=train_sampler)
for epoch in range(10):
    train_sampler.set_epoch(epoch)
    for target in dataLoader:
        in_val = ...  # a vector whose size equals len(target)
        out = model(in_val)
The error when training with 2 GPUs is,
RuntimeError: size mismatch, m1: [1 x 1000], m2: [2000 x 150] at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCTensorMathBlas.cu:249.
Something is wrong, but I do not know where to start. Can someone give some suggestions?
Thanks |
st102277 | It looks like your input matrix is sized (1, 1000) and your first matrix is sized (2000, 150). |
st102278 | Thanks for looking into this issue!
The in_val is a vector (of size 2000), according to the profiling result. |
st102279 | You mention this happens when training with two gpus. Does this error not occur when only training with one GPU? |
st102280 | Just to clarify, if you run
import torch
import torch.nn as nn
model = nn.Sequential(nn.Linear(2000, 150), nn.Linear(150, 2000))
in_val = torch.randn(1, 2000)
out = model(in_val)
do you get the same error? |
st102281 | It runs fine without error.
I also observed that if distributed training is not used (i.e., two training processes are started with MPI, but not in PyTorch’s distributed mode), it passes. |
st102282 | Could you post the full code that produces the error then? Or at least a minimal, runnable example that produces the error? |
st102283 | Hi @aplassard, thanks for help! Following is the cleaned code to repro the error.
from __future__ import print_function
import argparse
from collections import OrderedDict
import linecache
import os
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.utils.data.distributed

# Load matrix from file
class LazyDataset(data.Dataset):
    def __init__(self, filename):
        self._filename = filename
        self._total_data = 0
        with open(filename, 'r') as f:
            self._total_data = len(f.readlines()) - 1

    def __getitem__(self, idx):
        line = linecache.getline(self._filename, idx + 1)
        return idx, line

    def __len__(self):
        return self._total_data
if __name__ == "__main__":
# Input args processing
parser = argparse.ArgumentParser()
parser.add_argument('-datadir', '--datadir', help='Data directory where the training dataset is located', required=False, default=None)
args = vars(parser.parse_args())
# Training dataset maybe splitted into multiple files
training_filename = args['datadir'] + '/matrixTest'
# Load the dataset
dataset = LazyDataset(training_filename)
dataLoader = data.DataLoader(dataset)
# Initialize the model
model = nn.Sequential(
nn.Linear(20, 10),
nn.Linear(10, 20)
)
# Initialize the distributed one
torch.distributed.init_process_group(world_size=2, \
init_method='file:///' + os.path.join(os.environ['HOME'], 'distributedFile'), \
backend='gloo')
model.cuda()
model = nn.parallel.DistributedDataParallel(model)
# load dataset
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
dataLoader = data.DataLoader(dataset, num_workers=2, sampler=train_sampler)
# Put the model to GPU if used
model.cuda()
for epoch in range(5):
total_loss = 0
train_sampler.set_epoch(epoch)
for idx, _ in dataLoader:
in_val = torch.zeros(20)
in_val[idx] = 1.0
output = model(in_val)
The matrixTest file can be as follows, but it is actually not used.
1 1 2 1 1 1 1 1 1 1 1 1 1 2 3 4 5 6 8 9
2 3 2 2 2 1 2 2 1 3 4 5 1 2 3 4 1 1 1 1
The error from one process (two processes have the same error):
Traceback (most recent call last):
File "test.py", line 67, in <module>
output = model(in_val)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 216, in forward
outputs = self.parallel_apply(self._module_copies[:len(inputs)], inputs, kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 223, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
raise output
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker
output = module(*input, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 994, in linear
output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [1 x 10], m2: [20 x 10] at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCTensorMathBlas.cu:249
terminate called after throwing an instance of 'gloo::EnforceNotMet'
what(): [enforce fail at /opt/conda/conda-bld/pytorch_1524586445097/work/third_party/gloo/gloo/cuda.cu:249] error == cudaSuccess. 29 vs 0. Error at: /opt/conda/conda-bld/pytorch_1524586445097/work/third_party/gloo/gloo/cuda.cu:249: driver shutting down |
st102284 | I am still confused about which part of this code could be wrong.
Both the input and the model size are as expected:
output = model(in_val)
The length of in_val is 20, while the model is
DistributedDataParallel(
(module): Sequential(
(0): Linear(in_features=20, out_features=10, bias=True)
(1): Linear(in_features=10, out_features=20, bias=True)
)
)
Any suggestion or comment would be appreciated. |
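(One plausible cause, judging from the m1: [1 x 10] shape when two GPUs are used: in_val is a bare 1-D tensor of size 20, and the parallel wrapper scatters inputs along dimension 0, so each replica receives a 10-element chunk instead of the full feature vector. Giving the input an explicit batch dimension keeps the feature dimension intact:)

in_val = torch.zeros(1, 20).cuda()   # [batch, features] rather than a bare 20-vector
in_val[0, idx] = 1.0
output = model(in_val)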
st102285 | I’m training a deep learning model that needs more than 8 GB of GPU RAM. I wonder: can that trained model be used for prediction on a GPU with 2 GB of RAM, like a GTX 1030, so I can run tests while training? |
st102286 | You could try to perform the inference on your second GPU and see if it has enough memory:
device = 'cuda:1'  # assuming your GTX1030 is cuda:1
model = model.to(device)
with torch.no_grad():
    for data, target in val_loader:
        data = data.to(device)
        target = target.to(device)
        output = model(data)
        ...
If you want to run the prediction simultaneously while training, you could save the current model checkpoints, load them in the prediction script and run the inference part. |
st102287 | I have added an LSTM layer after a convolution in the VGG-16 model. Over time, the model learns just fine. However, after adding just one LSTM layer, which consists of 32 LSTM cells, the process of training and evaluating takes about 10x longer.
I added the LSTM layer to a VGG framework as follows
def make_layers(cfg, batch_norm=False):
    # print("Making layers!")
    layers = []
    in_channels = 3
    count = 0
    for v in cfg:
        count += 1
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
            if count == 5:
                rlstm = RLSTM(v)
                rlstm = rlstm.cuda()
                layers += [rlstm]
Is this a common issue? The LSTM layer I added is very similar to RowLSTM, from Google’s Pixel RNN paper. Do LSTM layers just take long to train in general? |
st102288 | Hello, everyone!
I need to apply a user-defined function to all rows of a CUDA tensor. Because that function takes very few GPU resources but significant time, I want to speed up the calculation by applying the function to several rows simultaneously.
But when I try to execute this code:
ctx = torch.multiprocessing.get_context('spawn')
for ind in range(0, self.pop_size, th_cnt):
    processes = []
    for proc_n in range(th_cnt):
        p = ctx.Process(target=pw.forw, args=(ind + proc_n, wgts[ind + proc_n, :, :], errs))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
I’m getting an error:
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 108, in reduce_storage
metadata = storage._share_cuda_()
RuntimeError: cuda runtime error (71) : operation not supported at c:\users\administrator\downloads\new-builder\win-wheel\pytorch\torch\csrc\generic\StorageSharing.cpp:253
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Could you explain to me what I should fix in my code, or maybe you can advise a better way to achieve my goal? |
st102289 | I also met the same problem. My code works on the small MNIST, but not on my own dataset. I don’t know what the problem is… |
st102290 | Hi :),
I’m using AlexNet to recognize the Euler angles of persons in front of a camera. This is my model:
class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        x = self.classifier(x)
        # print(self.classifier.parameters())
        return x

def alexnet(pretrained=True, **kwargs):
    model = AlexNet(**kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['alexnet']))
    model.classifier._modules['6'] = nn.Linear(4096, 3)
    return model
I modified the last layer to only 3 outputs (the angles). My goal now is to replace the 3 Linear layers (the 2 in the classifier and my changed one) with RNN layers, preferably still pretrained (of course only the input-hidden weights; the hidden-hidden weights need to be trained, and of course only for the first 2 layers; my changed 3rd layer needs to be trained completely). To test this I tried to change my modified Linear layer (4096, 3) to an RNN layer (4096, 3), just like this:
model.classifier._modules['6'] = nn.RNN(4096, 3)
But when I try to do so, I get this error:
…
File "/home/jan/anaconda3/envs/TensorboardX/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 178, in forward
self.check_forward_args(input, hx, batch_sizes)
File "/home/jan/anaconda3/envs/TensorboardX/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 126, in check_forward_args
expected_input_dim, input.dim()))
RuntimeError: input must have 3 dimensions, got 2
My input is a [32, 3, 244, 244] tensor.
What am I doing wrong? |
st102291 | Although I don’t know what you’re trying to do, nn.RNN needs its inputs to be 3-dimensional.
https://pytorch.org/docs/stable/nn.html#torch.nn.RNN 19
Inputs: input, h_0
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details. |
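Concretely, a minimal sketch of the shape it wants (the numbers are made up; with batch_first=True the layout is [batch, seq_len, features]):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4096, hidden_size=3, batch_first=True)
features = torch.randn(32, 4096)   # [batch, features] coming out of the classifier
x = features.unsqueeze(1)          # [batch, seq_len=1, features]: one time step per frame
out, h_n = rnn(x)                  # out: [32, 1, 3]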
st102292 | I use AlexNet to estimate the angle of a person’s head in 3D in real time (e.g., a person in front of a camera). I want to improve this estimation by using an RNN. This should improve the estimation, because the frames are sequential. (I hope this is the right word; my English is not as good, as you probably noticed.)
First I implemented AlexNet normally. But now I’m trying to replace all 3 Linear layers in “classifier” (see above) with RNN layers.
Because I’m using a pretrained AlexNet, I need to change the layers after loading the model. I already did this to simplify the output to only the 3 angles. (model.classifier._modules['6'] = nn.Linear(4096, 3))
I now changed all 3 layers to RNNs:
model.classifier._modules['1'] = nn.RNN(256 * 6 * 6, 4096, batch_first=True)
model.classifier._modules['4'] = nn.RNN(4096, 4096, batch_first=True)
model.classifier._modules['6'] = nn.RNN(4096, 3, batch_first=True)
(I know these layers are now not pretrained any more. I still have to figure out how to copy the pretrained weights to the right spots.)
If I understand it right, I need batch_first because of my [32, 3, 244, 244] input tensor. 32 is the batch size, and 3, 244, 244 is the frame.
I also modified out = model(input) to out, hidden = model(input, hidden). I set hidden = None for the first batch.
And now I get this error:
Traceback (most recent call last):
File "/home/jan/anaconda3/envs/TensorboardX/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/jan/anaconda3/envs/TensorboardX/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/jan/PycharmProjects/Bachelor-Kopfposenschaetzung/AlexRNN.py", line 221, in train
out, hidden = model(input, hidden)
File "/home/jan/anaconda3/envs/TensorboardX/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given
Is my input the wrong shape? @klory wrote that I need an input of shape (seq_len, batch, input_size), but I don’t actually know what this means. Is this the input in the line out, hidden = model(input, hidden), or is a different input meant?
I hope it is understandable what my problem is. Thanks |
st102293 | Ah, I found one major problem: the classifier of the AlexNet is an nn.Sequential model. If I understand it right, that means it does not support RNNs directly. |
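More precisely, nn.Sequential passes a single tensor from module to module, while nn.RNN expects (input, h_0) and returns an (output, h_n) tuple, so the next layer receives a tuple and fails. A small wrapper that keeps only the output tensor works around this; a sketch:

import torch.nn as nn

class RNNWrapper(nn.Module):
    """Make an RNN usable inside nn.Sequential by dropping the hidden state."""
    def __init__(self, *args, **kwargs):
        super(RNNWrapper, self).__init__()
        self.rnn = nn.RNN(*args, **kwargs)

    def forward(self, x):
        out, _ = self.rnn(x)
        return out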
st102294 | Do we really need both glibc2.14 and glibc2.17 to import torch in python.
If I cannot upgrade glibc2.17 is there some other alternate?
Config: anaconda2, cuda 8.0, pytorch .4
Thanks |
st102295 | PyTorch is built on CentOS-7, which has glibc: 2.17. Hence, you need a maching that has glibc >= 2.17 to work.
Here’s a list of Linux distros and their glibc versions: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa45b1 135 |
st102296 | Hello,
If I trained a model and I want to save it, how can I do this? And how can I retrieve it again?
Thank you. |
st102297 | You can save and load the state_dict of the model and, if you need it, of the optimizer.
Have a look at the Serialization semantics docs for more information. |
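The short version looks like this:

# save
torch.save(model.state_dict(), 'model.pth')

# load: instantiate the same architecture first, then restore the weights
model = MyModel()                    # MyModel is a placeholder for your model class
model.load_state_dict(torch.load('model.pth'))
model.eval()                         # for inference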
st102298 | Hi!
I can monitor my GPU and I see no activity on it, but I get an out of memory error.
I even set the batch size to 1. I’m feeding two images of sizes (224,224) and (448,448).
Running on a Quadro K1200 GPU.
RuntimeError: cuda runtime error (2) : out of memory at c:\users\administrator\downloads\new-builder\win-wheel\pytorch\aten\src\thc\generic/THCStorage.cu:58
Is that normal behavior?
Thank you |
st102299 | Solved by ptrblck in post #5
You could use nn.DataParallel to split your batch onto your GPUs.
Here is a good tutorial to get you started. |
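The core of it is a one-liner; a minimal sketch:

import torch.nn as nn

model = nn.DataParallel(model)   # splits each input batch across all visible GPUs
model.cuda()
output = model(data)             # outputs are gathered back on the default device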