st103200
Hi Thomas, Thank you for the response. Oh yes, Bessel's correction uses (n - 1) instead of n when calculating the std. This would cause a 'divide by 0' case resulting in NaN. I actually got caught up in searching for the source of this function. Thanks again. Best regards, Animesh
st103201
I'm trying out different hyper-parameters for my model, and for that I need to cross-validate over them. The issue is that after each variation of hyper-parameters, a reset of the model is required; otherwise the "learning" will continue where the last run stopped (I need to re-initialize the weights after each iteration). I'm having trouble doing that; help would be much appreciated.
st103202
If you are already using a weight init function, you could just call it again to re-initialize the parameters. Alternatively, you could create a completely new instance of your model; in that case you would have to re-create the optimizer as well. Another way would be to iterate over your layers holding parameters and call .reset_parameters() on them, as sketched below.
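A minimal sketch of that last option (assuming model is your existing nn.Module instance, and that each parameterized layer implements reset_parameters, which the built-in layers do):

for module in model.modules():
    if hasattr(module, 'reset_parameters'):
        module.reset_parameters()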
st103203
Thanks for the quick reply. I'm kind of new to the PyTorch framework and still trying to get a grasp on it. That's the model class (screenshot omitted). How could I implement the first option you offered? I think it would be the better approach of the three.
st103204
You can define a function to initialize the weights based on the layer and just apply it to your model:

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        torch.nn.init.xavier_uniform(m.weight.data, nn.init.calculate_gain('relu'))
        m.bias.data.zero_()
    elif isinstance(m, nn.Linear):
        ...

model.apply(weights_init)

You can find the different init functions here.
st103205
Unfortunately I'm still having problems implementing that (I'm not sure which initialization I need to use or how to use the init class). Is it possible somehow to "save" the weights before the cross-validation and set the "saved" weights after each iteration?
st103206
That's also a good idea. You can save the state_dict and reload it after your experiment; have a look at the Serialization Semantics. You use the weights_init function just by calling model.apply(weights_init), which will call the function on every submodule. A sketch of the save-and-restore approach follows.
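A sketch of the save-and-restore idea; note the deepcopy, since state_dict() returns references to the live tensors. The names hyperparameter_grid and train are hypothetical placeholders for your own search loop:

import copy

initial_state = copy.deepcopy(model.state_dict())

for setting in hyperparameter_grid:       # hypothetical iterable of settings
    model.load_state_dict(initial_state)  # restore the saved weights
    optimizer = optim.SGD(model.parameters(), lr=setting['lr'])
    train(model, optimizer)               # hypothetical training routine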
st103207
# data collection that converts into Social Transmedia
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim

x = Variable(torch.Tensor([[1], [2], [3], [4]]))
y = Variable(torch.Tensor([[2], [4], [6], [8]]))

class SocialTransmedia(nn.Module):
    def __init__(self, input_size, output_size):
        super(SocialTransmedia, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        y_predict = self.linear(x)
        return y_predict

model = SocialTransmedia(1, 1)
criteria = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), 0.01)

for epoch in range(500):
    y_predict = model(x)
    loss = criteria(y_predict, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(epoch, float(loss.data[0]))

test = Variable(torch.Tensor([[20]]))
z = model.forward(test)
print(float(z.data[0]))  # 39.88399887084961

# Logistic Regression
import torch.nn.functional as f

x = Variable(torch.Tensor([[25], [35], [45], [15]]))
y = Variable(torch.Tensor([[0], [1], [1], [0]]))

class SocialTransmedia(nn.Module):
    def __init__(self, input_size, output_size):
        super(SocialTransmedia, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        y_predict = f.sigmoid(self.linear(x))
        return y_predict

model = SocialTransmedia(1, 1)
criteria = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), 0.01)

for epoch in range(500):
    y_predict = model(x)
    loss = criteria(y_predict, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(epoch, float(loss.data[0]))

test = Variable(torch.Tensor([[20]]))
z = model.forward(test)
print(float(z.data[0]))
st103208
Let's say I have a list consisting of K tensors:

L = [torch.rand(B, C, D, D) for _ in range(K)]

For simplicity, let's say B = 1. I want to find the max value and the corresponding index of the max value for each element. How can I do that in an efficient way? Example: let's say I have:

B = 1
C = 1
D = 3
K = 2
L = [torch.rand(B, C, D, D) for _ in range(K)]
print(L)
[tensor([[[[ 0.9226,  0.3428,  0.5824],
           [ 0.4465,  0.5420,  0.3884],
           [ 0.2781,  0.2483,  0.6952]]]]),
 tensor([[[[ 0.9592,  0.1106,  0.9677],
           [ 0.3140,  0.6602,  0.7806],
           [ 0.8007,  0.5433,  0.7550]]]])]

Then, if I want the max and the index of the max value at C = 0, I can do this:

A1 = torch.unsqueeze(L[0][0, 0, :, :], 0)
A2 = torch.unsqueeze(L[1][0, 0, :, :], 0)
A = torch.cat((A1, A2), 0)
values, index = torch.max(A, 0)
print(values)
print(index)
tensor([[ 0.9592,  0.3428,  0.9677],
        [ 0.4465,  0.6602,  0.7806],
        [ 0.8007,  0.5433,  0.7550]])
tensor([[ 1,  0,  1],
        [ 0,  1,  1],
        [ 1,  1,  1]])

But this is not efficient for cases when I have big B and C. Any suggestions?
st103209
Solved by ptrblck in post #2.
st103210
You could use torch.stack instead of torch.cat:

L = [torch.rand(B, C, D, D) for _ in range(K)]
print(L)
L = torch.stack(L)
L.max(0)
st103211
For example, I have a tensor with random values. For each element, set it to 1 if its value is larger than 0.5, else set it to 0.
st103212
Solved by justusschock in post #2.
st103213
X > 0.5 creates a binary mask if X is your tensor. You may need to cast it to an appropriate type.
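For example (a sketch; the comparison returns an integer/byte mask, so the cast yields 0./1. float values):

import torch

x = torch.rand(3, 3)
mask = (x > 0.5).float()   # 1.0 where x > 0.5, else 0.0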
st103214
Is there any difference between v.detach() and Variable(v.data)? edit: Maybe detach() preserves some flags and attributes.
st103215
Solved by SimonW in post #2.
st103216
detach is safer, as it uses the same version counter as the original tensor. The version counter is a mechanism that helps track whether some tensor needed in backward has been modified in place.
st103217
Dear all, I am trying to train a CNN (encoder) with an LSTM (decoder) for video sequences. For every 16 video frames, it estimates one value for regression. I use VGG16 to extract a feature for each video frame; the feature is extracted from the conv5_3 layer, so the shape of the tensor is (64, 512, 14, 14) (batch_size, depth, height, width). I reshape the tensor to (64, 512, 196) and sum over the last dimension, giving (64, 512); this tensor is then used to train the LSTM. However, the shape of the LSTM input is (batch_size, seq_len, input_size), and my sequence length setting is 16 (many to one). So, how do I set up the shape of the input tensor for the LSTM? Do I need tensor.view(64, 16, -1) for the LSTM?
st103218
If I understood the problem correctly, you first convert every 16 frames from a video to a 512-D vector, which means if your video has, say, n frames you'll get n/16 512-D vectors. Your sequence length is n/16, but you said it's 16. Are all videos 256 frames long? If you know your n/16, use tensor.view(batch_size, n/16, 512). I'm not sure if I answered your question; it would be better if you mention the paper you're implementing.
st103219
Thank you for your answer! I found the answer in another discussion here: PyTorch TimeDistributed.
st103220
I'm wondering, when training in batches, is there any way to avoid the RNN taking the hidden output of the previous sequence as the input to the first timestep of the next sequence? This is a relationship that shouldn't be learned, because the time series won't necessarily be in order in my case.
st103221
I'm not an expert on this, but I think that when you train a batch it all gets forwarded at the same time, and the hidden state of one sequence in the batch is not used as the input to the next sequence. That's why the hidden states are of size lstm_layers x batch_size x hidden_size.
st103222
It's not immediately clear what it is that you want to do, or what you're already doing. Are you talking about training a batch (or mini-batch) in an off-line sense by a single call to the forward function of the model? If that is what you are doing, then I can say with very high certainty that the individual sequences are processed in parallel and their time evolution is independent. In the absence of better tutorials than I have been able to find, it is highly instructive to code up a toy model of an RNN: something like a batch size of 4, a sequence length of 5, and an input width of 2. This toy model is small enough that you can design your input batch tensor by hand in order to prove a point to yourself. In this case, just code up all 4 sequences to be the same sequence, the same random sequence even. Then you can inspect the output and see that all four output sequences are the same; a sketch follows this post. This would not be the case if the final hidden output of the first sequence were fed back into the first hidden state for the second sequence. Finally, remember that you have the ability to define what happens to the hidden states in your forward method. I do not think it is required here, though, so I will not lengthen an already long post.
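A concrete version of the toy experiment described above (one possible sketch, with the sizes chosen arbitrarily):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=2, hidden_size=3, batch_first=True)
seq = torch.randn(1, 5, 2)     # one random sequence: length 5, width 2
batch = seq.repeat(4, 1, 1)    # a batch of 4 identical copies
out, h_n = rnn(batch)          # hidden state defaults to zeros per call

# All four output sequences match, so no state leaks between batch entries.
print(torch.allclose(out[0], out[1]), torch.allclose(out[0], out[3]))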
st103223
Jared_77: I'm wondering, when training in batches, is there any way to avoid the RNN taking the hidden output of the previous sequence as the input to the first timestep of the next sequence? The hidden state of the sequence will not be kept in the RNN/LSTM/… modules; you can either pass it in, or it will be set to zeros for each invocation. Only the weights are learned. Best regards Thomas
st103224
Hi, currently I'm trying to replace every linear operation (i.e. y = WX + b) in a CNN with my own version. So far I can customize those linear operations in the fully connected layers, but not in the convolution layers. Is it possible to modify those in convolution layers? Thanks!!
st103225
We can't help a lot if you don't tell us what "your version" does. Try looking at the functional API (torch.nn.functional); maybe it'll be a good place to start…
st103226
Actually, I want to use hardware to perform this linear computation and use the result in the CNN. In order to do this, I need to first ensure the linear operations can be replaced properly. I have checked the document you provided; it seems the conv2d function has the form _add_docstr(torch.conv2d, …), and the _add_docstr function is imported from the torch._C library. I guess the convolution operation is implemented in C (correct me if I'm wrong). Does this mean I need to modify C code in order to achieve my goal?
st103227
Thanks for your reply. Thinking about it again, eventually I would like to replace the original linear operation (i.e. y = WX + b) with one that adds some noise (i.e. y = WX + b + n, where n is some customized noise). Is there a simple way to do it for convolutional layers?
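One way to sketch this without touching any C code is to subclass nn.Conv2d and add the noise after the functional call. This is only an illustration; noise_std is a made-up knob, and Gaussian noise stands in for whatever n should be:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyConv2d(nn.Conv2d):
    """Conv layer computing y = W*x + b + n, with n drawn fresh each call."""
    def __init__(self, *args, noise_std=0.1, **kwargs):
        super(NoisyConv2d, self).__init__(*args, **kwargs)
        self.noise_std = noise_std

    def forward(self, x):
        out = F.conv2d(x, self.weight, self.bias, self.stride,
                       self.padding, self.dilation, self.groups)
        return out + torch.randn_like(out) * self.noise_std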
st103228
My code is shown in the (omitted) screenshot. When I run mnist.py, there is an error:

transform = transforms.Compose(transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5]))
TypeError: __init__() takes exactly 2 arguments (3 given)

However, I found that the __init__ function of class Normalize(object) does take 2 inputs, namely std and mean. Could anyone help find what's wrong with the code?
st103229
The error seems to be thrown by transforms.Compose rather than Normalize. You have to wrap your transformations in []; the docs have an example.
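Applied to the snippet above, the fixed call would look like this:

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])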
st103230
Is there a way to put a tuple in a tensor? E.g.:

FM = torch.ones(1, 4, 3, 3)
FM[0, 0, 1, 1] = (2, 2)
st103231
I don't think we can. Instead, you may use a list of tensors:

FM = [torch.ones(1, 4, 3, 3) for i in range(2)]
FM[0][0, 0, 1, 1] = 2
FM[1][0, 0, 1, 1] = 2
st103232
No, tensors are restricted to floats and ints, in varying sizes: https://pytorch.org/docs/stable/tensor_attributes.html#tensor-attributes-doc
st103233
I want to load a dataset with both size 224 and its actual size. But if I use a transform in the DataLoader I can only get one form of the dataset, so I want to know how I can load both together.
st103234
You may refer to the implementation of ImageFolder:
https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L115-L122

Here is pseudo code that may be helpful:

import torchvision as tv

class MyImageFolder(tv.datasets.ImageFolder):
    def __getitem__(self, index):
        origin_data = process(self.imgs[index])
        transform_data = transform(origin_data)
        return origin_data, transform_data, label

dataloader = DataLoader(MyImageFolder())
for origin_datas, transform_datas, labels in dataloader:
    train()
st103235
Thanks for your elegant method. I wonder whether the following implementation would work: first I just use transform = transforms.ToTensor() for the ImageFolder to load the original dataset, then use

scale_transform = transforms.Compose([
    transforms.Scale(256),
    transforms.RandomCrop(224),
])
data_fixed = scale_transform(data)

to scale the images of the dataset.
st103236
When you do transform = transforms.ToTensor() in the dataset, it returns a tensor, while transforms.Scale(256) and transforms.RandomCrop(224) were both designed for PIL Images. So you need:

scale_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Scale(256),
    transforms.RandomCrop(224),
    transforms.ToTensor()
])
st103237
Hi guys! Please consider this idea I just came up with. The idea is to load different batches of images randomly, but have only similar images in one batch.
st103238
When I use a model which I have trained to predict, I can convert the picture to a tensor with the shape CWH, but my data is missing a batch dimension. How do I add the batch dimension for prediction? Thank you very much.
st103239
Assuming your image is in tensor x, you could do x.unsqueeze(0), or you could use the PyTorch data package and its Datasets/DataLoader, which automatically create mini-batches. For vision there is something similar in the torchvision package.
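For example (a sketch with an arbitrary image shape):

import torch

x = torch.randn(3, 224, 224)   # a single C x H x W image
batch = x.unsqueeze(0)         # adds a batch dimension: 1 x C x H x W
print(batch.shape)             # torch.Size([1, 3, 224, 224])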
st103240
I am training the same LSTM network architecture with Caffe and PyTorch, but they give very different results: the Caffe model's accuracy is about 98%, while the accuracy of the PyTorch version is just 50%. Why?
st103241
the optimizers might be subtly different, where one’s learning rate or momentum scaling is a bit different than the other…
st103242
Hi, I met a similar problem; the PyTorch result is worse than Caffe's. Have you solved it? Thank you!
st103243
I finally got a result close to the Caffe version after I clarified some differences between Caffe and PyTorch: the SGD implementation, and the fact that dropout is not applied if there is only one RNN layer. I also found that the data preprocessing of my PyTorch version was slightly different from the Caffe version. After I solved these problems, I got a comparable result. Hope this helps!
st103244
Thanks for your reply! Could you please describe the first factor in detail? What difference matters?
st103245
You could refer to http://pytorch.org/docs/master/optim.html. In the note on SGD: "The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks."
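Concretely, the two variants place the learning rate differently in the momentum update. A scalar sketch for one parameter p with gradient g and momentum buffer v (the two coincide for a constant lr, up to buffer scaling, but behave differently at the step where lr changes):

def caffe_sgd_step(p, g, v, lr, momentum):
    # Caffe / Sutskever-style: lr is folded into the momentum buffer
    v = momentum * v + lr * g
    return p - v, v

def pytorch_sgd_step(p, g, v, lr, momentum):
    # PyTorch-style: the buffer accumulates raw gradients, lr applied after
    v = momentum * v + g
    return p - lr * v, v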
st103246
I noticed this difference too. Whereas my training works fine in Caffe, in PyTorch, if I change the learning rate at the same stages/iterations where Caffe changes it (step-wise), I suddenly get NaN loss values. Looking at the difference described here, I thought I could change adjust_learning_rate to also change the momentum, like:

def adjust_learning_rate(optimizer, lr, momentum):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
        param_group['momentum'] = momentum / lr

But still, as soon as lr is changed, the loss becomes NaN. @Nick_Young, how did you solve the problem with the SGD discrepancy with Caffe?
st103247
I ended up with this implementation of Caffe SGD. I'd appreciate it if you could take a look.
st103248
Suppose I have a tensor A of size bz x 512 and another tensor B of size bz x C x 512. My goal is to compute a pairwise distance tensor D of size bz x bz x C, where D[i][j][k] = ||A[i] - B[j][k]||. Can anyone help me do this without a for loop?
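One possible loop-free sketch uses broadcasting (sizes here are arbitrary, and this is just one approach):

import torch

bz, C = 4, 6                              # arbitrary sizes for illustration
A = torch.randn(bz, 512)
B = torch.randn(bz, C, 512)

# broadcast A as (bz, 1, 1, 512) against B as (1, bz, C, 512)
diff = A.unsqueeze(1).unsqueeze(2) - B.unsqueeze(0)
D = diff.norm(p=2, dim=-1)                # D[i, j, k] = ||A[i] - B[j, k]||
print(D.shape)                            # torch.Size([4, 4, 6])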
st103249
import torch
import matplotlib.pyplot as plt
import numpy as np
import torch.nn.functional as F

with open('data.txt') as f:
    data_list = [i.split('\n')[0].split(',') for i in f.readlines()]

data = [(float(i[0]), float(i[1]), float(i[2])) for i in data_list]
x0_max = max([i[0] for i in data])
x1_max = max([i[1] for i in data])
data = [(i[0]/x0_max, i[1]/x1_max, i[2]) for i in data]

x0 = list(filter(lambda x: x[-1] == 0.0, data))
x1 = list(filter(lambda x: x[-1] == 1.0, data))
plot_x0 = [i[0] for i in x0]
plot_y0 = [i[1] for i in x0]
plot_x1 = [i[0] for i in x1]
plot_y1 = [i[1] for i in x1]
#plt.plot(plot_x0, plot_y0, 'ro', label='x_0')
#plt.plot(plot_x1, plot_y1, 'bo', label='x_1')
#plt.legend(loc='best')

x_data = [(i[0], i[1]) for i in data]
y_data = [(i[2]) for i in data]
x_data = torch.tensor(x_data)
y_data = torch.tensor(y_data)  #.unsqueeze(1)

class Net(torch.nn.Module):
    def __init__(self, n_in, n_hide, n_out):
        super(Net, self).__init__()
        self.hide = torch.nn.Linear(n_in, n_hide)
        self.out = torch.nn.Linear(n_hide, n_out)

    def forward(self, x):
        x = F.relu(self.hide(x))
        x = self.out(x)
        return x

net = Net(n_in=2, n_hide=10, n_out=2)
loss_func = torch.nn.BCELoss()
optim = torch.optim.SGD(net.parameters(), lr=0.2, momentum=0.9)
print(x_data.shape)
print(y_data.shape)

for i in range(1000):
    pre = net(x_data)
    _, predicted = torch.max(pre.data, 1)
    print(pre.shape)
    loss = loss_func(pre, y_data)
    optim.zero_grad()
    loss.backward()
    optim.step()
    #print(loss.item())

"""
data.txt:
34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
75.02474556738889,46.55401354116538,1
76.09878670226257,87.42056971926803,1
84.43281996120035,43.53339331072109,1
95.86155507093572,38.22527805795094,0
75.01365838958247,30.60326323428011,0
and so on (many more lines)
"""
st103250
You need a sigmoid layer as the output of your model for BCELoss. Alternatively, you could keep your model as it is and use BCEWithLogitsLoss.
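Both options in a sketch (names follow the code above; the forward shown would replace Net.forward):

# Option 1: add a sigmoid to the model output and keep BCELoss
def forward(self, x):
    x = F.relu(self.hide(x))
    return torch.sigmoid(self.out(x))

# Option 2: leave the model unchanged and use the logits-aware criterion
loss_func = torch.nn.BCEWithLogitsLoss()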
st103251
Thanks, I have resolved this question. But I want to ask: someone said that the output layer shouldn't be activated?
st103252
nn.BCELoss expects probabilities from a sigmoid layer, while nn.BCEWithLogitsLoss expects the raw logits.
st103253
Maybe there's another way of achieving my underlying goal, but in order to create tensors of the appropriate cudaness in my modules, I've been doing something like:

class Foo(nn.Module):
    def __init__(self):
        super().__init__()
        self.torch_constr = torch

    def cuda(self):
        super().cuda()
        self.torch_constr = torch.cuda

    def forward(self, x):
        state = self.torch_constr.FloatTensor(...).zero_()
        ...

The problem I get is that the cuda method seems to only get called on the top-level class; I'm not sure it's called on the children. What are the standard way(s) to handle this? (edit: I've gone back to doing torch_constr = torch.cuda if x.is_cuda else torch for now)
st103254
Solved by SimonW in post #2 can’t you just torch.zeros(...., device=x.device)?
st103255
In Python, when a variable is initialized within a function, the variable is placed on a stack such that the memory it uses will be freed at the end of the function. Is this the case for pytorch tensors, or do the tensors have to be manually freed within a function?
st103256
Python objects have their own refcounting, so as a user you don't need to worry about freeing memory in Python.
st103257
Is there a way to output some form of the compiled CUDA code that gets run in PyTorch, or some representation of the graph in human-readable form, preferably without a rebuild of PyTorch? Thanks!
st103258
Do you mean from JIT? PyTorch w/o JIT runs everything as a dynamic graph so there isn’t compilation.
st103259
The profiler does something similar to that, but at op level not at kernel level
st103260
Do I have to write the backward pass myself? Or is there a workaround that allows me to use autograd but with the gradient normalized?
st103261
Have a look at backward hooks. It should work without your own implementation of the backward function.
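A minimal sketch of normalizing a gradient with a tensor hook (the epsilon is an added guard against division by zero):

import torch

x = torch.randn(5, requires_grad=True)
x.register_hook(lambda grad: grad / (grad.norm() + 1e-8))

y = (x ** 3).sum()
y.backward()
print(x.grad.norm())   # ~1.0: the incoming gradient was renormalized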
st103262
I want to create an activation function, y = x^2, and call it in my network. How do I realize it?
st103263
For ReLU activation, we do this:

out = some_convolution(input)
out = nn.functional.relu(out)  # relu activation

In the same way, you can realize your quadratic activation as:

out = some_convolution(input)
out = torch.pow(out, 2)  # quadratic activation
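If you'd rather call it like a layer, e.g. inside nn.Sequential, a tiny module works too (a sketch):

import torch.nn as nn

class Square(nn.Module):
    def forward(self, x):
        return x * x

net = nn.Sequential(nn.Linear(10, 10), Square())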
st103264
OS: Ubuntu 16.04 LTS
PyTorch version: 0.5.0a0+1483bb7 (and also the latest ones from today)
How you installed PyTorch (conda, pip, source): source
Python version: 3.5.2
torch.backends.cudnn.version(): 7104
CUDA version: 9.0.176 (also tested with 9.1.85)
NVIDIA driver version: 390.48 (also tested with 390.67)

I compiled PyTorch from source in my Singularity container and tried to run the CIFAR classification code from here. I only move the network and mini-batches to the GPU before/while training. However, I get the following error when I do the forward pass for the first time:

RuntimeError: CuDNN error: CUDNN_STATUS_NOT_INITIALIZED

Doing torch.device("cuda:0" if torch.cuda.is_available() else "cpu") will correctly show me cuda:0. Then I do net.to(device) and inputs, labels = inputs.to(device), labels.to(device). However, I still get that error when doing the forward pass. I may note that I have been building PyTorch from source exactly the same way over the past couple of months and never encountered any issues. My previous containers were built around two months ago and had no issues; the new Singularity containers all have the same issue. Initially I thought this was only a PyTorch issue, but I noticed that running my Lua Torch code gives me the same exact issue, and I've been using that code for months without any issues on previous revisions of CuDNN v7. In the issue I opened on the PyTorch GitHub repo, Soumith suggested upgrading the NVIDIA driver version. I upgraded the driver to 390.67 (the latest from NVIDIA) but still have this issue. I suspect the new version of CuDNN is causing this, but I'm not entirely sure. I also sent an email to Felix Abecassis at NVIDIA, told him about this issue, and asked if he thinks the updates to CuDNN might be causing it. This is his reply:

"The versions of cuBLAS and cuDNN for CUDA 9.0 were updated, but I think that's all. These are the only potential culprits I see on the image side."

I wonder, has anyone else been encountering this issue? Does anyone know what might be causing it?
st103265
I'm trying to replicate a Keras model which starts out like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 720, 1, 1)         0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 720, 1, 128)       1152
_________________________________________________________________

The input is (batch_size, 720, 1, 1), and then a Conv2D layer is applied to it using 128 filters and a kernel size of 8. Trying to replicate this in PyTorch, I have:

import torch
a = torch.randn(32, 720, 1, 1)
print('a:', a.size())  # a: torch.Size([32, 720, 1, 1])
torch.nn.Conv2d(720, 128, kernel_size=8, stride=1)(a)

But I'm getting the following error:

RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (8 x 8). Kernel size can't be greater than actual input size at /pytorch/aten/src/THNN/generic/SpatialConvolutionMM.c:48

Any ideas what I'm doing wrong, and why this works in Keras but not in PyTorch?
st103266
Solved by colesbury in post #5.
st103267
Could you post the Keras code? It doesn't really make sense to use a kernel size of 8 on an input of 1x1 spatial dimension.
st103268
Sure. Here's the first part of the Keras model:

x = keras.layers.Input(x_train.shape[1:])
conv1 = keras.layers.Conv2D(128, 8, 1, border_mode='same')(x)
conv1 = keras.layers.normalization.BatchNormalization()(conv1)
conv1 = keras.layers.Activation('relu')(conv1)
st103269
Your Keras model probably uses NHWC ordering for the convolutions; PyTorch uses NCHW. This means your input should be transposed. Also, it looks like you're doing a 1-d convolution, so you probably want to change the kernel size to (8, 1):

a = torch.randn(32, 1, 720, 1)
torch.nn.Conv2d(1, 128, kernel_size=(8, 1), stride=1)(a)

Here N=32, C=1, H=720, W=1. In the output, C will be 128. N is batch; C is channels; H and W are the spatial dimensions of the inputs.
st103270
Hello, can you help me find the regularization loss implementation in the PyTorch source code? Thank you very much!
st103271
The weight_decay parameter in the optim modules controls the regularization; see, for example, the code in SGD.
st103272
Thank you very much. In fact, I also want to find where the regularization loss is calculated; can you help me?
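To be clear about where to look: there is no explicit regularization loss term; the optimizer folds the L2 penalty directly into the gradient. A simplified sketch of the relevant step in torch/optim/sgd.py (0.4-era style; not a verbatim copy of the source):

for p in group['params']:
    if p.grad is None:
        continue
    d_p = p.grad.data
    if weight_decay != 0:
        d_p = d_p.add(weight_decay, p.data)  # d_p += weight_decay * p
    # ... momentum handling, then: p.data.add_(-lr, d_p)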
st103273
Install PyTorch on Windows 10: guide. I am using Python 3.6.5 and pip version 10.0.1 (Python 3.6), and I have the NVIDIA CUDA 9.0 SDK installed.

pip3 install http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-win_amd64.whl
pip3 install torchvision

Next you need to install OpenMP support. Go to https://software.intel.com/en-us/articles/redistributable-libraries-of-the-intel-c-and-fortran-compiler-for-windows and install the redistributable; I installed: https://software.intel.com/sites/default/files/managed/01/9c/w_cproc_p_11.1.072_redist_intel64.zip

Have fun with PyTorch. /Adrian
st103274
Hi, I'm doing an image segmentation task, and for that, within the Dataset, I'm using a function which generates a stick model of a human based on the xy points of places of interest (head, joints, etc.). I have the xy points, and my Dataset class looks like the following.

class MyDataset(Dataset):
    def __init__(self, json_file_dir, image_dir, transform=None):
        # json_file_dir :: String; path to json file with xy coordinates
        # image_dir :: String; path to image data
        # transform :: optional transform for the dataset
        self.json_file_dir = json_file_dir
        with open(json_file_dir) as f:
            self.json_file = json.loads(f.read())
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.json_file)

    def __getitem__(self, index):
        # index :: index number for input,label pair
        image_name = self.json_file[index]["filename"]  # needed to read the image
        image_coordinates = self.json_file[index]["keypoints"]  # needed to generate the labelled image
        image = cv2.imread(image_name)
        labelled_image = LabelMaker(image, image_coordinates, args="LINES")  # outputs n-channel label, n is the number of classes
        # image :: np.array 256x256x3
        # labelled_image :: np.array 256x256x4
        sample = {"image": image, "labelled_image": labelled_image}
        if self.transform:
            sample = self.transform(sample["image"], sample["labelled_image"])
        return sample

def transform(image, label):
    image = cv2.resize(image, (256, 256)).astype(np.float64)  # uint8 with RGB mode
    image -= rgb_mean  # [R, G, B] where R, G, B denote the mean values
    image = image.astype(np.float64)
    image = image.astype(float) / rgb_sd  # [R, G, B] where R, G, B are the standard deviations
    image = torch.from_numpy(image).float()
    label = torch.from_numpy(label).float()
    return {"image": image, "labelled_image": label}

After making an instance of this for the training dataset, I make a DataLoader with batch_size=5, shuffle=True, and num_workers=4. However, during the training process I get an error, which states:

AttributeError: Traceback (most recent call last):
  File "dir/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "<ipython-input-20-e1502ecaff7b>", line 32, in __getitem__
    labelled_image = LabelMaker(image, image_coordinates, args="LINES")
  File "<ipython-input-19-820c82037e68>", line 23, in PointAndLineMaker
    height, width = image.shape[0], image.shape[1]
AttributeError: 'NoneType' object has no attribute 'shape'
Exception NameError: "global name 'FileNotFoundError' is not defined" in <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f13cacc8c90>> ignored

Here, LabelMaker is the function that takes an input image and the required coordinates to generate the label. As you can see from my Dataset class, the input to this function is fed from that class after loading. I checked with print statements going through the whole dataset, trying to see whether anything is None, but to no avail. What's more annoying is that this error pops up randomly. For debugging purposes I'm running 5 epochs, and it chooses to spit out this None at different times; I have also run 5 epochs successfully without the error appearing. What am I doing wrong here?

EDIT: I have reason to believe this is a problem in OpenCV, specifically the imread function, since the problem is avoided after converting the input images to .npy files and loading these into the dataset.
st103275
Could you add a debug statement before the LabelMaker(...) line?

if image is None:  # cv2.imread returns None on failure
    print(image_name)

The next time it crashes, we would know which file seems to be missing or corrupt. Also, could you set shuffle=False for the DataLoader? This would make sure the files are loaded deterministically.
st103276
I tried that, with shuffle=False as well, but the error still comes randomly. I also went through the whole dataset (which luckily isn't too large) three times; one time I got a None at a certain image file, and when I checked it out it was not corrupted or missing. The other two runs I didn't get a None. I even changed my workflow from a Jupyter notebook to a more class-based IDE setup, in case Jupyter was messing with me. Also, an important thing I may not have mentioned originally (now edited into the first post) is this message that I get:

Exception NameError: "global name 'FileNotFoundError' is not defined" in <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f13cacc8c90>> ignored
st103277
I want to install PyTorch 0.4 with Python 2.7 and CUDA 8.0 on my server (no access to the Internet) running CentOS 6.3. However, I cannot find a proper whl on the Internet. Does anyone know a link to this whl?
st103278
Hi all, I am encountering a weird phenomenon when trying to save the parameters of a custom model that has been moved to the GPU in PyTorch 0.4.0. When I call model.state_dict(), the resulting OrderedDict is empty; see the following MWE:

import torch
from torch.nn import Module, Parameter

class test_module(Module):
    def __init__(self, device=None):
        super(test_module, self).__init__()
        if device is not None:
            self.device = device
        else:
            self.device = torch.device('cpu')
        self.W = Parameter(torch.ones((3, 5))).to(self.device)

cpu = test_module()
print(cpu.state_dict())
gpu = test_module(torch.device('cuda'))
print(gpu.state_dict())

The first printout gives me the desired dictionary, but the second one is empty. Furthermore, if I call print(cpu.to(torch.device('cuda')).state_dict()) I do get the correct dictionary, while if I call print(gpu.to(torch.device('cpu')).state_dict()) I get an empty object. Is this expected behavior, or am I doing something wrong?
st103279
Move the to(self.device) operation into the Parameter call:

self.W = Parameter(torch.ones((3, 5)).to(self.device))

The .to is a no-op for CPU tensors in your code, while in the GPU case you are manipulating the result and thus losing the nn.Parameter.
st103280
How does the random function calculation depend on the seed? If I used seed=1 and now I want different values, does it matter whether I choose seed=2 or seed=300 next?
st103281
Solved by tom in post #4.
st103282
If you're not too worried about the quality of randomness, you could just use seed = 2, etc. If you run things sequentially, one way to be sure is to save the last seed(s) and continue from them; or you could estimate the number of random calls and do them as warm-up. It is really easy to make mistakes when trying to be clever, and I must admit that I've certainly come close to making the mistake described there, thinking it looked like a good idea at first sight. Best regards Thomas
st103283
I meant: is there a difference between choosing seed=2 and seed=300 after seed=1? For example, maybe seed=2 somehow generates numbers that are close to the ones generated with seed=1.
st103284
In answering a question, John Cook writes: A high quality random number generator should produce uncorrelated output for any two different seeds, including consecutive seeds, and as far as anyone knows, the Mersenne Twister does this. So as far as we know, it’s OK for CPU and without looking into it, I would guess that the GPU RNG also qualifies. Best regards Thomas
st103285
When I call torch.save to save the model during training, it raises an OSError. The traceback is as follows:

Traceback (most recent call last):
  File "/home/ym/git/action_recognition.pytorch/tools/train.py", line 347, in <module>
    val_loss = train(epoch, best_val_loss)
  File "/home/ym/git/action_recognition.pytorch/tools/train.py", line 323, in train
    torch.save(decoder.state_dict(), decoder_file)
  File "/home/ym/anaconda2/envs/pytorch3.0-py3.5/lib/python3.5/site-packages/torch/serialization.py", line 135, in save
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "/home/ym/anaconda2/envs/pytorch3.0-py3.5/lib/python3.5/site-packages/torch/serialization.py", line 120, in _with_file_like
    f.close()
OSError: [Errno 5] Input/output error

The environment is Python 3.5 and pytorch3.0. What's strange is that the weight files can be saved sometimes.
st103286
It means that Python cannot write the file to disk. This could range from the filesystem running out of space, via user quotas (if you are on a machine that has them), to failing storage devices. Best regards Thomas
st103287
Hello, I'm trying to load a large image dataset that won't fit into RAM. I've looked up a similar question here on the forums, but can't seem to get the answer working. The variable data_loc has the directory to images and targets.

class MyDataset(Data.Dataset):
    def __init__(self):
        self.data_files = os.listdir(data_loc)
        #sort(self.data_files)

    def __getindex__(self, idx):
        return load_file(self.data_files[idx])

    def __len__(self):
        return len(self.data_files)

set_test = MyDataset()
loader = Data.DataLoader(set_test, batch_size=BATCH_SIZE, num_workers=8)

for step, (x, y) in enumerate(set_test):
    # do stuff

set_test = MyDataset()
loader = Data.DataLoader(set_test, batch_size=BATCH_SIZE, num_workers=8)

But I get a NotImplementedError for set_test. Any thoughts on how to fix this?
st103288
You should change __getindex__ to __getitem__. Also, the usual approach is to iterate your DataLoader, not the Dataset. Try:

for batch_idx, (x, y) in enumerate(loader):
    # do stuff

Is there a reason you are re-initializing the Dataset and DataLoader in the for loop?
st103289
Thank you! I think I'm on the right track now; I was confused about what to iterate over and whether it needed to be re-initialized. I'm just stuck on the data loader portion now. I figured it would be pretty easy, but I'm not sure how to write the load_file function. My files are stored in a folder data, with the contents as follows:

/data/images/   (images to use)
/data/targets.txt

Is this the correct format for a loader to work, or do I need each batch to be a new set of folders?

data_loc = '.../data/'

def load_file(file):
    ...  # this is the part I'm not sure how to write

class MyDataset(Data.Dataset):
    def __init__(self, data_files):
        self.data_files = sorted(data_files)

    def __getitem__(self, index):
        return load_file(self.data_files[index])

    def __len__(self):
        return len(self.data_files)

set_test = MyDataset(data_loc)
loader = Data.DataLoader(set_test, batch_size=BATCH_SIZE, num_workers=8)
st103290
The folder layout looks good. However, you will need the targets so that the Dataset can return each data sample together with its target. If you have images in the folder, you can simply use PIL.Image.open() to load the image. After loading, you can apply transformations to the image and finally cast it to a tensor. Let me know if you need any help.
st103291
Hi, sorry to continue to ask for help. I've done a bit more and get the error "OSError: [Errno 24] Too many open files". I've tried adding some lines that seemed to fix this error for other people, but it's still not working. I couldn't figure out how to attach the target to each individual image, so I created an array of the targets and append one to each image file as it comes in (not sure if that will work).

data_loc = '.../sample_data/'
torch.multiprocessing.set_sharing_strategy('file_system')
target_counter = 0

""" Getting the targets """
with open('.../annotations.txt') as f:
    content = f.readlines()

# makes each one a float
targets = [x.split(',') for x in content]
for a in targets:
    for ind, val in enumerate(a):
        #a[ind] = int(float(val))
        a[ind] = float(val)
targets = torch.FloatTensor(targets)

def load_file(file):
    temp = Image.open(file)
    keep = temp.copy()
    temp.close()
    data = (keep, targets[target_counter])
    target_counter = target_counter + 1
    return data

class MyDataset(Data.Dataset):
    def __init__(self, data_files):
        self.data_files = sorted(data_files)

    def __getitem__(self, index):
        return load_file(self.data_files[index])

    def __len__(self):
        return len(self.data_files)

set_test = MyDataset(data_loc)
loader = Data.DataLoader(set_test, batch_size=BATCH_SIZE, num_workers=8)
st103292
Don't be sorry for asking for help. Unfortunately I'm not familiar with the sharing strategies, so I don't know if setting it to file_system helps; did you read about it somewhere? I cannot see where you are opening a lot of files without closing them; could you post the whole code please? Also, besides the error you are seeing, your code is a bit dangerous, since you have a loose mapping between input and target: while the file is loaded using index in __getitem__, you are using a target_counter to load the target. If you set shuffle=True in the DataLoader, your data will be randomly assigned to the next target. To fix this, you could pass index to load_file and use targets[index].
st103293
Well, thank you! I just pulled the file_system line from another PyTorch forum question that seemed to have a similar issue. This is all the code that's running and causing the errors at this point; I'm not opening files in any other part of the code right now. annotations.txt has data that looks like:

1, 205, 5.976959, 9.223372E+18, 13.00167, 9.223372E+18, 9.223372E+18, 2.116816, 3.283184, 9.223372E+18
1, 210, 2.403473, 9.223372E+18, 13.00638, 9.223372E+18, 9.223372E+18, 2.744155, 2.655845, 9.223372E+18

with each new line being a new input vector. And just to be safe, I've moved the targets outside of the folder I'm loading from, so I only load images. The folders are now:

.../sample_data2/images/ (image files, .bmp)
.../sample_data/annotations.txt

data_loc = '/Users/markmartinez/Downloads/sample_data2/'
torch.multiprocessing.set_sharing_strategy('file_system')

""" Getting the targets """
with open('/Users/markmartinez/Downloads/sample_data/annotations.txt') as f:
    content = f.readlines()

# makes each one a float
targets = [x.split(',') for x in content]
for a in targets:
    for ind, val in enumerate(a):
        #a[ind] = int(float(val))
        a[ind] = float(val)
targets = torch.FloatTensor(targets)

def load_file(file, index):
    temp = Image.open(file)
    keep = temp.copy()
    temp.close()
    data = (keep, targets[index])
    return data

class MyDataset(Data.Dataset):
    def __init__(self, data_files):
        self.data_files = sorted(data_files)

    def __getitem__(self, index):
        return load_file(self.data_files[index], index)

    def __len__(self):
        return len(self.data_files)

set_test = MyDataset(data_loc)
loader = Data.DataLoader(set_test, batch_size=BATCH_SIZE, num_workers=8)

And this is the error I'm getting:

Traceback (most recent call last):
  File "/anaconda/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2847, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-53-eb2d61b6aaa1>", line 7, in <module>
    with open('/Users/markmartinez/Downloads/sample_data/annotations.txt') as f:
OSError: [Errno 24] Too many open files: '/Users/markmartinez/Downloads/sample_data/annotations.txt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/anaconda/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 1795, in showtraceback
    stb = value._render_traceback_()
AttributeError: 'OSError' object has no attribute '_render_traceback_'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/anaconda/lib/python3.5/site-packages/IPython/core/ultratb.py", line 1092, in get_records
    return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
  File "/anaconda/lib/python3.5/site-packages/IPython/core/ultratb.py", line 312, in wrapped
    return f(*args, **kwargs)
  File "/anaconda/lib/python3.5/site-packages/IPython/core/ultratb.py", line 347, in _fixed_getinnerframes
    records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
  File "/anaconda/lib/python3.5/inspect.py", line 1454, in getinnerframes
    frameinfo = (tb.tb_frame,) + getframeinfo(tb, context)
  File "/anaconda/lib/python3.5/inspect.py", line 1411, in getframeinfo
    filename = getsourcefile(frame) or getfile(frame)
  File "/anaconda/lib/python3.5/inspect.py", line 671, in getsourcefile
    if getattr(getmodule(object, filename), '__loader__', None) is not None:
  File "/anaconda/lib/python3.5/inspect.py", line 700, in getmodule
    file = getabsfile(object, _filename)
  File "/anaconda/lib/python3.5/inspect.py", line 684, in getabsfile
    return os.path.normcase(os.path.abspath(_filename))
  File "/anaconda/lib/python3.5/posixpath.py", line 362, in abspath
    cwd = os.getcwd()
OSError: [Errno 24] Too many open files
st103294
OK, could you check ulimit -n in a terminal and increase the limit if possible? Are you working on a remote server or a local machine? Could it be that the machine just uses a lot of file handles?
st103295
Hi! The file error was happening because I needed to restart the kernel. I also realized that there is a great tutorial on data loading that solves a lot of my issues from the start: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html. Thank you so much for your help!
st103296
I downloaded the GPU version from http://download.pytorch.org/whl/cu80/torch-0.4.0-cp36-cp36m-win_amd64.whl, renamed it, and installed it with:

(C:\ProgramData\Anaconda3) E:\software\pylib>pip install torch-0.4.0-cp36-cp36m-win_amd64_cu80.whl

An error occurs:

torch-0.4.0-cp36-cp36m-win_amd64_cu80.whl is not a supported wheel on this platform.

The environment is:

(C:\ProgramData\Anaconda3) E:\software\pylib>python --version
Python 3.6.0 :: Anaconda 4.3.1 (64-bit)
(C:\ProgramData\Anaconda3) E:\software\pylib>pip --version
pip 9.0.1 from C:\ProgramData\Anaconda3\lib\site-packages (python 3.6)
(C:\ProgramData\Anaconda3) E:\software\pylib>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Mon_Jan__9_17:32:33_CST_2017
Cuda compilation tools, release 8.0, V8.0.60
st103297
This was caused by renaming the .whl file; it installed successfully using the default file name.
st103298
I have a tensor X and defined Y = X.detach() or Y = X.data. According to the documentation, Y is a new tensor, but when I change the value of Y, X is also changed. This is strange. The same thing happens even if I define Y = X.detach().numpy() or Y = X.data.numpy(). Is it a bug?
st103299
I had the same issue. I guess detach() creates tensor Y as a new alias of tensor X with requires_grad=False. I solved this with Z = Y.clone(); now changing the Z values won't change X.
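A small demonstration of the difference (a sketch):

import torch

x = torch.ones(3)
y = x.detach()
y[0] = 5.0             # x[0] becomes 5.0 too: y shares x's storage
z = x.detach().clone()
z[1] = 7.0             # x is unchanged: clone() copied the storage
print(x)               # tensor([5., 1., 1.])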