st104400
I benchmarked some pretrained networks: VGG, DenseNet, and ResNet. The problem is that every DenseNet variant I tried converges noticeably more slowly than VGG or ResNet under the same hyperparameters. Does DenseNet need a special approach to fine-tuning, or is its convergence simply slower than the others?

Training conditions:
- dataset: dogs and cats (1000 images per class)
- data augmentation: RandomResizedCrop(224x224), RandomHorizontalFlip, RandomVerticalFlip
- batch_size: 64
- optimizer: Adam, default settings
- criterion: cross entropy
- learning rate: starts at 0.001, multiplied by 0.5 every 50 epochs
- total epochs: 200

Test conditions:
- dataset: dogs and cats (200 images per class), unseen by the model
- data augmentation: Resize(224x224)
- batch_size: 64

[Attached figures: training loss and training accuracy curves for the compared models.]
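For reference, a minimal sketch of the optimizer and learning-rate schedule described above; the choice of densenet121 is just illustrative for any of the benchmarked networks:

    import torch.optim as optim
    from torchvision import models

    model = models.densenet121(pretrained=True)  # stand-in for any benchmarked net
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    # halve the learning rate every 50 epochs
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
    # call scheduler.step() once per epoch in the training loop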
st104401
For example, I have a 3x3 filter, and I want to fix some weights, like (0,0) and (2,2), in this filter during the training process. That is, I only want to update part of the weights in this filter. How can I implement this?
st104402
You can zero out the gradients for the parts you want to freeze after each backward.
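A minimal sketch of this gradient-masking approach; the frozen positions and layer sizes below are illustrative:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    # 0 where a weight should stay fixed, 1 where it may be updated
    mask = torch.ones_like(conv.weight)
    mask[0, 0, 0, 0] = 0  # freeze filter position (0, 0)
    mask[0, 0, 2, 2] = 0  # freeze filter position (2, 2)

    out = conv(torch.randn(4, 1, 8, 8))
    out.mean().backward()
    conv.weight.grad *= mask  # zero the gradients of the frozen entries
    # optimizer.step() now leaves the masked weights unchanged
    # (with plain SGD; optimizers with weight decay can still move them)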
st104403
Hello, I wish to use convolution layers to predict a future state based on older perceptions. Each perception is a vector of 8 features, and I was thinking I could stack several of them to represent the past. The tensor would then go through one or several 1D conv layers and, if necessary, an FC layer. However, I'm not sure about the kernel size and other hyper-parameters. Given that the input is small, is it wise to use a kernel of size 8? That would make the output tensor of size (batch, filters, 1), and I couldn't use another conv layer after it, could I? And since I'm not sure about the spatial relationship between the inputs, would a smaller kernel size make more sense? Finally, is this approach relevant at all, or should I just stick to an LSTM? Thanks a lot!
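One way to frame this, sketched below, is to treat the 8 features as channels and the stacked time steps as the dimension the 1D convolution slides over; all sizes here are assumptions for illustration:

    import torch
    import torch.nn as nn

    past_steps = 16                       # number of stacked past perceptions (assumed)
    x = torch.randn(32, 8, past_steps)    # (batch, features-as-channels, time)

    net = nn.Sequential(
        nn.Conv1d(8, 32, kernel_size=3, padding=1),  # small kernel over time
        nn.ReLU(),
        nn.Conv1d(32, 32, kernel_size=3, padding=1),
        nn.ReLU(),
    )
    out = net(x)  # (32, 32, past_steps); an FC layer can map this to the prediction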
st104404
I'm trying to get validation to work within my training loop. Currently:

    def train(epoch):
        model.train()
        correct = 0
        train_loss = 0
        vloss = 0
        vcorrect = 0
        for batch_idx, (data, target) in enumerate(train_loader):
            if args.cuda:
                data, target = data.cuda(), target.cuda()
            data, target = Variable(data), Variable(target)
            optimizer.zero_grad()
            output = model(data, target)
            loss = F.nll_loss(output, target)
            loss.backward()
            optimizer.step()
            train_loss += F.nll_loss(output, target, size_average=False).item()  # sum up batch loss
            pred = output.data.max(1, keepdim=True)[1]  # get the index of the max log-probability
            correct += pred.eq(target.data.view_as(pred)).cpu().sum()
            # validation would most likely go here...
            if batch_idx % args.log_interval == 0:
                print('Time', time.time()-start, 'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.data[0]))
                # extra print for validation
                print('Time', time.time()-start, 'Valid Epoch: {} [{}/{} ({:.0f}%)]\tValid Loss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(validate_loader.dataset),
                    100. * batch_idx / len(validate_loader), loss.data[0]))

I want to add validation; I think it should look like this, evaluating every 10 steps:

    if batch_idx % 10 == 0:
        model.eval()
        for data, target in validate_loader:
            if args.cuda:
                data, target = data.cuda(), target.cuda()
            data, target = Variable(data), Variable(target)
            voutput = model(data)
            vloss += F.nll_loss(voutput, target, size_average=False).item()
            vpred = voutput.data.max(1, keepdim=True)[1]
            vcorrect += vpred.eq(target.data.view_as(vpred)).cpu().sum()

I have run into several problems. The first is a variable issue: data and target are used for both training and validation, but PyTorch errors out when I try to rename one of the data/target variables. Is this just implemented wrong in PyTorch?
st104405
Output:

    Time 0.12518644332885742 Train Epoch: 1 [0/41937 (0%)]    Loss: 2.533277
    Time 0.12536191940307617 Valid Epoch: 1 [0/4660 (0%)]     Valid Loss: 2.533277
    Time 0.2388451099395752 Train Epoch: 1 [600/41937 (2%)]   Loss: 2.197046
    Time 0.2390141487121582 Valid Epoch: 1 [600/4660 (21%)]   Valid Loss: 2.197046
    Time 0.3443450927734375 Train Epoch: 1 [1200/41937 (3%)]  Loss: 2.194630
    Time 0.34452247619628906 Valid Epoch: 1 [1200/4660 (43%)] Valid Loss: 2.194630
    Time 0.45102596282958984 Train Epoch: 1 [1800/41937 (5%)] Loss: 2.192434
    Time 0.45121240615844727 Valid Epoch: 1 [1800/4660 (64%)] Valid Loss: 2.192434
    Time 0.5626089572906494 Train Epoch: 1 [2400/41937 (6%)]  Loss: 2.190561
    Time 0.5628125667572021 Valid Epoch: 1 [2400/4660 (85%)]  Valid Loss: 2.190561
    Time 0.6735479831695557 Train Epoch: 1 [3000/41937 (8%)]  Loss: 2.186656
    Time 0.6737353801727295 Valid Epoch: 1 [3000/4660 (106%)] Valid Loss: 2.186656

The loss and valid loss are the same value. If I change some variables around:

    if batch_idx % 10 == 0:
        model.eval()
        for data, target in validate_loader:
            if args.cuda:
                vdata, vtarget = data.cuda(), target.cuda()
            vdata, vtarget = Variable(data), Variable(target)
            voutput = model(data)
            vloss += F.nll_loss(voutput, target, size_average=False).item()
            vpred = voutput.data.max(1, keepdim=True)[1]
            vcorrect += vpred.eq(target.data.view_as(vpred)).cpu().sum()
            print('\nTime', time.time()-start, 'Valid Epoch: {} [{}/{} ({:.0f}%)]\tValid Loss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(validate_loader.dataset),
                100. * batch_idx / len(validate_loader), vloss.data.item()))

I get:

    File "custom.py", line 215, in train
        voutput = model(data)
    File "/home//anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
    File "custom.py", line 167, in forward
        x = self.hidden1_bn(self.hidden1(x))
    File "/home//anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
    File "/home//anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
        return F.linear(input, self.weight, self.bias)
    File "/home//anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 994, in linear
        output = input.matmul(weight.t())
    RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'mat2'
st104406
In the first case it makes sense that the values are equal, since you are printing the same loss for training and validation: you never compute a separate validation loss and just print the training loss twice. The second approach looks better. The error is thrown because you are passing data to the model instead of vdata during validation.
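A condensed sketch of the corrected validation step; the names follow the code above, and args, F, Variable, and the loaders are assumed from the surrounding script:

    if batch_idx % 10 == 0:
        model.eval()
        for vdata, vtarget in validate_loader:
            if args.cuda:
                vdata, vtarget = vdata.cuda(), vtarget.cuda()
            vdata, vtarget = Variable(vdata), Variable(vtarget)
            voutput = model(vdata)  # use vdata here, not data
            vloss += F.nll_loss(voutput, vtarget, size_average=False).item()
            vpred = voutput.data.max(1, keepdim=True)[1]
            vcorrect += vpred.eq(vtarget.data.view_as(vpred)).cpu().sum()
        model.train()  # switch back before resuming training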
st104407
Hi, I am new to PyTorch and trying to run a simple CNN on the CIFAR10 dataset in PyTorch. However I am getting the error:

    RuntimeError: invalid argument 3: only batches of spatial targets supported (3D tensors) but got targets of dimension: 1 at /pytorch/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:60

Below is the relevant code:

    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=128,
                                              shuffle=True, num_workers=2)
    testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                           download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=128,
                                             shuffle=False, num_workers=2)
    classes = ('plane', 'car', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck')

    net = Net()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.5)

    for epoch in range(50):
        for batch_idx, (data, target) in enumerate(trainloader):
            data, target = Variable(data), Variable(target)
            print(data.shape)    # outputs torch.Size([128, 3, 32, 32])
            print(target.shape)  # outputs torch.Size([128])
            optimizer.zero_grad()
            net_out = net(data)
            loss = criterion(net_out, target)
            loss.backward()
            optimizer.step()
            if batch_idx % 10 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(trainloader.dataset),
                    100. * batch_idx / len(trainloader), loss.data[0]))

Can someone please suggest what's going wrong here? Thanks for your help in advance.
st104408
The expected input dimension is [batch_size, channels, height, width]. If you resize your input, make sure to add a channel dimension, even if it’s 1.
st104409
Actually, my input dimension is [128, 3, 32, 32] and the target is just [128]. I am not resizing the input, so is the target dimension the problem?
st104410
OK, from your code before the edit it looked like you were reshaping the data. Could you post your model definition? The error has to be there, since the other parts work with this dummy model:

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv = nn.Conv2d(3, 1, 3, 1, 1)
            self.fc1 = nn.Linear(1*32*32, 10)

        def forward(self, x):
            x = self.conv(x)
            x = x.view(x.size(0), -1)
            x = self.fc1(x)
            return x
st104411
Sure, the model is defined below:

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(3, 64, 8, stride=3, padding=3)      # 11x11x64
            self.conv1_bn = nn.BatchNorm2d(64)
            self.conv2 = nn.Conv2d(64, 128, 8, stride=4, padding=10)   # 7x7x128
            self.conv2_bn = nn.BatchNorm2d(128)
            self.conv3 = nn.Conv2d(128, 256, 8, stride=3, padding=0)   # 1x1x256
            self.conv3_bn = nn.BatchNorm2d(256)
            self.conv1x1 = nn.Conv2d(256, 10, 1)                       # 1x1x10

        def forward(self, x):
            x = self.conv1_bn(F.elu(self.conv1(x)))
            x = self.conv2_bn(F.elu(self.conv2(x)))
            x = F.dropout(x)
            x = self.conv3_bn(F.elu(self.conv3(x)))
            x = F.dropout(x)
            x = F.dropout(self.conv1x1(x))
            x.view(x.size()[0], -1)  # note: view() is not in-place, so this result is
                                     # discarded and the output stays 4-dimensional
            return F.log_softmax(x)
st104412
This model won’t work, since your output of self.conv2 will have dimensions [batch_size, 128, 6, 6]. Using a kernel_size=8 in self.conv3 will therefore throw an error.
st104413
Oh, I think I must have miscalculated the output size as [batch_size, 128, 7, 7]. Will introducing padding=1 in self.conv3 solve the problem and give me an output size of [batch_size, 1, 1, 256], since I have to use a kernel_size of 8?
st104414
Yes, this would yield an output of [batch_size, 256, 1, 1]:

    x = torch.randn(1, 128, 6, 6)
    conv = nn.Conv2d(128, 256, 8, stride=1, padding=1)
    output = conv(x)
    print(output.shape)
    > torch.Size([1, 256, 1, 1])
st104415
Hi @ptrblck, I changed the padding to 1 in conv3 but I am still getting the same error:

    /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce)
       1049     return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
       1050 elif dim == 4:
    -> 1051     return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
       1052 else:
       1053     raise ValueError('Expected 2 or 4 dimensions (got {})'.format(dim))

    RuntimeError: invalid argument 3: only batches of spatial targets supported (3D tensors) but got targets of dimension: 1 at /pytorch/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:60
st104416
Could you print the shapes of the tensors passed to NLLLoss, please? It seems your model output has an invalid shape, i.e. NLLLoss thinks it needs a spatial target, as in a segmentation task.
st104417
Create a matrix in the network as shown in the figure below. In the matrix, only the diagonal and the positions on either side of it have parameters; all other positions are zero. In the training phase, only the parameter positions are adjusted, and the zero values remain unchanged. The same symbol represents the same (shared) parameter.
st104418
How should this module be applied, considering it combines an activation together with a loss? The usual pipeline in PyTorch, once you have the model, is to create a criterion, e.g. criterion = nn.L1Loss(), then compute loss = criterion(output, groundtruth) and call loss.backward(). But how do you use it to properly apply both the activation and the loss?
st104419
It’s just a more numerically stable replacement for Sigmoid and BCELoss. Pass in the inputs that you would pass into Sigmoid if you were doing Sigmoid and then BCELoss.
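A minimal usage sketch; the tiny linear model and the tensor shapes below are placeholders for your own setup:

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 1)       # stand-in for the real model
    x = torch.randn(4, 16)         # dummy batch
    targets = torch.rand(4, 1)     # float targets in [0, 1]

    criterion = nn.BCEWithLogitsLoss()
    logits = model(x)              # raw logits, no sigmoid inside the model
    loss = criterion(logits, targets)
    loss.backward()
    # for probabilities at inference time: torch.sigmoid(logits)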
st104420
I am implementing the idea of the paper "A Closer Look at Spatiotemporal Convolutions for Action Recognition". It proposes replacing 3D convolution with an R(2+1)D convolution, which is implemented in Caffe2. My goal is to reproduce the result in PyTorch. A 3D convolution operates on 3 x t x h x w, where 3 means RGB, t is the number of frames, and h and w are height and width. R(2+1)D replaces it with two steps:

1. Convolution with a 1 x d x d kernel (d is the spatial kernel size; 1 means a single frame).
2. A t x 1 x 1 convolution applied to the output feature map.

In PyTorch, the 3D convolution can be written as:

    self.conv3d = nn.Conv3d(in_channels, out_channels, kernel_size=3, stride=1,
                            padding=1, bias=bias)

This is my implementation intended to be equivalent to the above:

    self.spatial_conv = nn.Conv2d(in_channels, intermed_channels, kernel_size=3,
                                  stride=1, padding=1, bias=bias)
    self.bn = nn.BatchNorm2d(intermed_channels)
    self.relu = nn.ReLU()
    self.temporal_conv = nn.Conv3d(intermed_channels, out_channels,
                                   temporal_kernel_size, stride=temporal_stride,
                                   padding=temporal_padding, bias=bias)

Am I correct? Thanks.
st104421
Hi all, I was training a VGG-16 on the CIFAR10 image classification task with a GTX 1080. However, I discovered a rather weird phenomenon: the original VGG-16 has convolution filter counts [64, 128, 256, 512, 512] and takes roughly 35 seconds per training epoch. But when I reduced those filter counts to [32, 64, 128, 256, 256], [16, 32, 64, 128, 128], or [8, 16, 32, 64, 64], they all take only 10 seconds per epoch. Intuitively, I thought training time should keep decreasing as the network gets fewer parameters. Is this a common situation? Thanks.
st104422
The timing might be masked by another bottleneck in your code. The training procedure on the GPU might be fast, but the overall training routine has to wait e.g. for the DataLoader to provide a new batch.
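One way to check this, sketched below, is to time the GPU work explicitly. Without synchronization, CUDA calls return asynchronously and the measured time mostly reflects the CPU side; the small conv layer and batch shape here are placeholders:

    import time
    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 64, 3, padding=1).cuda()   # stand-in for VGG-16
    batch = torch.randn(64, 3, 32, 32).cuda()

    torch.cuda.synchronize()                # finish pending GPU work first
    start = time.time()
    output = model(batch)
    torch.cuda.synchronize()                # wait for the GPU before stopping the clock
    print('forward took', time.time() - start, 's')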
st104423
I understand that argument. However, the implementation and the code are identical across runs; the only difference is the filter count defined in the network class. Maybe another bottleneck masks the timing, so the runtime behaves like a step function of the filter count rather than a continuous one.
st104424
Hi, I am trying to compute this multiplication of a vector with a matrix:

    f = torch.tensor([[-1.], [1]])
    r = torch.tensor([[11., 21], [21, 22]])
    f.matmul(r)

However I am getting this error:

    size mismatch, m1: [2 x 1], m2: [2 x 2] at c:\programdata\miniconda3\conda-bld\pytorch-cpu_1524541161962\work\aten\src\th\generic/THTensorMath.c:2033

What works and produces the right result is:

    f = torch.tensor([[-1., 0], [1, 0]])
    r = torch.tensor([[11., 21], [21, 22]])
    f.matmul(r)

I tried using mm() and mv() with similar results. Is there an alternative? Is there an easy way to expand f by adding 0s? Thanks, Alberto
st104425
bgobbi: m1: [2 x 1], m2: [2 x 2]

As a rule, matrix multiplication only works when the neighbouring dimensions coincide. So f.t() will be 1 x 2 and can be multiplied from the right with the 2 x 2 r to give a 1 x 2 result.

Best regards
Thomas
st104426
Thanks Tom! However, that is not what I need to do. I need to do (note the extra 0s in f):

    f = torch.tensor([[-1., 0], [1, 0]])
    r = torch.tensor([[11., 21], [21, 22]])
    f.matmul(r)

which would be equivalent to:

    f = torch.tensor([[-1.], [1]])
    r = torch.tensor([[11., 21], [21, 22]])
    r.matmul(f.t()).t()

However, this also will not work:

    RuntimeError: size mismatch, m1: [2 x 2], m2: [1 x 2]

It seems I really need to fill up with 0s. Alberto
st104427
I found the solution: all I need to do for this to work is f * r. This is what I really wanted to do.
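For reference, this works because elementwise multiplication broadcasts the (2, 1) column vector across the columns of the (2, 2) matrix:

    import torch

    f = torch.tensor([[-1.], [1.]])             # shape (2, 1)
    r = torch.tensor([[11., 21.], [21., 22.]])  # shape (2, 2)
    print(f * r)
    # tensor([[-11., -21.],
    #         [ 21.,  22.]])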
st104428
Hi, I have a text corpus and a score for each sentence in it. I want to build an RNN model to predict these scores, and I have written some code for it. In the training phase, a batch of data (sentence-score samples) is given to my classifier by:

    output = classifier(input, seq_lengths)

input is a tensor in which each row is a sequence of word embedding vectors for the words in a sentence; seq_lengths is a list of the sentence lengths. input has size batch_size x (max_length_of_sentences * word_embedding_vector_length). But I get an error when running this line of code:

    embedded = self.embedding(input)

The error is:

    Traceback (most recent call last):
      File "/home/mahsa/PycharmProjects/PyTorch_env_project/Thesis/proj2/mahsa_rnn_sent_classification.py", line 277, in <module>
        train()
      File "/home/mahsa/PycharmProjects/PyTorch_env_project/Thesis/proj2/mahsa_rnn_sent_classification.py", line 238, in train
        output = classifier(input, seq_lengths)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mahsa/PycharmProjects/PyTorch_env_project/Thesis/proj2/mahsa_rnn_sent_classification.py", line 212, in forward
        embedded = self.embedding(input)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 94, in forward
        self.scale_grad_by_freq, self.sparse
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/_functions/thnn/sparse.py", line 53, in forward
        output = torch.index_select(weight, 0, indices.view(-1))
    TypeError: torch.index_select received an invalid combination of arguments - got (torch.FloatTensor, int, !torch.FloatTensor!), but expected (torch.FloatTensor source, int dim, torch.LongTensor index)

Can you guide me?
st104429
Hi, The problem is that the input you give to the embedding layer is not a LongTensor. You should convert your input before giving it to the embedding layer: input.long().
st104430
Hi, thanks. But my input consists of word embedding vectors, which are floats, not ints.
st104431
From the doc, the embedding layer takes as input the indices of the embeddings it should output the representation for. Isn't that what you give as input?
st104432
Thanks @albanD. Your doc saved me from a huge mistake: I was feeding the wrong input to the embedding layer. But now I get a new error when computing the loss:

    Traceback (most recent call last):
      File "/home/mahsa/PycharmProjects/PyTorch_env_project/Thesis/proj2/mahsa_rnn_sent_classification.py", line 271, in <module>
        train()
      File "/home/mahsa/PycharmProjects/PyTorch_env_project/Thesis/proj2/mahsa_rnn_sent_classification.py", line 234, in train
        loss = criterion(output, target)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 482, in forward
        self.ignore_index)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/functional.py", line 746, in cross_entropy
        return nll_loss(log_softmax(input), target, weight, size_average, ignore_index)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/functional.py", line 672, in nll_loss
        return _functions.thnn.NLLLoss.apply(input, target, weight, size_average, ignore_index)
      File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/_functions/thnn/auto.py", line 47, in forward
        output, *ctx.additional_args)
    TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, !torch.DoubleTensor!, torch.FloatTensor, bool, NoneType, torch.FloatTensor, int), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight, int ignore_index)

Sorry that this problem differs from the topic, but can you guide me? Why should my target be torch.Long? My targets are the scores for sentences, and they are floats.
st104433
It's actually a very similar error to the previous one. As you can see in the error message:

    TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, !torch.DoubleTensor!, torch.FloatTensor, bool, NoneType, torch.FloatTensor, int), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight, int ignore_index)

the function you call when doing loss = criterion(output, target) got a Float and a Double tensor as input (note that the DoubleTensor has some ! around it), while it was expecting a Float and a Long tensor. As you can see, the LongTensor is for the target. So the second argument for your criterion, which contains the target, should be a LongTensor and not a DoubleTensor. Double-check the doc for the criterion you use to make sure its expected target matches what you give it. In general in PyTorch, every tensor that contains indices will be a LongTensor.
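As an aside, since the scores here are continuous values rather than class indices, a regression criterion that takes a float target may be the better fit; a minimal sketch with dummy tensors standing in for the model output and the scores:

    import torch
    import torch.nn as nn

    output = torch.randn(8, 1, requires_grad=True)  # dummy predicted scores
    target = torch.rand(8, 1)                       # float sentence scores

    criterion = nn.MSELoss()                        # float target, no class indices
    loss = criterion(output, target)
    loss.backward()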
st104434
Hi, yes, I double-checked my code and the problem was solved. Thanks for the valuable comments.
st104435
Hello, I'm trying to use a DataLoader but can't figure out how it works. I split my dataset 75% train / 25% test, and now I use it like this (code below). My question: how does the DataLoader identify the label (objective, class)? Is it by default the last column in the tensor, or do I have to specify it?

P.S.: I'm new to PyTorch and DL; is it good to use shuffle for the training set?

    train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                               batch_size=batch_size,
                                               shuffle=True)
    test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                              batch_size=batch_size,
                                              shuffle=False)
st104436
I don't know if you implemented train_dataset yourself, but if you did, you should notice that the dataset class must implement two special methods, __len__ and __getitem__, which let the DataLoader navigate and fetch the dataset the way you want. I am experimenting with PyTorch as well, so feel free to correct me if I am wrong. As for shuffling the training dataset, it depends on what you want to do: e.g. if you train on sequential data you don't want it on, but if you are learning a classification network you should turn it on.
st104437
Hello @inkplay, thank you for replying. To describe my dataset so you understand better: I have [examples, dates, features] (e.g. [1920, 23, 20]), where the examples are agricultural lands and the 20 features are the mean and variance for different bandwidths (10 bandwidths, 2 * 10), and these measurements are taken on different dates throughout the year. My targets (labels, classes) are [examples, dates, class], e.g. [1920, 23, 1], with 12 classes in total. After cleaning, splitting, and scaling (normalizing) my data, I want to train an RNN for classification (simple tanh cells; just like you I'm experimenting, and I'll try GRU and LSTM after that). But as far as I can see from different code online, everyone seems to use the DataLoader. Can it be replaced, or how can I specify to it which row in the tensor is my class? Thank you.
st104438
You have to define a dataset: https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset

A Dataset is an iterator prepared to work with the DataLoader. You set up your dataset like this:

    class BinaryData(torch.utils.data.Dataset):
        def __init__(self, root_dir, transform):
            # Define the basics of your dataset here, such as the root directory.
            # I enlist all the files inside my dataset and use this list for
            # reading the data later.
            self.input_list = []
            self.transform = transform
            for path, subdirs, files in os.walk(root_dir):
                for name in files:
                    self.input_list.append(os.path.join(path, name))

        def __len__(self):
            return len(self.input_list)

        def __getitem__(self, idx):
            # Load your data here using whatever you want; you can take this one
            # I did as an example, where I was loading Python dictionaries. You
            # can return the data with whatever structure you want.
            dic = data2dic(np.load(self.input_list[idx]))
            audio = dic['audio']
            if audio.shape != (256, 256):
                audio = np.resize(audio, (256, 256))
            frames = dic['frames']
            size = np.shape(frames)
            images = []
            for i in range(size[3]):
                images.append(self.transform(Image.fromarray(frames[:, :, :, i])))
            frames = torch.stack(images)
            return audio, frames

Independently of how your data is stored, you can manage and modify it inside these functions. __getitem__ is wrapped by the DataLoader, so each time you iterate over the DataLoader it will provide data with the structure of __getitem__'s return value.
st104439
Follow this tutorial https://pytorch.org/tutorials/beginner/data_loading_tutorial.html to create your dataset class; basically just switch the image dataset they use with your csv or text dataset file.
st104440
Hi there, I'm trying to build a regression model for predicting a one-dimensional timeseries from multiple timeseries signals, using a GRU. However, I have a couple of problems/questions:

1. If I train my model in batches, do I also have to predict in batches? (If not, how can I predict without using a batch?)
2. How can I include batch normalization for the GRU?

Also, I'm quite new to PyTorch; if anyone has advice on how to improve my code, I'll be thankful for any comment. This is my entire code:

    import torch
    from torch import nn
    from torch.autograd import Variable
    import torch.nn.functional as F
    import numpy as np
    import matplotlib.pyplot as plt
    import torch.optim as optim

    class GRU_BF(nn.Module):
        def __init__(self, input_size, hidden_size, num_layers, time_steps, bidirectional=False):
            super().__init__()
            self.input_size = input_size
            self.hidden_size = hidden_size
            self.num_layers = num_layers
            self.time_steps = time_steps
            self.bidirectional = bidirectional
            self.GRU_lyrs = nn.GRU(input_size=input_size, hidden_size=hidden_size,
                                   num_layers=num_layers, batch_first=True,
                                   bidirectional=bidirectional)
            self.outp = nn.Linear(time_steps, time_steps)
            self.outp2 = nn.Linear(time_steps, time_steps)

        def forward(self, x, hstate):
            x, hstate = self.GRU_lyrs(x, hstate)
            hstate = Variable(hstate.data)
            x = F.relu(self.outp(x[:, :, -1]))
            x = self.outp2(x)
            return x, hstate

    class SamplerBF():
        def __init__(self):
            self.count = 0

        def sample(self, batchsize):
            X1, X2, Y = [], [], []
            num_ts = 400
            for i in range(batchsize):
                x = np.linspace(self.count * np.pi, (self.count + 4) * np.pi, num_ts)
                x1 = np.sin(x)
                X1.append(x1)
                x2 = np.cos(x) + 2
                X2.append(x2)
                Y.append(x1 / x2)
                self.count += 1
            return np.swapaxes(np.array([X1, X2]).T, 0, 1), np.array(Y)

    input_dim = 2
    hidden_size = 200
    num_layers = 2
    time_steps = 400
    lossvals = []
    S = SamplerBF()
    mynn = GRU_BF(input_size=input_dim, hidden_size=hidden_size,
                  num_layers=num_layers, time_steps=time_steps)
    h_state = None
    optimizer = optim.Adam(mynn.parameters(), lr=0.01, weight_decay=0)
    scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.954)

    # train
    for i in range(20):
        mynn.train()
        x, y = S.sample(30)
        X = torch.Tensor(x)
        Y = torch.Tensor(y)
        Y_pred, h_state = mynn(X, h_state)
        scheduler.step()
        error = nn.MSELoss()
        loss = error(Y_pred, Y)
        mynn.zero_grad()
        loss.backward()
        optimizer.step()
        lossvals.append(loss.item())
        print("It: ", i, lossvals[-1])
    plt.semilogy(lossvals)

    # predict/eval
    with torch.no_grad():
        mynn.eval()
        # I would love to predict without using the same amount of batches
        # that I used for training
        x, y = S.sample(30)
        X = torch.Tensor(x)
        Y = torch.Tensor(y)
        Y_pred, h_state = mynn(X, h_state)
        for i in range(2):
            fig, ax = plt.subplots(1, 1, figsize=(12, 6))
            ax.plot(y[i], "ko", Y_pred[i].data.numpy(), "r*")
            plt.show()

Thanks for any hint in advance! Have a good one!
st104441
I ran a comparison in 1D and 2D of PyTorch convolution with simple PyCUDA convolution kernels in a Colaboratory GPU (K80) notebook. In these tests, PyTorch is significantly slower than PyCUDA, even though my PyCUDA kernels are very simple implementations (everything in global memory, etc.). I think I am doing everything right: the PyTorch tensors are on the GPU, I have CuDNN benchmarking enabled, and I synchronize before measuring the runtime, so I am surprised by the performance difference. I would be grateful if you looked at the (quite short and simple) notebook to see if I made a mistake: https://colab.research.google.com/drive/1oF9wH0UDGqWanmZ2B04YH8U0RxEydR9_
st104442
    checkpoint = torch.load('model_best.pth.tar', map_location=lambda storage, loc: storage)

This gives me an AssertionError: CUDA not enabled.
st104443
Hello, I'm new to PyTorch and deep learning, and I have four questions:

1. Is it better to scale all your data and then split into training and testing sets, or the other way around?
2. How do I use data augmentation effectively?
3. I have missing values (NaN) in my datasets. I just removed the lines where they appeared (8 out of 9230), so in my opinion they don't have much impact, or do they?
4. I'm doing classification using different types of neural nets. I have 12 classes in total, but some are very rare compared to the others; is there a way to make the model more robust to rare classes?

Thank you.
st104444
Solved by ptrblck in post #2.
st104445
1. I assume scaling is done for normalization. If so, you should split your data first and then scale it. Otherwise you will leak the test information into your training procedure.
2. Have a look at the data loading tutorial. Basically you create a Dataset and add your data augmentation in the __getitem__ function. The DataLoader can use multi-processing to speed up the loading and transformations.
3. It really depends on the data. Since you only remove 8 out of 9k, it seems not to be that bad. Alternatively, try to fill the NaNs with the mean or median of the feature from your training set.
4. You can use the weight argument for different loss functions. Alternatively, you can over- or undersample your classes using WeightedRandomSampler. I've created a small tutorial on this topic, which is not released yet, but might give you a good start.
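A sketch of the class-weighting idea for the 12-class case; the counts below are made up for illustration, and the inverse-frequency scheme is one common choice, not the only one:

    import torch
    import torch.nn as nn

    # hypothetical number of training samples per class (12 classes)
    class_counts = torch.tensor([900., 850., 40., 25., 700., 650.,
                                 30., 500., 450., 20., 400., 15.])
    # rarer classes get proportionally larger weights
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    criterion = nn.CrossEntropyLoss(weight=weights)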
st104446
This is a snippet from my network code:

    conv_block += [nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=1, padding=1),
                   nn.BatchNorm2d(out_dim),
                   nn.ELU()]

with BatchNorm2d. The network gave good results for the first few training epochs, but after that it became unstable. During debugging, I removed BatchNorm2d from the network to analyze the effect:

    conv_block += [nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=1, padding=1),
                   nn.ELU()]

but the results were very bad and not even comparable with the BN version. Why is that?
st104447
BatchNorm layers normalize the input activations, so that learning is accelerated or even made feasible. The original paper claims this is due to the reduction of internal covariate shift. Recent papers claim that explanation is bogus and give another perspective on the functionality. Have a look at the original paper for more information.
st104448
Is there a way I can load a PyTorch model checkpoint in TensorFlow? Or at least, can I extract the weights from my PyTorch checkpoint, save them in a .h5 file, and use them in Keras via model.load_weights?
st104449
You could export your model to ONNX and load it into TensorFlow. Exporting the weights works in theory, too, but it is not uncommon that there are subtle differences in how the operators work (e.g. padding for convolutions, …).

Best regards
Thomas
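A minimal export sketch; the small conv layer and the input shape are placeholders for your network:

    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 8, 3, padding=1)        # stand-in for the real network
    model.eval()
    dummy_input = torch.randn(1, 3, 224, 224)    # a sample input of the right shape
    torch.onnx.export(model, dummy_input, "model.onnx")
    # model.onnx can then be loaded on the TensorFlow side,
    # e.g. with the onnx-tensorflow backend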
st104450
Code:

    import torch
    import torch.nn.functional as F
    from torch.autograd import Variable

    x = torch.arange(0, 9).view(1, 1, 3, 3)
    x = x.cuda().float()
    o = F.avg_pool2d(Variable(x), kernel_size=3, stride=1, padding=1)

    >>> x
    (0 ,0 ,.,.) =
      0  1  2
      3  4  5
      6  7  8
    [torch.cuda.FloatTensor of size 1x1x3x3 (GPU 0)]

PyTorch '0.5.0a0+d365158' (GitHub latest source as of 19-Jun-2018, 8.57 IST) [WRONG]:

    o = tensor([[[[2.0000, 2.5000, 3.0000],
                  [3.5000, 4.0000, 4.5000],
                  [5.0000, 5.5000, 6.0000]]]], device='cuda:0')

PyTorch '0.1.12+6f6d70f' [CORRECT]:

    o = Variable containing:
    (0 ,0 ,.,.) =
      0.8889  1.6667  1.3333
      2.3333  4.0000  3.0000
      2.2222  3.6667  2.6667
    [torch.cuda.FloatTensor of size 1x1x3x3 (GPU 0)]

It seems the old PyTorch version ('0.1.12+6f6d70f') gives the right behaviour. Does this mean the bug was re-introduced / the files are not synched?
st104451
Answer: there is a parameter in F.avg_pool2d() to handle this: count_include_pad. According to the documentation (https://pytorch.org/docs/stable/nn.html#torch.nn.AvgPool2d), the default is count_include_pad=True, but the actual behaviour corresponds to count_include_pad=False. Either the documentation has to be modified or the default parameter has to be count_include_pad=True.
st104452
Hi, thank you for researching this! I'm not 100% certain what the default should be, but consistency between the doc and the implementation would certainly be good. I submitted an issue for this on GitHub: pytorch/pytorch, "avg_pool2d/3d padding behaviour default", opened by t-vi on 2018-06-19 ("Spotted by InnovArul/Arulkumar on the forums, thank you! The documentation for avg_pool2d/3d says count_include_pad: when True, will include the zero-padding in the averaging...").

Best regards
Thomas
st104453
Hi, I installed PyTorch 0.5 using setup.py, following the official guide. My system has CUDA 9.2 installed, included in my bash env by setting LD_LIBRARY_PATH and PATH as described in the CUDA installation guide from NVIDIA.

NOTE: I upgraded from CUDA 8.0 to CUDA 9.2. The files of CUDA 8.0 are not deleted, but my bash env does not export any CUDA 8.0 related directories:

    (pyt_gh) chofer@eisbaer:~$ env | grep cuda
    LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64
    PATH=/scratch2/chofer/opt/anaconda3/envs/pyt_gh/bin:/scratch2/chofer/opt/anaconda3/bin:/usr/local/cuda-9.2/bin:/usr/local/sbin:/usr/local/bin:/usr/local/nfs/sbin:/usr/local/nfs/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/pgi/linux86-64/14.9/bin:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/home/pma/chofer/opt

However, after compilation I get an import error:

    >>> import torch
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/scratch2/chofer/opt/anaconda3/envs/pyt_gh/lib/python3.6/site-packages/torch/__init__.py", line 78, in <module>
        from torch._C import *
    ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory

Adding /usr/local/cuda-8.0/lib64 to LD_LIBRARY_PATH resolves the problem. It seems to me that PyTorch 0.5 links against two CUDA versions, i.e. 8.0 + 9.2. Is this a bug? (It would mean that one has to have both versions installed for PyTorch to run.) Or is this just happening because the CUDA 8.0 directories are not deleted? It may be related to https://github.com/pytorch/pytorch/issues/1941

My modifications to the installation routine: I installed into a virtual env created by conda (env_name = pyt_gh) and changed:

    export CMAKE_PREFIX_PATH="/scratch2/chofer/opt/anaconda3/envs/pyt_gh/"
    conda install -c pytorch magma-cuda91
st104454
Maybe this is a silly question, but how can we sum over multiple dimensions in pytorch? In numpy, np.sum() takes a axis argument which can be an int or a tuple of ints, while in pytorch, torch.sum() takes a dim argument which can take only a single int. Say I have a tensor of size 16 x 256 x 14 x 14, and I want to sum over the third and fourth dimensions to get a tensor of size 16 x 256. In numpy, one can do np.sum(t, axis=(2, 3)), what is the pytorch equivalent?
st104455
AFAIK there aren't such methods, so you can only do t.sum(2).sum(2) or t.view(16, 256, -1).sum(2).
st104456
Indeed, I’ve always used the t.sum(2).sum(2), but the view() solution looks cleaner, thanks!
st104457
numpy's sum() is like tensorflow's reduce_sum. Unfortunately, right now pytorch's sum doesn't support summing over multiple axes at once (it's being worked on, though!), but there are ways to be creative with this.
st104458
richard: but there are ways to be creative with this

What do you mean by that?
st104459
One way to do this, depending on the number of dimensions your tensor has, is to do it by hand with for loops, after transposing the matrix so the dimension you want to reduce over is last. For example:

    import torch
    x = torch.randn(2, 2, 2)
    # Let's say I want to sum over the 1st dimension (0-indexed)
    x = x.transpose(1, 2)
    torch.Tensor([[z.sum() for z in y] for y in x])
st104460
Hope no one minds, but I thought I'd share my "creative solution":

    def multi_max(input, axes, keepdim=False):
        '''Performs `torch.max` over multiple dimensions of `input`.'''
        axes = sorted(axes)
        maxed = input
        for axis in reversed(axes):
            maxed, _ = maxed.max(axis, keepdim)
        return maxed

This style should work for sum as well.
st104461
Really? What version of PyTorch? I have v0.4.0 and neither of the following worked:

    import torch
    x = torch.rand(5, 5, 5)
    torch.sum(x, (1, 2))
    y = x.sum((1, 2))

Both complained about wrong types. I'd like a built-in way to do this (that's why I ended up on this page, after all).
st104462
Ah, sorry, this was only merged a week after 0.4. So you would either need to build your own or wait for 0.5.

Best regards
Thomas
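For reference, once that merge is in your build (post-0.4.0 master and later releases), the tuple form works directly; a quick sketch with a fallback for older versions:

    import torch

    x = torch.rand(16, 256, 14, 14)
    # on versions that include the multi-dim reduction:
    y = x.sum(dim=(2, 3))               # -> torch.Size([16, 256])
    # portable fallback for older versions:
    y_old = x.view(16, 256, -1).sum(2)  # same result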
st104463
I recently ported monodepth, which is written in TensorFlow, to PyTorch (ported code on GitHub). But training seems less stable than in the TensorFlow implementation (see the GitHub issue). For the first 10-20 epochs of training the results look promising, but it gets very unstable after that. Could anyone help figure out the issue?
st104464
The input to the network module is of type torch.autograd.variable.Variable:

    >>> print(type(input))
    <torch.autograd.variable.Variable>

During forward() I unpacked the input with .data to use torch.index_select on it:

    pix = torch.index_select(input.data, 0, idx)

The output value of forward() is a torch.cuda.FloatTensor:

    >>> output = model.forward(input)
    >>> print(type(output))
    <torch.cuda.FloatTensor>

Will unpacking the input affect the flow of gradients during backward()?
st104465
Yes, you should never unpack Variables by hand when computing stuff. Also you should not call the forward method directly on an nn.Module, you should do output = model(input).
st104466
Can I unpack the variable after backward()?

    for iter in range(100):
        output = model(input)
        loss = loss_cal(input, output)
        loss.backward()
        save_image(output.data.cpu().numpy())
st104467
I recently came across the NVIDIA pix2pixHD PyTorch code, where they call forward directly:

    fake_image = self.netG.forward(input_concat)

and:

    pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1))

Why so?
st104468
I have been following the pix2pixHD PyTorch code for a while now. I'm not quite sure why they use:

    model.module.optimizer_G.zero_grad()  # instead of model.optimizer_G.zero_grad()
    model.module.optimizer_D.zero_grad()  # instead of model.optimizer_D.zero_grad()

I couldn't find module anywhere in their repo. Is it a PyTorch thing?
st104469
There has been a lot of work on different data representations, such as fixed-point and floating-point with various bit widths, for efficient training and inference, especially in deep neural networks. I have implemented customizable data types and arithmetic operations in Python where the user can choose a data type, customize the number of bits allocated to each field, and define how arithmetic operations are performed (e.g., truncate a specified number of bits before computation). My implementation can be integrated with NumPy by creating arrays where the dtype is object instead of NumPy's default data types. I have recently started using PyTorch and really like it compared to other machine learning libraries. I would like to integrate my code into PyTorch, but I'm not sure where to begin. I have a few questions that will hopefully help me throughout this process:

1. What are the most relevant parts of the source code I should start reading?
2. Can I use my current Python implementation and still run it on GPUs, or should I re-implement it in C/C++/CUDA?

Any other pointers and comments are very welcome. Thanks!
st104470
Interesting idea. I think most library functions are only available in 64-bit and 32-bit (and maybe 16-bit from CUDA). So it is possible to trim the end results to a certain number of bits, but the intermediate computation results are probably very difficult to restrict to a certain number of bits. I'm wondering whether you can get your code to work with NumPy's BLAS functions.
st104471
Currently, I have defined separate classes for each data type where the number of bits for each field can be customized. I understand that trimming existing data types like float64 is a solution, but I'd rather avoid that because a user may choose a bit width for the exponent that is larger than what the IEEE standard supports. Can you please elaborate your last sentence a little bit, or give a snippet of code that I can run using my data types? If you mean something like doing matrix multiplication with NumPy on my data types, I don't think the current implementation works, because my NumPy is compiled to use OpenBLAS, but matrix multiplication is performed using a single core, so I assume NumPy is using its own engine rather than OpenBLAS. I think my implementation is too simple at the moment because it simply defines basic operations such as __add__, __sub__, etc. on the data types. I'm not familiar with BLAS, but I assume there's more I have to do to make use of OpenBLAS, MKL, etc.
st104472
Yeah, those libraries are usually compiled with 32bit or 64bit floating point types in C and/or FORTRAN. It is probably nontrivial to define efficient arbitrary length floating types in those languages, especially considering memory alignment.
st104473
All the classes I have implemented use __slots__ to define member variables. As a result, the memory consumed by each object is determined at the time of creation and is typically a multiple of 8 bytes. Do you think this will solve the problem? Besides, what I had in mind was to let users try different representations and arithmetic operations in Python. Later on, they can implement their optimized design on an FPGA. The goal I’m trying to achieve at this point is to run Python code on GPU to gain a huge speedup.
st104474
I just remembered I had seen the PyTorch internals article a while ago. It explains how to add a new data type in CPython and integrate it into PyTorch. For basic data types such as Float, the THGenerateFloatType.h header file has a line that looks like this:

    #define real float

where all instances of real in Tensor.cpp are replaced by float when the code is automatically generated. Can I define my own data type using something like this:

    typedef struct {
        PyObject_HEAD
        int first;
        int second;
    } PyMyObject;

and creating a new header file that has a line that looks like this?

    #define real PyMyObject
st104475
Thread marked. This function is just what I want. I have no mature solution yet; I shall keep an eye on this.
st104476
How can I set the values the LSTM hidden state can take, so they are from -x to x, where x is a float and the distribution is uniform? The exact same thing can be achieved for nn.Embedding as follows:

    initrange = 0.5 / 100
    embeddings.weight.data.uniform_(-initrange, initrange)
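The same in-place uniform_ call works on any tensor, so a sketch for explicitly created hidden states; the sizes here are placeholders:

    import torch

    num_layers, batch_size, hidden_size = 2, 8, 100
    x = 0.5 / 100
    h0 = torch.zeros(num_layers, batch_size, hidden_size).uniform_(-x, x)
    c0 = torch.zeros(num_layers, batch_size, hidden_size).uniform_(-x, x)
    # pass them to the LSTM: output, (hn, cn) = lstm(input, (h0, c0))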
st104477
Hey PyTorch community, the following code takes too long for my usage:

    import timeit
    u = timeit.Timer('torch.zeros(10,10,20)', setup='import torch')
    u.timeit()
    # note: Timer.timeit() executes the statement 1,000,000 times by default,
    # so the value returned here is the total time, not the time per call

Since the upgrade to 0.4 this has an average runtime of 2.3 seconds. Did I miss something?

Best regards,
Magnus
st104478
I tried your code and you are right: it does take around 2 seconds if you measure it using the timeit.Timer function. But if you try it as plain PyTorch code, as in:

    import torch
    a = torch.zeros(10, 10, 20)

then it's pretty much instantaneous. It seems the timeit.Timer function adds some weird time overhead in this case. Maybe someone with more knowledge than me could clarify why that is the case.

Best regards,
diego
st104479
Hey @Diego, the Timer was just to demonstrate that it really takes this long, but I agree with you that there is a lot of overhead; I'll edit the post. I work a lot with RNNs, and as you might know, you initialize your hidden (and cell) state, for instance, with torch.zeros tensors. After the update to 0.4 my code got extremely slow. I profiled it, and it turned out that torch.zeros was the problem… Every time I initialized my hidden state with torch.zeros(3, 80, 1), it took 0.5 seconds. What a disaster.

Best regards,
Magnus
st104480
That is very weird. Maybe you should post a snippet of the code (if possible) so that we can figure out where the problem is. I honestly don't think the torch.zeros function was changed so drastically as to cause that worsening in performance with the update. I could be wrong, though.
st104481
Yeah, somehow the setup in the timeit function does not seem to work properly for PyTorch. Maybe it's because of the C extensions and that it doesn't import them completely before starting the timing and executing torch.zeros, or something like this. Using the timeit magic command via IPython gives more reasonable results:

    In [1]: import torch
    In [2]: %timeit torch.zeros(10,10,20)
    2.09 µs ± 31.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
st104482
In the following very simple example, why does the hidden state consist of 2 tensors? From what I understand, isn't it supposed to be just a tensor of size 20?

    import torch
    import torch.autograd as autograd
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torch.autograd import Variable

    rnn = nn.LSTM(input_size=10, hidden_size=20)
    input = Variable(torch.randn(50, 1, 10))  # seq_len x batch x input_size
    output, hn = rnn(input)
    print(hn)
st104483
An LSTM has both h and c hidden states; see the Wikipedia article on long short-term memory. A common LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate; the cell is responsible for "remembering" values over arbitrary time intervals.
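Concretely, hn in the example above is a tuple; a sketch of unpacking it, continuing that code:

    output, (h_n, c_n) = rnn(input)
    # h_n: final hidden state, shape (num_layers * num_directions, batch, hidden_size)
    # c_n: final cell state, same shape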
st104484
Can you please explain the difference between the two and what each is suited for? Should I just concatenate them?
st104485
I am implementing a model based on Memory Networks. I have triplets of (context, query, answer), and I want to calculate attention; the attention indicates which sentences in a context should be focused on. To form mini-batches, I use zero-padding to create the context data. The following is the attention data; the 0 values are the result of embedding the zero-padded context. For such data, I want to apply softmax only to indices 0, 1, 2, 3, and the last one, so the model ignores the zero-padded columns. How do I realize this? I want to know the technique for combining zero-padding with attention mechanisms. Sorry, this may be a general deep learning question rather than a PyTorch one.

Before softmax:

    torch.bmm(contex, q)
    109.8601
     77.6376
     68.3927
    199.1673
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
      0.0000
    348.0155
    [torch.cuda.FloatTensor of size 15 (GPU 0)]

After softmax:

    F.softmax(torch.bmm(contex, q))
    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    0
    1
    [torch.cuda.FloatTensor of size 15 (GPU 0)]
st104486
From what I understand of your question, you want to apply softmax only to the non-zero values in a tensor. Assuming the tensor is named a, you can use:

    a = a - torch.where(a > 0, torch.zeros_like(a), torch.ones_like(a) * float('inf'))

This makes every zero value -inf; applying softmax over the result then gives zero at those positions.
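An equivalent variant using masked_fill, shown on abbreviated scores in the spirit of the example above:

    import torch
    import torch.nn.functional as F

    scores = torch.tensor([109.9, 77.6, 68.4, 199.2, 0., 0., 348.0])
    pad = scores == 0                           # True at zero-padded positions
    masked = scores.masked_fill(pad, float('-inf'))
    attn = F.softmax(masked, dim=0)             # padding gets exactly zero weight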
st104487
I'm trying to utilize intermediate feature representations from different points along a CNN pipeline. Is it OK to create multiple feature extractors from the same pre-trained CNN? Will the parameter updates be performed correctly in this example (see below)?

    class Net(nn.Module):
        def __init__(self, dim_l1=256, dim_l2=512, dim_l3=512, dim_fc=512):
            super(Net, self).__init__()
            vgg = models.vgg19(pretrained=True)
            self.global_feature_extractor = nn.Sequential(*list(vgg.features))
            self.local_feature_extractor1 = nn.Sequential(*list(vgg.features)[:15])
            self.local_feature_extractor2 = nn.Sequential(*list(vgg.features)[:-13])
            self.local_feature_extractor3 = nn.Sequential(*list(vgg.features)[:-6])
            self.fc1 = nn.Linear(dim_fc, dim_fc)
            self.fc2 = nn.Linear(dim_l1 * 64 + dim_l2 * 16 + dim_l3 * 4, dim_fc)
            self.fc3 = nn.Linear(dim_fc, 10)

        @staticmethod
        def some_function(x1, **kwargs):
            ...
            return some_output

        @staticmethod
        def flatten(x):
            return x.view(-1, int(np.prod([i for i in x.size()[1:]])))

        def forward(self, x):
            local_features1 = self.local_feature_extractor1(x)
            local_features2 = self.local_feature_extractor2(x)
            local_features3 = self.local_feature_extractor3(x)
            end_features = self.fc1(self.global_feature_extractor(x).squeeze())
            some_output1 = self.some_function(local_features1, **kwargs1)
            some_output2 = self.some_function(local_features2, **kwargs2)
            some_output3 = self.some_function(local_features3, **kwargs3)
            some_output4 = self.some_function(end_features, **kwargs3)
            g = torch.cat([self.flatten(some_output1),
                           self.flatten(some_output2),
                           self.flatten(some_output3),
                           self.flatten(some_output4)], dim=1)
            out = F.relu(self.fc2(g))
            out = F.relu(self.fc3(out))
            return F.softmax(out, dim=1)

When I check the parameters, I see the following output:

    net = Net()
    for i, name in enumerate(list(net.named_parameters())):
        print(i, name[0])

    0 global_feature_extractor.0.weight
    1 global_feature_extractor.0.bias
    2 global_feature_extractor.2.weight
    3 global_feature_extractor.2.bias
    4 global_feature_extractor.5.weight
    5 global_feature_extractor.5.bias
    6 global_feature_extractor.7.weight
    7 global_feature_extractor.7.bias
    8 global_feature_extractor.10.weight
    9 global_feature_extractor.10.bias
    10 global_feature_extractor.12.weight
    11 global_feature_extractor.12.bias
    12 global_feature_extractor.14.weight
    13 global_feature_extractor.14.bias
    14 global_feature_extractor.16.weight
    15 global_feature_extractor.16.bias
    16 global_feature_extractor.19.weight
    17 global_feature_extractor.19.bias
    18 global_feature_extractor.21.weight
    19 global_feature_extractor.21.bias
    20 global_feature_extractor.23.weight
    21 global_feature_extractor.23.bias
    22 global_feature_extractor.25.weight
    23 global_feature_extractor.25.bias
    24 global_feature_extractor.28.weight
    25 global_feature_extractor.28.bias
    26 global_feature_extractor.30.weight
    27 global_feature_extractor.30.bias
    28 global_feature_extractor.32.weight
    29 global_feature_extractor.32.bias
    30 global_feature_extractor.34.weight
    31 global_feature_extractor.34.bias
    32 fc1.weight
    33 fc1.bias
    34 fc2.weight
    35 fc2.bias
    36 fc3.weight
    37 fc3.bias

So it looks like the parameters from VGG are all associated with the biggest feature extractor component (which all the other feature extractors are subsets of in this case). So they would each be updated once per backprop? Is this a reasonable approach? Please advise if an alternative would be better. Thanks!
st104488
Since your feature extractors share the same model, the gradients will be accumulated in the layers. This could make the training a bit complicated, because the gradients in the first layers could be larger than in the last layers. I've created a small code snippet showing that the gradients in the base model are the same as in the sub-models:

    vgg = models.vgg19(pretrained=False)
    local_feature_extractor1 = nn.Sequential(*list(vgg.features)[:15])
    local_feature_extractor2 = nn.Sequential(*list(vgg.features)[:-13])
    local_feature_extractor3 = nn.Sequential(*list(vgg.features)[:-6])

    x = torch.randn(1, 3, 224, 224)

    output = vgg(x)
    output.mean().backward()
    print(vgg.features[0].weight.grad.abs().mean())

    output = local_feature_extractor1(x)
    output.mean().backward()
    print(local_feature_extractor1[0].weight.grad.abs().mean())
    print(vgg.features[0].weight.grad.abs().mean())

    output = local_feature_extractor2(x)
    output.mean().backward()
    print(local_feature_extractor2[0].weight.grad.abs().mean())
    print(vgg.features[0].weight.grad.abs().mean())

    output = local_feature_extractor3(x)
    output.mean().backward()
    print(local_feature_extractor3[0].weight.grad.abs().mean())
    print(vgg.features[0].weight.grad.abs().mean())
st104489
Thanks for the reply! From the experiment you ran it looks like the main and sub-models are using the same parameters correct (i.e. the output is essentially coming from different points in the global_feature_extractor pipeline, which was the goal)? Can you elaborate on the potential gradient accumulation issue? I’m running a similar experiment where I check the mean gradient with respect to each feature extractor component of the main model after each epoch, and I’m seeing consistent values. More importantly, is there a better way in Pytorch to achieve the desired network architecture?
st104490
To understand how PyTorch graphs, autograd, etc. work, let me ask: what's the difference between the following two approaches? The first applies the model several times in the main script, once per sample, to process the whole batch:

    model = Model()  # define the model
    batch_output = []
    for i in range(batch_size):
        new_sample = dataloader(batch_size=1)  # loads just one sample
        output = model(new_sample)
        batch_output.append(output)
    torch.stack(batch_output)

The second processes the batch inside the forward function:

    def forward(self, batch):
        batch_output = []
        for i in range(batch):
            sample_i = batch[i]
            ...
            batch_output.append(sample_i)
        return torch.stack(batch_output)

I've seen that doing it inside the forward function consumes a huge amount of memory. I imagine it's duplicating graphs or something like that, but I don't understand how all this PyTorch machinery works well enough to avoid mistakes. I'm not using pre-implemented nn.Modules; I need to create my own, and I don't know the proper way of using PyTorch. Can anyone explain what exactly happens when I use a for loop inside the forward function? What happens when I run model(input)? Any good guide to understand all this? Thank you!
st104491
In Torch, one could enable GPU kernel peer-to-peer access by doing the following:

    require 'cutorch'
    cutorch.setKernelPeerToPeerAccess(true)

How can I do the same with PyTorch?
st104492
The cudaDeviceEnablePeerAccess function is used to enable P2P access; an example of its use is shown in simpleP2P.cu. But I would like to know whether PyTorch provides any mechanism to enable kernel P2P access.
st104493
Thanks for your answer. I observed this behavior but I couldn’t find any information in the documentation.
st104494
Should I set requires_grad=True for the input in the case below, where the model is an encoder-decoder network?

    >>> input = Variable(input_image_tensor, requires_grad=True)
    >>> output = model(input)
    >>> loss = loss_cal(input, output)
    >>> loss.backward()
st104495
Hello, I'd like to train a U-Net which must accept a 3-channel image and return a 3-channel image. In a nutshell, the loss should accept an input of shape B x C x H x W and a target of shape B x C x H x W. This kind of network is aimed at standard image processing, which is what justifies the approach. I'm currently getting a coherent error:

    invalid argument 1: only batches of spatial targets supported (3Dtensors) but got targets of dimension: 4 at /opt/conda/conda-bld/pytorch_1518243271935/work/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:14

I hope I explained my problem clearly and look forward to some feedback. Thanks.
st104496
What kind of output do you expect? Using NLLLoss, it seems you would like to perform a classification. In this case, your output has to have dimensions [B, H, W] with the corresponding class indices.
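For contrast, a sketch of the classification shape convention this reply describes; the class count and spatial size are placeholders:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    output = torch.randn(2, 5, 64, 64)                 # [B, C, H, W] raw class scores
    target = torch.randint(0, 5, (2, 64, 64)).long()   # [B, H, W] class indices
    loss = criterion(output, target)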
st104497
Thanks for your answer. I tried NLLLoss2d() and CrossEntropyLoss(). What loss must I use to get a [B, C, H, W] output?
st104498
It depends on your use case, so what kind of output are you expecting? Is it a segmentation task with multi-labels? If so, you could use nn.BCELoss. Alternatively you could try nn.MSELoss.
st104499
As a side note, you should use either:

- sigmoid + BCELoss, or
- no non-linearity (just logits) + BCEWithLogitsLoss.

If you still have the log_softmax from using NLLLoss, you should replace or remove it.
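A minimal sketch of the two options on a [B, C, H, W] output; the tensors are placeholders with matching shapes, and the target holds floats in [0, 1]:

    import torch
    import torch.nn as nn

    logits = torch.randn(2, 3, 64, 64)   # raw network output
    target = torch.rand(2, 3, 64, 64)    # float targets in [0, 1]

    # option 1: sigmoid activation + BCELoss
    loss1 = nn.BCELoss()(torch.sigmoid(logits), target)

    # option 2 (more numerically stable): raw logits + BCEWithLogitsLoss
    loss2 = nn.BCEWithLogitsLoss()(logits, target)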