instruction | input | output | source
---|---|---|---|
How to get mini-batches in pytorch in a clean and efficient way? | I was trying to do a simple thing, which was to train a linear model with Stochastic Gradient Descent (SGD) using torch:
import numpy as np
import torch
from torch.autograd import Variable
import pdb
def get_batch2(X,Y,M,dtype):
    X,Y = X.data.numpy(), Y.data.numpy()
    N = len(Y)
    valid_indices = np.array( range(N) )
    batch_indices = np.random.choice(valid_indices,size=M,replace=False)
    batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)
    batch_ys = torch.FloatTensor(Y[batch_indices]).type(dtype)
    return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)

def poly_kernel_matrix( x,D ):
    N = len(x)
    Kern = np.zeros( (N,D+1) )
    for n in range(N):
        for d in range(D+1):
            Kern[n,d] = x[n]**d
    return Kern
## data params
N=5 # data set size
Degree=4 # number dimensions/features
D_sgd = Degree+1
##
x_true = np.linspace(0,1,N) # the real data points
y = np.sin(2*np.pi*x_true)
y.shape = (N,1)
## TORCH
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
X_mdl = poly_kernel_matrix( x_true,Degree )
X_mdl = Variable(torch.FloatTensor(X_mdl).type(dtype), requires_grad=False)
y = Variable(torch.FloatTensor(y).type(dtype), requires_grad=False)
## SGD mdl
w_init = torch.zeros(D_sgd,1).type(dtype)
W = Variable(w_init, requires_grad=True)
M = 5 # mini-batch size
eta = 0.1 # step size
for i in range(500):
    batch_xs, batch_ys = get_batch2(X_mdl,y,M,dtype)
    # Forward pass: compute predicted y using operations on Variables
    y_pred = batch_xs.mm(W)
    # Compute and print loss using operations on Variables. Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape (1,); loss.data[0] is a scalar value holding the loss.
    loss = (1/N)*(y_pred - batch_ys).pow(2).sum()
    # Use autograd to compute the backward pass. Now w will have gradients
    loss.backward()
    # Update weights using gradient descent; w1.data are Tensors,
    # w.grad are Variables and w.grad.data are Tensors.
    W.data -= eta * W.grad.data
    # Manually zero the gradients after updating weights
    W.grad.data.zero_()
#
c_sgd = W.data.numpy()
X_mdl = X_mdl.data.numpy()
y = y.data.numpy()
#
Xc_pinv = np.dot(X_mdl,c_sgd)
print('J(c_sgd) = ', (1/N)*(np.linalg.norm(y-Xc_pinv)**2) )
print('loss = ',loss.data[0])
The code runs fine, although my get_batch2 method seems really dumb/naive; it's probably because I am new to PyTorch, but I have not found a good place where they discuss how to retrieve data batches. I went through their tutorials (http://pytorch.org/tutorials/beginner/pytorch_with_examples.html) and through the data loading tutorial (http://pytorch.org/tutorials/beginner/data_loading_tutorial.html) with no luck. The tutorials all seem to assume that one already has the batch and batch-size at the beginning and then proceed to train with that data without changing it (specifically look at http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-variables-and-autograd).
So my question is do I really need to turn my data back into numpy so that I can fetch some random sample of it and then turn it back to pytorch with Variable to be able to train in memory? Is there no way to get mini-batches with torch?
I looked at a few functions torch provides but with no luck:
#pdb.set_trace()
#valid_indices = torch.arange(0,N).numpy()
#valid_indices = np.array( range(N) )
#batch_indices = np.random.choice(valid_indices,size=M,replace=False)
#indices = torch.LongTensor(batch_indices)
#batch_xs, batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)
#batch_xs,batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)
Even though the code I provided works fine, I am worried that it's not an efficient implementation AND that if I were to use GPUs there would be a considerable further slowdown (because my guess is that putting things in CPU memory and then fetching them back to put them on the GPU like that is silly).
I implemented a new one based on the answer that suggested to use torch.index_select():
def get_batch2(X,Y,M):
    '''
    get batch for pytorch model
    '''
    # TODO fix and make it nicer, there is pytorch forum question
    #X,Y = X.data.numpy(), Y.data.numpy()
    X,Y = X, Y
    N = X.size()[0]
    batch_indices = torch.LongTensor( np.random.randint(0,N+1,size=M) )
    pdb.set_trace()
    batch_xs = torch.index_select(X,0,batch_indices)
    batch_ys = torch.index_select(Y,0,batch_indices)
    return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
However, this seems to have issues because it does not work if X, Y are NOT Variables... which is really odd. I added this to the pytorch forum: https://discuss.pytorch.org/t/how-to-get-mini-batches-in-pytorch-in-a-clean-and-efficient-way/10322
Right now what I am struggling with is making this work for gpu. My most current version:
def get_batch2(X,Y,M,dtype):
    '''
    get batch for pytorch model
    '''
    # TODO fix and make it nicer, there is pytorch forum question
    #X,Y = X.data.numpy(), Y.data.numpy()
    X,Y = X, Y
    N = X.size()[0]
    if dtype == torch.cuda.FloatTensor:
        batch_indices = torch.cuda.LongTensor( np.random.randint(0,N,size=M) ) # without replacement
    else:
        batch_indices = torch.LongTensor( np.random.randint(0,N,size=M) ).type(dtype) # without replacement
    pdb.set_trace()
    batch_xs = torch.index_select(X,0,batch_indices)
    batch_ys = torch.index_select(Y,0,batch_indices)
    return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
the error:
RuntimeError: tried to construct a tensor from a int sequence, but found an item of type numpy.int64 at index (0)
I don't get it, do I really have to do:
ints = [ random.randint(0,N) for i in range(M)]
to get the integers?
It would also be ideal if the data could be a Variable. It seems that torch.index_select does not work for Variable type data.
this list of integers thing still doesn't work:
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor), but expected one of:
* (torch.cuda.FloatTensor source, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (torch.cuda.FloatTensor source, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (float beta, torch.cuda.FloatTensor source, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (torch.cuda.FloatTensor source, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (float beta, torch.cuda.FloatTensor source, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (torch.cuda.FloatTensor source, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
* (float beta, torch.cuda.FloatTensor source, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor)
* (float beta, torch.cuda.FloatTensor source, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out)
didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor)
| If I'm understanding your code correctly, your get_batch2 function appears to be taking random mini-batches from your dataset without tracking which indices you've used already in an epoch. The issue with this implementation is that it likely will not make use of all of your data.
The way I usually do batching is creating a random permutation of all the possible indices using torch.randperm(N) and looping through them in batches. For example:
n_epochs = 100 # or whatever
batch_size = 128 # or whatever

for epoch in range(n_epochs):
    # X is a torch Variable
    permutation = torch.randperm(X.size()[0])

    for i in range(0, X.size()[0], batch_size):
        optimizer.zero_grad()

        indices = permutation[i:i+batch_size]
        batch_x, batch_y = X[indices], Y[indices]

        # in case you wanted a semi-full example
        outputs = model.forward(batch_x)
        loss = lossfunction(outputs, batch_y)

        loss.backward()
        optimizer.step()
If you'd like to copy and paste, make sure you define your optimizer, model, and lossfunction somewhere before the start of the epoch loop.
With regards to your error, try using torch.from_numpy(np.random.randint(0,N,size=M)).long() instead of torch.LongTensor(np.random.randint(0,N,size=M)). I'm not sure if this will solve the error you are getting, but it will solve a future error.
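Putting that suggestion together, a minimal sketch of a corrected get_batch2 (a sketch only, assuming X and Y are plain tensors on the same device; names follow the question):

import numpy as np
import torch

def get_batch2(X, Y, M):
    N = X.size(0)
    # sample M row indices without replacement; from_numpy sidesteps building a
    # LongTensor directly from a numpy int sequence
    idx = torch.from_numpy(np.random.choice(N, size=M, replace=False)).long()
    if X.is_cuda:
        idx = idx.cuda()
    return X.index_select(0, idx), Y.index_select(0, idx)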
| https://stackoverflow.com/questions/45113245/ |
BCELoss for binary pixel-wise segmentation pytorch | I'm implementing a UNet for binary segmentation while using Sigmoid and BCELoss. The problem is that after several iterations the network tries to predict very small values per pixel while for some regions it should predict values close to one (for ground truth mask region). Does it give any intuition about the wrong behavior?
Besides, there exist NLLLoss2d which is used for pixel-wise loss. Currently, I'm simply ignoring this and I'm using MSELoss() directly. Should I use NLLLoss2d with Sigmoid activation layer?
Thanks
| You might want to use torch.nn.BCEWithLogitsLoss(), replacing the Sigmoid and the BCELoss function.
An excerpt from the docs tells you why it's always better to use this loss function implementation.
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
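As a rough sketch of the swap (model, images and masks here are placeholder names, not taken from the question):

import torch.nn as nn

# drop the final nn.Sigmoid() from the UNet and feed raw logits to the loss
criterion = nn.BCEWithLogitsLoss()

logits = model(images)            # N x 1 x H x W, no sigmoid applied inside the model
loss = criterion(logits, masks)   # masks: float tensor of 0s and 1s, same shape as logits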
| https://stackoverflow.com/questions/45184741/ |
How does one make sure that everything is running on GPU automatically in Pytorch? | I wanted a way with minimal amount of code such that everything in my script runs automatically in GPU (or the standard way pytorch did it). Something like:
torch.everything_to_gpu()
and then it "just works". I don't care about manually putting things in GPU etc. I just want it to do its stuff automatically (sort of the way tensorflow does it?). I did see a related question in the pytorch forum but it doesn't seem that they address my issue directly.
Right now it seems to me (from the examples I've been through) that one can do something like what I want by specifying a simple type to every torch Variable/tensor as follows:
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
so as long as every variable/tensor takes dtype somehow, e.g.
Variable(torch.FloatTensor(x).type(dtype), requires_grad=False)
then we can use that single variable to control what is on the GPU and what isn't. The issue that I am encountering, which makes it ambiguous to me whether such a single command exists, is when using the torch.nn.Module package. For example when using
l = torch.nn.Linear(D_in,D_out)
or custom NN classes (that inherit from it). In such cases it seems that the best way to deal with it is to use the:
torch.nn.Module.cuda(device_id=device_id) # device_id = None is the default
function/method. However, this seems to suggest to me that there might be other hidden functions that I might not be aware of to make sure that everything does indeed run on the GPU.
Thus: Is there a centralized way to make sure everything runs in some (ideally automatically) assigned GPU?
On reflection, I think one thing that is confusing me is that I don't understand the model of how PyTorch carries out computations on the GPU. For example, I am fairly certain that the way MATLAB works is that if at least one thing is on the GPU then all further computations will be on the GPU. So I guess I am wondering, is this how PyTorch works? If possible, how does it compare to TensorFlow?
| I think that there is no such thing.
From what I've seen, people usually create classes that:
i) Inherit from nn.Module.
ii) Have an attribute describing model parameters (e.g. self.opt);
iii) Set each variable/parameters as attributes (e.g. self.my_var)
iv) Then call .cuda() on it if a kind of -use_gpu parameter is set.
I also use a maybe_cuda(variable) function inside my classes in order to create variables more easily (pass a Variable, return variable.cuda() if opt.cuda is True).
In fact, I did something like this (may not be perfect, but found it practical):
class MyModule(nn.Module):
    def __init__(self, opt):
        super(MyModule, self).__init__()
        self.opt = opt

    def maybe_cuda(self, variable):
        if self.opt.cuda:
            return variable.cuda()
        return variable

class Model(MyModule):
    def __init__(self, opt, other_arg):
        super(Model, self).__init__(opt)
        self.linear = nn.Linear(opt.size1, opt.size2)
        self.W_out = nn.Parameter(_____)

    def forward(self, ____):
        # create a variable, put it on GPU if possible
        my_var = self.maybe_cuda(Variable(torch.zeros(___)))
| https://stackoverflow.com/questions/45553613/ |
How to install pytorch on windows subsystem for linux | My Windows 10 has the Subsystem for Linux with Ubuntu 14.04. I tried to install PyTorch on the preinstalled Python 2 but it wouldn't work. The error is: torch-0.2.0.post1-cp27-cp27m-manylinux1_x86_64.whl is not a supported wheel on this platform. I tried to install Python 3.6 and then install PyTorch with it, but it still wouldn't work. The error is a missing module 'apt_pkg'. Does anyone have an idea about this?
| According to this it should be working now via anaconda's package manager conda.
| https://stackoverflow.com/questions/45573109/ |
Pytorch vectors with empty dimensions | I noticed that some vectors in pytorch have empty dimensions like a=torch.randn(1) has size (1L,). What are these types of vectors called and why are they missing the second dimension? I'm on OSX and Ubuntu - not sure if that makes a difference.
Example:
>>> import torch
>>> a=torch.randn(1)
>>> a
2.1358
[torch.FloatTensor of size 1]
>>> a.size()
(1L,)
>>> a=torch.randn(1).size()
>>> a
(1L,)
| This is not PyTorch-specific; the same convention is followed in Python/NumPy as well. A vector of size (5,) and a matrix of size (5,1) are different, although they have the same number of elements.
>>> import numpy as np
>>> a=np.random.rand(5)
>>> a.shape
(5,)
>>> b=np.random.rand(5,1)
>>> b.shape
(5, 1)
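The same distinction holds for PyTorch tensors; a small sketch of converting between the two shapes:

import torch

a = torch.randn(5)     # size (5,)
b = a.unsqueeze(1)     # size (5, 1)
c = b.squeeze(1)       # back to (5,)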
| https://stackoverflow.com/questions/45580261/ |
Matching PyTorch w/ CNTK (VGG on CIFAR) | I am trying to understand how PyTorch works and want to replicate a simple CNN training on CIFAR. The CNTK script gets to 0.76 accuracy after 168 seconds of training (10 epochs), which is similar to my MXNet script (0.75 accuracy after 153 seconds).
However, my PyTorch script is lagging behind a lot at 0.71 accuracy and 354 seconds. I appreciate I will get differences in accuracy due to stochastic weight initialisation, etc. However the difference across frameworks is much greater than difference within a framework, initialising randomly between runs.
The reasons I can think of:
MXNet and CNTK are initialized to xavier/glorot uniform; not sure how to do this in PyTorch and so perhaps the weights are initialised to 0
CNTK does gradient-clipping by default; not sure if PyTorch has the equivalent
Perhaps the bias is dropped in PyTorch by default
I use SGD with momentum; perhaps the PyTorch implementation of momentum is a bit different
Edit:
I have tried specifying the weight initialisation, however it seems to have no big effect:
self.conv1 = nn.Conv2d(3, 50, kernel_size=3, padding=1)
init.xavier_uniform(self.conv1.weight, gain=np.sqrt(2.0))
init.constant(self.conv1.bias, 0)
| I try to answer your first two questions:
weight initialization: different kinds of layers have their own method, you can find the default weight initialization of all these layers in the following link: https://github.com/pytorch/pytorch/tree/master/torch/nn/modules
gradient-clipping: you might want to use torch.nn.utils.clip_grad_norm
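A hedged sketch of both suggestions, applying Xavier-uniform initialisation to every conv/linear layer via Module.apply and clipping gradients before the optimizer step (model, optimizer and the clip value of 1.0 are placeholders, not taken from your code):

import numpy as np
import torch.nn as nn
import torch.nn.init as init

def init_weights(m):
    # apply() calls this on every submodule
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        init.xavier_uniform(m.weight, gain=np.sqrt(2.0))
        if m.bias is not None:
            init.constant(m.bias, 0)

model.apply(init_weights)

# in the training loop, after loss.backward() and before optimizer.step():
nn.utils.clip_grad_norm(model.parameters(), max_norm=1.0)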
In addition, I am curious why you don't use torchvision.transforms, torch.utils.data.DataLoader and torchvision.datasets.CIFAR10 to load and preprocess your data?
There is a similar image classification tutorial of cifar for Pytorch
http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
Hope this can help you.
| https://stackoverflow.com/questions/45731727/ |
How to correctly use Numpy's FFT function in PyTorch? | I was recently introduced to PyTorch and began running through the library's documentation and tutorials.
In the "Creating extensions using numpy and scipy" tutorial, under "Parameter-less example", a sample function is created using numpy called BadFFTFunction.
The description for the function states:
"This layer doesn’t particularly do anything useful or mathematically
correct.
It is aptly named BadFFTFunction"
The function and its usage are given as:
from numpy.fft import rfft2, irfft2
class BadFFTFunction(Function):
def forward(self, input):
numpy_input = input.numpy()
result = abs(rfft2(numpy_input))
return torch.FloatTensor(result)
def backward(self, grad_output):
numpy_go = grad_output.numpy()
result = irfft2(numpy_go)
return torch.FloatTensor(result)
def incorrect_fft(input):
return BadFFTFunction()(input)
input = Variable(torch.randn(8, 8), requires_grad=True)
result = incorrect_fft(input)
print(result.data)
result.backward(torch.randn(result.size()))
print(input.grad)
Unfortunately, I was only recently introduced to signal processing as well, and am unsure of where the (likely obvious) error is in this function.
I am wondering, how might one go about fixing this function so that its forward and backward outputs are correct?
How can BadFFTFunction be fixed so that a differentiable FFT function can be used in PyTorch?
| I think the errors are: First, the function, despite having FFT in its name, only returns the amplitudes/absolute values of the FFT output, not the full complex coefficients. Also, just using the inverse FFT to compute the gradient of the amplitudes probably doesn't make much sense mathematically (?).
There is a package called pytorch-fft that tries to make an FFT-function available in pytorch. You can see some experimental code for autograd functionality here. Also note discussion in this issue.
| https://stackoverflow.com/questions/45746504/ |
PyTorch RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed | I’m trying to create a basic binary classifier in Pytorch that classifies whether my player plays on the right or the left side in the game Pong. The input is an 1x42x42 image and the label is my player's side (right = 1 or left = 2). The code:
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

net = Net(42 * 42, 100, 2)
# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer_net = torch.optim.Adam(net.parameters(), 0.001)
net.train()

while True:
    state = get_game_img()
    state = torch.from_numpy(state)

    # right = 1, left = 2
    current_side = get_player_side()
    target = torch.LongTensor(current_side)

    x = Variable(state.view(-1, 42 * 42))
    y = Variable(target)

    optimizer_net.zero_grad()
    y_pred = net(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
The error I get:
File "train.py", line 109, in train
loss = criterion(y_pred, y)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/modules/loss.py", line 321, in forward
self.weight, self.size_average)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 533, in cross_entropy
return nll_loss(log_softmax(input), target, weight, size_average)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 501, in nll_loss
return f(input, target)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
output, *self.additional_args)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THNN/generic/ClassNLLCriterion.c:57
| For most deep learning libraries, the target (or label) should start from 0.
It means that your target should be in the range [0, n) for n classes.
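In the asker's case the sides are encoded as 1 and 2, so one hedged fix is to shift them to 0 and 1, and to wrap the scalar in a list (torch.LongTensor(int) allocates an uninitialised tensor of that length rather than a one-element tensor):

# right = 1, left = 2  ->  shift to right = 0, left = 1
current_side = get_player_side()
target = torch.LongTensor([current_side - 1])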
| https://stackoverflow.com/questions/45769206/ |
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR | On Ubuntu 14.04, I use PyTorch with cuDNN. This problem happened:
Traceback (most recent call last):
File "main.py", line 58, in <module>
test_detect(test_loader, nod_net, get_pbb, bbox_result_path,config1,n_gpu=config_submit['n_gpu'])
File "/home/ubuntu/nndl/DSB2017/test_detect.py", line 52, in test_detect
output = net(input,inputcoord)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 252, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 58, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 252, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/nndl/DSB2017/net_detector.py", line 102, in forward
out = self.preBlock(x)#16
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 252, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/container.py", line 67, in forward
input = module(input)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 252, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 351, in forward
self.padding, self.dilation, self.groups)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 119, in conv3d
return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
I have googled it for several hours and am really confused. What made this happen?
| I just encountered this problem on ubuntu16.04 and solved it. My solution was to run
sudo rm -rf ~/.nv
and then reboot.
| https://stackoverflow.com/questions/45810356/ |
Is there a function to extract image patches in PyTorch? | Given a batch of images, I'd like to extract all possible image patches, similar to a convolution. In TensorFlow, we can use tf.extract_image_patches to achieve this. Is there an equivalent function in PyTorch?
Thank you.
| Unfortunately, there might not be a direct way to achieve your goal.
But Tensor.unfold function might be a solution.
https://discuss.pytorch.org/t/how-to-extract-smaller-image-patches-3d/16837/2
This website might help you.
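A small sketch of what unfold gives you for a batch of images (the shapes are illustrative):

import torch

x = torch.randn(2, 3, 8, 8)           # N x C x H x W
k, stride = 3, 1
patches = x.unfold(2, k, stride).unfold(3, k, stride)
print(patches.size())                  # N x C x 6 x 6 x 3 x 3: one 3x3 patch per spatial position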
| https://stackoverflow.com/questions/45828265/ |
PyTorch giving cuda runtime error | I have made a slight modification in my code so that it does not use DataParallel and DistributedDataParallel. The code is as follows:
import argparse
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('data', metavar='DIR',
help='path to dataset')
parser.add_argument('--arch', '-a', metavar='ARCH', default='resnet18',
choices=model_names,
help='model architecture: ' +
' | '.join(model_names) +
' (default: resnet18)')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=90, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=256, type=int,
metavar='N', help='mini-batch size (default: 256)')
parser.add_argument('--lr', '--learning-rate', default=0.1, type=float,
metavar='LR', help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum')
parser.add_argument('--weight-decay', '--wd', default=1e-4, type=float,
metavar='W', help='weight decay (default: 1e-4)')
parser.add_argument('--print-freq', '-p', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
parser.add_argument('--pretrained', dest='pretrained', action='store_true',
help='use pre-trained model')
parser.add_argument('--world-size', default=1, type=int,
help='number of distributed processes')
parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
help='url used to set up distributed training')
parser.add_argument('--dist-backend', default='gloo', type=str,
help='distributed backend')
best_prec1 = 0
def main():
global args, best_prec1
args = parser.parse_args()
args.distributed = args.world_size > 1
if args.distributed:
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
world_size=args.world_size)
# create model
if args.pretrained:
print("=> using pre-trained model '{}'".format(args.arch))
model = models.__dict__[args.arch](pretrained=True)
else:
print("=> creating model '{}'".format(args.arch))
model = models.__dict__[args.arch]()
if not args.distributed:
if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
#model.features = torch.nn.DataParallel(model.features)
model.cuda()
#else:
#model = torch.nn.DataParallel(model).cuda()
else:
model.cuda()
#model = torch.nn.parallel.DistributedDataParallel(model)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
# optionally resume from a checkpoint
if args.resume:
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'".format(args.resume))
checkpoint = torch.load(args.resume)
args.start_epoch = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'".format(args.resume))
cudnn.benchmark = True
# Data loading code
traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
]))
if args.distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
else:
train_sampler = None
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
num_workers=args.workers, pin_memory=True, sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(valdir, transforms.Compose([
transforms.Scale(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])),
batch_size=args.batch_size, shuffle=False,
num_workers=args.workers, pin_memory=True)
if args.evaluate:
validate(val_loader, model, criterion)
return
for epoch in range(args.start_epoch, args.epochs):
if args.distributed:
train_sampler.set_epoch(epoch)
adjust_learning_rate(optimizer, epoch)
# train for one epoch
train(train_loader, model, criterion, optimizer, epoch)
# evaluate on validation set
prec1 = validate(val_loader, model, criterion)
# remember best prec@1 and save checkpoint
is_best = prec1 > best_prec1
best_prec1 = max(prec1, best_prec1)
save_checkpoint({
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
'optimizer' : optimizer.state_dict(),
}, is_best)
def train(train_loader, model, criterion, optimizer, epoch):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
# switch to train mode
model.train()
end = time.time()
for i, (input, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
target = target.cuda(async=True)
input_var = torch.autograd.Variable(input)
target_var = torch.autograd.Variable(target)
# compute output
output = model(input_var)
loss = criterion(output, target_var)
# measure accuracy and record loss
prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
losses.update(loss.data[0], input.size(0))
top1.update(prec1[0], input.size(0))
top5.update(prec5[0], input.size(0))
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
print('Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
epoch, i, len(train_loader), batch_time=batch_time,
data_time=data_time, loss=losses, top1=top1, top5=top5))
def validate(val_loader, model, criterion):
batch_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
# switch to evaluate mode
model.eval()
end = time.time()
for i, (input, target) in enumerate(val_loader):
target = target.cuda(async=True)
input_var = torch.autograd.Variable(input, volatile=True)
target_var = torch.autograd.Variable(target, volatile=True)
# compute output
output = model(input_var)
loss = criterion(output, target_var)
# measure accuracy and record loss
prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
losses.update(loss.data[0], input.size(0))
top1.update(prec1[0], input.size(0))
top5.update(prec5[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
print('Test: [{0}/{1}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
i, len(val_loader), batch_time=batch_time, loss=losses,
top1=top1, top5=top5))
print(' * Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
return top1.avg
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
torch.save(state, filename)
if is_best:
shutil.copyfile(filename, 'model_best.pth.tar')
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def adjust_learning_rate(optimizer, epoch):
"""Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
lr = args.lr * (0.1 ** (epoch // 30))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
if __name__ == '__main__':
main()
And, when I run this code on a set of images with the alexnet neuralnet architecture, it gives a weird cuda error, which is as follows:
=> creating model 'alexnet'
THCudaCheck FAIL file=/pytorch/torch/lib/THC/THCGeneral.c line=70 error=30 : unknown error
Traceback (most recent call last):
File "imagenet2.py", line 319, in <module>
main()
File "imagenet2.py", line 87, in main
model.cuda()
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 147, in cuda
return self._apply(lambda t: t.cuda(device_id))
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 118, in _apply
module._apply(fn)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 118, in _apply
module._apply(fn)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 118, in _apply
module._apply(fn)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 124, in _apply
param.data = fn(param.data)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 147, in <lambda>
return self._apply(lambda t: t.cuda(device_id))
File "/usr/local/lib/python2.7/dist-packages/torch/_utils.py", line 66, in _cuda
return new_type(self.size()).copy_(self, async)
File "/usr/local/lib/python2.7/dist-packages/torch/cuda/__init__.py", line 266, in _lazy_new
_lazy_init()
File "/usr/local/lib/python2.7/dist-packages/torch/cuda/__init__.py", line 85, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (30) : unknown error at /pytorch/torch/lib/THC/THCGeneral.c:70
Command used for running the code: python imagenet.py --world-size 1 --arch 'alexnet' <image_folder>
Where did I go wrong?
PS: Running on an AWS g2.2xlarge Ubuntu instance.
The CUDA version is as follows:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
|
CUDNN gives useless error messages. For debugging, test your net on the CPU using net.cpu(), or simply remove the net.cuda() call. You will have to do the same with the training, validation and output variables.
It seems the problem is that you used a pre-trained AlexNet on images of a size different from 224x224. According to the documentation it should work as long as the image size is at least 224x224.
This is probably a tensor shaping problem due to a hard-coded parameter in pytorch's implementation of AlexNet. In vision/torchvision/models/alexnet.py at line 44 it says
x = x.view(x.size(0), 256 * 6 * 6)
change it to
x = x.view(x.size(0), -1)
This should allow it to work with different images sizes.
I submitted this modification to the GitHub repository, but I guess it has not been updated yet.
| https://stackoverflow.com/questions/45861767/ |
How to compute cross entropy loss for binary classification in Pytorch ? | For binary classification, my output and label is like this
output = [0.7, 0.3, 0.1, 0.9 ... ]
label = [1, 0, 0, 1 ... ]
where the output is the predicted probability for label = 1
And I want a cross entropy like this:
def cross_entropy(output, label):
    return sum(-label * log(output) - (1 - label) * log(1 - output))
However, this gives me a NaN error because in log(output) the output might be zero.
I know there is torch.nn.CrossEntropyLoss however it does not apply for my data format here.
| import torch
import torch.nn.functional as F
import math

def my_binary_cross_entropy(output, label):
    label = label.float()
    #print(label)
    loss = 0
    for i in range(len(label)):
        loss += -(label[i]*math.log(output[i]) + (1-label[i])*math.log(1-output[i]))
    #print(loss)
    return loss/len(label)

label1 = torch.randint(0,2,(3,)).float()
output = torch.rand(3)
my_binary_cross_entropy(output, label1)
The value it returned is the same as F.binary_cross_entropy value.
F.binary_cross_entropy(output,label1)
| https://stackoverflow.com/questions/45884070/ |
Seq2seq pytorch Inference slow | I tried the seq2seq pytorch implementation available here seq2seq . After profiling the evaluation (evaluate.py) code, the piece of code taking the longest time was the decode_minibatch method:
def decode_minibatch(
    config,
    model,
    input_lines_src,
    input_lines_trg,
    output_lines_trg_gold
):
    """Decode a minibatch."""
    for i in xrange(config['data']['max_trg_length']):
        decoder_logit = model(input_lines_src, input_lines_trg)
        word_probs = model.decode(decoder_logit)
        decoder_argmax = word_probs.data.cpu().numpy().argmax(axis=-1)
        next_preds = Variable(
            torch.from_numpy(decoder_argmax[:, -1])
        ).cuda()
        input_lines_trg = torch.cat(
            (input_lines_trg, next_preds.unsqueeze(1)),
            1
        )
    return input_lines_trg
I trained the model on GPU and loaded the model in CPU mode for inference. But unfortunately, every sentence seems to take ~10 sec. Is slow prediction expected with PyTorch?
Any fixes, suggestions to speed up would be much appreciated. Thanks.
| One solution for slow performance may be to use a toolkit optimized for the inference, such as OpenVINO. OpenVINO is optimized for Intel hardware but it should work with any CPU. It optimizes the inference performance by e.g. graph pruning or fusing some operations together.
You can find a full tutorial on how to convert the PyTorch model here (FastSeg) and here (BERT). Some snippets below.
Install OpenVINO
The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case.
pip install openvino-dev[pytorch,onnx]
Save your model to ONNX
OpenVINO cannot convert PyTorch model directly for now but it can do it with ONNX model. This sample code assumes the model is for computer vision.
dummy_input = torch.randn(1, 3, IMAGE_HEIGHT, IMAGE_WIDTH)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
Use Model Optimizer to convert ONNX model
The Model Optimizer is a command line tool which comes from OpenVINO Development Package so be sure you have installed it. It converts the ONNX model to OV format (aka IR), which is a default format for OpenVINO. It also changes the precision to FP16 (to further increase performance). Run in command line:
mo --input_model "model.onnx" --input_shape "[1, 3, 224, 224]" --mean_values="[123.675, 116.28 , 103.53]" --scale_values="[58.395, 57.12 , 57.375]" --data_type FP16 --output_dir "model_ir"
Run the inference on the CPU
The converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, just use AUTO.
# Load the network
ie = Core()
model_ir = ie.read_model(model="model_ir/model.xml")
compiled_model_ir = ie.compile_model(model=model_ir, device_name="CPU")
# Get output layer
output_layer_ir = compiled_model_ir.output(0)
# Run inference on the input image
result = compiled_model_ir([input_image])[output_layer_ir]
Disclaimer: I work on OpenVINO.
| https://stackoverflow.com/questions/45976991/ |
"expected CPU tensor(got CUDA tensor)" error for PyTorch | In my inference code with Trained PyTorch model, What's wrong ?
There is an runtime error message: "expected CPU tensor(got CUDA tensor)"
import torch
import torch.nn as nn
#from __future__ import print_function
import argparse
from PIL import Image
import torchvision.models as models
import skimage.io
from torch.autograd import Variable as V
from torch.nn import functional as f
from torchvision import transforms as trn
# define image transformation
centre_crop = trn.Compose([
    trn.ToPILImage(),
    trn.Scale(256),
    trn.CenterCrop(224),
    trn.ToTensor(),
    trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
filename=r'ed91.png'
img = skimage.io.imread(filename)
x = V(centre_crop(img).unsqueeze(0), volatile=True)
model = models.__dict__['resnet18']()
model = torch.nn.DataParallel(model).cuda()
model = torch.load('mw_model0831.pth')
#model.load_state_dict(checkpoint['state_dict'])
#best_prec1 = checkpoint['best_prec1']
logit = model(x)
print(logit)
print(len(logit))
h_x = f.softmax(logit).data.squeeze()
How can I fix this ?
| The error is because the model is on GPU while your input image x is on CPU. You have to make sure that they are both on GPU or CPU.
Also, model.cuda() and x.cuda() behave slightly differently: model.cuda() will put the model on the GPU, but x.cuda() just returns a new Variable on the GPU, leaving the original x untouched. You have to assign the return value to x explicitly.
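Applied to the code in the question, a minimal sketch of the fix would be:

x = V(centre_crop(img).unsqueeze(0), volatile=True)
x = x.cuda()        # reassign: .cuda() returns a new Variable on the GPU
logit = model(x)    # the model itself was already moved with .cuda()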
You can find a detailed discussion here.
| https://stackoverflow.com/questions/45996659/ |
How to combine multiple models together? | I am trying to "parallelize" Neural Network models to speed up training. One idea I had was to run two models on two computers and combine the results somehow.
Is this possible? If not, what are the options to parallelize model training on two computers?
I am open to use any neural network framework.
| I think you mean distributed tensorflow?
See official document: https://www.tensorflow.org/deploy/distributed
| https://stackoverflow.com/questions/46142179/ |
PyTorch equivalence for softmax_cross_entropy_with_logits | I was wondering is there an equivalent PyTorch loss function for TensorFlow's softmax_cross_entropy_with_logits?
|
is there an equivalent PyTorch loss function for TensorFlow's softmax_cross_entropy_with_logits?
torch.nn.functional.cross_entropy
This takes logits as inputs (performing log_softmax internally). Here "logits" are just some values that are not probabilities (i.e. not necessarily in the interval [0,1]).
But, logits are also the values that will be converted to probabilities.
If you consider the name of the tensorflow function you will understand it is pleonasm (since the with_logits part assumes softmax will be called).
In PyTorch, the implementation looks like this:
loss = F.cross_entropy(x, target)
Which is equivalent to :
lp = F.log_softmax(x, dim=-1)
loss = F.nll_loss(lp, target)
It is not F.binary_cross_entropy_with_logits because this function assumes multi label classification:
F.sigmoid + F.binary_cross_entropy = F.binary_cross_entropy_with_logits
It is not torch.nn.functional.nll_loss either because this function takes log-probabilities (after log_softmax()) not logits.
| https://stackoverflow.com/questions/46218566/ |
Resized copy of Pytorch Tensor/Dataset | I have a homemade dataset with a few million rows. I am trying to make truncated copies. So I clip the tensors that I'm using to make the original dataset and create a new dataset. However, when I save the new dataset, which is only 20K rows, it's the same size on disk as the original dataset. Otherwise everything seems kosher, including, when I check, the size of the new tensors. What am I doing wrong?
#original dataset - 2+million rows
dataset = D.TensorDataset(training_data, labels)
torch.save(dataset, filename)
#20k dataset for experiments
d = torch.Tensor(training_data[0:20000])
l = torch.Tensor(labels[0:20000])
ds_small = D.TensorDataset(d,l)
#this is the same size as the one above on disk... approx 1.45GB
torch.save(ds_small, filename_small)
Thanks
| In your code d and training_data share the same memory, even if you use slicing during the creation of d. I don't know why this is the case, but I'll answer anyway to give you a solution:
d = x[0:10000].clone()
l = y[0:10000].clone()
clone will give you Tensors with memory independent from the old Tensor's, and the file size will be much smaller.
Note that using torch.Tensor() is not necessary when creating d and l since training_data and labels are already tensors.
| https://stackoverflow.com/questions/46227756/ |
Pytorch load model | I used tensorflow some days ago. To build conv layers with fixed weights is easy, just pass the weight kernel to conv2d(). And it is convenient to load pretrained models such as VGG19. But I found it did't work that way using pytorch, because conv2d() doesn't accept an explicit kernel but a kernel size. So I wonder if there is any possibility that we can reuse the weights in VGG19 by simply passing it to a method like conv2d(). Any reply will be appreciated.
| I can see that you have 2 questions: how to use a pre-trained model like VGG in PyTorch, and how to set the weights for a particular layer like nn.Conv2d().
For creating a pretrained VGG model you can use the code below.
from torchvision import models
model_vgg = models.vgg16(pretrained=True)
for param in model_vgg.parameters():
    param.requires_grad = False
In PyTorch you implement a neural network by subclassing nn.Module, which provides the parameters() function that returns all the weights associated with the network.
Setting the weights of a particular layer:
decoder = nn.Linear(10, 100)
decoder.weight = #Do anything which is valid.
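For instance, a hedged sketch of reusing a kernel from pretrained VGG19 in a fresh conv layer (the layer index and shapes are assumptions about VGG19's first convolution):

import torch.nn as nn
from torchvision import models

vgg19 = models.vgg19(pretrained=True)
src = vgg19.features[0]                           # first 3x3 conv of VGG19
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv.weight = nn.Parameter(src.weight.data.clone())
conv.bias = nn.Parameter(src.bias.data.clone())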
You can check my code here to know more on how to use a trained model.
| https://stackoverflow.com/questions/46283230/ |
How to get the inner module of Unet? | I created a UNet with the UnetGenerator. You can find the resulting structure here.
How do I get the module Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))?
How do I get the module (5): ConvTranspose2d(256, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))?
I want to get the inner modules to change the attributes of certain layers.
I tried something like net.modules(i).modules(i), but it doesn't work. I referred to the docs, but I haven't found a good way to do it.
My initial intention is to change the attributes of certain layers when training. I may add a custom layer myLayer, in which self.mode='normal'. When training, I hope I can change its attribute myLayer.mode = 'capture' to make it change its behaviour in training.
| All subclasses of nn.Module have a children() method, which you will be able to access using the code below.
unet = UnetGenerator(512,512,4)
layers = list(unet.children())
len(layers)
For the network I created using the above code , I can access one of the layers inside the network and change the properties like below.
l = layers[0]
conv = list(l.children())[0][0]
conv.kernel_size = (2,2)
If you are training the network without using any pre-trained weights then you could make the changes in the source code before you create the network object.
| https://stackoverflow.com/questions/46315202/ |
How to correctly implement a batch-input LSTM network in PyTorch? | This release of PyTorch seems to provide the PackedSequence for variable lengths of input for recurrent neural networks. However, I found it's a bit hard to use it correctly.
Using pad_packed_sequence to recover the output of an RNN layer which was fed by pack_padded_sequence, we get a T x B x N tensor outputs where T is the max time steps, B is the batch size and N is the hidden size. I found that for short sequences in the batch, the subsequent output will be all zeros.
Here are my questions.
For a single-output task where one needs the last output of all the sequences, a simple outputs[-1] will give a wrong result since this tensor contains lots of zeros for short sequences. One will need to construct indices by sequence lengths to fetch the individual last output for all the sequences. Is there a simpler way to do that?
For a multiple output task (e.g. seq2seq), usually one will add a linear layer N x O and reshape the batch outputs T x B x O into TB x O and compute the cross entropy loss with the true targets TB (usually integers in language model). In this situation, do these zeros in batch output matters?
| Question 1 - Last Timestep
This is the code that I use to get the output of the last timestep. I don't know if there is a simpler solution. If there is, I'd like to know it. I followed this discussion and grabbed the relative code snippet for my last_timestep method. This is my forward.
class BaselineRNN(nn.Module):
    def __init__(self, **kwargs):
        ...

    def last_timestep(self, unpacked, lengths):
        # Index of the last output for each sequence.
        idx = (lengths - 1).view(-1, 1).expand(unpacked.size(0),
                                               unpacked.size(2)).unsqueeze(1)
        return unpacked.gather(1, idx).squeeze()

    def forward(self, x, lengths):
        embs = self.embedding(x)

        # pack the batch
        packed = pack_padded_sequence(embs, list(lengths.data),
                                      batch_first=True)

        out_packed, (h, c) = self.rnn(packed)

        out_unpacked, _ = pad_packed_sequence(out_packed, batch_first=True)

        # get the outputs from the last *non-masked* timestep for each sentence
        last_outputs = self.last_timestep(out_unpacked, lengths)

        # project to the classes using a linear layer
        logits = self.linear(last_outputs)

        return logits
Question 2 - Masked Cross Entropy Loss
Yes, by default the zero padded timesteps (targets) matter. However, it is very easy to mask them. You have two options, depending on the version of PyTorch that you use.
PyTorch 0.2.0: Now PyTorch supports masking directly in the CrossEntropyLoss, with the ignore_index argument. For example, in language modeling or seq2seq, where I add zero padding, I mask the zero padded words (targets) simply like this:
loss_function = nn.CrossEntropyLoss(ignore_index=0)
PyTorch 0.1.12 and older: In the older versions of PyTorch, masking was not supported, so you had to implement your own workaround. The solution that I used was masked_cross_entropy.py, by jihunchoi. You may also be interested in this discussion.
| https://stackoverflow.com/questions/46387661/ |
Do I need to define the backward() in custom loss function? | I have already defined my own loss function, and it does work. The forward pass may not have a problem, but I am not sure whether it is correct because I don't define the backward().
class _Loss(nn.Module):
    def __init__(self, size_average=True):
        super(_Loss, self).__init__()
        self.size_average = size_average

class MyLoss(_Loss):
    def forward(self, input, target):
        loss = 0
        weight = np.zeros((BATCH_SIZE,BATCH_SIZE))
        for a in range(BATCH_SIZE):
            for b in range(BATCH_SIZE):
                weight[a][b] = get_weight(target.data[a][0])
        for i in range(BATCH_SIZE):
            for j in range(BATCH_SIZE):
                a_ij = (input[i]-input[j]-target[i]+target[j])*weight[i,j]
                loss += F.relu(a_ij)
        return loss
The questions I want to ask are:
1) Do I need to define backward() for the loss function?
2) How do I define backward()?
3) Is there any way to index the data while doing SGD in torch?
| You can write a loss function like below.
def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()
You do not need to implement a backward function. All the above parameters of the loss function should be PyTorch Variables, and the rest is taken care of by torch.autograd.
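A quick usage sketch of such a loss (the shapes here are made up):

import torch
from torch.autograd import Variable

input = Variable(torch.randn(4, 3), requires_grad=True)
target = Variable(torch.randn(4, 3))

loss = mse_loss(input, target)
loss.backward()       # autograd differentiates through the arithmetic above
print(input.grad)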
| https://stackoverflow.com/questions/46399797/ |
Pytorch: define custom function | I wanted to write my own activation function, but I ran into a problem: it says the matrix multiplication will call .data. I searched but got little useful information. Any help will be appreciated. The error information is
Traceback (most recent call last):
File "defineAutogradFuncion.py", line 126, in <module>
test = gradcheck(argmin, input, eps=1e-6, atol=1e-4)
File "/home/zhaosl/.local/lib/python2.7/site-packages/torch/autograd/gradcheck.py", line 154, in gradcheck
output = func(*inputs)
File "defineAutogradFuncion.py", line 86, in forward
output = output.mm(dismap).squeeze(-1)
File "/home/zhaosl/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 578, in mm
output = Variable(self.data.new(self.data.size(0), matrix.data.size(1)))
File "/home/zhaosl/.local/lib/python2.7/site-packages/torch/tensor.py", line 374, in data
raise RuntimeError('cannot call .data on a torch.Tensor: did you intend to use autograd.Variable?')
RuntimeError: cannot call .data on a torch.Tensor: did you intend to use autograd.Variable?
class Softargmin(torch.autograd.Function):
"""
We can implement our own custom autograd Functions by subclassing
torch.autograd.Function and implementing the forward and backward passes
which operate on Tensors.
"""
@staticmethod
def forward(self, input):
"""
In the forward pass we receive a Tensor containing the input and return a
Tensor containing the output. You can cache arbitrary Tensors for use in the
backward pass using the save_for_backward method.
"""
#P = Fun.softmax(-input)
inputSqueeze = input.squeeze(-1)
P = Fun.softmax(-inputSqueeze)
self.save_for_backward(P)
output = P.permute(0,2,3,1)
dismap = torch.arange(0,output.size(-1)+1).unsqueeze(1)
output = output.mm(dismap).squeeze(-1)
return output
@staticmethod
def backward(self, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the input.
"""
P, = self.saved_tensors
P = P.unsqueeze(-1)
Pk = torch.squeeze(P,-1).permute(0,2,3,1)
k = torch.arange(0,Pk.size(-1)+1).unsqueeze(1)
sumkPk = Pk.mm(k)
sumkPk = sumkPk.unsqueeze(1).expand(P.size())
i = torch.arange(0,Pk.size(-1)+1).view(1,-1,1,1,1).expand(P.size())
grad_output_expand =grad_output.unsqueeze(-1).unsqueeze(1).expand(P.size())
grad_input = grad_output_expand*P*(sumkPk-i)
return grad_input
| The most basic element in PyTorch is a Tensor, which is the equivalent of numpy.ndarray with the only difference being that a Tensor can be put onto a GPU for any computation.
A Variable is a wrapper around Tensor that contains three attributes: data, grad and grad_fn. data contains the original Tensor; grad contains the derivative/gradient of some value with respect to this Variable; and grad_fn is a pointer to the Function object that created this Variable. The grad_fn attribute is actually the key for autograd to work properly since PyTorch uses those pointers to build the computation graph at each iteration and carry out the differentiations for all Variables in your graph accordingly. This is not only about differentiating correctly through this custom Function object you are creating.
Hence whenever you create some Tensor in your computation that requires differentiation, wrap it as a Variable. First, this would enable the Tensor to be able to save the resulting derivative/gradient value after you call backward(). Second, this helps autograd build a correct computation graph.
Another thing to notice is that whenever you send a Variable into your computation graph, any value that is computed using this Variable will automatically be a Variable. So you don't have to manually wrap all Tensors in your computation graph.
You might want to take a look at this.
Going back to your error, it's a little difficult to figure out what is really causing the trouble because you are not showing all of your code (information like how you are using this custom Function in your computation graph), but I suspect that what most likely happened is that you used this Function in a subgraph that needed to be differentiated through. When PyTorch ran its numerical gradient check on your model to see if the differentiation is correct, it assumed that every node in that subgraph was a Variable, because that is necessary for differentiation through that subgraph to happen. It then tried to call the data attribute of that Variable, most likely because that value is used somewhere in the differentiation, and failed because that node was in fact a Tensor and did not have a data attribute.
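For reference, a minimal sketch of a custom Function written against the old-style (non-static) autograd API that the question uses (newer PyTorch requires staticmethod forward/backward with a ctx argument), together with a gradcheck call on Variable inputs:

import torch
from torch.autograd import Function, Variable, gradcheck

class Square(Function):
    def forward(self, x):              # x is an unwrapped Tensor inside forward
        self.save_for_backward(x)
        return x * x

    def backward(self, grad_output):
        x, = self.saved_tensors
        return 2 * x * grad_output

inp = Variable(torch.randn(4, 4).double(), requires_grad=True)
print(gradcheck(lambda x: Square()(x), (inp,), eps=1e-6, atol=1e-4))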
| https://stackoverflow.com/questions/46509039/ |
Adding a linear layer to an existing model on Pytorch | I'm trying to add a new layer to an existing network (as the first layer) and train it on the original input. When I add a convolutional layer everything works perfectly but when I change it to linear it doesn't seem to train. Any ideas why?
Here is the whole network:
class ActorCritic(torch.nn.Module): #original model
def __init__(self, num_inputs, action_space):
super(ActorCritic, self).__init__()
self.conv1 = nn.Conv2d(num_inputs, 32, 3, stride=2, padding=1)
self.conv2 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.conv3 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.conv4 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
self.lstm = nn.LSTMCell(32 * 3 * 3, 256)
num_outputs = action_space.n
self.critic_linear = nn.Linear(256, 1)
self.actor_linear = nn.Linear(256, num_outputs)
def forward(self, inputs):
inputs, (hx, cx) = inputs
x = F.elu(self.conv1(inputs))
x = F.elu(self.conv2(x))
x = F.elu(self.conv3(x))
x = F.elu(self.conv4(x))
x = x.view(-1, 32 * 3 * 3)
hx, cx = self.lstm(x, (hx, cx))
x = hx
return self.critic_linear(x), self.actor_linear(x), (hx, cx)
class TLModel(torch.nn.Module): #new model
def __init__(self, pretrained_model, num_inputs):
super(TLModel, self).__init__()
self.new_layer = nn.Linear(1*1*42*42, 1*1*42*42)
self.pretrained_model = pretrained_model
def forward(self, inputs):
inputs, (hx, cx) = inputs
x = F.elu(self.new_layer(inputs.view(-1, 1*1*42*42)))
return self.pretrained_model((x.view(1,1,42,42), (hx, cx)))
I tried different activation functions (not just elu). it works with conv:
class TLModel(torch.nn.Module):
def __init__(self, pretrained_model, num_inputs):
super(TLModel, self).__init__()
self.new_layer = nn.Conv2d(num_inputs, num_inputs, 1)
self.pretrained_model = pretrained_model
def forward(self, inputs):
inputs, (hx, cx) = inputs
x = F.elu(self.new_layer(inputs))
return self.pretrained_model((x, (hx, cx)))
The number of inputs is 1 and the size of an input is 1x1x42x42
| It would be useful if you had supplied the error message. From what you have written, I can only guess that you have forgotten to squeeze your input. You write that your input is of size 1x1x42x42, i.e. it is 4-dimensional. nn.Conv2d expects a 4-dimensional input, whereas nn.Linear expects a 2-dimensional input.
Therefore, try to call input = input.squeeze() before feeding it to your model. This removes singleton dimensions, and hence will make your input 2-dimensional as there are two singleton dimensions.
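For example, a quick sketch of what squeeze does:
x = torch.randn(1, 1, 42, 42)
print(x.squeeze().size())  # torch.Size([42, 42]) -- both singleton dimensions are removed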
As a side note, nn.Linear expects input of dimension batch_size x feat_dim. Does a linear layer really make sense on your data?
As another side note, when people usually add layers to networks they put them in the end not the beginning, but I trust you have good reasons to do so and know what you're doing :)
Good Luck!
| https://stackoverflow.com/questions/46529028/ |
Pytorch maxpooling over channels dimension | I was trying to build a CNN with PyTorch and had difficulty with maxpooling. I have taken the cs231n course held by Stanford. As I recall, maxpooling can be used as a dimensionality reduction step; for example, I have this (1, 20, height, width) input to max_pool2d (assuming my batch_size is 1). And if I use a (1, 1) kernel, I want to get output like this: (1, 1, height, width), which means the kernel should slide over the channel dimension. However, after checking the PyTorch docs, it says the kernel slides over height and width. Thanks to @ImgPrcSng on the PyTorch forum who told me to use max_pool3d, and it turned out to work well. But there is still a reshape operation between the output of the conv2d layer and the input of the max_pool3d layer, so it is hard to wrap into an nn.Sequential. Is there another way to do this?
| Would something like this work?
from torch.nn import MaxPool1d
import torch.nn.functional as F
class ChannelPool(MaxPool1d):
def forward(self, input):
n, c, w, h = input.size()
input = input.view(n, c, w * h).permute(0, 2, 1)
pooled = F.max_pool1d(
input,
self.kernel_size,
self.stride,
self.padding,
self.dilation,
self.ceil_mode,
self.return_indices,
)
_, _, c = pooled.size()
pooled = pooled.permute(0, 2, 1)
return pooled.view(n, c, w, h)
Or, using einops
from torch.nn import MaxPool1d
import torch.nn.functional as F
from einops import rearrange
class ChannelPool(MaxPool1d):
def forward(self, input):
n, c, w, h = input.size()
pool = lambda x: F.max_pool1d(
x,
self.kernel_size,
self.stride,
self.padding,
self.dilation,
self.ceil_mode,
self.return_indices,
)
return rearrange(
pool(rearrange(input, "n c w h -> n (w h) c")),
"n (w h) c -> n c w h",
n=n,
w=w,
h=h,
)
| https://stackoverflow.com/questions/46562612/ |
Pytorch: Trying to apply the transform to a numpy array... fails with an error | Any help will be much appreciated. The code in transforms.py says that the transformation should/would apply to PIL images as well as ndarrays.
Given the transforms:
data_transforms = {
'train': transforms.Compose([
transforms.Scale(256),
transforms.Pad(4,0),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Scale(256),
transforms.Pad(4,0),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
I wish to apply the transform on ndarrays that I obtained from some other code. Let's say it is x_data, whose shape is (1000,120,160,3) where the dimensions are (total rows, width, height, channels)
doing the following fails (All I'm trying to do is apply a transformation) :
foo = data_transforms['train']
bar = foo(x_data[0])
with the following message:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-93-a703e3b9c76d> in <module>()
----> 1 foo(x_data[1])
~/anaconda3/envs/pytorch/lib/python3.5/site-packages/torchvision-0.1.9-py3.5.egg/torchvision/transforms.py in __call__(self, img)
32 def __call__(self, img):
33 for t in self.transforms:
---> 34 img = t(img)
35 return img
36
~/anaconda3/envs/pytorch/lib/python3.5/site-packages/torchvision-0.1.9-py3.5.egg/torchvision/transforms.py in __call__(self, img)
185 """
186 if isinstance(self.size, int):
--> 187 w, h = img.size
188 if (w <= h and w == self.size) or (h <= w and h == self.size):
189 return img
TypeError: 'int' object is not iterable
| Most transform methods take only PIL objects as input. But you can add another transform called transforms.ToPILImage(), which takes an nd-array as input and converts it to a PIL object. So in your case, the dictionary variable should become:
data_transforms = {
'train': transforms.Compose([
transforms.ToPILImage(),
transforms.Scale(256),
transforms.Pad(4,0),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Scale(256),
transforms.Pad(4,0),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
Note that these transformations work sequentially, so it is necessary to add the ToPILImage transform as the first transformation. Your nd-array is then first converted to a PIL object, and the other transformations are applied afterwards.
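With that change, applying the pipeline to one of your arrays would look like this (a sketch; it assumes x_data holds uint8 pixel values in H x W x C layout, which is what ToPILImage expects for ndarrays):
foo = data_transforms['train']
bar = foo(x_data[0].astype('uint8'))  # (120, 160, 3) ndarray -> PIL image -> scaled, padded, normalized tensor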
| https://stackoverflow.com/questions/46586616/ |
Loss is not decreasing for convolutional autoencoder | I'm trying to train a convolutional autoencoder to encode and decode a piano roll representation of monophonic midi clips. I reduced the note range to 3 octaves, divide songs into 100 time step pieces (where 1 time step = 1/100th of a second), and train the net in batches of 3 pieces.
I'm using Adagrad as my optimizer, and MSE as my loss function. The loss is huge, and I see no decrease in average loss even after hundreds of training examples are fed in.
Here's my code:
"""
Most absolutely simple assumptions:
- not changing the key of any of the files
- not changing the tempo of any of the files
- take blocks of 36 by 100
- divide up all songs by this amount, cutting off any excess from the
end, train
"""
from __future__ import print_function
import cPickle as pickle
import numpy as np
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from reverse_pianoroll import piano_roll_to_pretty_midi as pr2pm
N = 1000
# load a NxMxC dataset
# N: Number of clips
# M: Piano roll size, the number of midi notes that could possibly be 'on'
# C: Clip length, in 100ths of a second
dataset = pickle.load(open('mh-midi-data.pickle', 'rb'))
######## take a subset of the data for training ######
# based on the mean and standard deviation of non zero entries in the data, I've
# found that the most populous, and thus best range of notes to take is from
# 48 to 84 (C2 - C5); this is 3 octaves, which is much less than the original
# 10 and a half. Additionally, we're going to take a subsample of 1000 because
# i'm training on my macbook and the network is pretty simple
######################################################
dataset = dataset[:, :, 48:84, :]
dataset = dataset[:N]
######################################################
midi_dim, clip_len = dataset.shape[2:]
class Autoencoder(nn.Module):
def __init__(self, **kwargs):
super(Autoencoder, self).__init__(**kwargs)
# input is 3 x 1 x 36 x 100
self.conv1 = nn.Conv2d(in_channels=1, out_channels=14, kernel_size=(midi_dim, 2))
# now transformed to 3 x 14 x 1 x 99
self.conv2 = nn.Conv2d(in_channels=14, out_channels=77, kernel_size=(1, 4))
# now transformed to 3 x 77 x 1 x 96
input_size = 3*77*1*96
self.fc1 = nn.Linear(input_size, input_size/2)
self.fc2 = nn.Linear(input_size/2, input_size/4)
self.fc3 = nn.Linear(input_size/4, input_size/2)
self.fc4 = nn.Linear(input_size/2, input_size)
self.tconv2 = nn.ConvTranspose2d(in_channels=77, out_channels=14, kernel_size=(1, 4))
self.tconv1 = nn.ConvTranspose2d(in_channels=14, out_channels=1, kernel_size=(midi_dim, 2))
self.sigmoid = nn.Sigmoid()
return
def forward(self, x):
# print("1: {}".format(x.size()))
x = F.relu(self.conv1(x))
# print("2: {}".format(x.size()))
x = F.relu(self.conv2(x))
# print("3: {}".format(x.size()))
x = x.view(-1, np.prod(x.size()[:]))
# print("4: {}".format(x.size()))
x = F.relu(self.fc1(x))
# print("5: {}".format(x.size()))
h = F.relu(self.fc2(x))
# print("6: {}".format(h.size()))
d = F.relu(self.fc3(h))
# print("7: {}".format(d.size()))
d = F.relu(self.fc4(d))
# print("8: {}".format(d.size()))
d = d.view(3, 77, 1, 96)
# print("9: {}".format(d.size()))
d = F.relu(self.tconv2(d))
# print("10: {}".format(d.size()))
d = self.tconv1(d)
d = self.sigmoid(d)
# print("11: {}".format(d.size()))
return d
net = Autoencoder()
loss_fn = nn.MSELoss()
# optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
optimizer = optim.Adagrad(net.parameters(), lr=1e-3)
batch_count = 0
avg_loss = 0.0
print_every = 3
print("Beginning Training")
for epoch in xrange(2):
# for i, clip in enumerate(dataset):
for i in xrange(len(dataset)/3):
batch = dataset[(3*i):(3*i + 3), :, :]
# get the input, wrap it in a Variable
inpt = Variable(torch.from_numpy(batch).type(torch.FloatTensor))
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outpt = net(inpt)
loss = loss_fn(outpt, inpt)
loss.backward()
optimizer.step()
# print stats out
avg_loss += loss.data[0]
if batch_count % print_every == print_every - 1:
print('epoch: %d, batch_count: %d, loss: %.3f'%(
epoch + 1, batch_count + 1, avg_loss / print_every))
avg_loss = 0.0
batch_count += 1
print('Finished Training')
I'm really a beginner with this stuff, so any advice would be greatly appreciated.
| Double check that you normalize your inpt to be in the range of 0 to 1. For instance, if you are working with images you could just divide the inpt variable by 255.
| https://stackoverflow.com/questions/46587685/ |
How to apply the torch.inverse() function of PyTorch to every sample in the batch? | This may seem like a basic question, but I am unable to work it through.
In the forward pass of my neural network, I have an output tensor of shape 8x3x3, where 8 is my batch size. We can assume each 3x3 tensor to be a non-singular matrix. I need to find the inverse of these matrices.
The PyTorch inverse() function only works on square matrices. Since I now have 8x3x3, how do I apply this function to every matrix in the batch in a differentiable manner?
If I iterate through the samples and append the inverses to a python list, which I then convert to a PyTorch tensor, should it be a problem during backprop? (I am asking since converting PyTorch tensors to numpy to perform some operations and then back to a tensor won't compute gradients during backprop for such operations)
I also get the following error when I try to do something like that.
a = torch.arange(0,8).view(-1,2,2)
b = [m.inverse() for m in a]
c = torch.FloatTensor(b)
TypeError: 'torch.FloatTensor' object does not support indexing
| EDIT:
As of Pytorch version 1.0, torch.inverse now supports batches of tensors. See here. So you can simply use the built-in function torch.inverse
OLD ANSWER
There are plans to implement batched inverse soon. For discussion, see for example issue 7500 or issue 9102. However, as of the time of writing, the current stable version (0.4.1), no batch inverse operation is available.
Having said that, recently batch support for torch.gesv was added. This can be (ab)used to define your own batched inverse operation along the following lines:
def b_inv(b_mat):
eye = b_mat.new_ones(b_mat.size(-1)).diag().expand_as(b_mat)
b_inv, _ = torch.gesv(eye, b_mat)
return b_inv
I found that this gives good speed-ups over a for loop when running on GPU.
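A quick usage sketch for your 8x3x3 case (random matrices are almost surely invertible):
A = torch.randn(8, 3, 3)
A_inv = b_inv(A)
print(torch.bmm(A, A_inv))  # approximately a batch of 3x3 identity matrices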
| https://stackoverflow.com/questions/46595157/ |
Implementing a loss function (MSVE) in Reinforcement learning | I am trying to build a temporal difference learning agent for Othello. While the rest of my implementation seems to run as intended, I am wondering about the loss function used to train my network. In Sutton's book "Reinforcement learning: An Introduction", the Mean Squared Value Error (MSVE) is presented as the standard loss function. It is basically a Mean Square Error multiplied with the on-policy distribution. (Sum over all states s ( onPolicyDistribution(s) * [V(s) - V'(s,w)]² ) )
My question is now: How do I obtain this on policy distribution when my policy is an e-greedy function of a learned value function? Is it even necessary and what's the issue if I just use an MSELoss instead?
I'm implementing all of this in pytorch, so bonus points for an easy implementation there :)
| As you mentioned, in your case it sounds like you are doing Q-learning, so you do not need to do policy gradient as described in Sutton's book. That is needed when you are learning a policy. You are not learning a policy; you are learning a value function and using that to act.
| https://stackoverflow.com/questions/46685506/ |
How can I extract the feature vector of the last hidden layer of the Alex net in pytorch? | I've trained an AlexNet in PyTorch and I want to extract feature vectors from its layers.
What function should I use?
| In my opinion, you can define a function in the model class which takes input and output the features you like, for example:
class model(nn.Module):
def __init__(self):
# init codes
def forward(self, input):
# forward codes
def yourfunc(self, input):
# codes
#return feature1, feature2
This yourfunc only takes your input and outputs the features you need; it does no backward computation. You just need to call it wherever you need the features. I don't think there is a built-in function in PyTorch that does this, since it is easy to implement yourself.
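If what you have is torchvision's pretrained AlexNet rather than your own class, one way to get the feature vector of the last hidden layer is to run the forward pass manually and stop one layer early. A sketch, assuming torchvision's AlexNet layout:
import torch
import torchvision.models as models
alexnet = models.alexnet(pretrained=True)
alexnet.eval()
# everything in the classifier except the final classification layer
penultimate = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
def extract_features(img_batch):           # img_batch: (N, 3, 224, 224)
    x = alexnet.features(img_batch)
    x = x.view(x.size(0), 256 * 6 * 6)     # flatten the conv feature maps
    return penultimate(x)                  # (N, 4096) feature vectors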
| https://stackoverflow.com/questions/46727042/ |
How to implement LSTM layer with multiple cells in Pytorch? | I intend to implement an LSTM with 2 layers and 256 cells in each layer. I am trying to understand the PyTorch LSTM framework for the same. The variables in torch.nn.LSTM that I can edit are input_size, hidden_size, num_layers, bias, batch_first, dropout and bidirectional.
However, how do I have multiple cells in a single layer?
| These cells will be automatically unrolled based on your sequence size in the input. Please check out this code:
import torch
import torch.nn as nn
from torch.autograd import Variable
# setup assumed from the linked examples: one-hot encodings for the characters h, e, l, o
h, e, l, o = [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)
hidden = Variable(torch.zeros(1, 3, 2))  # (num_layers, batch, hidden_size)
# One cell RNN input_dim (4) -> output_dim (2). sequence: 5, batch 3
# 3 batches 'hello', 'eolll', 'lleel'
# rank = (3, 5, 4)
inputs = Variable(torch.Tensor([[h, e, l, l, o],
[e, o, l, l, l],
[l, l, e, e, l]]))
print("input size", inputs.size()) # input size torch.Size([3, 5, 4])
# Propagate input through RNN
# Input: (batch, seq_len, input_size) when batch_first=True
# B x S x I
out, hidden = cell(inputs, hidden)
print("out size", out.size()) # out size torch.Size([3, 5, 2])
You can find more examples at https://github.com/hunkim/PyTorchZeroToAll/.
| https://stackoverflow.com/questions/46752078/ |
Is it possible to train pytorch and tensorflow model together on one GPU? | I have a PyTorch model and a TensorFlow model, and I want to train them together on one GPU, following the process below: input --> pytorch model --> output_pytorch --> tensorflow model --> output_tensorflow --> pytorch model.
Is it possible to do this? If the answer is yes, are there any problems I will encounter?
Thanks in advance.
| I haven't done this, but it is possible; implementing it can be a little tricky, though.
You can consider each network as a function. You want to, in some sense, compose these functions to form your network. To do this you can compute the final function by just giving the result of one network to the other, and then use the chain rule to compute the derivatives (using the symbolic differentiation from both packages).
I think a good way to implement this might be to wrap the TF model as a PyTorch Function and use tf.gradients for computing the backward pass.
Doing the gradient updates can get hard (because some variables exist in TF's computation graph). You could turn the TF variables into PyTorch Variables, make them placeholders in the TF computation graph, feed them in via feed_dict and update them using PyTorch mechanisms, but I think that would be really hard to do. Alternatively, if you do your updates inside the backward method of the Function, you might be able to get it to work (it is really ugly, but it might do the job).
| https://stackoverflow.com/questions/46782508/ |
L1 norm as regularizer in Pytorch | I need to add an L1 norm as a regularizer to create a sparsity condition in my neural network. I would like to train my network for classification. I tried to construct an L1 norm by myself, like here, but it didn't work.
I need to add the regularizer after ConvTranspose2d, something like this Keras example:
model.add(Dense(64, input_dim=64,
kernel_regularizer=regularizers.l2(0.01),
activity_regularizer=regularizers.l1(0.01)))
But my network was created in PyTorch like so:
upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
kernel_size=4, stride=2,
padding=1, bias=use_bias)
down = [downrelu, downconv]
up = [uprelu, upconv, upnorm]
model = down + up
| You're overthinking this. As I see from your Keras code, you're trying to impose a L1 penalty on the activations of your layer. The simplest way would just be to do something like the following:
activations_to_regularise = upconv(input)
output = remaining_network(activations_to_regularise)
Then have your normal loss function to assess the output against a target and also incorporate the L1 loss into the objective, such that you get
total_loss = criterion(output, target) + 0.01 * activations_to_regularise.abs().sum()
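If you also want the Keras-style kernel_regularizer from your example (a penalty on the layer's weights rather than its activations), you can add the parameter norm to the loss in the same way; a minimal sketch:
l1_weight_penalty = sum(p.abs().sum() for p in upconv.parameters())
total_loss = total_loss + 0.01 * l1_weight_penalty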
| https://stackoverflow.com/questions/46797955/ |
Efficiently find centroid of labelled image regions | I have a segmented image as a 2 dimensional matrix of unique labels 1 ... k. For example:
img =
[1 1 2 2 2 2 2 3 3]
[1 1 1 2 2 2 2 3 3]
[1 1 2 2 2 2 3 3 3]
[1 4 4 4 2 2 2 2 3]
[4 4 4 5 5 5 2 3 3]
[4 4 4 5 5 6 6 6 6]
[4 4 5 5 5 5 6 6 6]
I am trying to determine the region centroids. That is, per label, what is the X,Y coordinate of the center of mass? For example, the centroid of label 1 is (1.25, 0.625). Just average up the row numbers ((0 + 0 + 1 + 1 + 1 + 2 + 2 + 3) / 8 = 1.25) and the column numbers ((0 + 0 + 0 + 0 + 1 + 1 + 1 + 2) / 8 = 0.625)
The only way I know how to do this is to use a for loop from 1 to k (or in the case of my example, 1 through 6), find the indices of the points for each label, and average their coordinates by indexing a meshgrid of the image.
However, I am looking to do this in a way optimized for GPU computations. Hence, the use of a for loop is less than ideal (takes about 1 sec per image on a nice GPU for a few hundred labels). I am using PyTorch, but really any numpy solution should suffice.
Is there a GPU-efficient solution for this task?
| One idea would be to use bincount to accumulate the row and column indices for each region using the numbers in the input array as the bins and thus have a vectorized solution, like so -
m,n = a.shape
r,c = np.mgrid[:m,:n]
count = np.bincount(a.ravel())
centroid_row = np.bincount(a.ravel(),r.ravel())/count
centroid_col = np.bincount(a.ravel(),c.ravel())/count
Sample run -
In [77]: a
Out[77]:
array([[1, 1, 2, 2, 2, 2, 2, 3, 3],
[1, 1, 1, 2, 2, 2, 2, 3, 3],
[1, 1, 2, 2, 2, 2, 3, 3, 3],
[1, 4, 4, 4, 2, 2, 2, 2, 3],
[4, 4, 4, 5, 5, 5, 2, 3, 3],
[4, 4, 4, 5, 5, 6, 6, 6, 6],
[4, 4, 5, 5, 5, 5, 6, 6, 6]])
In [78]: np.c_[centroid_row, centroid_col]
Out[78]:
array([[ nan, nan],
[ 1.25, 0.62], # centroid for region-1
[ 1.56, 4.44], # centroid for region-2
[ 1.9 , 7.4 ], # centroid for region-3 and so on.
[ 4.36, 1.18],
[ 5.11, 3.67],
[ 5.43, 6.71]])
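If you want to stay on the GPU end to end, newer PyTorch versions expose torch.bincount with a weights argument, so the same idea carries over directly. A sketch, assuming a is a 2D LongTensor of labels (possibly on the GPU):
import torch
def region_centroids(a):
    m, n = a.size()
    r = torch.arange(m, dtype=torch.float, device=a.device).view(-1, 1).expand(m, n)
    c = torch.arange(n, dtype=torch.float, device=a.device).view(1, -1).expand(m, n)
    flat = a.reshape(-1)
    count = torch.bincount(flat).float()
    centroid_row = torch.bincount(flat, weights=r.reshape(-1)) / count
    centroid_col = torch.bincount(flat, weights=c.reshape(-1)) / count
    return torch.stack([centroid_row, centroid_col], dim=1)  # one (row, col) centroid per label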
| https://stackoverflow.com/questions/46840707/ |
Why grad_output requires_grad is False in pytorch? | Here is the customLayer.py.
I am quite confused about the following things:
The input of the inner layer is not a Variable. Then in backward it becomes a Variable and requires gradient. Why?
grad_output is a Variable yet requires_grad is False. Why is it not True?
In my custom layer, I need customize forward and backward operations. It is quite complicated. See the same link. I have posted questions in it.
|
The gradients are updated through your loss computation and are required for backpropagation. If you don't have gradients, you can't train your network.
Probably because you don't want the gradients to persist on the variable; they are only needed temporarily, for one backward phase.
Why do you need a custom backward function? Do you need extra operations on your backpropagation?
| https://stackoverflow.com/questions/46870086/ |
How to do reinforcement learning with an LSTM in PyTorch? | Due to observations not revealing the entire state, I need to do reinforcement learning with a recurrent neural network so that the network has some sort of memory of what has happened in the past. For simplicity let's assume that we use an LSTM.
Now the built-in PyTorch LSTM requires you to feed it an input of shape Time x MiniBatch x Input D and it outputs a tensor of shape Time x MiniBatch x Output D.
In reinforcement learning however, to know the input at time t+1, I need to know the output at time t, because I am doing actions in an environment.
So is it possible to use the in-built PyTorch LSTM to do BPTT in a reinforcement learning setting? And if it is, how could I do it?
| Maybe you can feed your input sequence in a loop to your LSTM. Something, like this:
# hidden and cell state, each of shape (num_layers, batch_size, hidden_size)
h, c = Variable(torch.zeros(1, batch_size, hidden_size)), Variable(torch.zeros(1, batch_size, hidden_size))
for i in range(T):
    input = Variable(...)  # one timestep of shape (1, batch_size, input_size)
    _, (h, c) = lstm(input, (h, c))
Every timestep you can use (h,c) and input to evaluate action for instance. As long as you do not break computational graph you can backpropagate as Variables keep all the history.
| https://stackoverflow.com/questions/46914292/ |
Pytorch RNN memory allocation error in DataLoader | I am writing an RNN in Pytorch.
I have the following line of code:
data_loader = torch.utils.data.DataLoader(
data,
batch_size=args.batch_size,
shuffle=True,
num_workers=args.num_workers,
drop_last=True)
If I set num_workers to 0, I get a segmentation fault.
If I set num_workers to > 0, then I have the traceback:
Traceback (most recent call last):
File "rnn_model.py", line 352, in <module>
train_model(train_data, dev_data, test_data, model, args)
File "rnn_model.py", line 212, in train_model
loss = run_epoch(train_data, True, model, optimizer, args)
File "rnn_model.py", line 301, in run_epoch
for batch in tqdm.tqdm(data_loader):
File "/home/username/miniconda3/lib/python2.7/site-packages/tqdm/_tqdm.py",
line 872, in __iter__
for obj in iterable:
File "/home/username/miniconda3/lib/python2.7/site-
packages/torch/utils/data/dataloader.py", line 303, in __iter__
return DataLoaderIter(self)
File "/home/username/miniconda3/lib/python2.7/site-
packages/torch/utils/data/dataloader.py", line 162, in __init__
w.start()
File "/home/username/miniconda3/lib/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/home/username/miniconda3/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
| You are trying to load more data than your system can hold in its RAM.
You can either try to load only parts of your data or use/write a data loader which only loads the data that is needed for the current batch.
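A minimal sketch of such a lazy dataset (the per-sample file layout and loading call are placeholders for whatever format your data is stored in):
import torch
from torch.utils.data import Dataset, DataLoader
class LazyDataset(Dataset):
    def __init__(self, sample_paths):
        self.sample_paths = sample_paths  # keep only file paths in memory, not the data itself
    def __len__(self):
        return len(self.sample_paths)
    def __getitem__(self, idx):
        # load a single sample from disk only when it is requested
        return torch.load(self.sample_paths[idx])
loader = DataLoader(LazyDataset(sample_paths), batch_size=32, shuffle=True, num_workers=2)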
| https://stackoverflow.com/questions/46993965/ |
Teacher forcing with pytorch RNN | The pytorch tutorials do a great job of illustrating a bare-bones RNN by defining the input and hidden layers, and manually feeding the hidden layers back into the network to remember the state. This flexibility then allows you to very easily perform teacher forcing.
Question 1: How do you perform teacher forcing when using the native nn.RNN() module (since the entire sequence is fed at once)? Example simple RNN network would be:
class SimpleRNN(nn.Module):
def __init__(self, vocab_size,
embedding_dim,
batch_sz,
hidden_size=128,
nlayers=1,
num_directions=1,
dropout=0.1):
super(SimpleRNN, self).__init__()
self.batch_sz = batch_sz
self.hidden_size = hidden_size
self.encoder = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.RNN(embedding_dim, hidden_size, nlayers, dropout=0.5)
self.decoder = nn.Linear(hidden_size, vocab_size)
def init_hidden(self):
return autograd.Variable(torch.zeros(nlayers, batch_sz, hidden_size)).cuda()
def forward(self, inputs, hidden):
# -- encoder returns:
# -- [batch_sz, seq_len, embed_dim]
encoded = self.encoder(inputs)
_, seq_len, _ = encoded.size()
# -- rnn returns:
# -- output.size() = [seq_len, batch_sz, hidden_sz]
# -- hidden.size() = [nlayers, batch_sz, hidden_sz]
output, hidden = self.rnn(encoded.view(seq_len, batch_sz, embedding_dim), hidden)
# -- decoder returns:
# -- output.size() = [batch_sz, seq_len, vocab_size]
output = F.log_softmax(decoder(output.view(batch_sz, seq_len, self.hidden_size)))
return output, hidden
Where I can call the network with:
model = SimpleRNN(vocab_size, embedding_dim, batch_sz).cuda()
x_data, y_data = get_sequence_data(train_batches[0])
output, hidden = model(x_data, model.init_hidden())
Just for completeness, here are my shapes of x_data, output, and hidden:
print(x_data.size(), output.size(), hidden.size())
torch.Size([32, 80]) torch.Size([32, 80, 4773]) torch.Size([1, 32, 128])
Question 2: would it be possible to use this SimpleRNN network to then generate a sequence word-by-word, by first feeding it a <GO_TOKEN> and iterating until an <END_TOKEN> is reached? I ask because when I run this:
x_data = autograd.Variable(torch.LongTensor([[word2idx['<GO>']]]), volatile=True).cuda()
output, hidden = model(x_data, model.init_hidden(1))
print(output, output.sum())
I get an output of all 0s, and the output.sum() = 0. I get this even after training the network and backpropagating the loss. Any ideas why?
Question 3: If not terribly inefficient, is it possible to train the SimpleRNN network above word-by-word, analogous to the pytorch tutorial shown (here)[http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html] (albeit there they're training character-by-character).
| Question 1.
Teacher forcing is performed implicitly in this case: since x_data contains the whole ground-truth sequence, the RNN consumes the ground-truth item at each position as its input rather than using its own previous output as the next input.
Question 2.
Your model.init_hidden does not take any input; however, it looks like you're trying to pass the batch size in, so maybe you could check that. Everything else seems fine. Note that you will need to do a max() or multinomial() on the output before you can feed it back through.
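i.e. something along these lines (a sketch, assuming output has shape [batch, seq_len, vocab_size] as in your model):
_, next_token = torch.max(output[:, -1, :], 1)  # greedy choice at the last timestep, shape (batch,)
x_data = next_token.unsqueeze(1)                # shape (batch, 1), ready to feed back in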
Question 3.
Yes you can do this, yes it is terribly inefficient. This is a limitation of the CUDNN LSTM kernel
| https://stackoverflow.com/questions/47077831/ |
Running out of memory during evaluation in Pytorch | I'm training a model in pytorch. Every 10 epochs, I'm evaluating the train and test error on the entire train and test dataset. For some reason the evaluation function is causing out-of-memory on my GPU. This is strange because I have the same batchsize for training and evaluation. I believe it's due to the net.forward() method being called repeatedly and having all the hidden values stored in memory, but I'm not sure how to get around this?
def evaluate(self, data):
correct = 0
total = 0
loader = self.train_loader if data == "train" else self.test_loader
for step, (story, question, answer) in enumerate(loader):
story = Variable(story)
question = Variable(question)
answer = Variable(answer)
_, answer = torch.max(answer, 1)
if self.config.cuda:
story = story.cuda()
question = question.cuda()
answer = answer.cuda()
pred_prob = self.mem_n2n(story, question)[0]
_, output_max_index = torch.max(pred_prob, 1)
toadd = (answer == output_max_index).float().sum().data[0]
correct = correct + toadd
total = total + captions.size(0)
acc = correct / total
return acc
| I think it fails during validation because you don't use optimizer.zero_grad(). The zero_grad executes detach, making the tensor a leaf. It is commonly used every epoch in the training part.
The use of the volatile flag in Variable has been removed as of PyTorch 0.4.0.
Ref - migration_guide_to_0.4.0
Starting from 0.4.0, to avoid the gradient being computed during validation, use torch.no_grad()
Code example from the migration guide.
# evaluate
with torch.no_grad(): # operations inside don't track history
for input, target in test_loader:
...
For 0.3.X, using volatile should work.
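For example, a sketch of the pre-0.4 idiom applied to your loop:
story = Variable(story, volatile=True)       # no graph is built for anything computed from these
question = Variable(question, volatile=True)
answer = Variable(answer, volatile=True)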
| https://stackoverflow.com/questions/47086338/ |
How to train a simple linear regression model with SGD in pytorch successfully? | I was trying to train a simple polynomial linear regression model in pytorch with SGD. I wrote some self contained (what I thought would be extremely simple code), however, for some reason my model does not train as I thought it should.
I have 5 points sampled from a sine curve and try to fit it with a polynomial of degree 4. This is a convex problem, so GD or SGD should eventually find a solution with zero train error, as long as we have enough iterations and a small enough step size. For some reason, however, my model does not train well (even though it seems that it is changing the parameters of the model). Anyone have an idea why? Here is the code (I tried making it self-contained and minimal):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
import torch
from torch.autograd import Variable
from maps import NamedDict
from plotting_utils import *
def index_batch(X,batch_indices,dtype):
'''
returns the batch indexed/sliced batch
'''
if len(X.shape) == 1: # i.e. dimension (M,) just a vector
batch_xs = torch.FloatTensor(X[batch_indices]).type(dtype)
else:
batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)
return batch_xs
def get_batch2(X,Y,M,dtype):
'''
get batch for pytorch model
'''
# TODO fix and make it nicer, there is pytorch forum question
X,Y = X.data.numpy(), Y.data.numpy()
N = len(Y)
valid_indices = np.array( range(N) )
batch_indices = np.random.choice(valid_indices,size=M,replace=False)
batch_xs = index_batch(X,batch_indices,dtype)
batch_ys = index_batch(Y,batch_indices,dtype)
return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
def get_sequential_lifted_mdl(nb_monomials,D_out, bias=False):
return torch.nn.Sequential(torch.nn.Linear(nb_monomials,D_out,bias=bias))
def train_SGD(mdl, M,eta,nb_iter,logging_freq ,dtype, X_train,Y_train):
##
N_train,_ = tuple( X_train.size() )
#print(N_train)
for i in range(nb_iter):
# Forward pass: compute predicted Y using operations on Variables
batch_xs, batch_ys = get_batch2(X_train,Y_train,M,dtype) # [M, D], [M, 1]
## FORWARD PASS
y_pred = mdl.forward(batch_xs)
## LOSS + Regularization
batch_loss = (1/M)*(y_pred - batch_ys).pow(2).sum()
## BACKARD PASS
batch_loss.backward() # Use autograd to compute the backward pass. Now w will have gradients
## SGD update
for W in mdl.parameters():
delta = eta*W.grad.data
W.data.copy_(W.data - delta)
## train stats
if i % (nb_iter/10) == 0 or i == 0:
current_train_loss = (1/N_train)*(mdl.forward(X_train) - Y_train).pow(2).sum().data.numpy()
print('i = {}, current_loss = {}'.format(i, current_train_loss ) )
## Manually zero the gradients after updating weights
mdl.zero_grad()
##
logging_freq = 100
dtype = torch.FloatTensor
## SGD params
M = 3
eta = 0.0002
nb_iter = 20*1000
##
lb,ub = 0,1
f_target = lambda x: np.sin(2*np.pi*x)
N_train = 5
X_train = np.linspace(lb,ub,N_train)
Y_train = f_target(X_train)
## degree of mdl
Degree_mdl = 4
## pseudo-inverse solution
c_pinv = np.polyfit( X_train, Y_train , Degree_mdl )[::-1]
## linear mdl to train with SGD
nb_terms = c_pinv.shape[0]
mdl_sgd = get_sequential_lifted_mdl(nb_monomials=nb_terms,D_out=1, bias=False)
## Make polynomial Kernel
poly_feat = PolynomialFeatures(degree=Degree_mdl)
Kern_train = poly_feat.fit_transform(X_train.reshape(N_train,1))
Kern_train_pt, Y_train_pt = Variable(torch.FloatTensor(Kern_train).type(dtype), requires_grad=False), Variable(torch.FloatTensor(Y_train).type(dtype), requires_grad=False)
train_SGD(mdl_sgd, M,eta,nb_iter,logging_freq ,dtype, Kern_train_pt,Y_train_pt)
the error seems to hover on 2ish:
i = 0, current_loss = [ 2.08996224]
i = 2000, current_loss = [ 2.03536892]
i = 4000, current_loss = [ 2.02014995]
i = 6000, current_loss = [ 2.01307297]
i = 8000, current_loss = [ 2.01300406]
i = 10000, current_loss = [ 2.01125693]
i = 12000, current_loss = [ 2.01162267]
i = 14000, current_loss = [ 2.01296973]
i = 16000, current_loss = [ 2.00951076]
i = 18000, current_loss = [ 2.00967121]
which is weird cuz it should be able to reach zero.
I also plotted the learned function:
the code for the plotting:
##
x_horizontal = np.linspace(lb,ub,1000).reshape(1000,1)
X_plot = poly_feat.fit_transform(x_horizontal)
X_plot_pytorch = Variable( torch.FloatTensor(X_plot), requires_grad=False)
##
fig1 = plt.figure()
#plots objs
p_sgd, = plt.plot(x_horizontal, [ float(f_val) for f_val in mdl_sgd.forward(X_plot_pytorch).data.numpy() ])
p_pinv, = plt.plot(x_horizontal, np.dot(X_plot,c_pinv))
p_data, = plt.plot(X_train,Y_train,'ro')
## legend
nb_terms = c_pinv.shape[0]
legend_mdl = f'SGD solution standard parametrization, number of monomials={nb_terms}, batch-size={M}, iterations={nb_iter}, step size={eta}'
plt.legend(
[p_sgd,p_pinv,p_data],
[legend_mdl,f'linear algebra soln, number of monomials={nb_terms}',f'data points = {N_train}']
)
##
plt.xlabel('x'), plt.ylabel('f(x)')
plt.show()
I actually went ahead and implemented a TensorFlow version. That one does seem to train the model. I tried having both of them match by giving them the same initialization:
mdl_sgd[0].weight.data.fill_(0)
but that still didn't work. Tensorflow code:
graph = tf.Graph()
with graph.as_default():
X = tf.placeholder(tf.float32, [None, nb_terms])
Y = tf.placeholder(tf.float32, [None,1])
w = tf.Variable( tf.zeros([nb_terms,1]) )
#w = tf.Variable( tf.truncated_normal([Degree_mdl,1],mean=0.0,stddev=1.0) )
#w = tf.Variable( 1000*tf.ones([Degree_mdl,1]) )
##
f = tf.matmul(X,w) # [N,1] = [N,D] x [D,1]
#loss = tf.reduce_sum(tf.square(Y - f))
loss = tf.reduce_sum( tf.reduce_mean(tf.square(Y-f), 0))
l2loss_tf = (1/N_train)*2*tf.nn.l2_loss(Y-f)
##
learning_rate = eta
#global_step = tf.Variable(0, trainable=False)
#learning_rate = tf.train.exponential_decay(learning_rate=eta, global_step=global_step,decay_steps=nb_iter/2, decay_rate=1, staircase=True)
train_step = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
with tf.Session(graph=graph) as sess:
Y_train = Y_train.reshape(N_train,1)
tf.global_variables_initializer().run()
# Train
for i in range(nb_iter):
#if i % (nb_iter/10) == 0:
if i % (nb_iter/10) == 0 or i == 0:
current_loss = sess.run(fetches=loss, feed_dict={X: Kern_train, Y: Y_train})
print(f'i = {i}, current_loss = {current_loss}')
## train
batch_xs, batch_ys = get_batch(Kern_train,Y_train,M)
sess.run(train_step, feed_dict={X: batch_xs, Y: batch_ys})
I also tried changing the initialization but it didn't change anything, which makes sense cuz it shouldn't make a big difference:
mdl_sgd[0].weight.data.normal_(mean=0,std=0.001)
Original post:
https://discuss.pytorch.org/t/how-to-train-a-simple-linear-regression-model-with-sgd-in-pytorch-successfully/9620
This is how it should look like:
SOLUTION:
it seems that there is an issue with the result being returned as a vector instead of a number causing the issue. i.e. the following code fixed things:
y_pred = model(batch_xs).view(-1) # change this to "y_pred = model(batch_xs)" to get the incorrect results
loss = (y_pred - batch_ys).pow(2).mean()
which seems completely mysterious to me. Does someone know why this fixed the issue? it just seems like magic.
| The bug is really subtle, but essentially it's because PyTorch uses numpy broadcasting rules. When you subtract a column vector of shape (3,1) and an array of shape (3,), broadcasting produces a (3,3) matrix (note this wouldn't happen if you subtracted a row vector of shape (1,3) from a (3,) array; arrays are treated as row vectors). This is really bad because it means we compute the matrix of all pairwise differences between every label and every prediction. Of course this is nonsensical and produces a bug, because we don't want the prediction at one data point to be compared against the label of every other point in the data set.
So it seems the answer is just to avoid wrong numpy broadcasting by either reshaping things during training or before the data is fed. Either one should work.
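You can see the unwanted broadcast in isolation with a tiny snippet:
import torch
y_pred = torch.zeros(3, 1)            # column vector, shape (3, 1)
batch_ys = torch.zeros(3)             # plain vector, shape (3,)
print((y_pred - batch_ys).size())     # torch.Size([3, 3]) -- all pairwise differences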
To avoid the error, one can use this code:
def check_vectors_have_same_dimensions(Y,Y_):
'''
Checks that vector Y and Y_ have the same dimensions. If they don't
then there might be an error that could be caused due to wrong broadcasting.
'''
DY = tuple( Y.size() )
DY_ = tuple( Y_.size() )
if len(DY) != len(DY_):
return True
for i in range(len(DY)):
if DY[i] != DY_[i]:
return True
return False
| https://stackoverflow.com/questions/47165079/ |
How to change the picture size in PyTorch | I'm trying to convert a CNN Keras model for Emotion Recognition using the FER2013 dataset to a PyTorch model, and I have the following error:
Traceback (most recent call last):
File "VGG.py", line 112, in <module>
transfer.keras_to_pytorch(keras_network, pytorch_network)
File "/home/eorg/NeuralNetworks/user/Project/model/nntransfer.py", line 121, in keras_to_pytorch
pytorch_model.load_state_dict(state_dict)
File "/home/eorg/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 334, in load_state_dict
own_state[name].copy_(param)
RuntimeError: inconsistent tensor size at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorCopy.c:51
I understood that the error is related to the shape of images. In Keras the input size is defined to be 48 by 48.
And my question is: how do I define in PyTorch models that my pictures are of shape 48x48? I couldn't find such a function in the documentation or examples.
Any help would be useful!
| In order to automatically resize your input images you need to define a preprocessing pipeline all your images go through. This can be done with torchvision.transforms.Compose() (Compose docs). To resize Images you can use torchvision.transforms.Scale() (Scale docs) from the torchvision package.
See the documentation. Note that it says .Scale() is deprecated and .Resize() should be used instead (Resize docs).
This would be a minimal working example:
import torch
from torchvision import transforms
p = transforms.Compose([transforms.Scale((48,48))])
from PIL import Image
img = Image.open('img.jpg')
img.size
# (224, 224) <-- This will be the original dimensions of your image
p(img).size
# (48, 48) <-- This will be the rescaled/resized dimensions of your image
| https://stackoverflow.com/questions/47181853/ |
Pytorch: how to convert data into tensor | I am a beginner with PyTorch.
I was trying to write CNN code referring to the PyTorch tutorial.
Below is a part of the code, but it shows the error "RuntimeError: Variable data has to be a tensor, but got list". I tried to cast the input data to a tensor but it didn't work well. If anybody knows the solution, please help me out...
def read_labels(file):
dic = {}
with open(file) as f:
reader = f
for row in reader:
dic[row.split(",")[0]] = row.split(",")[1].rstrip() #rstrip(): eliminate "\n"
return dic
image_names= os.listdir("./train_mini")
label_dic = read_labels("labels.csv")
names =[]
labels = []
images =[]
for name in image_names[1:]:
images.append(cv2.imread("./train_mini/"+name))
labels.append(label_dic[os.path.splitext(name)[0]])
"""
Data distribution
"""
N = len(images)
N_train = int(N * 0.7)
N_test = int(N*0.2)
X_train, X_tmp, Y_train, Y_tmp = train_test_split(images, labels, train_size=N_train)
X_validation, X_test, Y_validation, Y_test = train_test_split(X_tmp, Y_tmp, test_size=N_test)
"""
Model Definition
"""
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.head = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=10,
kernel_size=5, stride=1),
nn.MaxPool2d(kernel_size=2),
nn.ReLU(),
nn.Conv2d(10, 20, kernel_size=5),
nn.MaxPool2d(kernel_size=2),
nn.ReLU())
self.tail = nn.Sequential(
nn.Linear(320, 50),
nn.ReLU(),
nn.Linear(50, 10))
def forward(self, x):
x = self.head(x)
x = x.view(-1, 320)
x = self.tail(x)
return F.log_softmax(x)
CNN = CNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(CNN.parameters(), lr=0.001, momentum=0.9)
"""
Training
"""
batch_size = 50
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i in range(N / batch_size):
#for i, data in enumerate(trainloader, 0):
batch = batch_size * i
# get the inputs
images_batch = X_train[batch:batch + batch_size]
labels_batch = Y_train[batch:batch + batch_size]
# wrap them in Variable
images_batch, labels_batch = Variable(images_batch), Variable(labels_batch)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = CNN(images_batch)
loss = criterion(outputs, labels_batch)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
And error is happening here
# wrap them in Variable
images_batch, labels_batch = Variable(images_batch), Variable(labels_batch)
| If my guess is correct, you are probably getting the error in the following line.
# wrap them in Variable
images_batch, labels_batch = Variable(images_batch), Variable(labels_batch)
It means images_batch and/or labels_batch are lists. You can simply convert them to a numpy array and then convert to a tensor as follows.
# wrap them in Variable
images_batch = torch.from_numpy(numpy.array(images_batch))
labels_batch = torch.from_numpy(numpy.array(labels_batch))
It should solve your problem.
Edit: If you get the following error while running the above snippet of code:
"RuntimeError: can't convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8."
You can create the numpy array by giving a data type. For example,
images_batch = torch.from_numpy(numpy.array(images_batch, dtype='int32'))
I am assuming images_batch contains pixel information of images, so I used int32. For more information, see official documentation.
| https://stackoverflow.com/questions/47272971/ |
pytorch, AttributeError: module 'torch' has no attribute 'Tensor' | I'm working with Python 3.5.1 on a computer having CentOS Linux 7.3.1611 (Core) operating system.
I'm trying to use PyTorch and I'm getting started with this tutorial.
Unfortunately, the #4 line of the example creates troubles:
>>> torch.Tensor(5, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'Tensor'
I cannot understand this error... of course in Torch the 'torch' does have an attribute 'Tensor'. The same command works in Torch.
How can I solve this problem?
| The Python binary that you are running does not have torch installed. It does have a directory named torch on the module search path, and it is treated as a namespace package:
$ pwd
/some/path
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
$ mkdir torch
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' (namespace)>
_NamespacePath(['/some/path/torch'])
Any directory without a __init__.py file present in it, located on your module search path, will be treated as a namespace, provided no other Python modules or packages by that name are found anywhere else along the search path.
This means that if torch was installed for your Python binary, it doesn't matter if there is a local torch directory:
$ ls -ld torch/
drwxr-xr-x 2 mjpieters users 68 Nov 23 13:57 torch/
$ mkdir -p additional_path/torch/
$ touch additional_path/torch/__init__.py
$ PYTHONPATH="./additional_path" python3 -c 'import os.path as p, sys; print(*(t for t in (p.join(e, "torch") for e in sys.path) if p.exists(t)), sep="\n")'
torch
/some/path/additional_path/torch
$ PYTHONPATH="./additional_path" python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' from '/some/path/additional_path/torch/__init__.py'>
['/some/path/additional_path/torch']
The above shows that sys.path lists the torch directory first, followed by additional_path/torch, but the latter is loaded as the torch module when you try to import it. That's because Python gives priority to top-level modules and packages before loading a namespace package.
You need to install torch correctly for your current Python binary, see the project homepage; when using pip you may want to use the Python binary with the -m switch instead:
python3.5 -m pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
python3.5 -m pip install torchvision
So replace the pip3 the homepage instructions use with python3.5 -m pip; python3.5 can also be the full path to your Python binary.
Do use the correct download.pytorch.org URL for the latest version.
You don't have to move the directory aside, but if you do want to and don't know where it is located, use print(torch.__path__) as I've shown above.
Again, note that if you do have an __init__.py file in a local torch directory, it becomes a regular package and it'll mask packages installed by pip into the normal site-packages location. If you have such a package, or a local torch.py single-file module, you need to rename those. The diagnostic information looks different in that case:
$ pwd
/some/path
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
$ mkdir torch
$ touch torch/__init__.py # make it a package
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' from '/some/path/torch/__init__.py'>
['/some/path/torch']
$ rm -rf torch/
$ touch torch.py # make it a module
$ python3 -c 'import torch; print(torch); print(torch.__file__)'
<module 'torch' from '/some/path/torch.py'>
/some/path/torch.py
Note the differences; a namespace package, above, uses <module 'name' (namespace)>, while a regular package uses <module 'name' from '/path/to/name/__init__.py'>, and a plain module uses <module 'name' from '/path/to/name.py'>.
Such packages and modules (not namespace packages) are found first and stop the search. If the found package or module is not the one you wanted, you need to move them aside or rename them.
| https://stackoverflow.com/questions/47317141/ |
From 2D to 3D using convolutional autoencoder | I'd like to reconstruct a 3D object from 2D images.
For that, I try to use convolutional auto encoder. However, in which layer should I lift the dimensionality?
I wrote a code below, however, it shows an error:
“RuntimeError: invalid argument 2: size ‘[1 x 1156 x 1156]’ is invalid for input of with 2312 elements at pytorch-src/torch/lib/TH/THStorage.c:41”
class dim_lifting(nn.Module):
def __init__(self):
super(dim_lifting, self).__init__()
self.encode = nn.Sequential(
nn.Conv2d(1, 34, kernel_size=5, padding=2),
nn.MaxPool2d(2),
nn.Conv2d(34, 16, kernel_size=5, padding=2),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=5, padding=2),
nn.MaxPool2d(2),
nn.LeakyReLU()
)
self.fc1 = nn.Linear(2312, 2312)
self.decode = nn.Sequential(
nn.ConvTranspose3d(1, 16, kernel_size=5, padding=2),
nn.LeakyReLU(),
nn.ConvTranspose3d(16, 32, kernel_size=5, padding=2),
nn.LeakyReLU(),
nn.MaxPool2d(2))
def forward(self, x):
out = self.encode(x)
out = out.view(out.size(0), -1)
out = self.fc1(out)
out = out.view(1, 1156, 1156)
out = self.decode(out)
return out
Error happens here
out = out.view(1, 1156, 1156)
| I cannot test my suggestion because your example is not complete.
I think your line should look like:
out = out.view(x.size(0), -1)
this way you're flattening out your input.
| https://stackoverflow.com/questions/47373421/ |
How does PyTorch do row normalization for each matrix in a 3D Tensor (Variable)? | If I have a 3D Tensor (Variable) with size [a,b,c],
consider it as a stack of a matrices of size b*c each; I would like all of these a matrices to be row-normalized.
| You can use the normalize function.
import torch.nn.functional as f
f.normalize(input, p=2, dim=2)
The dim=2 argument tells along which dimension to normalize (each row vector is divided by its p-norm).
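A quick sketch to check that every row ends up with unit norm:
import torch
import torch.nn.functional as f
x = torch.randn(4, 5, 6)            # a=4 matrices, each of size 5x6
y = f.normalize(x, p=2, dim=2)      # every length-6 row divided by its L2 norm
print(y.norm(p=2, dim=2))           # a 4x5 tensor of ones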
| https://stackoverflow.com/questions/47406429/ |
Does tensorflow have a function similar to pytorch's "masked_fill_" | I want to set an INF value in a matrix according to a mask matrix, just like this PyTorch code:
scores.data.masked_fill_(y_mask.data, -float('inf'))
I tried to use tf.map_fn to implement that, but the performance is too slow. So does TensorFlow have an efficient function to implement that?
| I have used a mathematical trick instead. It's valid and much faster.
import math
def mask_fill_inf(matrix, mask):
    # keep entries where mask == 1; push entries where mask == 0 down to -inf
    negmask = 1 - mask
    num = 3.4 * math.pow(10, 38)  # close to the largest float32 value
    # negmask * num + num overflows float32 to +inf exactly where mask == 0 and stays at num elsewhere,
    # so subtracting num leaves +inf / 0 and the leading minus turns that into -inf / 0
    return (matrix * mask) + (-((negmask * num + num) - num))
Does anyone have a better method?
| https://stackoverflow.com/questions/47447272/ |
No module named 'torch.utils.data.distributed' | I have installed Anaconda and PyTorch on my Windows 10 machine and there were no errors when I installed them.
But I don't know why I don't have the modules or packages in pytorch.
Have you experienced the same problem?
This is the program I'm testing: https://github.com/pytorch/examples/blob/master/imagenet/main.py
| Because you didn't provide any additional information, there are couple of things you can try:
1) first make sure that you've already installed torchvision
2) Then try the following import:
# this import is necessary
import torch.utils.data
| https://stackoverflow.com/questions/47483385/ |
`THIndexTensor_(size)(target, 0) == batch_size' failed. at d:\projects\pytorch\torch\lib\thnn\generic/ClassNLLCriterion.c:54 | I am trying to train my neural network on the dog breeds data set. After the feed-forward pass, during the loss computation it throws this error:
RuntimeError: Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed. at d:\projects\pytorch\torch\lib\thnn\generic/ClassNLLCriterion.c:54
Code:
criterion =nn.CrossEntropyLoss()
optimizer=optim.Adam(net.parameters(),lr=0.001)
for epoch in range(10): # loop over the dataset multiple times
running_loss = 0.0
print(len(trainloader))
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs).float(), Variable(labels).float().type(torch.LongTensor)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
Error is generated in this line :
loss = criterion(outputs, labels)
What's the issue ??
| I think the problem is that you are missing the batch dimension on the tensor labels. The error says that the size of the 0th dimension is not equal to the batch size.
Try changing this:
loss = criterion(outputs, labels.unsqueeze(0))
Please note that the outputs tensor should have one more dimension than the labels tensor (corresponding to a score for each label), and the labels should just contain the index of the correct label.
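For example, nn.CrossEntropyLoss expects shapes like this (a quick sketch):
outputs = Variable(torch.randn(4, 10))             # (batch_size, num_classes) raw scores
labels = Variable(torch.LongTensor([1, 0, 9, 3]))  # (batch_size,) class indices
loss = nn.CrossEntropyLoss()(outputs, labels)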
| https://stackoverflow.com/questions/47492033/ |
why is my simple feedforward neural network diverging (pytorch)? | I am experimenting with a simple 2 layer neural network with pytorch, feeding in only three inputs of size 10 each, with a single value as output. I have normalised the inputs and lowered the learning rate. It is my understanding that a two layer fully connected neural network should be able to trivially fit this data.
Features:
0.8138 1.2342 0.4419 0.8273 0.0728 2.4576 0.3800 0.0512 0.6872 0.5201
1.5666 1.3955 1.0436 0.1602 0.1688 0.2074 0.8810 0.9155 0.9641 1.3668
1.7091 0.9091 0.5058 0.6149 0.3669 0.1365 0.3442 0.9482 1.2550 1.6950
[torch.FloatTensor of size 3x10]
Targets
[124, 125, 122]
[torch.FloatTensor of size 3]
The code is adapted from a simple example and I am using MSELoss as the loss function. The loss diverges to infinity after just a few iterations:
features = torch.from_numpy(np.array(features))
x_data = Variable(torch.Tensor(features))
y_data = Variable(torch.Tensor(targets))
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(10,5)
self.linear2 = torch.nn.Linear(5,1)
def forward(self, x):
l_out1 = self.linear(x)
y_pred = self.linear2(l_out1)
return y_pred
model = Model()
criterion = torch.nn.MSELoss(size_average = False)
optim = torch.optim.SGD(model.parameters(), lr = 0.001)
def main():
for iteration in range(1000):
y_pred = model(x_data)
loss = criterion(y_pred, y_data)
print(iteration, loss.data[0])
optim.zero_grad()
loss.backward()
optim.step()
Any help would be appreciated. Thanks
EDIT:
Indeed it seems that this was simply due to the learning rate being too high. Setting to 0.00001 fixes convergence issues, albeit giving very slow convergence.
| This is because you're not using a non-linearity between layers, and your network is still linear.
You can use ReLU in order to make it non-linear. You can change the forward method like this:
...
y_pred = torch.nn.functional.relu(self.linear2(l_out1))
...
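Alternatively (and more commonly), the non-linearity is placed between the two linear layers rather than at the output; a sketch of that variant:
def forward(self, x):
    l_out1 = torch.nn.functional.relu(self.linear(x))  # non-linearity between the two linear layers
    y_pred = self.linear2(l_out1)
    return y_pred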
| https://stackoverflow.com/questions/47500817/ |
Why is TensorFlow GPU memory usage decreasing when I increase the batch size? | Recently I implemented a VGG-16 network using both TensorFlow and PyTorch; the data set is CIFAR-10, and each picture is 32 * 32 RGB.
I used a batch size of 64 in the beginning, and I found PyTorch using much less GPU memory than TensorFlow. Then I did some experiments and got a figure, which is posted below.
After some research, I learned that TensorFlow uses the BFC algorithm to manage memory. That can explain why TensorFlow's memory usage decreases or increases in steps of 2048, 1024, ... MB, and why sometimes the memory usage does not increase when the batch size is bigger.
But I am still confused: why is the memory usage lower at batch size 512 than at smaller batch sizes such as 384 or 448? The same happens for batch sizes from 1024 to 1408, and from 2048 to 2688.
Here is my source code:
PyTorch:https://github.com/liupeng3425/tesorflow-vgg/blob/master/vgg-16-pytorch.py
Tensorflow:https://github.com/liupeng3425/tesorflow-vgg/blob/master/vgg-16.py
edit:
I have two Titan XP on my computer, OS: Linux Mint 18.2 64-bit.
I determine GPU memory usage with command nvidia-smi.
My code runs on GPU1, which is defined in my code:
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
And I am sure there only one application using GPU1.
GPU memory usage can be determined by the application list below.
For example, in the screenshot posted below, the process name is /usr/bin/python3 and its GPU memory usage is 1563 MiB.
| As noted in the comments, by default TensorFlow always takes up all memory on a GPU. I assume you have disabled that function for this test, but it does show that the algorithms do not generally attempt to minimize the memory that is reserved, even if it's not all utilized in the calculations.
To find the optimal configuration for your device and code, TensorFlow often runs (parts of) the first calculation multiple times. I suspect that this included settings for pre-loading data onto the GPU. This would mean that the numbers you see happen to be the optimal values for your device and configuration.
Since TensorFlow doesn't mind using more memory, 'optimal' here is measured by speed, not memory usage.
| https://stackoverflow.com/questions/47504924/ |
Q-value keeps stepping down when training a DQN | I am training a DQN and the Q-value keeps going down. The curve looks very weird (see below).
Every step corresponds to an update to target network.
Any possible reason why this happens?
| Does the step correspond to the Target Q network update? If so try to:
1) update the TargetQ network less frequently
2) increase the discount factor (e.g. to .99 if you were using .5)
3) use a soft update for the target Q network of the form: target_params = (1 - tau) * target_params + tau * online_params
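For point 3, a minimal PyTorch sketch of such a soft target update (the value of tau and the network names are placeholders):
import torch

def soft_update(target_net, online_net, tau=0.005):
    # theta_target <- (1 - tau) * theta_target + tau * theta_online
    for target_param, online_param in zip(target_net.parameters(), online_net.parameters()):
        target_param.data.mul_(1.0 - tau).add_(tau * online_param.data)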
| https://stackoverflow.com/questions/47527648/ |
Loss dimensionality issue in PyTorch (sequence to label learning) | I am building a sequence-to-label learning model in PyTorch. I have two sentences and I am classifying whether they are entailed or not (SNLI dataset). I concatenate two 50-word sentences (sometimes padded) into a vector of length 100. I then send minibatches through word embeddings -> LSTM -> Linear layer. I am using cross-entropy loss, but I need a tensor of shape [mini_batch, C] to go into the CrossEntropyLoss function. Instead, I still have the 100 words in my output as [mini_batch, 100, C]
Here is my model:
class myLSTM(nn.Module):
def __init__(self, h_size=128, v_size=10, embed_d=300, mlp_d=256):
super(myLSTM, self).__init__()
self.embedding = nn.Embedding(v_size, embed_d)
self.lstm = nn.LSTM(embed_d, h_size, num_layers=1, bidirectional=True, batch_first=True)
self.mlp = nn.Linear(mlp_d, 1024)
# Set static embedding vectors
self.embedding.weight.requires_grad = False
#self.sm = nn.CrossEntropyLoss()
def display(self):
for param in self.parameters():
print(param.data.size())
def filter_params(self):
# Might not be compatible with python 3
#self.parameters = filter(lambda p: p.requires_grad, self.parameters())
pass
def init_hidden(self):
# Need to init hidden weights in LSTM
pass
def forward(self, sentence):
print(sentence.size())
embeds = self.embedding(sentence)
print(embeds.size())
out, _ = self.lstm(embeds)
print(out.size())
out = self.mlp(out)
return out
My training code, with its output:
batch_size = 3
SGD_optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.01, weight_decay=1e-4)
ADM_optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()
num_epochs = 50
from torch.autograd import Variable
from torch import optim
for epoch in range(num_epochs):
print("Epoch {0}/{1}: {2}%".format(epoch, num_epochs, float(epoch)/num_epochs))
for start, end in tqdm(batch_index_gen(batch_size, len(n_data))):
# Convert minibatch to numpy
s1, s2, y = convert_to_numpy(n_data[start:end])
# Convert numpy to Tensor
res = np.concatenate((s1,s2), axis=1) # Attach two sentences into 1 input vector
input_tensor = torch.from_numpy(res).type(torch.LongTensor)
target_tensor = torch.from_numpy(y).type(torch.FloatTensor)
data, target = Variable(input_tensor), Variable(target_tensor)
# Zero gradients
SGD_optimizer.zero_grad()
# Forward Pass
output = model.forward(data)
print("Output size: ")
print(output.size())
print("Target size: ")
print(target.size())
# Calculate loss with respect to training labels
loss = criterion(output, target)
# Backpropagate and update optimizer
loss.backward()
SGD_optimizer.step()
#ADAM_optimizer.step()
output:
Epoch 0/50: 0.0%
torch.Size([3, 100])
torch.Size([3, 100, 300])
torch.Size([3, 100, 256])
Output size:
torch.Size([3, 100, 1024])
Target size:
torch.Size([3])
error:
ValueError: Expected 2 or 4 dimensions (got 3)
EDITED -------------------------------------------------------------------
I have now got my model training but I am getting low accuracy. Is there an issue with my LSTM outputs being concatenated and then condensed to a smaller tensor to go through my linear layer?
New Model:
class myLSTM(nn.Module):
def __init__(self, h_size=128, v_size=10, embed_d=300, mlp_d=256, num_classes=3, lstm_layers=1):
super(myLSTM, self).__init__()
self.num_layers = lstm_layers
self.hidden_size = h_size
self.embedding = nn.Embedding(v_size, embed_d)
self.lstm = nn.LSTM(embed_d, h_size, num_layers=lstm_layers, bidirectional=True, batch_first=True)
self.mlp = nn.Linear(2 * h_size * 2, num_classes)
# Set static embedding vectors
self.embedding.weight.requires_grad = False
def forward(self, s1, s2):
# Set initial states
#h0 = Variable(torch.zeros(self.num_layers*2, s1.size(0), self.hidden_size)).cuda() # 2 for bidirection
#c0 = Variable(torch.zeros(self.num_layers*2, s1.size(0), self.hidden_size)).cuda()
batch_size = s1.size()[0]
embeds_1 = self.embedding(s1)
embeds_2 = self.embedding(s2)
_, (h_1_last, _) = self.lstm(embeds_1)#, (h0, c0)) #note the change here. Last hidden state is taken
_, (h_2_last, _) = self.lstm(embeds_2)#, (h0, c0))
concat = torch.cat( (h_1_last, h_2_last), dim=2) #double check the dimension
concat = concat.view(batch_size, -1)
scores = self.mlp(concat)
return scores
New Training
batch_size = 64
SGD_optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
num_epochs = 10
model.train()
if cuda:
model = model.cuda()
criterion = criterion.cuda()
from torch.autograd import Variable
from torch import optim
epoch_losses = []
for epoch in range(num_epochs):
print("Epoch {0}/{1}: {2}%".format(epoch, num_epochs, 100*float(epoch)/num_epochs))
# Batch loss aggregator
losses = []
for start, end in tqdm(batch_index_gen(batch_size, len(n_data))):
# Convert minibatch to numpy
s1, s2, y = convert_to_numpy(n_data[start:end])
# Convert numpy to Tensor
s1_tensor = torch.from_numpy(s1).type(torch.LongTensor)
s2_tensor = torch.from_numpy(s2).type(torch.LongTensor)
target_tensor = torch.from_numpy(y).type(torch.LongTensor)
s1 = Variable(s1_tensor)
s2 = Variable(s2_tensor)
target = Variable(target_tensor)
if cuda:
s1 = s1.cuda()
s2 = s2.cuda()
target = target.cuda()
# Zero gradients
SGD_optimizer.zero_grad()
# Forward Pass
output = model.forward(s1,s2)
# Calculate loss with respect to training labels
loss = criterion(output, target)
losses.append(loss.data[0])
# Backpropagate and update optimizer
loss.backward()
SGD_optimizer.step()
# concat losses to epoch losses
epoch_losses += losses
training with tensor sizes printed:
Epoch 0/10: 0.0%
Batch size: 64
Sentences
torch.Size([64, 50])
torch.Size([64, 50])
torch.Size([64, 50, 300])
torch.Size([64, 50, 300])
Hidden states
torch.Size([2, 64, 128])
torch.Size([2, 64, 128])
Concatenated hidden states
torch.Size([2, 64, 256])
Reshaped tensors for linear layer
torch.Size([64, 512])
Linear propogation
torch.Size([64, 3])
Evaluation
def eval_model(model, mode='dev'):
file_name = 'snli_1.0/snli_1.0_dev.jsonl' if mode == 'dev' else 'snli_1.0/snli_1.0_test.jsonl'
dev_data, _ = obtain_data(file_name)
dev_n_data = vocab.process_data(dev_data)
print("Length of data: {}".format(len(dev_n_data)))
eval_batch_size = 1024
model.eval()
total = len(dev_n_data)
hit = 0
correct = 0
# Batch dev eval
for start, end in batch_index_gen(eval_batch_size, len(dev_n_data)):
s1, s2, y = convert_to_numpy(dev_n_data[start:end])
s1_tensor = torch.from_numpy(s1).type(torch.LongTensor)
s2_tensor = torch.from_numpy(s2).type(torch.LongTensor)
target_tensor = torch.from_numpy(y).type(torch.LongTensor)
s1 = Variable(s1_tensor, volatile=True)
s2 = Variable(s2_tensor, volatile=True)
target = Variable(target_tensor, volatile=True)
if cuda:
s1 = s1.cuda()
s2 = s2.cuda()
target = target.cuda()
output = model.forward(s1,s2)
loss = criterion(output, target)
#print("output size: {}".format(output.size()))
#print("target size: {}".format(target.size()))
pred = output.data.max(1)[1] # get the index of the max log-probability
#print(pred[:5])
#print(output[:])
correct += pred.eq(target.data).cpu().sum()
return correct / float(total)
eval_model(model)
| I think there is an issue in the way you are trying to solve the entailment problem.
Maybe you can do it this way:
design your module to accept two sentences as input
embed both of them with your embeddings
encode them using the LSTM module.
now you have two fixed-length vector representations of the two sentences. The simplest thing to do is to just concatenate them together.
Add a linear layer on top to evaluate scores for each entailment class (3, I suppose)
apply softmax to get a proper probability distribution
So your model can look like this (double check the dimensions):
class myLSTM(nn.Module):
def __init__(self, h_size=128, v_size=10, embed_d=300, num_classes = 3):
super(myLSTM, self).__init__()
self.embedding = nn.Embedding(v_size, embed_d)
self.lstm = nn.LSTM(embed_d, h_size, num_layers=1, bidirectional=True, batch_first=True)
self.mlp = nn.Linear(2*h_size*2, num_classes) #<- change here
def forward(self, sentence1, sentence2):
embeds_1 = self.embedding(sentence1)
embeds_2 = self.embedding(sentence2)
_, (h_1_last, _) = self.lstm(embeds_1) #note the change here. Last hidden state is taken
_, (h_2_last, _) = self.lstm(embeds_2)
concat = torch.cat([h_1_last, h_2_last], dim=1) #double check the dimension
scores = self.mlp(concat)
probas = F.softmax(scores) # F is torch.nn.functional
return probas
Then you can play around with adding more hidden layers or thinking about how combining the two sentences can be done in a more intelligent way (attention, etc.).
Double check what CrossEntropyLoss accepts as input and target and adjust (is it unnormalized class scores or probability distribution). Check http://pytorch.org/docs/master/nn.html#lstm for LSTM module documentation to clarify what LSTM returns (do you need hidden states for every word or just the representation after the last one).
| https://stackoverflow.com/questions/47563398/ |
What does the following error mean, when I try to import the torch library in IPython? |
RuntimeError Traceback (most recent call last)
RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa
ImportError Traceback (most recent call last)
in ()
----> 1 import torch
~/anaconda3/lib/python3.6/site-packages/torch/__init__.py in <module>()
51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
52
---> 53 from torch._C import *
54
55 __all__ += [name for name in dir(_C)
ImportError: numpy.core.multiarray failed to import
| You may need to update your numpy installation.
Run this in terminal:
pip install -U numpy
If this doesn't work, try uninstalling and then reinstalling it:
pip uninstall numpy
pip install numpy
| https://stackoverflow.com/questions/47581877/ |
PyTorch ValueError: Target and input must have the same number of elements | I am kinda new to PyTorch, but I am trying to understand how the sizes of target and input work in torch.nn.BCELoss() when computing loss function.
import torch
import torch.nn as nn
from torch.autograd import Variable
time_steps = 15
batch_size = 3
embeddings_size = 100
num_classes = 2
model = nn.LSTM(embeddings_size, num_classes)
input_seq = Variable(torch.randn(time_steps, batch_size, embeddings_size))
lstm_out, _ = model(input_seq)
last_out = lstm_out[-1]
print(last_out)
loss = nn.BCELoss()
target = Variable(torch.LongTensor(batch_size).random_(0, num_classes))
print(target)
err = loss(last_out.long(), target)
err.backward()
I received the following error:
Warning (from warnings module):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 767
"Please ensure they have the same size.".format(target.size(), input.size()))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 770, in binary_cross_entropy
"!= input nelement ({})".format(target.nelement(), input.nelement()))
ValueError: Target and input must have the same number of elements. target nelement (3) != input nelement (6)
This error definitely comes from the different sizes of last_out (size 3x2) and target (size 3). So my question is how can I convert last_out into something like target (of size 3 and containing just 0s and 1s) to compute loss function?
| The idea of nn.BCELoss() is to implement the following formula:
loss(o, t) = -1/N * sum_i [ t_i * log(o_i) + (1 - t_i) * log(1 - o_i) ]
Both o and t are tensors of arbitrary (but identical!) size, and i simply indexes each element of the two tensors to compute the sum above.
Typically, nn.BCELoss() is used in a classification setting: o and t will be matrices of dimensions N x D. N will be the number of observations in your dataset or minibatch. D will be 1 if you are only trying to classify a single property, and larger than 1 if you are trying to classify multiple properties. t, the target matrix, will only hold 0 and 1, as for every property there are only two classes (that's where the binary in binary cross entropy loss comes from). o will hold the probability with which you assign every property of every observation to class 1.
Now in your setting above, it is not clear how many classes you are considering and how many properties there are for you. If there is only one property, as suggested by the shape of your target, you should only output a single quantity, namely the probability of being in class 1, from your model. If there are two properties, your targets are incomplete! If there are multiple classes, you should work with torch.nn.CrossEntropyLoss instead of torch.nn.BCELoss().
As an aside, for the sake of numerical stability it is often desirable to use torch.nn.BCEWithLogitsLoss instead of applying nn.Sigmoid() to the outputs and then using torch.nn.BCELoss().
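To make that concrete, here is a minimal sketch adapting the snippet from the question to a single-property (binary) setup with BCEWithLogitsLoss; the one-output LSTM and the float-valued targets are my assumptions, not part of the original code:
import torch
import torch.nn as nn
from torch.autograd import Variable

time_steps, batch_size, embeddings_size = 15, 3, 100
model = nn.LSTM(embeddings_size, 1)               # one output unit -> one logit per step
input_seq = Variable(torch.randn(time_steps, batch_size, embeddings_size))
lstm_out, _ = model(input_seq)
last_out = lstm_out[-1].squeeze(1)                # shape (batch_size,): one logit per example
target = Variable(torch.FloatTensor(batch_size).random_(0, 2))  # 0/1 labels as floats
loss = nn.BCEWithLogitsLoss()                     # applies the sigmoid internally
err = loss(last_out, target)
err.backward()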
| https://stackoverflow.com/questions/47594358/ |
How does one use Hermite polynomials with Stochastic Gradient Descent (SGD)? | I was trying to train a simple polynomial linear model with pytorch using Hermite polynomials since they seem to have a better conditioned Hessian.
To do that I decided to use hermvander, since it gives the Vandermonde matrix with each entry being a Hermite term. I just made my feature vectors the output of hermvander:
Kern_train = hermvander(X_train,Degree_mdl)
However, when I proceeded to train I got NaNs all the time. I suspected it could be a step-size issue, but I decided to use the step size suggested by this question, which already has my example working in R, so I thought there was no need to search for a step size. However, when I tried it, it did not work.
Does anyone have any idea what's going on?
The same error occurs in TensorFlow:
import pdb
import numpy as np
from numpy.polynomial.hermite import hermvander
import random
import tensorflow as tf
def get_batch(X,Y,M):
N = len(Y)
valid_indices = np.array( range(N) )
batch_indices = np.random.choice(valid_indices,size=M,replace=False)
batch_xs = X[batch_indices,:]
batch_ys = Y[batch_indices]
return batch_xs, batch_ys
##
D0=1
logging_freq = 100
## SGD params
M = 5
eta = 0.1
#eta = lambda i: eta/(i**0.6)
nb_iter = 500*10
##
lb,ub = 0,1
freq_sin = 4 # 2.3
f_target = lambda x: np.sin(2*np.pi*freq_sin*x)
N_train = 10
X_train = np.linspace(lb,ub,N_train)
Y_train = f_target(X_train).reshape(N_train,1)
x_horizontal = np.linspace(lb,ub,1000).reshape(1000,1)
## degree of mdl
Degree_mdl = N_train-1
## Hermite
Kern_train = hermvander(X_train,Degree_mdl)
print(f'Kern_train.shape={Kern_train.shape}')
Kern_train = Kern_train.reshape(N_train,Kern_train.shape[1])
##
Kern_train_pinv = np.linalg.pinv( Kern_train )
c_pinv = np.dot(Kern_train_pinv, Y_train)
nb_terms = c_pinv.shape[0]
##
condition_number_hessian = np.linalg.cond(Kern_train)
##
graph = tf.Graph()
with graph.as_default():
X = tf.placeholder(tf.float32, [None, nb_terms])
Y = tf.placeholder(tf.float32, [None,1])
w = tf.Variable( tf.zeros([nb_terms,1]) )
#w = tf.Variable( tf.truncated_normal([Degree_mdl,1],mean=0.0,stddev=1.0) )
#w = tf.Variable( 1000*tf.ones([Degree_mdl,1]) )
##
f = tf.matmul(X,w) # [N,1] = [N,D] x [D,1]
#loss = tf.reduce_sum(tf.square(Y - f))
loss = tf.reduce_sum( tf.reduce_mean(tf.square(Y-f), 0))
l2loss_tf = (1/N_train)*2*tf.nn.l2_loss(Y-f)
##
learning_rate = eta
#global_step = tf.Variable(0, trainable=False)
#learning_rate = tf.train.exponential_decay(learning_rate=eta, global_step=global_step,decay_steps=nb_iter/2, decay_rate=1, staircase=True)
train_step = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
with tf.Session(graph=graph) as sess:
Y_train = Y_train.reshape(N_train,1)
tf.global_variables_initializer().run()
# Train
for i in range(nb_iter):
#if i % (nb_iter/10) == 0:
if i % (nb_iter/10) == 0 or i == 0:
current_loss = sess.run(fetches=loss, feed_dict={X: Kern_train, Y: Y_train})
print(f'tf: i = {i}, current_loss = {current_loss}')
## train
batch_xs, batch_ys = get_batch(Kern_train,Y_train,M)
sess.run(train_step, feed_dict={X: batch_xs, Y: batch_ys})
print(f'condition_number_hessian = {condition_number_hessian}')
print('\a')
Totally self-contained code in PyTorch:
import numpy as np
from numpy.polynomial.hermite import hermvander
import random
import torch
from torch.autograd import Variable
def vectors_dims_dont_match(Y,Y_):
'''
Checks that vector Y and Y_ have the same dimensions. If they don't
then there might be an error that could be caused due to wrong broadcasting.
'''
DY = tuple( Y.size() )
DY_ = tuple( Y_.size() )
if len(DY) != len(DY_):
return True
for i in range(len(DY)):
if DY[i] != DY_[i]:
return True
return False
def index_batch(X,batch_indices,dtype):
'''
returns the batch indexed/sliced batch
'''
if len(X.shape) == 1: # i.e. dimension (M,) just a vector
batch_xs = torch.FloatTensor(X[batch_indices]).type(dtype)
else:
batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)
return batch_xs
def get_batch2(X,Y,M,dtype):
'''
get batch for pytorch model
'''
# TODO fix and make it nicer, there is pytorch forum question
X,Y = X.data.numpy(), Y.data.numpy()
N = len(Y)
valid_indices = np.array( range(N) )
batch_indices = np.random.choice(valid_indices,size=M,replace=False)
batch_xs = index_batch(X,batch_indices,dtype)
batch_ys = index_batch(Y,batch_indices,dtype)
return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
def get_sequential_lifted_mdl(nb_monomials,D_out, bias=False):
return torch.nn.Sequential(torch.nn.Linear(nb_monomials,D_out,bias=bias))
def train_SGD(mdl, M,eta,nb_iter,logging_freq ,dtype, X_train,Y_train):
##
#pdb.set_trace()
N_train,_ = tuple( X_train.size() )
#print(N_train)
for i in range(1,nb_iter+1):
# Forward pass: compute predicted Y using operations on Variables
batch_xs, batch_ys = get_batch2(X_train,Y_train,M,dtype) # [M, D], [M, 1]
## FORWARD PASS
y_pred = mdl.forward(batch_xs)
## Check vectors have same dimension
if vectors_dims_dont_match(batch_ys,y_pred):
raise ValueError('You vectors don\'t have matching dimensions. It will lead to errors.')
## LOSS + Regularization
batch_loss = (1/M)*(y_pred - batch_ys).pow(2).sum()
## BACKARD PASS
batch_loss.backward() # Use autograd to compute the backward pass. Now w will have gradients
## SGD update
for W in mdl.parameters():
delta = eta(i)*W.grad.data
W.data.copy_(W.data - delta)
## train stats
if i % (nb_iter/10) == 0 or i == 0:
#X_train_, Y_train_ = Variable(X_train), Variable(Y_train)
X_train_, Y_train_ = X_train, Y_train
current_train_loss = (1/N_train)*(mdl.forward(X_train_) - Y_train_).pow(2).sum().data.numpy()
print('\n-------------')
print(f'i = {i}, current_train_loss = {current_train_loss}\n')
print(f'eta*W.grad.data = {eta(i)*W.grad.data}')
print(f'W.grad.data = {W.grad.data}')
## Manually zero the gradients after updating weights
mdl.zero_grad()
final_sgd_error = current_train_loss
return final_sgd_error
##
D0=1
logging_freq = 100
#dtype = torch.cuda.FloatTensor
dtype = torch.FloatTensor
## SGD params
M = 5
eta_0 = 0.1
eta = lambda i: eta_0/(i**0.6)  # decaying step size; keep the base rate in a separate name to avoid shadowing
nb_iter = 500*10
##
lb,ub = 0,1
freq_sin = 4 # 2.3
f_target = lambda x: np.sin(2*np.pi*freq_sin*x)
N_train = 10
X_train = np.linspace(lb,ub,N_train)
Y_train = f_target(X_train).reshape(N_train,1)
x_horizontal = np.linspace(lb,ub,1000).reshape(1000,1)
## degree of mdl
Degree_mdl = N_train-1
## Hermite
Kern_train = hermvander(X_train,Degree_mdl)
Kern_train = Kern_train.reshape(N_train,Kern_train.shape[1])
##
Kern_train_pinv = np.linalg.pinv( Kern_train )
c_pinv = np.dot(Kern_train_pinv, Y_train)
##
condition_number_hessian = np.linalg.cond(Kern_train)
## linear mdl to train with SGD
nb_terms = c_pinv.shape[0]
mdl_sgd = get_sequential_lifted_mdl(nb_monomials=nb_terms,D_out=1, bias=False)
mdl_sgd[0].weight.data.normal_(mean=0,std=0.001)
mdl_sgd[0].weight.data.fill_(0)
## Make polynomial Kernel
Kern_train_pt, Y_train_pt = Variable(torch.FloatTensor(Kern_train).type(dtype), requires_grad=False), Variable(torch.FloatTensor(Y_train).type(dtype), requires_grad=False)
final_sgd_error = train_SGD(mdl_sgd, M,eta,nb_iter,logging_freq ,dtype, Kern_train_pt,Y_train_pt)
## PRINT ERRORS
from plotting_utils import *
train_error_pinv = (1/N_train)*(np.linalg.norm(Y_train-np.dot(Kern_train,c_pinv))**2)
print('\n-----------------')
print(f'N_train={N_train}')
print(f'train_error_pinv = {train_error_pinv}')
print(f'final_sgd_error = {final_sgd_error}')
print(f'condition_number_hessian = {condition_number_hessian}')
print('\a')
| Maybe it's a bit late, but you might have a look at this https://github.com/Orcuslc/OrthNet
| https://stackoverflow.com/questions/47624798/ |
what is the fastest way of loading images? | I have about 200,000 high resolution images, and loading such high quality images every time is time consuming.
Preloading all images might occupy too much memory.
How about saving each image in the .npz file format and loading the .npz instead of the .jpg? Would that boost speed?
| You do not need to load all the images into memory at once. Considering also that we need to do data augmentation on the dataset during model training, it is impractical to load all images.
In PyTorch, you can use a Dataset to store your training and validation sets. The dataset classes take a transform parameter (e.g., Scale, RandomCrop, etc.), which is used to transform the training images on the fly during training. Several ready-made datasets are also provided by the torchvision package, see here.
Basic method
PyTorch's built-in DataLoader has a num_workers parameter, which controls how many subprocesses are used to load the data. Since your dataset is not so huge, that should be enough for your needs. About how to set the appropriate number of workers, see here.
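A minimal sketch of a lazily-loading Dataset wired to a DataLoader with several workers (the file paths and transform choices are illustrative assumptions, not part of the question):
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class MyImageDataset(Dataset):
    def __init__(self, image_paths, transform=None):
        self.image_paths = image_paths
        self.transform = transform
    def __len__(self):
        return len(self.image_paths)
    def __getitem__(self, idx):
        # each image is read from disk only when it is requested
        img = Image.open(self.image_paths[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img

transform = transforms.Compose([transforms.Scale(256),   # called Resize in newer torchvision
                                transforms.RandomCrop(224),
                                transforms.ToTensor()])
dataset = MyImageDataset(['/data/img_0001.jpg', '/data/img_0002.jpg'], transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)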
More references
There are discussions on the PyTorch forum about fast image loading; use post1 and post2 as a start.
| https://stackoverflow.com/questions/47644367/ |
Spiral Loss Function for Music Encoding | I am trying to develop an autoencoder for music generation; in pursuit of that end, I am attempting to develop a loss function which captures musical relationships.
My current idea is a 'Spiral' loss function, which is to say that if the system predicts the same note in a different octave, the loss should be smaller than if the note is just wrong. Additionally, notes that are close to the correct note, such as B and D to C should also have small losses. One can conceptually think of this as finding the distance between two points on a coil or spiral, such that the same notes in different octaves lie along a line tangent to the coil, but separated by some loop distance.
I am working in PyTorch, and my input representation is a 36 by 36 Tensor, where the rows represent the notes (MIDI range 48:84, the middle three octaves of a piano) and the columns represent time steps (1 column = 1/100th of a second). The values in the matrix are either 0 or 1, signifying that a note is on at a particular time.
Here is my current implementation of the loss:
def SpiralLoss():
def spiral_loss(input, output):
loss = Variable(torch.FloatTensor([0]))
d = 5
r = 10
for i in xrange(input.size()[0]):
for j in xrange(input.size()[3]):
# take along the 1 axis because it's a column vector
inval, inind = torch.max(input[i, :, :, j], 1)
outval, outind = torch.max(output[i, :, :, j], 1)
note_loss = (r*30*(inind%12 - outind%12)).float()
octave_loss = (d*(inind/12 - outind/12)).float()
loss += torch.sqrt(torch.pow(note_loss, 2) + torch.pow(octave_loss, 2))
return loss
return spiral_loss
The problem with this loss is that the max function is not differentiable. I cannot think of a way to make this loss differentiable, and was wondering if anyone might have any ideas or suggestions?
I'm not sure if this is the right place for a post like this, and so if it isn't, I would really appreciate any pointers towards a better location.
| Taking the maximum here is not only problematic because of differentiability: If you only take the maximum of the output, and it is at the right place, slightly lower values in wrong positions don't get punished.
One rough idea would be to use a normal L1 or L2 loss on the difference of the input and a modified output vector: The output could be multiplied by some weight mask that punishes octave and note difference differently, like for example:
def create_mask(input_column):
r = 10
d = 5
mask = torch.FloatTensor(input_column.size())
_, max_ind = torch.max(input_column, 0)
max_ind = int(max_ind[0])
for i in range(mask.size(0)):
mask[i] = r*(abs(i-max_ind)%12) + d*(abs(i-max_ind)//12)
return mask
This is just roughly written, not something ready but in theory it should do the job. The mask vector should be set to requires_grad=False since it is an exact constant we compute for each input. Thus, you can use the maximum on the input but don't use the max on the output.
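A rough usage sketch of one way the mask could weight a per-element L1 difference (it assumes input and output are (batch, 1, notes, time) Variables as in the question, and reuses the create_mask function above; the column slicing is my assumption):
from torch.autograd import Variable

def spiral_weighted_loss(input, output):
    loss = 0
    for i in range(input.size(0)):
        for j in range(input.size(3)):
            in_col = input[i, 0, :, j]
            out_col = output[i, 0, :, j]
            # mask is a constant computed from the input column, so no gradient flows through it
            mask = Variable(create_mask(in_col.data), requires_grad=False)
            loss = loss + (mask * (out_col - in_col)).abs().sum()   # mask-weighted L1
    return loss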
I hope it helps!
| https://stackoverflow.com/questions/47646226/ |
pytorch data loader multiple iterations | I use the iris dataset to train a simple network with PyTorch.
trainset = iris.Iris(train=True)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=150,
shuffle=True, num_workers=2)
dataiter = iter(trainloader)
The dataset itself has only 150 data points, and the PyTorch DataLoader iterates just once over the whole dataset, because of the batch size of 150.
My question now is: is there generally any way to tell PyTorch's DataLoader to repeat over the dataset once it is done with an iteration?
Thanks
update
got it running :)
just created a subclass of DataLoader and implemented my own __next__()
| To complement the previous answers: to be comparable between datasets, it is often better to use the total number of steps instead of the total number of epochs as a hyper-parameter. That is because the number of iterations should not depend on the dataset size, but on its complexity.
I am using the following code for training. It ensures that the data loader re-shuffles the data every time it is re-initiated.
# main training loop
generator = iter(trainloader)
for i in range(max_steps):
try:
# Samples the batch
x, y = next(generator)
except StopIteration:
# restart the generator if the previous generator is exhausted.
generator = iter(trainloader)
x, y = next(generator)
I will agree that is not the most elegant solution, but it keeps me from having to rely on epochs for training.
| https://stackoverflow.com/questions/47714643/ |
Example CrossEntropyLoss for 3D semantic segmentation in pytorch | I have a network performing 3D convolutions on a 5D input tensor. The output of my network is of size (1, 12, 60, 36, 60), corresponding to (BatchSize, NumClasses, x-dim, y-dim, z-dim). I need to compute a voxel-wise cross entropy loss. However, I keep on getting errors.
When trying to compute cross entropy loss using torch.nn.CrossEntropyLoss(), I keep on getting the following error message:
RuntimeError: multi-target not supported at .../src/THCUNN/generic/ClassNLLCriterion.cu:16
here is the extract of my code:
import torch
import torch.nn as nn
from torch.autograd import Variable
criterion = torch.nn.CrossEntropyLoss()
images = Variable(torch.randn(1, 12, 60, 36, 60)).cuda()
labels = Variable(torch.zeros(1, 12, 60, 36, 60).random_(2)).long().cuda()
loss = criterion(images.view(1,-1), labels.view(1,-1))
Same happens when I create a one-hot tensor for the labels:
nclasses = 12
labels = (np.random.randint(0,12,(1,60,36,60))) # Random labels with values between [0..11]
labels = (np.arange(nclasses) == labels[..., None] - 1).astype(int) # Converts labels to one_hot_tensor
a = np.transpose(labels,(0,4,3,2,1)) # Reorder dimensions to match shape of "images" ([1, 12, 60, 36, 60])
b = Variable(torch.from_numpy(a)).cuda()
loss = criterion(images.view(1,-1), b.view(1,-1))
Any idea what I'm doing wrong?
Can someone provide an example of computing cross entropy on a 5D output tensor?
| Just checked some implementation (fcn) for 2D semantic segmentation, and tried to adapt it to 3D semantic segmentation. No guarantee that this is correct, I'll have to double check...
import torch
import torch.nn.functional as F
def cross_entropy3d(input, target, weight=None, size_average=True):
# input: (n, c, h, w, z), target: (n, h, w, z)
n, c, h, w , z = input.size()
# log_p: (n, c, h, w, z)
log_p = F.log_softmax(input, dim=1)
# log_p: (n*h*w*z, c)
log_p = log_p.permute(0, 4, 3, 2, 1).contiguous().view(-1, c) # make class dimension last dimension
log_p = log_p[target.view(n, h, w, z, 1).repeat(1, 1, 1, 1, c) >= 0] # this looks wrong -> Should rather be a one-hot vector
log_p = log_p.view(-1, c)
# target: (n*h*w*z,)
mask = target >= 0
target = target[mask]
loss = F.nll_loss(log_p, target.view(-1), weight=weight, size_average=False)
if size_average:
loss /= mask.data.sum()
return loss
images = Variable(torch.randn(5, 3, 16, 16, 16))
labels = Variable(torch.LongTensor(5, 16, 16, 16).random_(3))
cross_entropy3d(images, labels, weight=None, size_average=True)
| https://stackoverflow.com/questions/47715696/ |
How to install pytorch on Windows? | I am trying to install PyTorch on Windows; there is a package available for it, but it shows an error.
conda install -c peterjc123 pytorch=0.1.12
| Warning: Unless you have a very specific reason not to, just follow the official installation instructions from https://pytorch.org. They are far more likely to be accurate and up-to-date.
Here is how to install the PyTorch package from the official channel, on Windows using Anaconda, as of the time of writing this comment (31/03/2020):
PyTorch without CUDA:
conda install pytorch torchvision cpuonly -c pytorch
PyTorch with CUDA 10.1:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
| https://stackoverflow.com/questions/47754749/ |
Python (Pytorch) Multiprocessing throwing errors: Connection reset by peer and File Not Found | I'm trying to get something working similarly to keras' "fit_generator" method. Basically, I have a (very) large data file of mini-batches and I want to have my CPU grab mini-batches and populate a queue parallel to my GPU taking mini-batches from the queue and training on them. By having the CPU work in parallel to the GPU (as opposed to having the CPU grab a batch and making the GPU wait for the CPU before it trains on that batch) I should be able to reduce my training time by about half. I have benchmarked how long it takes the CPU to grab a mini-batch, and it's taking a comparable amount of time to how long it takes my GPU to train on one mini-batch, so parallelizing the CPU and GPU should work alright. I haven't found a built-in method in pytorch to do this, if there is one, please let me know.
So I have tried to use the torch.multiprocessing module to do what I want, but I'm not able to complete the training as I always get some sort of error right before training is completed. The torch.multiprocessing module should be a wrapper with essentially all the same functionalities as the regular multiprocessing module except it allows pytorch tensors to be shared between processes. Basically, I have set up my code to have 2 functions, a loader function, and a trainer function like so:
def data_gen(que,PATH,epochs,steps_per_epoch,batch_size=32):
for epoch in range(epochs):
for j in range(steps_per_epoch):
with h5py.File(PATH,'r') as f:
X = f['X'][j*batch_size:(j+1)*batch_size]
Y = f['Y'][j*batch_size:(j+1)*batch_size]
X = autograd.Variable(torch.Tensor(X).resize_(batch_size,256,25).cpu())
Y = autograd.Variable(torch.Tensor(Y).cpu())
que.put((X,Y))
que.put('stop')
que.close()
return
def train_network(que,net,optimizer,epochs,steps):
print('Training for %s epochs...' %epochs)
for epoch in range(epochs):
while(True):
data = que.get()
if(data == 'stop'):
break
net.zero_grad()
net.hid = net.init_hid()
inp,labels = data
inp = inp.cuda()
labels = labels.cuda()
out,hid = net(inp)
loss = F.binary_cross_entropy(out,labels)
loss.backward()
optimizer.step()
print('Epoch end reached')
return
And then I run the two processes in parallel like so:
if __name__ == '__main__':
tmp.set_start_method('spawn')
que = tmp.Queue(maxsize=10)
loader = tmp.Process(target=data_gen, args=(que,PATH,epochs,steps), kwargs={'batch_size':batch_size})
loader.start()
trainer = tmp.Process(target=train_network, args=(que,net,optimizer,epochs,steps))
trainer.start()
loader.join()
trainer.join()
I have the loader put a 'stop' value into the que at the end of each epoch so I can break out of the loop in the trainer and go to the next epoch. This "poison pill" method appears to work because the code runs for multiple epochs, and the trainer does in fact print the end-of-epoch verification message. The code runs, and it does appear to speed up the training process (I've been trying to prototype this code on a small subset of the data, so it's sometimes hard to tell how much speedup I'm getting), but at the end of training (and always at the end, no matter how many epochs I specify), I always get an error:
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/media/digitalstorm/Storage/RNN_Prototype/Lazuli_rnnprototype.py", line 307, in train_network
data = que.get()
File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
fd = df.detach()
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 487, in Client
c = SocketClient(address)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
or, if I mess around a bit with various options, I sometimes get an error like this:
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/media/digitalstorm/Storage/RNN_Prototype/Lazuli_rnnprototype.py", line 306, in train_network
data = que.get()
File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
fd = df.detach()
File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 182, in recv_handle
return recvfds(s, 1)[0]
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 153, in recvfds
msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
ConnectionResetError: [Errno 104] Connection reset by peer
I don't know where I'm going wrong. Admittedly, I am a novice at multiprocessing, so it's hard for me to debug what went wrong exactly. Any help would be appreciated, thanks!
| Since there hasn't been any movement on this question, I'll just post my own workaround to this problem. Basically, the loader process was shutting down the que after it finished processing and queuing up the examples. It wasn't waiting for the trainer process to finish, so when the trainer process went to get the next minibatch, it couldn't find it. I don't quite know why the loader process was prematurely shutting down the que; the documentation for que.close() says that this should only tell the que that no more objects are being sent to it, but it shouldn't actually shut the que down. Also, deleting que.close() did not solve the issue, so I don't think the error had to do with that command. What solved this issue for me was putting a time.sleep(2) command after the que.close() command. This forces the loader process to sleep for a couple of seconds after it finishes putting everything in the que, and it allows the program to complete and exit without error.
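A minimal sketch of that workaround applied to the loader function from the question (the body of the loop mirrors the original data_gen; only the tail changes):
import time
import h5py
import torch
from torch import autograd

def data_gen(que, PATH, epochs, steps_per_epoch, batch_size=32):
    for epoch in range(epochs):
        for j in range(steps_per_epoch):
            with h5py.File(PATH, 'r') as f:
                X = f['X'][j*batch_size:(j+1)*batch_size]
                Y = f['Y'][j*batch_size:(j+1)*batch_size]
            que.put((autograd.Variable(torch.Tensor(X)),
                     autograd.Variable(torch.Tensor(Y))))
        que.put('stop')
    que.close()
    time.sleep(2)   # keep the loader process alive until the trainer has drained the queue
    return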
| https://stackoverflow.com/questions/47762973/ |
pix2pixHD error with own dataset | I am trying to generate my own images using the pix2pixHD pre-trained model. Github repo found here
The images inside the dataset have to be grayscale with no alpha channel. The images in the repo are 16 bits per sample, and I have images at both 8 and 16 bits per sample.
When I check both my images and the images in the repo using sips -g all, this is the outcome I get:
pixelWidth: 2048
pixelHeight: 1024
typeIdentifier: public.png
format: png
formatOptions: default
dpiWidth: 72.000
dpiHeight: 72.000
samplesPerPixel: 1
bitsPerSample: 16
hasAlpha: no
space: Gray
The strange thing is that it works with the images that have 8 bits per sample.
This is the outcome I get:
Grayscale input
Converted label map
Final output
When I run test.py with 16 bitsPerSample images, it doesn't work.
This is the error it gives me:
model [Pix2PixHDModel] was created
Traceback (most recent call last):
File "test.py", line 26, in <module>
for i, data in enumerate(dataset):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 210, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 230, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
TypeError: Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 42, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 42, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/paperspace/Documents/pix2pixHD/data/aligned_dataset.py", line 41, in __getitem__
label_tensor = transform_label(label) * 255.0
File "/usr/local/lib/python3.5/dist-packages/torch/tensor.py", line 309, in __mul__
return self.mul(other)
TypeError: mul received an invalid combination of arguments - got (float), but expected one of:
* (int value)
didn't match because some of the arguments have invalid types: (float)
* (torch.IntTensor other)
didn't match because some of the arguments have invalid types: (float)
I am fairly new to TensorFlow and I have never used PyTorch before.
Any idea what this error means and how I can resolve it?
| Yes, I think I can help you.
I haven't checked the repository, but from the error trace the problem appears to be the following:
You are performing a multiplication operation between the output of transform_label(label) (presumably a tensor) and a scalar 255.0. This is fine as long as both your scalar and your tensor are of the same datatype. From the error trace, however, it looks as if the output of transform_label() is of data type Int / Long, while 255.0 is a float.
I suggest you try 255 or int(255.0) instead of 255.0.
If this does not resolve your problem, let me know what data type the output of transform_label() is.
| https://stackoverflow.com/questions/47819686/ |
PyTorch 2d Convolution with sparse filters | I am trying to perform a spatial convolution (e.g. on an image) in pytorch on dense input using a sparse filter matrix.
Sparse Tensors are implemented in PyTorch. I tried to use a sparse Tensor, but it ends up with a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1,1,6,6))
F.conv2d(inputs, filter)
Can anyone just give me a hint how to do that?
Thanks in advance!
dymat
| I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
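A quick sketch of what that looks like in PyTorch (the channel counts and input size are arbitrary):
import torch
import torch.nn as nn
from torch.autograd import Variable

# dilation=2 spreads the 3x3 kernel taps apart, i.e. a fixed "sparse" sampling pattern
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)
out = conv(Variable(torch.randn(1, 1, 8, 8)))
print(out.size())   # (1, 1, 4, 4): the effective receptive field of the kernel is 5x5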
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are ok with using Tensorflow. There is also a blog post providing more details on this repo.
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors being an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Changing to a linear layer and it would work:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
| https://stackoverflow.com/questions/47890312/ |
PyTorch DataLoader | I'm trying to use multiple torch.utils.data.DataLoaders to create datasets that have different transforms applied to them. Currently, my code is roughly
d_transforms = [
transforms.RandomHorizontalFlip(),
# Some other transforms...
]
loaders = []
for i in range(len(d_transforms)):
dataset = datasets.MNIST('./data',
train=train,
download=True,
transform=d_transforms[i]
loaders.append(
DataLoader(dataset,
shuffle=True,
pin_memory=True,
num_workers=1)
)
This works, but it's extremely slow. kernprof shows that nearly all of the time in my code is spent on lines like
x, y = next(iter(train_loaders[i]))
I suspect that this is due to the fact that I'm using multiple instances of DataLoader, each with their own worker, which try to read the same data files.
My question is, what is a better way to do this? Ideally, I would subclass torch.utils.data.Dataset and specify the transform I'd like to apply when sampling, but this doesn't seem possible because __getitem__ cannot take extra arguments.
| __getitem__ does take an argument, which is the index of the content you want to load. For example:
import os
import pickle
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

# `normalize` is assumed to be a transforms.Normalize(...) defined elsewhere
transform = transforms.Compose(
    [transforms.ToTensor(),
     normalize])
class CountDataset(Dataset):
def __init__(self, file,transform=None):
self.transform = transform
#self.vocab = vocab
with open(file,'rb') as f:
self.data = pickle.load(f)
self.y = self.data['answers']
self.I = self.data['images']
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
img_name = self.I[idx]
label = self.y[idx]
fname = '/'.join(img_name.split("/")[-2:]) #/train2014/xx.jpg
DIR = '/hdd/manoj/VQA/Images/mscoco/'
img_full_path = os.path.join(DIR,fname)
img = Image.open(img_full_path).convert("RGB")
img_tensor = self.transform(img.resize((224,224)))
return img_tensor,label
testset = CountDataset(file = 'testdat.pkl',
transform = transform)
testloader = DataLoader(testset, batch_size=32,
shuffle=False, num_workers=4)
You don't call the data loader in a loop like that; iterate over it directly.
| https://stackoverflow.com/questions/47933597/ |
How can I use LSTM in pytorch for classification? | My code is as below:
class Mymodel(nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_layers, batch_size):
super(Mymodel, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = num_layers
self.batch_size = batch_size
self.lstm = nn.LSTM(input_size, hidden_size)
self.proj = nn.Linear(hidden_size, output_size)
self.hidden = self.init_hidden()
def init_hidden(self):
return (Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_size)),
Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_size)))
def forward(self, x):
lstm_out, self.hidden = self.lstm(x, self.hidden)
output = self.proj(lstm_out)
result = F.sigmoid(output)
return result
I want to use an LSTM to classify a sentence as good (1) or bad (0). Using this code, I get a result of shape time_step * batch_size * 1, not 0 or 1. How do I edit the code to get the classification result?
| Theory:
Recall that an LSTM outputs a vector for every input in the series. You are using sentences, which are a series of words (probably converted to indices and then embedded as vectors). This code from the LSTM PyTorch tutorial makes clear exactly what I mean (the *** in the comments marks my emphasis):
lstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3
inputs = [autograd.Variable(torch.randn((1, 3)))
for _ in range(5)] # make a sequence of length 5
# initialize the hidden state.
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
autograd.Variable(torch.randn((1, 1, 3))))
for i in inputs:
# Step through the sequence one element at a time.
# after each step, hidden contains the hidden state.
out, hidden = lstm(i.view(1, 1, -1), hidden)
# alternatively, we can do the entire sequence all at once.
# the first value returned by LSTM is all of the hidden states throughout
# the sequence. the second is just the most recent hidden state
# *** (compare the last slice of "out" with "hidden" below, they are the same)
# The reason for this is that:
# "out" will give you access to all hidden states in the sequence
# "hidden" will allow you to continue the sequence and backpropagate,
# by passing it as an argument to the lstm at a later time
# Add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (autograd.Variable(torch.randn(1, 1, 3)), autograd.Variable(
torch.randn((1, 1, 3)))) # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
One more time: compare the last slice of "out" with "hidden" below, they are the same. Why? Well...
If you're familiar with LSTM's, I'd recommend the PyTorch LSTM docs at this point. Under the output section, notice h_t is output at every t.
Now if you aren't used to LSTM-style equations, take a look at Chris Olah's LSTM blog post. Scroll down to the diagram of the unrolled network:
As you feed your sentence in word-by-word (x_i-by-x_i+1), you get an output from each timestep. You want to interpret the entire sentence to classify it. So you must wait until the LSTM has seen all the words. That is, you need to take h_t where t is the number of words in your sentence.
Code:
Here's a coding reference. I'm not going to copy-paste the entire thing, just the relevant parts. The magic happens at self.hidden2label(lstm_out[-1])
class LSTMClassifier(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size, label_size, batch_size):
...
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
self.hidden2label = nn.Linear(hidden_dim, label_size)
self.hidden = self.init_hidden()
def init_hidden(self):
return (autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim)),
autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim)))
def forward(self, sentence):
embeds = self.word_embeddings(sentence)
x = embeds.view(len(sentence), self.batch_size , -1)
lstm_out, self.hidden = self.lstm(x, self.hidden)
y = self.hidden2label(lstm_out[-1])
log_probs = F.log_softmax(y)
return log_probs
| https://stackoverflow.com/questions/47952930/ |
Using PyTorch for scientific computation | I would like to use PyTorch as a scientific computation package. It has much to recommend it in that respect - its Tensors are basically GPU-accelerated numpy arrays, and its autograd mechanism is potentially useful for a lot of things besides neural networks.
However, the available tutorials and documentation seem strongly geared towards quickly getting people up and running using it for machine learning. Although there is lots of good information available on the Tensor and Variable classes (and I understand that material reasonably well), the nn and optim packages always seem to be introduced by example rather than by explaining the API, which makes it hard to figure out exactly what's going on.
My main question at this point is whether I can use the optim package without also using the nn package, and if so how to do so. Of course I can always implement my simulations as subclasses of nn.Module even though they are not neural networks, but I would like to understand what happens under the hood when I do this, and what benefits/drawbacks it would give for my particular application.
More broadly, I would appreciate pointers to any resource that gives more of a logical overview of the API (for nn and optim specifically), rather than just presenting examples.
| This is a partial self-answer to the specific question about using optim without using nn. The answer is, yes, you can do that. In fact, from looking at the source code, the optim package doesn't know anything about nn and only cares about Variables and tensors.
The documentation gives the following incomplete example:
optimizer = optim.Adam([var1, var2], lr = 0.0001)
and then later:
for input, target in dataset:
optimizer.zero_grad()
output = model(input)
loss = loss_fn(output, target)
loss.backward()
optimizer.step()
The function model isn't defined anywhere and looks like it might be something to do with nn, but in fact it can just be a Python function that computes output from input using var1 and var2 as parameters, as long as all the intermediate steps are done using Variables so that it can be differentiated. The call to optimizer.step() will update the values of var1 and var2 automatically.
In terms of the structure of PyTorch overall, it seems that optim and nn are independent of one another, with nn being basically just a convenient way to chain differentiable functions together, along with a library of such functions that are useful in machine learning. I would still appreciate pointers to a good technical overview of the whole package, though.
| https://stackoverflow.com/questions/47958359/ |
Pytorch installation problems with Anaconda | After having upgraded my environment's Python to 3.6.1, I attempted to install pytorch using this command:
conda install -c peterjc123 pytorch
However I got this error:
Fetching package metadata .............
Solving package specifications: .
UnsatisfiableError: The following specifications were found to be in conflict:
-pytorch
-pyqt
I also used the commands
conda install -c peterjc123 pytorch cuda90
conda install -c peterjc123 pytorch cuda80
But the result is still the same. Anyone got a clue how to solve this?
| The problem was solved after downgrading from Python 3.6.2 to Python 3.5.1 after running:
conda install -c anaconda python=3.5.1
After running this command, run:
conda install -c peterjc123 pytorch
PyTorch should then install as normal. A similar issue occurs for OpenCV as well.
| https://stackoverflow.com/questions/47958982/ |
Every time I use cuda() to move a Variable from CPU to GPU in pytorch, it takes about 5 to 10 minutes | I just do this:
t = Variable(torch.randn(5))
t =t.cuda()
print(t)
but it takes 5 to 10 minutes, every time.
I used the CUDA samples to test bandwidth; it's fine.
Then I used pdb to find out which call takes the most time.
I found this in /anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py:
def _lazy_new(cls, *args, **kwargs):
_lazy_init()
# We need this method only for lazy init, so we can remove it
del _CudaBase.__new__
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
it takes about 5 minutes at the return statement.
I don't know how to solve my problem with this information.
My environment is: Ubuntu 16.04 + CUDA 9.1
| There is a CUDA version mismatch between the CUDA my PyTorch was compiled with and the CUDA I'm running. I divided the official installation command
conda install pytorch torchvision cuda90 -c pytorch
into two sections:
conda install -c soumith magma-cuda90
conda install pytorch torchvision -c soumith
The second command installed pytorch-0.2.0 by default, which matches CUDA 8.0. After I updated my PyTorch to 0.3.0, this command takes only one second.
| https://stackoverflow.com/questions/47979852/ |
Dynamic addition of hidden units in pytorch | I am trying to add hidden units to a 3-layered neural network (input, hidden, output) dynamically as I train it. I want to keep the weights of the trained part of the network as I add new hidden units. This is my code,
class my_network(torch.nn.Module):
def __init__(self,input_dim,hidden_dim,output_dim):
super(my_network,self).__init__()
self.I = input_dim
self.H = hidden_dim
self.O = output_dim
self.layer1 = torch.nn.Linear(input_dim,hidden_dim)
self.layer2 = torch.nn.Linear(hidden_dim,output_dim)
def add_neurons(self,no_of_neurons,flag):
if flag == 1:
weights = [self.layer1.weight.data,self.layer2.weight.data]
self.layer1 = torch.nn.Linear(self.I,self.H+no_of_neurons)
self.layer2 = torch.nn.Linear(self.H+no_of_neurons,self.O)
self.layer1.weight.data[0:-no_of_neurons,:] = weights[0]
self.layer2.weight.data[:,0:-no_of_neurons] = weights[1]
self.H = self.H + no_of_neurons
return self.layer1.weight.shape[0]
def forward(self,x):
temp = self.layer1(x)
out = self.layer2(temp)
return out
I have noticed that once I call the "add_neurons" method, the weights stop updating (while gradients are still generated). Any help would be much appreciated.
| The optimizer might not be informed about the new parameters that you added to your model. The easiest fix would probably be to recreate the optimizer object with the updated list of your model's parameters.
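A minimal sketch of what that looks like with the my_network class from the question (the dimensions and learning rate are arbitrary):
import torch.optim as optim

net = my_network(input_dim=10, hidden_dim=20, output_dim=2)
optimizer = optim.SGD(net.parameters(), lr=0.01)

# ... some training steps ...

net.add_neurons(5, flag=1)
# layer1/layer2 were re-created inside add_neurons, so rebuild the optimizer
# over the new parameter list; otherwise it keeps updating the old tensors.
optimizer = optim.SGD(net.parameters(), lr=0.01)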
| https://stackoverflow.com/questions/47996046/ |
Advanced indexing in 2d tensor in pytorch | I have a 2d tensor X and two lists of indices, one for the first dimension and one for the second; call them a and b. I want to do
X[a[i],b[i]] = 0 for i in range(len(a))
How can I do this? If I directly do X[a,b], the error is IndexError: The advanced indexing objects could not be broadcast
| Check your lists which contains the indices, some values might be out of range. That's when you will get IndexError like the one below:
In [43]: X[4,4]
IndexError Traceback (most recent call last)
in ()
----> 1 X[4,4]
IndexError: index 4 is out of range for dimension 0 (of size 3)
If your indices are in correct range, it should work fine.
Here is an example:
In [35]: X = torch.Tensor([[3, 4, 5, 6], [1, 2, 3, 4], [6, 3, 2, 1]])
In [36]: X
Out[36]:
3 4 5 6
1 2 3 4
6 3 2 1
[torch.FloatTensor of size 3x4]
In [37]: a = [0, 2]
In [38]: b = [1, 2]
In [39]: X[a, b]
Out[39]:
4
2
[torch.FloatTensor of size 2]
In [40]: X[a, b] = 0
In [41]: X
Out[41]:
3 0 5 6
1 2 3 4
6 3 0 1
[torch.FloatTensor of size 3x4]
| https://stackoverflow.com/questions/48006755/ |
I get this error on pytorch "RuntimeError: invalid argument 2: size '[-1 x 400]" | Can anyone help me fix this error?
RuntimeError: invalid argument 2: size '[-1 x 400]' is invalid for input with 1597248 elements at /Users/soumith/miniconda2/conda-bld/pytorch_1503975723910/work/torch/lib/TH/THStorage.c:37
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 5 == 4: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 5))
running_loss = 0.0
print('Finished Training')
| When I scaled the image size, the error stopped occurring. The network flattens the conv output to 16 * 5 * 5 features, which is what a 32x32 input produces (32 -> 28 -> 14 -> 10 -> 5 through the conv/pool stages), so resizing every image to 32x32 makes the view(-1, 16 * 5 * 5) call valid:
transform = transforms.Compose(
[transforms.Scale((32,32)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
| https://stackoverflow.com/questions/48015235/ |
How to decode an embedding in PyTorch efficiently? | I am new to Pytorch and RNN. I am learning how to use RNN to predict numbers as a tutorial from the video: https://www.youtube.com/watch?v=MKA6v99uYKY
In his code, he uses Python 3 and does the decoding like this:
out_unembedded = out.view(-1, hidden_size) @ embedding.weight.transpose(0,1)
I am using Python 2 and tried this code:
out_unembedded = out.view(-1, hidden_size).dot( embedding.weight.transpose(0,1))
But it does not seem right, so then I tried to decode like this:
import torch
import torch.nn as nn
from torch.autograd import Variable
word2id = {'hello': 0, 'world': 1, 'I': 2, 'am': 3,'writing': 4,'pytorch': 5}
embeds = nn.Embedding(6, 3)
word_embed = embeds(Variable(torch.LongTensor([word2id['am']])))
id2word = {v: k for k, v in word2id.iteritems()}
index = 0
for row in embeds.weight.split(1):
if(torch.min( torch.eq(row.data,word_embed.data) ) == 1):
print index
print id2word[index]
index+=1
Is there a more professional way to do this? Thanks!
------------ UPDATE ------------
I found the correct way to substitute @ in Python 2:
out_unembedded = torch.mm( embedded_output.view(-1, hidden_size),embedding.weight.transpose(0, 1))
| I finally figured out the problem. The two decoding methods are different.
The first one uses @ to do the matrix product. Instead of searching for the exact encoding, it calculates a similarity score via the dot product and picks the most similar word; the value produced by the dot product measures the similarity between the target vector and the word at that index.
The second method, which builds a hash map, finds the index by matching the exact encoding.
| https://stackoverflow.com/questions/48060415/ |
Compute gradients w.r.t. the values of embedding vectors in PyTorch | I am trying to train a dual encoder LSTM model for a chatbot using PyTorch.
I defined two classes: the Encoder class defines the LSTM itself and the Dual_Encoder class applies the Encoder to both context and response utterances that I am trying to train on:
class Encoder(nn.Module):
def __init__(self,
input_size,
hidden_size,
vocab_size,
num_layers = 1,
num_directions = 1,
dropout = 0,
bidirectional = False,
rnn_type = 'lstm'):
super(Encoder, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.num_layers = 1
self.num_directions = 1
self.dropout = 0,
self.bidirectional = False
self.embedding = nn.Embedding(vocab_size, input_size, sparse = False, padding_idx = 0)
self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=False, dropout = dropout, bidirectional=False).cuda()
self.init_weights()
def init_weights(self):
init.orthogonal(self.lstm.weight_ih_l0)
init.uniform(self.lstm.weight_hh_l0, a=-0.01, b=0.01)
embedding_weights = torch.FloatTensor(self.vocab_size, self.input_size).cuda()
init.uniform(embedding_weights, a = -0.25, b= 0.25)
id_to_vec, emb_dim = create_id_to_vec('/data/train_shuffled_onethousand.csv','/data/glove.6B.100d.txt')
for id, vec in id_to_vec.items():
embedding_weights[id] = vec
del self.embedding.weight
self.embedding.weight = nn.Parameter(embedding_weights)
self.embedding.weight.requires_grad = True
#self.embedding.weight.data.copy_(torch.from_numpy(self.embedding_weights))
def forward(self, inputs):
embeddings = self.embedding(inputs)
outputs, hiddens = self.lstm(embeddings)
return outputs, hiddens
#%%
class DualEncoder(nn.Module):
def __init__(self, encoder):
super(DualEncoder, self).__init__()
self.encoder = encoder
self.number_of_layers = 1
#h_0 (num_layers * num_directions, batch, hidden_size):
#tensor containing the initial hidden state for each element in the batch.
#dual_hidden_size = self.encoder.hidden_size * self.encoder.num_directions
M = torch.FloatTensor(self.encoder.hidden_size, self.encoder.hidden_size).cuda()
init.normal(M)
self.M = nn.Parameter(M, requires_grad = True)
def forward(self, contexts, responses):
#output (seq_len, batch, hidden_size * num_directions):
#tensor containing the output features (h_t) from the last layer
#of the RNN, for each t.
#h_n (num_layers * num_directions, batch, hidden_size):
#tensor containing the hidden state for t=seq_len
context_out, context_hn = self.encoder(contexts)
response_out, response_hn = self.encoder(responses)
scores_list = []
y_preds = None
for e in range(999):
context_h = context_out[e][-1].view(1, self.encoder.hidden_size)
response_h = response_out[e][-1].view(self.encoder.hidden_size,1)
dot_var = torch.mm(torch.mm(context_h, self.M), response_h)[0][0]
dot_tensor = dot_var.data
dot_tensor.cuda()
score = torch.sigmoid(dot_tensor)
scores_list.append(score)
y_preds_tensor = torch.stack(scores_list).cuda()
y_preds = autograd.Variable(y_preds_tensor).cuda()
return y_preds
#%% TRAINING
torch.backends.cudnn.enabled = False
#%%
vocab = create_vocab('/data/train_shuffled_onethousand.csv')
vocab_len = len(vocab)
emb_dim = get_emb_dim('/data/glove.6B.100d.txt')
#%%
encoder_model = Encoder(
input_size = emb_dim,
hidden_size = 300,
vocab_size = vocab_len)
encoder_model.cuda()
#%%
dual_encoder = DualEncoder(encoder_model)
dual_encoder.cuda()
#%%
loss_func = torch.nn.BCELoss()
loss_func.cuda()
learning_rate = 0.001
epochs = 100
#batch_size = 50
optimizer = optim.Adam(dual_encoder.parameters(),
lr = learning_rate)
#%%
for i in range(epochs):
context_matrix, response_matrix, y = make_matrices('/data/train_shuffled_onethousand.csv')
context_matrix = autograd.Variable(context_matrix, requires_grad=True).cuda()
response_matrix = autograd.Variable(response_matrix, requires_grad=True).cuda()
y_label = y.cuda()
y_preds = dual_encoder(context_matrix, response_matrix)
loss = loss_func(y_preds, y_label)
if i % 10 == 0:
print("Epoch: ", i, ", Loss: ", loss.data[0])
#evaluation metrics...
dual_encoder.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm(dual_encoder.parameters(), 10)
optimizer.step()
The following error occurs:
2018-01-06 06:07:02,148 INFO - result = self.forward(*input, **kwargs)
2018-01-06 06:07:02,148 INFO - File "all_scripts.py", line 258, in forward
2018-01-06 06:07:02,148 INFO - context_out, context_hn = self.encoder(contexts)
2018-01-06 06:07:02,149 INFO - File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
2018-01-06 06:07:02,149 INFO - result = self.forward(*input, **kwargs)
2018-01-06 06:07:02,149 INFO - File "all_scripts.py", line 229, in forward
2018-01-06 06:07:02,149 INFO - embeddings = self.embedding(inputs)
2018-01-06 06:07:02,150 INFO - File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
2018-01-06 06:07:02,150 INFO - result = self.forward(*input, **kwargs)
2018-01-06 06:07:02,150 INFO - File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 103, in forward
2018-01-06 06:07:02,150 INFO - self.scale_grad_by_freq, self.sparse
2018-01-06 06:07:02,150 INFO - File "/usr/local/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py", line 40, in forward
2018-01-06 06:07:02,151 INFO - assert not ctx.needs_input_grad[0], "Embedding doesn't " \
2018-01-06 06:07:02,151 INFO - AssertionError: Embedding doesn't compute the gradient w.r.t. the indices
I do understand why the problem occurs (surely it makes no sense to compute the gradient w.r.t. the indices).
But I do not understand how to adjust the code so that it computes the gradients w.r.t. the content values of the embedding vectors.
All help highly appreciated!
(Also see the thread in the PyTorch forum)
| After some extensive adjustments, the code works now. The problem was not only the embedding initialization. See my github repo for the improved code.
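For anyone hitting the same assertion, a likely culprit in the code above (my reading of the traceback, not necessarily the only change the author made): the integer index matrices are wrapped with requires_grad=True. nn.Embedding can only compute gradients w.r.t. its weight parameter, never w.r.t. the integer indices it is fed, so the inputs should not request gradients:
context_matrix = autograd.Variable(context_matrix).cuda()    # indices: no requires_grad=True
response_matrix = autograd.Variable(response_matrix).cuda()  # indices: no requires_grad=True
The embedding weights themselves (self.embedding.weight, an nn.Parameter) still receive gradients as usual.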
| https://stackoverflow.com/questions/48128934/ |
What's the meaning of function eval() in torch.nn module | Official comment shows that "This has any effect only on modules such as Dropout or BatchNorm." But I don't understand its implementation.
| Dropout and BatchNorm (and maybe some custom modules) behave differently during training and evaluation. You must let the model know when to switch to eval mode by calling .eval() on the model.
This sets self.training to False for every module in the model. If you are implementing your own module that must behave differently during training and evaluation, you can check the value of self.training while doing so.
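A small sketch of the usual pattern:
import torch.nn as nn
model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5))
model.eval()            # sets self.training = False on every submodule
print(model.training)   # False -> Dropout is disabled, BatchNorm uses running statistics
model.train()           # switch back before the next training epoch
print(model.training)   # True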
| https://stackoverflow.com/questions/48146926/ |
Cropping a minibatch of images in Pytorch -- each image differently | I have a tensor named input with dimensions 64x21x21. It is a minibatch of 64 images, each 21x21 pixels. I'd like to crop each image down to 11x11 pixels. So the output tensor I want would have dimensions 64x11x11.
I'd like to crop each image around a different "center pixel." The center pixels are given by a 2-dimensional long tensor named center with dimensions 64x2. For image i, center[i][0] gives the row index and center[i][1] gives the column index for the pixel that should be at the center in the output. We can assume that the center pixel is always at least 5 pixels away from the border.
Is there an efficient way to do this in pytorch (on the gpu)?
UPDATE: Let me clarify that the center tensor is formed by a deep neural network. It acts as a "hard attention mechanism," to use the reinforcement learning term for it. After I "crop" an image, that subimage becomes the input to another neural network. That's why I want to do the cropping in Pytorch: because the operations before and after the cropping are in Pytorch. I'd like to avoid having to transfer anything from the GPU back to the CPU.
| I raised the question over on the pytorch forums, and got an answer there from smth. The grid_sample function should totally solve the problem.
https://discuss.pytorch.org/t/cropping-a-minibatch-of-images-each-image-a-bit-differently/12247
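For reference, a minimal sketch of one alternative using plain advanced indexing (assuming a reasonably recent PyTorch; center here is a random stand-in for the tensor produced by your network, and every center is at least 5 pixels from the border, as stated):
import torch
batch, crop = 64, 11
half = crop // 2
images = torch.rand(batch, 21, 21).cuda()
center = torch.randint(half, 21 - half, (batch, 2)).cuda()            # int64 (row, col) per image
offsets = torch.arange(-half, half + 1, device=images.device)         # (11,)
rows = center[:, 0].unsqueeze(1) + offsets                            # (64, 11)
cols = center[:, 1].unsqueeze(1) + offsets                            # (64, 11)
batch_idx = torch.arange(batch, device=images.device).view(-1, 1, 1)  # (64, 1, 1)
crops = images[batch_idx, rows.unsqueeze(2), cols.unsqueeze(1)]       # (64, 11, 11)
Note that, unlike grid_sample, integer indexing like this is not differentiable with respect to the center coordinates, which matters if the attention mechanism itself is supposed to receive gradients.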
| https://stackoverflow.com/questions/48235916/ |
save predictions from pytorch model | I'm following the pytorch transfer learning tutorial and applying it to the Kaggle seed classification task. I'm just not sure how to save the predictions in a csv file so that I can make the submission.
Any suggestion would be helpful. This is what I have,
use_gpu = torch.cuda.is_available()
model = models.resnet50(pretrained=True)
for param in model.parameters():
param.requires_grad = False
num_ftrs = model.fc.in_features
model.fc = torch.nn.Linear(num_ftrs, len(classes))
if use_gpu:
model = model.cuda()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
loaders = {'train':train_loader, 'valid':valid_loader, 'test': test_loader}
model = train_model(loaders, model, criterion, optimizer, exp_lr_scheduler, num_epochs=50)
| Once you have trained your model, you can evaluate it on your testing data. This gives you a Variable, probably on the GPU. From there, you'll want to copy its tensor to the CPU with cpu() and convert it into a numpy array with numpy(). You can then use numpy's CSV functionality or use e.g. pandas' DataFrame.to_csv. In the first case, you'd have something like this:
# evaluate on Variable x with testing data
y = model(x)
# access Variable's tensor, copy back to CPU, convert to numpy
arr = y.data.cpu().numpy()
# write CSV
np.savetxt('output.csv', arr)
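If you'd rather build a Kaggle-style submission with pandas, a sketch could look like the following (the 'file'/'species' column names and the ids list of test file names are assumptions about the competition format, not something produced by the code above):
import pandas as pd
pred_idx = y.data.cpu().numpy().argmax(axis=1)         # predicted class index per test image
submission = pd.DataFrame({
    'file': ids,                                       # hypothetical list of test file names, same order as x
    'species': [classes[i] for i in pred_idx],         # map indices back to class names
})
submission.to_csv('submission.csv', index=False)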
| https://stackoverflow.com/questions/48264368/ |
Out of memory error during evaluation but training works fine | I have recently upgraded PyTorch from 0.2 to 0.3. Surprisingly my old programs are throwing an out of memory error during evaluation (in eval() mode) but training works just fine. I am using the same batch size for training and evaluation. I am totally clueless what is happening? Did anyone face similar issue? Is there any possible solution?
I tried using volatile=True param on the variables and it didn't help. Please note, I am not doing anything special to use cuDNN. I am using the default setting.
def validate(self, dev_corpus):
# Turn on evaluation mode which disables dropout.
self.model.eval()
dev_batches = helper.batchify(dev_corpus.data, self.config.batch_size)
print('number of dev batches = ', len(dev_batches))
dev_loss = 0
num_batches = len(dev_batches)
for batch_no in range(1, num_batches + 1):
session_queries, session_query_length, rel_docs, rel_docs_length, doc_labels = helper.session_to_tensor(
dev_batches[batch_no - 1], self.dictionary)
if self.config.cuda:
session_queries = session_queries.cuda()
session_query_length = session_query_length.cuda()
rel_docs = rel_docs.cuda()
rel_docs_length = rel_docs_length.cuda()
doc_labels = doc_labels.cuda()
loss = self.model(session_queries, session_query_length, rel_docs, rel_docs_length, doc_labels)
if loss.size(0) > 1:
loss = loss.mean()
dev_loss += loss.data[0]
return dev_loss / num_batches
I am using the above function for evaluation. Here, session_queries, session_query_length, and the rest of the variables are created with volatile=True enabled.
Please help!!
| I think it fails during validation because the volatile flag is now deprecated and has no effect. Starting from 0.4.0, to avoid the gradient being computed for all variables during validation, you should use a context manager. Code example:
with torch.no_grad():
# Your validation code
With that, the operation history and gradients are not stored. Thus this will save memory.
Refer to 0.4.0 migration guide for more details.
Furthermore, you could delete references to those variables after you finished the evaluation like this:
someVar = Variable(someVar)
del someVar
| https://stackoverflow.com/questions/48271007/ |
Problems of Pytorch installation on Ubuntu 17.10 (GPU) | I would like to use PyTorch and its GPU computations on my computer.
I have a computer running with Ubuntu 17.10. The computer (Alienware m17x) has two graphic cards:
An integrated Intel Ivybridge Mobile
A Nvidia Geforce 675M.
In order to install PyTorch, I followed the instructions on the PyTorch website pytorch.org
1) I installed CUDA 9 with the deb file: https://developer.nvidia.com/cuda-downloads
=> Linux/x86_64/Ubuntu/17.04/deb (local)
2) I installed Pytorch using the conda command line: conda install pytorch torchvision cuda90 -c pytorch
None of these two steps returned me any type of errors.
I restarted my computer. Apparently the two cards are detected:
$ lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF114M [GeForce GTX 675M] (rev a1)
But apparently there is something wrong with the drivers or CUDA itself; nvidia-detector does not return anything:
$ nvidia-detector
none
And pytorch can not use cuda:
[1]: import torch
In [2]: torch.cuda.is_available()
Out[2]: False
Could you help me? I can provide additional informations if necessary, but I am not sure what could be relevant.
| You do not need to install CUDA separately to use a GPU with pytorch if you install pytorch this way: the pytorch binaries include all the necessary CUDA libraries.
Therefore it also does not matter which cuda version flavour you choose when installing pytorch. Usually one will probably want the latest version, but in cases where an old GPU needs to get used, the pytorch binary that comes with the older cuda version may be the only one still supporting that GPU.
If no GPU is detected then this is probably not related to the CUDA library but to your kernel driver. Make sure that your system has the latest tested NVIDIA proprietary kernel driver installed.
What is maybe a bit confusing is that one can install pytorch binaries with cuda support on any system, including one without a GPU or with a GPU but without the system driver installed. This works fine until you try to actually use the GPU and invoke .cuda()
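A quick sanity check you can run from Python to separate driver problems from CUDA-library problems:
import torch
print(torch.__version__)          # the installed PyTorch build
print(torch.version.cuda)         # CUDA version bundled with the binaries
print(torch.cuda.is_available())  # False typically points at the kernel driver, not at CUDA itself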
| https://stackoverflow.com/questions/48280658/ |
Pytorch module error in Jupyter Notebook | I installed pytorch using conda command when the virtual env was activated.
But, there are some problems when I import torch modules in Jupyter Notebook.
I checked the sys.path both in the prompt and in Jupyter Notebook.
Well.. in the prompt, the result of sys.path is
['', '/home/usrname/anaconda3/lib/python36.zip',
'/home/usrname/anaconda3/lib/python3.6',
'/home/usrname/anaconda3/lib/python3.6/lib-dynload',
'/home/usrname/anaconda3/lib/python3.6/site-packages']
and there are no errors when I import torch modules.
But, in the jupyter notebook(executed in chrome), the sys.path is
['',
'/home/usrname/anaconda3/lib/python36.zip',
'/home/usrname/anaconda3/lib/python3.6',
'/home/usrname/anaconda3/lib/python3.6/lib-dynload',
'/home/usrname/anaconda3/lib/python3.6/site-packages',
'/home/usrname/anaconda3/lib/python3.6/site-packages/IPython/extensions',
'/home/usrname/.ipython']
and I see an error: No module named 'torch'
I can't solve this problem...
| I had the same issue but managed to solve the problem. I think PyTorch has an 'issue' with the regular Jupyter application in Anaconda, so I urge you to first install the numpy, jupyter and notebook packages into the environment that has PyTorch, then you can launch your notebook again.
Use (while in your virtual env path):
conda install numpy jupyter notebook
Hope this helps.
| https://stackoverflow.com/questions/48316706/ |
Invalid syntax error when passing a list of modules in Pytorch | I have two blocks in my deep models which are defined as follows:
def make_conv_bn_relu(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
return [
nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
]
def make_conv_relu(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
return [
nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=True),
nn.ReLU(inplace=True),
]
Now, I want to pass it in nn.Sequential.
self.down1 = nn.Sequential(*make_conv_bn_relu(in_channels, 16, kernel_size=3, stride=1, padding=1 ), *make_conv_bn_relu(16, 32, kernel_size=3, stride=2, padding=1 ),)
But I am getting the following error:
Traceback (most recent call last):
File "train_unet.py", line 17, in <module>
from net.model.unet1 import UNet256_3x3 as Net
File "/home/avijit.d/Kaggle/Pytorch/source/dummy-01/net/model/unet1.py", line 40
self.down1 = nn.Sequential(*make_conv_bn_relu(in_channels, 16, kernel_size=3, stride=1, padding=1 ), *make_conv_bn_relu(16, 32, kernel_size=3, stride=2, padding=1 ),)
^
SyntaxError: invalid syntax
How to get rid of this? I am using Python 2.7
| You can't use multiple * unpacks in a single call in Python 2. But if you really want this, just concatenate the lists:
nn.Sequential(*(make_foo() + make_bar()))
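Applied to the definition from the question, that would look roughly like this:
self.down1 = nn.Sequential(*(
    make_conv_bn_relu(in_channels, 16, kernel_size=3, stride=1, padding=1) +
    make_conv_bn_relu(16, 32, kernel_size=3, stride=2, padding=1)
))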
| https://stackoverflow.com/questions/48350616/ |
Pytorch network parameter calculation | Can someone please tell me how the network parameter count (10) is calculated? Thanks in advance.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(x.size()[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
print(net)
print(len(list(net.parameters())))
Output:
Net(
(conv1): Conv2d (1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d (6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120)
(fc2): Linear(in_features=120, out_features=84)
(fc3): Linear(in_features=84, out_features=10)
)
10
Best,
Zack
| Most layer modules in PyTorch (e.g. Linear, Conv2d, etc.) group parameters into specific categories, such as weights and biases. Each of the five layer instances in your network has a "weight" and a "bias" parameter. This is why "10" is printed.
Of course, all of these "weight" and "bias" fields contain many parameters. For example, your first fully connected layer self.fc1 contains 16 * 5 * 5 * 120 = 48000 parameters. So len(params) doesn't tell you the number of parameters in the network--it gives you just the total number of "groupings" of parameters in the network.
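If you want the actual number of trainable scalars rather than the number of parameter tensors, you can sum over them, e.g.:
num_params = sum(p.numel() for p in net.parameters())
print(num_params)  # total count of scalar weights and biases in the network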
| https://stackoverflow.com/questions/48393608/ |
pytorch: can't load CNN model and do prediction TypeError: 'collections.OrderedDict' object is not callable | I trained a CNN model using MNIST dataset and now want to predict a classification of the image, which contains a number 3.
But when I tried to use this CNN to predict, pytorch gives me this error:
TypeError: 'collections.OrderedDict' object is not callable
And here's what I write:
cnn = torch.load("/usr/prakt/w153/Desktop/score_detector.pkl")
img = scipy.ndimage.imread("/usr/prakt/w153/Desktop/resize_num_three.png")
test_x = Variable(torch.unsqueeze(torch.FloatTensor(img), dim=1), volatile=True).type(torch.FloatTensor).cuda()
test_output, last_layer = cnn(test_x)
pred = torch.max(test_output, 1)[1].cuda().data.squeeze()
print(pred)
here's some explanation:
img is the image to be predicted, with size 28*28; score_detector.pkl is the trained CNN model
any help will be appreciated!
| Indeed, you are loading a state_dict rather than the model itself.
Saving the model is as follows:
torch.save(model.state_dict(), 'model_state.pth')
Whereas to load the model state you first need to init the model and then load the state
model = Model()
model.load_state_dict(torch.load('model_state.pth'))
If you trained your model on GPU but would like to load the model on a laptop which doesn't have CUDA, then you would need to add one more argument
model.load_state_dict(torch.load('model_state.pth', map_location='cpu'))
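Alternatively, if you want torch.load to hand you back a callable model directly (as in your original snippet), you can save the entire module object instead of just its weights; note that this pickles the class, so the class definition must be importable when you load:
torch.save(cnn, 'score_detector.pkl')    # saves the whole module object
cnn = torch.load('score_detector.pkl')   # returns an nn.Module you can call directly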
| https://stackoverflow.com/questions/48419626/ |
coding patterns for efficient minibatch loop using custom pytorch dataset | Are there any general recommendations to handle data efficiently in a custom Dataset so that it plays nicely with the minibatch eval/train loop? To illustrate what I mean more concretely, let's say I define this synthetic toy dataset that maps x to x+1:
import torch.utils.data as data
class Dataset(data.Dataset):
def __init__(self):
super(Dataset, self).__init__()
# list of [x, y]
self.dataset = [
[1, 2],
[2, 3],
[3, 4],
[4, 5]
]
def __getitem__(self, index):
item = self.dataset[index]
return item[0], item[1]
def __len__(self):
return len(self.dataset)
In practice, this would be wrapped in a DataLoader and accessed inside an eval/train loop, something like this:
dataset = Dataset()
data_loader = data.DataLoader(dataset=dataset, batch_size=2, shuffle=True)
epochs = 100
for i_epoch in range(epochs):
for i_minibatch, minibatch in enumerate(data_loader):
x, y = minibatch
# predict and train
The dataset object might return primitive Python objects like numbers or lists, like in my example implementation, but in the "predict and train" part of the last code snippet, we need some specific data types to compute stuff, like a torch.FloatTensor (it seems the data loader can do this implicitly), probably even wrapped as a torch.autograd.Variable, and some calls to .cuda() might also be necessary. My question is about general advice for when to make these data transformations and function calls.
For example, one option would be to have everything already saved as a torch.FloatTensor inside the dataset, and in the data_loader loop we could add the Variable wrapper and call .cuda(). We could also have all or part of the data on the GPU already by calling .cuda() in the Dataset constructor or in the getitem method. I think there might be pros and cons to all of these approaches. If I'm training a model for several epochs, I don't want to introduce unnecessary overhead each epoch or minibatch iteration that could have been avoided by precomputing stuff in the dataset. Probably someone with more knowledge about the internals of pytorch (maybe related to some caching or jit compilation happening under the hood) might be able to point to more specific reasons to choose one approach over another.
| Generally, datasets are stored in files in a format that is friendly to storage on disk. When you load the dataset, you want the datatypes to be friendly to PyTorch. This is accomplished by the transformations interface of the torchvision library. For example, below are the standard transformations for MNIST:
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
Here ToTensor divides all values in the tensor by 255, so if the data is an RGB image the values in the tensor end up between 0.0 and 1.0.
The key thing here is that your data on disk should ideally be agnostic of what you might want to do with it (training, visualization, calculating stats, etc.) as well as agnostic of the framework being used. You apply the transformations relevant to your task after you load the data.
One other thing I want to mention is handling very large datasets like ImageNet. There are few important things:
You should avoid using separate image files as your dataset because this doesn't work well in clusters. Instead you can pack all files in format like LMDB or uncompressed zip (use Python ZipFile module) and then only access these files sequentially. The random access in large files will slow you down tremendously.
You should avoid using the shuffle option of the DataLoader class for large datasets. If you do, you are again accessing a large file with random access and performance will tank. Instead, what you can do is sequentially read K = C * total_epochs * batch_size records, where C is some constant of your choice >= 1, then shuffle those K records in memory and divide them into batches. Unfortunately you have to do this manually for now; a sketch of this chunked shuffling is shown below.
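A rough sketch of that chunked shuffling (simplified so it shuffles within one window of K records at a time; read_record is a hypothetical accessor into the packed LMDB/zip file and C is the window multiplier):
import random
def chunked_shuffled_batches(read_record, num_records, batch_size, C=16):
    K = C * batch_size                      # shuffle window; larger C gives better mixing
    for start in range(0, num_records, K):
        chunk = [read_record(i) for i in range(start, min(start + K, num_records))]
        random.shuffle(chunk)               # shuffle only within the in-memory chunk
        for b in range(0, len(chunk), batch_size):
            yield chunk[b:b + batch_size]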
| https://stackoverflow.com/questions/48433298/ |
Getting CUDA out of memory | I'm trying to train a network but I get the error below.
I set my batch size to 300 and I get this error, but even if I reduce it to 100 I still get it. More frustratingly, running 10 epochs on ~1200 images takes about 40 minutes. Any suggestions on what is going wrong and how I may speed up the process?
Any tips will be extremely helpful,Thanks in advance.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-31-3b43ff4eea72> in <module>()
5 labels = Variable(labels).cuda()
6
----> 7 optimizer.zero_grad()
8 outputs = cnn(images)
9 loss = criterion(outputs, labels)
/usr/local/lib/python3.5/dist-packages/torch/optim/optimizer.py in zero_grad(self)
114 if p.grad is not None:
115 if p.grad.volatile:
--> 116 p.grad.data.zero_()
117 else:
118 data = p.grad.data
RuntimeError: cuda runtime error (2) : out of memory at /pytorch /torch/lib/THC/generic/THCTensorMath.cu:35`
Even though my GPUs are free:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111 Driver Version: 384.111 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:05:00.0 Off | N/A |
| 23% 18C P8 15W / 250W | 10864MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:08:00.0 Off | N/A |
| 23% 20C P8 15W / 250W | 10MiB / 11172MiB | 0% Default
+-------------------------------+----------------------+---------------
| Fairly general question. Here is how I would think about this problem.
Try to set the batch size (number of samples per batch) to 1. If this fixes the problem, you can then try to find the optimal batch size.
If even for bs=1 you get "RuntimeError: cuda runtime error (2) : out of memory" :
Do not use linear layers that are too large.
A linear layer nn.Linear(m, n) uses O(nm) memory: that is to say, the memory requirements of the weights scale quadratically with the number of features, and the gradients take the same amount again.
Do not accumulate history across your training loop.
If you keep summing the loss inside a loop over 10000 or more iterations, the graph kept around for back-propagation will be huge, taking a lot of memory.
Delete tensors you don't need with del explicitly.
Run ps -elf | grep python and kill -9 [pid] for stray Python processes on your GPU if you suspect that some other Python process is eating your memory.
| https://stackoverflow.com/questions/48473573/ |
PyTorch memory model: "torch.from_numpy()" vs "torch.Tensor()" | I'm trying to have an in-depth understanding of how PyTorch Tensor memory model works.
# input numpy array
In [91]: arr = np.arange(10, dtype=float32).reshape(5, 2)
# input tensors in two different ways
In [92]: t1, t2 = torch.Tensor(arr), torch.from_numpy(arr)
# their types
In [93]: type(arr), type(t1), type(t2)
Out[93]: (numpy.ndarray, torch.FloatTensor, torch.FloatTensor)
# ndarray
In [94]: arr
Out[94]:
array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.],
[ 6., 7.],
[ 8., 9.]], dtype=float32)
I know that PyTorch tensors share the memory buffer of NumPy ndarrays. Thus, changing one will be reflected in the other. So, here I'm slicing and updating some values in the Tensor t2
In [98]: t2[:, 1] = 23.0
And as expected, it's updated in t2 and arr since they share the same memory buffer.
In [99]: t2
Out[99]:
0 23
2 23
4 23
6 23
8 23
[torch.FloatTensor of size 5x2]
In [101]: arr
Out[101]:
array([[ 0., 23.],
[ 2., 23.],
[ 4., 23.],
[ 6., 23.],
[ 8., 23.]], dtype=float32)
But, t1 is also updated. Remember that t1 was constructed using torch.Tensor() whereas t2 was constructed using torch.from_numpy()
In [100]: t1
Out[100]:
0 23
2 23
4 23
6 23
8 23
[torch.FloatTensor of size 5x2]
So, no matter whether we use torch.from_numpy() or torch.Tensor() to construct a tensor from an ndarray, all such tensors and ndarrays share the same memory buffer.
Based on this understanding, my question is why does a dedicated function torch.from_numpy() exists when simply torch.Tensor() can do the job?
I looked at the PyTorch documentation but it doesn't mention anything about this? Any ideas/suggestions?
| from_numpy() automatically inherits input array dtype. On the other hand, torch.Tensor is an alias for torch.FloatTensor.
Therefore, if you pass an int64 array to torch.Tensor, the output tensor is a float tensor and they wouldn't share the storage. torch.from_numpy gives you a torch.LongTensor as expected.
a = np.arange(10)
ft = torch.Tensor(a) # same as torch.FloatTensor
it = torch.from_numpy(a)
a.dtype # == dtype('int64')
ft.dtype # == torch.float32
it.dtype # == torch.int64
| https://stackoverflow.com/questions/48482787/ |
In google-cloud-ml, setup.py error during setting up PYTORCH | i write setup.py like below to setup pytorch in google-cloud-ml engine
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['torchvision']
DEPENDENCY_LINKS =['http://download.pytorch.org/whl/cpu/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl']
setup(
name='trainer',
version='0.1',
dependency_links=DEPENDENCY_LINKS,
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='My pytorch trainer application package.'
)
error message
"Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-_ZQ7aQ/torch/"
I don't know why it happens...
When I searched about this problem, the answers just say to upgrade setuptools, but I don't know how to upgrade setuptools in ml-engine.
So please help me!
I want to run pytorch code in ml engine.
| Seems like DEPENDENCY_LINKS has been ignored by pip.
Instead, I copied the whl file to a GCS bucket, and used the flag '--package gs://my-bucket/torch-0.3.0.post4-cp27-cp27mu-linux_x86_64.whl' in gcloud to install the whl file before executing 'pip install torchvision' and it worked.
You also need to remove DEPENDENCY_LINKS from setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['torchvision']
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='My pytorch trainer application package.'
)
| https://stackoverflow.com/questions/48512588/ |
In PyTorch how are layer weights and biases initialized by default? | I was wondering how are layer weights and biases initialized by default? E.g. if I create the linear layer
torch.nn.Linear(5,100)
How are weights and biases for this layer initialized by default?
| PyTorch 1.0
Most layers are initialized using the Kaiming Uniform method. Example layers include Linear, Conv2d, RNN etc. If you are using other layers, you should look up that layer on this doc. If it says weights are initialized using U(...) then it's the Kaiming Uniform method. Bias is initialized using LeCun init, i.e., uniform(-std, std) where the standard deviation std is 1/sqrt(fan_in) (code).
PyTorch 0.4.1, 0.3.1
Weights and biases are initialized using LeCun init (see sec 4.6) for conv layers (code: 0.3.1, 0.4.1).
If you want to override default initialization then see this answer.
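For example, one common pattern (a sketch, assuming a PyTorch version where the in-place nn.init functions with a trailing underscore exist) is to apply an init function to every submodule:
import torch.nn as nn
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # swap the default Kaiming-uniform init for Xavier
        nn.init.constant_(m.bias, 0.0)      # zero the bias instead of the default uniform init
layer = nn.Linear(5, 100)
layer.apply(init_weights)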
| https://stackoverflow.com/questions/48529625/ |
Is NVIDIA GeForce GT 635M suitable for Deep Learning? | My GPU model is NVIDIA GeForce GT 635M and the NVIDIA site says that this GPU is CUDA-enabled. Can I use TensorFlow or PyTorch or any other deep learning platform with this GPU, or is it not suitable?
| Any GPU can be used for Deep Learning training. You can even train only on your CPU. This is mostly true for using TensorFlow and PyTorch as well, but there are caveats as discussed later.
As for their suitability for the task, it is true that some GPUs are better than others, for various reasons. One would clearly be just the performance aspect: the more powerful the GPU, the quicker the training. Another is the CUDA capability of the given GPU, which you have mentioned in your question. The GPU you mention, NVIDIA GeForce GT 635M, is CUDA enabled, but has a Compute Capability of only 2.1, which is on the low end. This also means that you won't be able to use TensorFlow or PyTorch because they require >= 3.0 Compute Capability (thanks to janneb for pointing that out). You will still be able to use them, but not with your GPU, only with CPU.
It is not a bad GPU to train on, mainly if you are just beginning with Deep Learning, but it is not the best either. You do not need to worry about not being able to use some software or not being able to train some networks because of that though.
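If you want to check what compute capability your card reports, a quick sketch:
import torch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))   # e.g. (2, 1) is below the 3.0 minimum
else:
    print('No usable CUDA device was found')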
| https://stackoverflow.com/questions/48587441/ |
Very weird behaviour when running the same deep learning code in two different GPUs | I am training networks using the pytorch framework. I had a K40 GPU in my computer. Last week, I added a 1080 to the same computer.
In my first experiment, I observed identical results in both GPU. Then, I tried my second code on both GPUs. In this case, I "constantly" got good results in K40 while getting "constantly" awful results in 1080 for "exactly the same code".
First, I thought the only reason for getting such diverse outputs would be the random seeds in the codes. So, I fixed the seeds like this:
torch.manual_seed(3)
torch.cuda.manual_seed_all(3)
numpy.random.seed(3)
But, this did not solve the issue. I believe issue cannot be randomness because I was "constantly" getting good results in K40 and "constantly" getting bad results in 1080. Moreover, I tried exactly the same code in 2 other computers and 4 other 1080 GPUs and always achieved good results. So, problem has to be about the 1080 I recently plugged in.
I suspect problem might be about driver, or the way I installed pytorch. But, it is still weird that I only get bad results for "some" of the experiments. For the other experiments, I had the identical results.
Can anyone help me on this?
| I had the same problem. I solved it by simply changing sum to torch.sum. Please try to change all the built-in functions to their torch equivalents.
| https://stackoverflow.com/questions/48612510/ |
Unable to Learn XOR Representation using 2 layers of Multi-Layered Perceptron (MLP) | Using a PyTorch nn.Sequential model, I'm unable to learn all four representations of the XOR booleans:
import numpy as np
import torch
from torch import nn
from torch.autograd import Variable
from torch import FloatTensor
from torch import optim
use_cuda = torch.cuda.is_available()
X = xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = xor_output = np.array([[0,1,1,0]]).T
# Converting the X to PyTorch-able data structure.
X_pt = Variable(FloatTensor(X))
X_pt = X_pt.cuda() if use_cuda else X_pt
# Converting the Y to PyTorch-able data structure.
Y_pt = Variable(FloatTensor(Y), requires_grad=False)
Y_pt = Y_pt.cuda() if use_cuda else Y_pt
hidden_dim = 5
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
criterion = nn.L1Loss()
learning_rate = 0.03
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 10000
for _ in range(num_epochs):
predictions = model(X_pt)
loss_this_epoch = criterion(predictions, Y_pt)
loss_this_epoch.backward()
optimizer.step()
print([int(_pred > 0.5) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
After learning:
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction))
print('Ouput:\t', int(_y))
print('######')
[out]:
Input: [0, 0]
Pred: 0
Ouput: 0
######
Input: [0, 1]
Pred: 1
Ouput: 1
######
Input: [1, 0]
Pred: 0
Ouput: 1
######
Input: [1, 1]
Pred: 0
Ouput: 0
######
I've tried running the same code over a couple of random seeds but it didn't manage to learn all for XOR representation.
Without PyTorch, I could easily train a model with self-defined derivative functions and manually perform the backpropagation, see https://www.kaggle.io/svf/2342536/635025ecf1de59b71ea4fa03eb84f9f9/results.html#After-some-enlightenment
Why is it that the 2-layered MLP using PyTorch didn't learn the XOR representation?
How is the model in PyTorch:
hidden_dim = 5
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
different from the one that is hand-written with the derivatives and the manually written backpropagation and optimizer step from https://www.kaggle.com/alvations/xor-with-mlp ?
Are the same the one hidden layered perceptron network?
Updated
Strangely, adding a nn.Sigmoid() between the nn.Linear layers didn't work:
hidden_dim = 5
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Sigmoid(),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
criterion = nn.L1Loss()
learning_rate = 0.03
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 10000
for _ in range(num_epochs):
predictions = model(X_pt)
loss_this_epoch = criterion(predictions, Y_pt)
loss_this_epoch.backward()
optimizer.step()
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction))
print('Ouput:\t', int(_y))
print('######')
[out]:
Input: [0, 0]
Pred: 0
Ouput: 0
######
Input: [0, 1]
Pred: 1
Ouput: 1
######
Input: [1, 0]
Pred: 1
Ouput: 1
######
Input: [1, 1]
Pred: 1
Ouput: 0
######
But adding nn.ReLU() did:
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
...
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction))
print('Ouput:\t', int(_y))
print('######')
[out]:
Input: [0, 0]
Pred: 0
Ouput: 0
######
Input: [0, 1]
Pred: 1
Ouput: 1
######
Input: [1, 0]
Pred: 1
Ouput: 1
######
Input: [1, 1]
Pred: 1
Ouput: 0
######
Isn't a sigmoid enough for the non-linear activation?
I understand that the ReLU fits the task of boolean output but shouldn't the Sigmoid function produce the same/similar effect?
UPDATED 2
Running the same training 100 times:
from collections import Counter
import random
random.seed(100)
import torch
from torch import nn
from torch.autograd import Variable
from torch import FloatTensor
from torch import optim
use_cuda = torch.cuda.is_available()
all_results=[]
for _ in range(100):
hidden_dim = 2
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.ReLU(), # Does the sigmoid has a build in biased?
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 0.03
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 3000
for _ in range(num_epochs):
predictions = model(X_pt)
loss_this_epoch = criterion(predictions, Y_pt)
loss_this_epoch.backward()
optimizer.step()
##print([float(_pred) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
x_pred = [int(model(_x)) for _x in X_pt]
y_truth = list([int(_y[0]) for _y in Y_pt])
all_results.append([x_pred == y_truth, x_pred, loss_this_epoch.data[0]])
tf, outputsss, losses__ = zip(*all_results)
print(Counter(tf))
It only managed to learn the XOR representation 18 out of 100 times... -_-|||
| It's because nn.Linear has no activation built in, so your model is effectively a linear classifier, and XOR is the canonical example of a problem that can't be solved using linear classifiers.
Change this:
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
to that:
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Sigmoid(),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
and only then will your model be equivalent to the one from the linked Kaggle notebook.
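For completeness, a minimal end-to-end sketch with that fix (assuming PyTorch >= 0.4 so Variables are not needed; note that it also zeroes the gradients each step, which the training loop in the question does not do, so gradients there keep accumulating across iterations):
import torch
from torch import nn, optim
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = torch.FloatTensor([[0], [1], [1], [0]])
model = nn.Sequential(nn.Linear(2, 5), nn.Sigmoid(), nn.Linear(5, 1), nn.Sigmoid())
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.5)
for _ in range(5000):
    optimizer.zero_grad()              # reset gradients before each backward pass
    loss = criterion(model(X), Y)
    loss.backward()
    optimizer.step()
print([int(p > 0.5) for p in model(X)])  # expect [0, 1, 1, 0] on most random seeds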
| https://stackoverflow.com/questions/48619928/ |
PyTorch: How to print output blob size of each layer in network? | I can print the network structure like below (also, how do I print the positional index of each 'simple' layer? In this example the Fire modules get an index such as 3, but their contents, the 'simple' layers, don't have one):
net = models.squeezenet1_1(pretrained=True)
print(net)
SqueezeNet(
(features): Sequential(
(0): Conv2d (3, 64, kernel_size=(3, 3), stride=(2, 2))
(1): ReLU(inplace)
(2): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
(3): Fire(
(squeeze): Conv2d (64, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(4): Fire(
(squeeze): Conv2d (128, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(5): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
(6): Fire(
(squeeze): Conv2d (128, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (32, 128, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(7): Fire(
(squeeze): Conv2d (256, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (32, 128, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(8): MaxPool2d(kernel_size=(3, 3), stride=(2, 2), dilation=(1, 1))
(9): Fire(
(squeeze): Conv2d (256, 48, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (48, 192, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(10): Fire(
(squeeze): Conv2d (384, 48, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (48, 192, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (48, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(11): Fire(
(squeeze): Conv2d (384, 64, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (64, 256, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(12): Fire(
(squeeze): Conv2d (512, 64, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d (64, 256, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d (64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
)
(classifier): Sequential(
(0): Dropout(p=0.5)
(1): Conv2d (512, 1000, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace)
(3): AvgPool2d(kernel_size=13, stride=1, padding=0, ceil_mode=False, count_include_pad=True)
)
)
And I can print weights size like:
for i, weights in enumerate(list(net.parameters())):
print('i:',i,'weights:',weights.size())
i: 0 weights: torch.Size([64, 3, 3, 3])
i: 1 weights: torch.Size([64])
i: 2 weights: torch.Size([16, 64, 1, 1])
i: 3 weights: torch.Size([16])
i: 4 weights: torch.Size([64, 16, 1, 1])
i: 5 weights: torch.Size([64])
i: 6 weights: torch.Size([64, 16, 3, 3])
i: 7 weights: torch.Size([64])
i: 8 weights: torch.Size([16, 128, 1, 1])
i: 9 weights: torch.Size([16])
i: 10 weights: torch.Size([64, 16, 1, 1])
i: 11 weights: torch.Size([64])
i: 12 weights: torch.Size([64, 16, 3, 3])
i: 13 weights: torch.Size([64])
i: 14 weights: torch.Size([32, 128, 1, 1])
i: 15 weights: torch.Size([32])
i: 16 weights: torch.Size([128, 32, 1, 1])
i: 17 weights: torch.Size([128])
i: 18 weights: torch.Size([128, 32, 3, 3])
i: 19 weights: torch.Size([128])
i: 20 weights: torch.Size([32, 256, 1, 1])
i: 21 weights: torch.Size([32])
i: 22 weights: torch.Size([128, 32, 1, 1])
i: 23 weights: torch.Size([128])
i: 24 weights: torch.Size([128, 32, 3, 3])
i: 25 weights: torch.Size([128])
i: 26 weights: torch.Size([48, 256, 1, 1])
i: 27 weights: torch.Size([48])
i: 28 weights: torch.Size([192, 48, 1, 1])
i: 29 weights: torch.Size([192])
i: 30 weights: torch.Size([192, 48, 3, 3])
i: 31 weights: torch.Size([192])
i: 32 weights: torch.Size([48, 384, 1, 1])
i: 33 weights: torch.Size([48])
i: 34 weights: torch.Size([192, 48, 1, 1])
i: 35 weights: torch.Size([192])
i: 36 weights: torch.Size([192, 48, 3, 3])
i: 37 weights: torch.Size([192])
i: 38 weights: torch.Size([64, 384, 1, 1])
i: 39 weights: torch.Size([64])
i: 40 weights: torch.Size([256, 64, 1, 1])
i: 41 weights: torch.Size([256])
i: 42 weights: torch.Size([256, 64, 3, 3])
i: 43 weights: torch.Size([256])
i: 44 weights: torch.Size([64, 512, 1, 1])
i: 45 weights: torch.Size([64])
i: 46 weights: torch.Size([256, 64, 1, 1])
i: 47 weights: torch.Size([256])
i: 48 weights: torch.Size([256, 64, 3, 3])
i: 49 weights: torch.Size([256])
i: 50 weights: torch.Size([1000, 512, 1, 1])
i: 51 weights: torch.Size([1000])
How to print output blob size of each layer in network?
| You can register a hook (callback function) which will print out the shapes of the input and output tensors, as described in the manual: Forward and Backward Function Hooks
Example:
net.register_forward_hook(your_print_blobs_function)
After this you need to do one forward pass against some input tensor.
expected_image_shape = (3, 224, 224)
input_tensor = torch.autograd.Variable(torch.rand(1, *expected_image_shape))
# this call will invoke all registered forward hooks
output_tensor = net(input_tensor)
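To get a per-layer printout rather than one for the whole network, you can register the hook on every submodule instead of only on net; a sketch:
def make_hook(name):
    def hook(module, input, output):
        print(name, module.__class__.__name__, 'output size:', tuple(output.size()))
    return hook
# Register on every leaf ("simple") module so each one reports its output blob size;
# the names from named_modules() (e.g. 'features.3.squeeze') also give you the positional index.
for name, module in net.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))
output_tensor = net(input_tensor)   # prints one line per leaf layer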
| https://stackoverflow.com/questions/48675114/ |
Slicing a numpy 3-d matrix into 2-d matrix | I have a 3d numpy matrix t as follows, generated randomly:
t = np.random.rand(2,2,2)
array([[[ 0.80351862, 0.25631294],
[ 0.7971346 , 0.29468456]],
[[ 0.33771957, 0.91776256],
[ 0.6018604 , 0.55290615]]])
I want to extract a 2-d matrix such that the result is sliced along the columns of the 3-d matrix. Something like:
array([[ 0.25631294 , 0.91776256],
[ 0.29468456, 0.55290615]])
How can I slice in such a way?
Thanks for the help.
| That's just taking the last dim, with a transpose:
>>> t[:,:,1].T
array([[ 0.25631294, 0.91776256],
[ 0.29468456, 0.55290615]])
| https://stackoverflow.com/questions/48678313/ |
AttentionDecoderRNN without MAX_LENGTH | From the PyTorch Seq2Seq tutorial, http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#attention-decoder
We see that the attention mechanism is heavily reliant on the MAX_LENGTH parameter to determine the output dimensions of the attn -> attn_softmax -> attn_weights, i.e.
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
More specifically
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
I understand that the MAX_LENGTH variable is the mechanism to reduce the no. of parameters that needs to be trained in the AttentionDecoderRNN.
If we don't have a pre-determined MAX_LENGTH, what value should we initialize the attn layer with?
Would it be the output_size? If so, then that'll be learning the attention with respect to the full vocabulary in the target language. Isn't that the real intention of the Bahdanau (2015) attention paper?
| Attention modulates the input to the decoder. That is, attention modulates the encoded sequence, which is of the same length as the input sequence. Thus, MAX_LENGTH should be the maximum sequence length over all your input sequences.
| https://stackoverflow.com/questions/48698587/ |
vgg16 in pytorch got size error |
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torchvision import datasets, transforms, models
import time
import argparse
import os
batch_size = 64
train_dataset = datasets.CIFAR10(root='./data/cifar10/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.CIFAR10(root='./data/cifar10/',
train=False,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Vgg16(nn.Module):
def __init__(self, num_classes=10):
super(Vgg16, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
nn.Conv2d(64, 128, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(128, 128, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
nn.Conv2d(128, 256, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
nn.Conv2d(256, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1)
)
self.classifier = nn.Sequential(
nn.Linear(25088, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return F.softmax(x)
model = Vgg16()
# print(model)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss().cuda()
if torch.cuda.device_count() > 0:
# os.environ["CUDA_VISIBLE_DEVICES"]= '0'
print("USE", torch.cuda.device_count(), "GPUs!")
model = nn.DataParallel(model)
else:
print("USE ONLY CPU!")
if torch.cuda.is_available():
model.cuda()
def train(epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
if torch.cuda.is_available():
data, target = Variable(data.cuda()), Variable(target.cuda())
else:
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
# loss = F.nll_loss(output, target)
loss = criterion(output, target)
loss.backward()
optimizer.step()
if batch_idx % 10 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data[0]))
def test():
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if torch.cuda.is_available():
data, target = Variable(data.cuda(), volatile=True), Variable(target.cuda())
else:
data, target = Variable(data, volatile=True), Variable(target)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False).data[0]
pred = output.data.max(1, keepdim=True)[1] # [0] : value, [1]: index
correct += pred.eq(target.data.view_as(pred)).sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
for epoch in range(0, 200):
train(epoch)
test()
When I run this code, the error below occurred.
-> RuntimeError: size mismatch at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:243
When I print(model) in other VGG code in pytorch, the FC layer has an input size of 25088...
So I tried to set this parameter to 25088, but then I get the size mismatch error.
When I change this input size from 25088 to 512, there is no error but training does not work well (the loss never changes during training and accuracy always stays at 10% on the test set).
So I think this input size of FC layer is the problem.. What can I do in this situation?
Thanks in advance;
| I identified the problem. When you define the classifier, you are defining a fully-connected layer nn.Linear(25088, 4096), while the output from the convolutional part after doing x = x.view(x.size(0), -1) is (batch_size, 512). To match the size of the output of the convolutional part with the beginning of the classifier, you should change the classifier definition to:
self.classifier = nn.Sequential(
nn.Linear(512, 128),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(128, 128),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(128, num_classes)
)
Like this, it should work.
Note that I put 128 out_features in each fully-connected but you can change those numbers as you prefer.
Also adding the BatchNorm layer after the convolution might help the training to converge
class Vgg16(nn.Module):
    def __init__(self, num_classes=10):
        super(Vgg16, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
            nn.Conv2d(128, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
            nn.Conv2d(256, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2, dilation=1),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.BatchNorm2d(512),
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2, dilation=1)
        )
        self.classifier = nn.Linear(512, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
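A quick way to double-check the flattened size before wiring up the classifier is to push a dummy batch through the feature extractor. This is only a sketch: the 32x32 input resolution below is an assumption (CIFAR-10-sized images), which is what makes the flattened features come out to 512.
import torch
from torch.autograd import Variable

model = Vgg16()                               # the class defined above
dummy = Variable(torch.randn(1, 3, 32, 32))   # assumed 32x32 input images
feats = model.features(dummy)
print(feats.size())                           # e.g. (1, 512, 1, 1)
print(feats.view(feats.size(0), -1).size())   # (1, 512) -> use nn.Linear(512, ...)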
| https://stackoverflow.com/questions/48742480/ |
Why does VGG-16 take an input size of 512 * 7 * 7? | According to https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py
I don't understand why VGG models use an input size of 512 * 7 * 7 for the fully-connected layer.
The last convolution block is:
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(True),
nn.MaxPool2d(kernel_size=2, stride=2, dilation=1)
The code from the above link:
class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=True):
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )
| The VGG neural net has two sections of layers: the "features" section and the "classifier" section. The input to the features section is always an image of size 224 x 224 pixels.
The features section contains 5 nn.MaxPool2d(kernel_size=2, stride=2) pooling layers. See line 76 of the referenced source code: each 'M' character in the configurations sets up one MaxPool2d layer.
A MaxPool2d layer with these parameters halves the spatial dimensions of the tensor. So we have 224 --> 112 --> 56 --> 28 --> 14 --> 7, which means the output of the features section is a tensor with 512 channels of spatial size 7 * 7. This is the input to the "classifier" section.
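As a sanity check (a sketch assuming torchvision is installed; untrained weights are fine for a shape check), you can feed a dummy 224 x 224 image through the feature section of the torchvision model and inspect the output shape:
import torch
from torch.autograd import Variable
from torchvision import models

vgg = models.vgg16()                         # random weights, only the shapes matter here
x = Variable(torch.randn(1, 3, 224, 224))    # standard ImageNet-sized input
out = vgg.features(x)
print(out.size())                            # (1, 512, 7, 7)  ->  512 * 7 * 7 = 25088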
| https://stackoverflow.com/questions/48760950/ |
Error in loading model in PyTorch | I have the following code snippet:
from train import predict
import random
import torch
ann=torch.load('ann.pt') #importing trained model
while True:
    k = raw_input("User:")
    intent, top_value, top_index = predict(str(k), ann)
    print(intent)
When I run the script, it throws the error below:
Traceback (most recent call last):
File "test.py", line 6, in <module>
ann=torch.load('ann.pt') #importing trained model
File "/home/local/ZOHOCORP/raghav-5305/miniconda2/lib/python2.7/site-packages/torch/serialization.py", line 261, in load
return _load(f, map_location, pickle_module)
File "/home/local/ZOHOCORP/raghav-5305/miniconda2/lib/python2.7/site-packages/torch/serialization.py", line 409, in _load
result = unpickler.load()
AttributeError: 'module' object has no attribute 'ANN'
The ann.pt file is in the same folder as my script.
Kindly help me identify and fix the error so I can load the model.
Thanks in advance.
| When you save the whole model (parameters plus structure), PyTorch pickles the parameters but only stores a reference to the model class and the path of the module that defines it. For instance, changing the directory tree structure or refactoring the class can break loading.
Therefore, as the documentation points out, this approach is not recommended; prefer saving/loading only the parameters (the state_dict):
...the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors.
For more help, it'll be useful to show your saving code.
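A minimal sketch of the recommended pattern is below. The ANN class name and the module it lives in are assumptions here; use whatever module actually defines your model class, and make sure it is importable both when saving and when loading.
import torch
from train import ANN   # hypothetical: wherever your model class is defined

# Saving (after training): store only the learned parameters.
model = ANN()
torch.save(model.state_dict(), 'ann_state.pt')

# Loading: rebuild the model from its class, then restore the parameters.
ann = ANN()
ann.load_state_dict(torch.load('ann_state.pt'))
ann.eval()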
| https://stackoverflow.com/questions/48807989/ |
CMake on Linux CentOS 7, how to force the system to use cmake3? | I tried to install PyTorch on my Linux CentOS 7.3. I downloaded its package, ran this command and got this error:
sudo python setup.py install
running install
running build_deps
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
CMake 3.0 or higher is required. You are running version 2.8.12.2
-- Configuring incomplete, errors occurred!
So I tried to install CMake 3 by using the command
sudo yum -y install cmake3
The installation went fine, but the system still uses cmake 2.8 as the default.
If I type the yum info command, I get this:
sudo yum info cmake
Installed Packages
Name : cmake
Arch : x86_64
Version : 2.8.12.2
Release : 2.el7
Size : 27 M
Repo : installed
From repo : base
Summary : Cross-platform make system
URL : http://www.cmake.org
License : BSD and MIT and zlib
Description : CMake is used to control the software compilation process using simple
: platform and compiler independent configuration files. CMake generates
: native makefiles and workspaces that can be used in the compiler
: environment of your choice. CMake is quite sophisticated: it is possible
: to support complex environments requiring system configuration, preprocessor
: generation, code generation, and template instantiation.
So, the problem is clear: the system still sees cmake2.8 as default, and therefore Python does not use cmake3 for its PyTorch installation.
How can I solve this problem?
Thanks
| Once you have both the cmake and the cmake3 package installed on your machine, you can use update-alternatives to switch between both packages.
Use the alternatives command to register both installations:
$ sudo alternatives --install /usr/local/bin/cmake cmake /usr/bin/cmake 10 \
--slave /usr/local/bin/ctest ctest /usr/bin/ctest \
--slave /usr/local/bin/cpack cpack /usr/bin/cpack \
--slave /usr/local/bin/ccmake ccmake /usr/bin/ccmake \
--family cmake
$ sudo alternatives --install /usr/local/bin/cmake cmake /usr/bin/cmake3 20 \
--slave /usr/local/bin/ctest ctest /usr/bin/ctest3 \
--slave /usr/local/bin/cpack cpack /usr/bin/cpack3 \
--slave /usr/local/bin/ccmake ccmake /usr/bin/ccmake3 \
--family cmake
After these two commands, cmake3 will be invoked by default when you enter cmake from a bash prompt or start a bash script. The commands also take care of registering a few secondary commands, like ctest, which need to be switched along with cmake.
If you need to switch back to cmake 2.8 as the default, run the following command:
$ sudo alternatives --config cmake
There are 2 programs which provide 'cmake'.
Selection Command
-----------------------------------------------
1 cmake (/usr/bin/cmake)
*+ 2 cmake (/usr/bin/cmake3)
Enter to keep the current selection[+], or type selection number: 1
| https://stackoverflow.com/questions/48831131/ |
How to find functions imported from torch._C in source code | I'm trying to track down the implementation of torch.nn.NLLLoss in the source code. I got as far as a call to torch._C.nll_loss in the function nll_loss in the file torch.nn.functional. But I can't find a place where _C is created.
Anyone have any info on this?
| Take a look at A Tour of PyTorch Internals on the PyTorch blog. Relevant excerpt:
PyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an “extension module” - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.
The ._C module is defined in torch/csrc/Module.cpp. The init_C() / PyInit__C() function creates the module and adds the method definitions as appropriate. This module is passed around to a number of different __init() functions that add further objects to the module, register new types, etc.
Part II to that post goes into detail about the build system. In the section on NN modules, it says
Briefly, let’s touch on the last part of the build_deps command: generate_nn_wrappers(). We bind into the backend libraries using PyTorch’s custom cwrap tooling, which we touched upon in a previous post. For binding TH and THC we manually write the YAML declarations for each function. However, due to the relative simplicity of the THNN and THCUNN libraries, we auto-generate both the cwrap declarations and the resulting C++ code.
The reason we copy the THNN.h and THCUNN.h header files into torch/lib is that this is where the generate_nn_wrappers() code expects these files to be located. generate_nn_wrappers() does a few things:
Parses the header files, generating cwrap YAML declarations and writing them to output .cwrap files
Calls cwrap with the appropriate plugins on these .cwrap files to generate source code for each
Parses the headers a second time to generate THNN_generic.h - a library that takes THPP Tensors, PyTorch’s “generic” C++ Tensor Library, and calls into the appropriate THNN/THCUNN library function based on the dynamic type of the Tensor
Perhaps not that helpful without the context, but I don't think I should copy the entire post here.
When I tried to track down the definition of NLLLoss without having read those posts, I ended up at aten/src/THNN/generic/ClassNLLCriterion.c, via aten/src/ATen/nn.yaml. The latter is probably the YAML the second post talks about, but I haven't checked.
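As a quick sanity check from the Python side (assuming a standard PyTorch install), you can confirm that torch._C is a compiled extension rather than a .py file, and list which of its exposed names relate to nll_loss on your version:
import torch

# torch._C is a compiled extension module built from torch/csrc/Module.cpp,
# so its __file__ points at a shared library (.so/.pyd), not a Python source file.
print(torch._C.__file__)

# The exact set of exposed functions depends on the PyTorch version; this just
# lists anything whose name mentions "nll".
print([name for name in dir(torch._C) if 'nll' in name.lower()])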
| https://stackoverflow.com/questions/48874968/ |
Binary image digit convolutional neural network has very high confidence when nothing is in the image | I've been training my convolutional neural network on closely cropped printed digits, similar in style to the MNIST dataset. It works perfectly, with close to 100 percent accuracy on both the training and test data.
At first I trained it on 4-channel binary images where white was '255' and black was '0'. It has 10 outputs, one for each digit, which I then softmax in order to get a probability for each category.
However, I want to use a sliding window technique in order to analyse a page of digits. This is impossible because, for a fully white input, it returns almost 100% confidence that it is a 4, while pretty much nothing for everything else.
I thought it might be the fact that the neural net was training on the white space in the image rather than the black space, as the black pixels had value 0, so I inverted the images and trained the network again. Again, it simply returns really close to 100% confidence on a fully white image.
For both, it returns low percentages for each class when the image is completely black, as it should, although 4 is still the highest.
I don't understand the intuition behind this, so any help would be great, even if you could just say it's not usual behaviour. Is this to be expected? Should I create another class for things that aren't digits as well and train it on that?
Here's my neural network:
It's fully convolutional so that I can implement a fast sliding window with it, but the last convolutions are equivalent to fully connected layers.
class fully_convolutional_1channel(nn.Module):
    def __init__(self):
        super(fully_convolutional_1channel, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fconv1 = nn.Conv2d(16, 120, (4, 2))
        self.fconv2 = nn.Conv2d(120, 84, 1)
        self.fconv3 = nn.Conv2d(84, 10, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        # relu does not change size
        x = self.pool(x)
        x = self.pool(F.relu(self.conv2(x)))
        # x = x.view(-1, 16 * 4 * 2)
        x = F.relu(self.fconv1(x))
        x = F.relu(self.fconv2(x))
        x = self.fconv3(x)
        # print(list(x.size))
        return x
|
Should I create another class for things that aren't digits as well and train it on that?
Yes. Your network has been trained to recognize digits, not blank (all-white) regions. You could either re-train the network with this extra class or preprocess the page of digits to avoid feeding blank regions to the network.
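As a rough sketch of the first option, the only structural change to the model from the question is the number of output channels of the last 1x1 convolution; the choice of index 10 for the extra "background / not a digit" label is an assumption made for illustration:
import torch.nn as nn

num_classes = 11   # 10 digits + 1 "background / not a digit" class

# In the network above, only the last 1x1 convolution needs to change:
fconv3 = nn.Conv2d(84, num_classes, 1)

# During training, label fully white (or otherwise empty) crops with the extra
# class index 10, so the sliding window can reject regions that contain no digit.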
| https://stackoverflow.com/questions/48892928/ |
UserWarning: Implicit dimension choice for log_softmax has been deprecated | I'm using macOS El Capitan and I am trying to follow the quick start tutorial for the OpenNMT PyTorch version. In the training step I get the following warning message:
OpenNMT-py/onmt/modules/GlobalAttention.py:177: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
align_vectors = self.sm(align.view(batch*targetL, sourceL))
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/container.py:67: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
input = module(input)
Step 1: Preprocess data (works as expected)
python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt -save_data data/demo
Step 2: Train model (produces the warning message)
python train.py -data data/demo -save_model demo-model
Has anyone come across this warning or have any pointers to solve it?
| It is almost always the case that you need the last dimension when you compute the cross-entropy, so your call may look like:
torch.nn.functional.log_softmax(x, -1)
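The same fix applies to the other calls mentioned in the warning: make the dimension explicit. A minimal illustration is below (the tensor shape is made up for the example; on this 2-D tensor, dim=-1 and dim=1 are equivalent):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

align = Variable(torch.randn(4, 10))   # e.g. (batch * targetL, sourceL)

# Implicit dim (deprecated, triggers the UserWarning):
# probs = F.softmax(align)

# Explicit dim: normalize over the last dimension.
probs = F.softmax(align, dim=-1)
log_probs = F.log_softmax(align, dim=-1)

# The module forms take dim in their constructors:
sm = nn.Softmax(dim=-1)
lsm = nn.LogSoftmax(dim=-1)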
| https://stackoverflow.com/questions/49006773/ |
why do we need to call `detach_` in `zero_grad`? | I am reading the source code of optimizer.zero_grad()
http://pytorch.org/docs/master/_modules/torch/optim/optimizer.html#Optimizer.zero_grad
def zero_grad(self):
    """Clears the gradients of all optimized :class:`Variable` s."""
    for group in self.param_groups:
        for p in group['params']:
            if p.grad is not None:
                p.grad.detach_()
                p.grad.zero_()
I wonder why detach_() is necessary. What does it even mean to detach a gradient variable as opposed to a normal variable?
Why isn't zero_() enough?
| You do not want previous gradient updates to affect future gradient updates, so you detach the previous gradients from the graph before zeroing them.
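A hand-rolled sketch of the same two steps for a single parameter (this mirrors what the loop above does per parameter; the toy loss is made up for illustration):
import torch
from torch.autograd import Variable

w = Variable(torch.randn(3), requires_grad=True)
loss = (w ** 2).sum()
loss.backward()

# Equivalent of optimizer.zero_grad() for this one parameter:
if w.grad is not None:
    w.grad.detach_()   # cut the grad tensor out of any graph it may belong to
    w.grad.zero_()     # reset the accumulated gradients to zero in-place

print(w.grad)          # all zeros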
| https://stackoverflow.com/questions/49029247/ |