st104200 | Solved by Naman-ntc in post #2
You should look at torchvision functional transforms
You can randomly choose an augmentation strategy and apply it to both the input and the ground truth |
st104201 | You should look at torchvision functional transforms
You can randomly choose an augmentation strategy and apply it to both the input and the ground truth |
st104202 | Thank you, this works; you just have to sample a random integer and use that as the degrees param for the functional transform. |
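A minimal sketch of that idea (my illustration, not from the thread; it assumes a rotation augmentation applied to a PIL image/mask pair):

import random
import torchvision.transforms.functional as TF

def paired_rotate(image, mask):
    # sample the parameter once, then apply the same transform to both
    angle = random.randint(-30, 30)
    return TF.rotate(image, angle), TF.rotate(mask, angle)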
st104203 | Hi, all,
when I tried to implement:
m = nn.Hardshrink()
input = Variable(torch.randn(2)).cuda()
print(m(input))
It showed: hardshrink_forward is not implemented for type torch.cuda.FloatTensor.
However, when I tried Softshrink, it worked. Does this mean this function doesn’t support cuda?
Thanks in advance! |
st104204 | Are we planning to have that? It seems strange that softshrink has it while hardshrink does not. |
st104205 | It’s probably a good thing to have. I’ve opened a feature request for it here. |
st104206 | I am also using HardShrink function and trying to put my model on cuda. Is there a projected time for when that will be fixed?
Thanks! |
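As a stopgap until that lands (my suggestion, not from the thread), hardshrink is simple enough to express with basic ops that do run on CUDA:

import torch
from torch.autograd import Variable

def hardshrink(x, lambd=0.5):
    # zero out entries with |x| <= lambd, keep the rest unchanged
    return x * (x.abs() > lambd).float()

x = Variable(torch.randn(2)).cuda()
print(hardshrink(x))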
st104207 | Hi,
Is it possible to use ATen C++11 code to access a Tensor that was allocated on the GPU/CPU using PyTorch?
I am aware the opposite direction is possible.
Thanks, |
st104208 | I am using a C++ application to embed Python. The whole process is: C++ reads an image and passes the image as a parameter to Python. The Python code loads a CNN model (trained with PyTorch) and feeds the image into the network; finally the output of the network is passed back to the C++ program. I am doing this on Windows 10. When the Python code executes the following line:
state_dict = torch.load('model_lastest.pt')
it fails and returns the error "System Error: "
I have tried to directly load the same model in pure Python, and no error occurred. But once I use C++ to embed this Python code, the above error happens. Can anyone help me? |
st104209 | I would like to do the exact same thing. Any alternatives to the method described above? |
st104210 | Hi all,
It seems that the PyTorch docker image from Docker Hub has size 3.11GB and the CUDA files (cuda.h, etc.) are not inside /usr/local/, but when I build from the Dockerfile provided in the repo it has size around 5 GB with all the CUDA files in the expected directories. What kind of sorcery is this? |
st104211 | I have a very simple model where the weights are not updating but the biases are. I haven’t been able to figure out why. (full code here)
Model:
class MnistModel(nn.Module):
    def __init__(self, batch_size):
        super(MnistModel, self).__init__()
        self.batch_size = batch_size
        self.w = torch.nn.Parameter(torch.empty(batch_size, 784, 10).uniform_(0, 1))
        self.b = torch.nn.Parameter(torch.empty(10).uniform_(0, 1))
    def forward(self, x):
        return torch.bmm(x, self.w) + self.b
and loop:
for raw_data, raw_target in train_loader:
    data = raw_data.view((batch_size, 1, 784))
    logits = model(data)
    # zeros / rows are preallocated in the full code; this builds one-hot targets
    zeros[:, :, :] = 0
    zeros[rows, 0, raw_target] = 1
    loss = criterion(logits, zeros.float())
    # Backward and optimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
when you run this code it will show (bias and weights are taken from model.parameters()):
weight[0][0][0] before training: tensor(0.7576)
Bias[0] before training: tensor(0.3681)
weight[0][0][0] after training: tensor(0.7576)
Bias[0] after training tensor(-48.1940) |
st104212 | I think some of the weights are updating. When I sum all of the weights, I get a slightly different value after training than before, but it is still very close to the original one.
sum(weight) before training: tensor(1.00000e+06 *
3.9201)
Bias[0] before training: tensor(0.3681)
sum(weight) after training: tensor(1.00000e+06 *
3.8118)
Bias[0] after training tensor(-47.6196) |
st104213 | I am not absolutely sure on this point, but do you really want to have the batch_size as one dimension of your weight w?
Shouldn’t the weight be the same for every element in the batch, so that you just expand it in the forward computation like this:
class MnistModel(nn.Module):
    def __init__(self, batch_size):
        super(MnistModel, self).__init__()
        self.w = torch.nn.Parameter(torch.empty(784, 10).uniform_(0, 1))
        self.b = torch.nn.Parameter(torch.empty(10).uniform_(0, 1))
    def forward(self, x):
        exp_w = self.w.expand(x.size(0), self.w.size(0), self.w.size(1))
        return torch.bmm(x, exp_w) + self.b
? |
st104214 | With your pointers and changing optimizer to torch.optim.Adam I was able to make weights change a little more. |
st104215 | Hi,
I am very new to Pytorch and this is my first post here, so I apologise if the question is very straightforward.
My problem is that I have defined class net1 and initialised it randomly with a manual seed.
random.seed(opt.manualSeed)
torch.manual_seed(opt.manualSeed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(opt.manualSeed)

class net1(nn.Module):
    def __init__(self):
        super(net1, self).__init__()
        self.main_body = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4, padding=0),
            # rest of the network...
        )
    def forward(self, x):
        output = self.main_body(x)
        return output

# custom weights initialization called on net1
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

net1_ = net1()
net1_.apply(weights_init)
However, when I add another class net2 to the code:
class net2(nn.Module):
    def __init__(self):
        super(net2, self).__init__()
        self.main_body = nn.Sequential(
            nn.Conv2d(4096, 1, 1),
            nn.Sigmoid()
        )
    def forward(self, x):
        output = self.main_body(x)
        return output.view(-1, 1).squeeze(1)

net2_ = net2()
and instantiate it, even though I do not use it anywhere else and it is not connected to my main graph (which is built on net1_), I get different outputs from my graph.
Is it a reasonable outcome? |
st104216 | I don’t think this is related to net2. Some operations in the convolution layer are non-deterministic, and BatchNorm behaves differently based on different mini-batches.
If you inserted the code of net2 above net1, then what you see is also expected, because the weights of net2 get initialized, changing the state of the random number generator |
st104217 | Thanks for your answer.
Though I get very consistent outputs every time I run the code when class net2 is not defined.
I guess I see it now: defining a new module anywhere probably changes the behaviour of the random generator. |
st104218 | I have some trouble understanding the meaning of torch.manual_seed(opt.manualSeed). You don’t call this variable later on, so how does it help initialize the net? Which parameters does it initialize? Thanks. |
st104219 | To my understanding, setting a manual seed makes all the initializations in the code (e.g. weight initializations embedded in the network definitions, or wherever else in the code that you call a random generator) to be deterministic. In other words, given a fixed manual seed, they are supposed to generate identical values, any time you run the program. |
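A quick way to see this (my illustration, not from the thread): two modules built after setting the same seed get identical parameters:

import torch

torch.manual_seed(42)
a = torch.nn.Linear(4, 4).weight.data.clone()

torch.manual_seed(42)
b = torch.nn.Linear(4, 4).weight.data.clone()

print(torch.equal(a, b))  # True: same seed, same initialization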
st104220 | Yes, you don’t see it called later. But in PyTorch the built-in random generator uses the seed. If you trace the related code, you can find https://github.com/pytorch/pytorch/blob/1c51c185a1152ae7efe7d8d490102a9cdb7782e7/torch/lib/THC/THCTensorRandom.cpp. That is where your seed finally goes. |
st104221 | Hi, I have a very simple question to ask.
If I manually set the seed and initialize three weight vectors of the same shape, then I am wondering whether all the initialized parameters for these three weight vectors will be the same? |
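For what it’s worth (my experiment, not an answer from the thread), sequential initializations consume the random stream, so the three vectors will differ unless the seed is reset before each one:

import torch

torch.manual_seed(0)
w1 = torch.empty(5).uniform_(0, 1)
w2 = torch.empty(5).uniform_(0, 1)
print(torch.equal(w1, w2))  # False: each call advances the generator state

torch.manual_seed(0)
w3 = torch.empty(5).uniform_(0, 1)
print(torch.equal(w1, w3))  # True: re-seeding resets the state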
st104222 | Hello, everyone.
I want to write some pytorch cuda extensions in windows 10.
I succeeded in writing C extensions following these links:
https://pytorch.org/tutorials/advanced/c_extension.html
https://pytorch.org/docs/master/notes/windows.html
I noted that most implementations of CUDA extensions are based on C extensions, meaning that Python interacts with C, and C interacts with CUDA.
I know how to use cffi to build a pure C extension on Windows 10, and I succeeded.
I wonder: when I use cffi to build a C extension which interacts with CUDA, does cffi build the CUDA file (.cu) together with it? Or should the CUDA file be built separately? How do I build the CUDA file on Windows 10?
Another question: are there proper methods to co-debug C and CUDA extensions on Windows 10? Can IDEs such as Visual Studio do this job?
I am not familiar with CUDA programming and debugging, thanks for any suggestions |
st104223 | Is there any difference between to(), type() and type_as()? I don’t know why there is a need for three implementations, especially what the difference is between to() and type(). It seems that xx.to(x.type()) will report an error, while xx.to(torch.float) is correct. It seems better to combine them, or use to() only for changing the device. |
st104224 | Solved by RicCu in post #2
type() is an older method that works only with the tensor types (e.g. DoubleTensor, HalfTensor, etc.), whereas to() was added in 0.4 along with the introduction of dtypes (e.g. torch.float, torch.half) and devices, which are more flexible.
to() is the preferred method now. |
st104225 | type() is an older method that works only with the tensor types (e.g. DoubleTensor, HalfTensor, etc.), whereas to() was added in 0.4 along with the introduction of dtypes (e.g. torch.float, torch.half) and devices, which are more flexible.
to() is the preferred method now. |
st104226 | Thanks for your help @RicCu. So for now, dtype is also preferred over type, right? And I need to use x.to(y.dtype) instead of x.type_as(y) or x.type(y.type()). |
st104227 | Yes, dtype is preferred
You can actually do x = x.to(y), which returns a copy of x with the dtype and the device of y. If you only want to change the tensor’s dtype or device, but not both, then yeah, you would do it with to(y.dtype) |
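A small demonstration of those variants (my sketch, assuming a PyTorch 0.4+ build where to(other) is available):

import torch

x = torch.randn(3)                      # float32 on CPU
y = torch.zeros(3, dtype=torch.double)  # float64

z = x.to(y)        # copy of x with y's dtype and device
print(z.dtype)     # torch.float64

w = x.to(y.dtype)              # change only the dtype
v = x.to(torch.device('cpu'))  # change only the device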
st104228 | Does PyTorch 0.4 take advantage of the GPU when batch training LSTMs with masked sequences?
I have noticed, as I put together a toy example of a multi-layer LSTM with teacher forcing (i.e., the whole input can be fed to the network at the same time), that I only see performance gains on GPU vs CPU when I increase the overall size of the network, not when I increase the batch size. I.e., PyTorch does not seem to parallelize over batches.
Is this the expected behavior?
I am using pack_padded_sequence for eventual use with masked sequences, but right now the sequences are uniform length to eliminate that variability. All of the following are moved to device when testing with the GPU:
Hidden layers
Model
Criterion
Inputs
Targets
The loss function is relatively simple, uses pairwise distance, and acts directly on the packed sequences correctly without needing to be re-padded.
Is this expected behavior?
Or am I missing some way to coax batch parallelism out of an RNN?
(Note this is a single-GPU question. In trying to answer this question myself, I found a lot of descriptions of DataParallel, but that seems to be a multi-GPU solution.) |
st104229 | I have 4 GPUs. If I do not use CUDA_VISIBLE_DEVICES and run this in a shell:
import torch
a = torch.tensor([1, 1], device="cuda:1")
a.device
device(type='cuda', index=1)
Then I check the GPU usage with nvidia-smi and get this result:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                                    Usage |
|=============================================================================|
|    0     15254     C  python                                         519MiB |
|    1     15254     C  python                                         519MiB |
+-----------------------------------------------------------------------------+
It seems that my python shell (PID=15254) uses 2 GPUs! I’m wondering why the first GPU is used? |
st104230 | A CUDA context is created on both GPU 0 and GPU 1. All further operations, if they only involve GPU 1 tensors, will not use memory on GPU 0.
The context issue is fixed on GitHub master. As a workaround you can set CUDA_VISIBLE_DEVICES. |
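A minimal sketch of that workaround (my illustration; the variable must be set before CUDA is initialized, ideally before importing torch):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only physical GPU 1

import torch
a = torch.tensor([1, 1], device="cuda:0")  # "cuda:0" now maps to physical GPU 1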
st104231 | Hi
So I see in main.py (https://github.com/pytorch/examples/blob/master/imagenet/main.py) that we have model.train() and model.eval(), and I don’t understand how to use them. Can someone explain it to me please?
For example, here:
python main.py -a resnet18 [imagenet-folder with train and val folders] we did not specify train or eval, so how do we know which one to use?
I know my question is stupid; please let me know if there is any good tutorial to read to understand it.
Thanks |
st104232 | Solved by anis016 in post #2
maybe these should clear things up for you.
By default all the modules are initialized to train mode (self.training = True). Also be aware that some layers have different behavior during training and evaluation (like BatchNorm, Dropout), so setting it matters.
Dropout works as a regularization for preventin… |
st104233 | maybe these should clear things up for you.
Model.train() and model.eval() vs model and model.eval()
Yes, they are the same. By default all the modules are initialized to train mode (self.training = True). Also be aware that some layers have different behavior during training and evaluation (like BatchNorm, Dropout), so setting it matters.
Also, as a rule of thumb for programming in general, try to explicitly state your intent and set model.train() and model.eval() when necessary.
Do need to use model.eval() when I test?
Sure, Dropout works as a regularization for preventing overfitting during training.
It randomly zeroes elements of the input in the Dropout layer on each forward call.
It should be disabled during testing (model.eval()) since you may want to use the full model (no element is masked) |
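To make the convention concrete, a minimal sketch (my example, not from the thread):

import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5))

model.train()  # dropout active: inputs are randomly zeroed on each forward pass
# ... training loop ...

model.eval()   # dropout disabled (and BatchNorm uses running stats): full model at test time
# ... evaluation loop ...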
st104234 | Hi everyone, I’m currently trying to run a PyTorch code in order to re-implement its Lua Torch version. My question: if all the operations are the same (network structure, learning rate, dropout, etc.), will there still be differences between the PyTorch version and the Torch version that may cause a performance gap? For example, are there different default initialization values or gradient update rules between the two? Thanks. |
st104235 | I am trying to implement truncated backpropagation through time with an LSTM module (with k1 = k2). I’m running into a few specific problems that I’m not sure how to solve.
When running loss.backward() a second time (wanting to only run on the next k1 = k2 steps) I get the following error. However, because I’m calling optimizer.zero_grad(), shouldn’t that reset all the gradients in the network and allow me to start the gradients anew? Why is there a need for retain_graph in this case?
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Even if I completely comment out the backward call as I’ve done in the attached gist code, I’m receiving an out of memory error on the second iteration (shown below). I’m confused by this as the GPU has enough memory to store both the networks parameters and the training examples on the first iteration but not the second. Is the network saving some unnecessary data from the last iteration?
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58
My full code is available here:
https://gist.github.com/sauhaardac/7c825acf22ae703f25618da397b35f87 |
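For context, a common pattern for k1 = k2 truncated BPTT (my sketch, not the gist’s code) is to detach the hidden state between chunks, so each backward() only spans the last k steps and no graph needs to be retained:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
opt = torch.optim.SGD(lstm.parameters(), lr=0.01)
h = None

for chunk in torch.randn(10, 4, 5, 8).unbind(0):  # 10 chunks: batch 4, k=5 steps, 8 features
    out, h = lstm(chunk, h)
    loss = out.pow(2).mean()          # stand-in for the real loss
    opt.zero_grad()
    loss.backward()                   # the graph only spans this chunk
    opt.step()
    h = tuple(s.detach() for s in h)  # cut the graph so the next backward can't reach back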
st104236 | Hello,
I was reading pytorch documentation on nn.NLLLoss module. I tried out the following example provided:
# 2D loss example (used, for example, with image inputs)
N, C = 5, 4
loss = nn.NLLLoss()
# input is of size N x C x height x width
data = torch.randn(N, 16, 10, 10)
m = nn.Conv2d(16, C, (3, 3))
# each element in target has to have 0 <= value < C
target = torch.tensor(N, 8, 8).random_(0, C)
output = loss(m(data), target)
output.backward()
but I get this error instead.
ValueError: Expected input batch_size (5) to match target batch_size (3).
Thanks in advance for the help and assistance! |
st104237 | Could you try to run this single line of code:
target = torch.tensor(N, 8, 8).random_(0, C)
Do you get an error or was it successful? |
st104238 | ptrblck:
target = torch.tensor(N, 8, 8).random_(0, C)
I get this error:
TypeError: tensor() takes 1 positional argument but 3 were given |
st104239 | Thanks for the info.
Could you change it to
target = torch.Tensor(N, 8, 8).random_(0, C)
and run it again? (Note the uppercase T in Tensor) |
st104240 | ptrblck:
target = torch.Tensor(N, 8, 8).random_(0, C)
Ok, this time no error, however when I proceed to run the next line:
output = loss(m(data), target)
I get this error:
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 ‘target’ |
st104241 | Try to cast the tensor so long by
target = target.long()
Which PyTorch version are you using? |
st104242 | Ah, that worked, thank you very much! I understand it better now
I’m currently running Python 3.6 and PyTorch 0.4.0 |
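Putting the whole exchange together, the fixed example looks like this (my consolidation; on later versions you could also write torch.empty(N, 8, 8, dtype=torch.long).random_(0, C) directly):

import torch
import torch.nn as nn

N, C = 5, 4
loss = nn.NLLLoss()
data = torch.randn(N, 16, 10, 10)
m = nn.Conv2d(16, C, (3, 3))
# the conv output is N x C x 8 x 8, so the target must be N x 8 x 8 with long class indices
target = torch.Tensor(N, 8, 8).random_(0, C).long()
output = loss(m(data), target)
output.backward()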
st104243 | How do I create a custom kernel (laplace) and then convolve it with my input image efficiently ? |
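One way to do it (my sketch, not an answer from the thread): store the Laplacian as a fixed weight tensor and use the functional conv API, which avoids creating a learnable layer:

import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel, shaped (out_channels, in_channels, kH, kW)
kernel = torch.tensor([[0., 1., 0.],
                       [1., -4., 1.],
                       [0., 1., 0.]]).view(1, 1, 3, 3)

img = torch.randn(1, 1, 64, 64)           # N x C x H x W grayscale input
edges = F.conv2d(img, kernel, padding=1)  # padding=1 keeps the spatial size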
st104244 | Every time I try to run initialization code like in ResNet, the machine just says
Illegal Instruction
The code I tried to run is here:
for m in self.modules():
    if isinstance(m, nn.Conv2d):
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))
    elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1)
        m.bias.data.zero_()
The machine is a Linux server running CentOS. I don’t have root privileges.
Also, when I run torch.randn(2,2), it’s OK. But for torch.randn(4,4) and bigger sizes, it also says illegal instruction. |
st104245 | Hi there.
I am new to PyTorch. Here is my code implementing a GAN architecture to generate some images. I implemented it based on the dcgan example in the PyTorch github repository. When I ran my code on my 2 GeForce GTX 1080s with 64 GB RAM, I faced the following error (by the way, my CUDA version is 7.5):
RuntimeError Traceback (most recent call last)
<ipython-input> in <module>()
      7 label.data.resize_(batchSize).fill_(real_label)
      8
----> 9 output = netD(input)
     10 errD_real = criterion(output, label)
     11 errD_real.backward()
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    200
    201     def __call__(self, *input, **kwargs):
--> 202         result = self.forward(*input, **kwargs)
    203         for hook in self._forward_hooks.values():
    204             hook_result = hook(self, input, result)
<ipython-input> in forward(self, input)
     32             gpu_ids = range(self.ngpu)
     33
---> 34         output = nn.parallel.data_parallel(self.main, input, gpu_ids)
     35         return output.view(-1,1)
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in data_parallel(module, input, device_ids, output_device)
     90     inputs = scatter(input, device_ids)
     91     replicas = replicas[:len(inputs)]
---> 92     outputs = parallel_apply(replicas, inputs)
     93     return gather(outputs, output_device)
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs)
     43         output = results[i]
     44         if isinstance(output, Exception):
---> 45             raise output
     46         outputs.append(output)
     47     return outputs
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py in _worker(module, input, results, lock)
     24     try:
     25         with torch.cuda.device_of(var_input):
---> 26             output = module(input)
     27         with lock:
     28             results[input] = output
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    200
    201     def __call__(self, *input, **kwargs):
--> 202         result = self.forward(*input, **kwargs)
    203         for hook in self._forward_hooks.values():
    204             hook_result = hook(self, input, result)
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
     62     def forward(self, input):
     63         for module in self._modules.values():
---> 64             input = module(input)
     65         return input
     66
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    200
    201     def __call__(self, *input, **kwargs):
--> 202         result = self.forward(*input, **kwargs)
    203         for hook in self._forward_hooks.values():
    204             hook_result = hook(self, input, result)
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input, output_size)
    521         return F.conv_transpose2d(
    522             input, self.weight, self.bias, self.stride, self.padding,
--> 523             output_padding, self.groups)
    524
    525
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in conv_transpose2d(input, weight, bias, stride, padding, output_padding, groups)
    116     f = ConvNd(_pair(stride), _pair(padding), _pair(1), True,
    117               _pair(output_padding), groups)
--> 118     return f(input, weight, bias) if bias is not None else f(input, weight)
    119
    120
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/conv.py in forward(self, input, weight, bias)
     33         if k == 3:
     34             input, weight = _view4d(input, weight)
---> 35         output = self._update_output(input, weight, bias)
     36         if k == 3:
     37             output, = _view3d(output)
/home/mlcmdeep/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/conv.py in _update_output(self, input, weight, bias)
     82         self.use_cudnn = not self.is_dilated()
     83         if self.use_cudnn:
---> 84             output = input.new(*self._output_size(input, weight))
     85             if self.transposed:
     86                 self._cudnn_info = (
RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487346124464/work/torch/lib/THC/generic/THCStorage.cu:66
I would like to know how I can solve my problem. What is the source of the problem?
My Code:
import torch
import torch.nn as nn
import torchvision
import torchvision.datasets as dset
import torchvision.transforms as trans
from torch.autograd import Variable as V
import torchvision.utils as vutils
import torch.optim as optim
import PyQt5
import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
    img = img/2 + 0.5
    npimg = img.numpy()
    plt.figure(1)
    plt.imshow(np.transpose(npimg, (1,2,0)))
    plt.show()
ngpu = 2 # Number of GPU
nz = 100 # Number of Latent Code
ngf = 64 # Number of Generator Feature Map
ndf = 64 # Number of Discriminator Feature Map
nc = 3 # Number of Channel For each Image
batchSize = 64
imageWidth = 256
imageHeight = 256
cuda = 1
learning_rate = 0.0002
beta1 = 0.5
beta2 = 0.99
niter = 25
outf = './data'
data_root = './Genuine/'
dataset = dset.ImageFolder(root=data_root,
                           transform=trans.Compose([trans.Scale(256),
                                                    trans.CenterCrop(256),
                                                    trans.ToTensor(),
                                                    trans.Normalize((.5,.5,.5),(.5,.5,.5))]))
assert dataset
dataLoader = torch.utils.data.DataLoader(dataset=dataset,
                                         batch_size=batchSize,
                                         shuffle=True,
                                         num_workers=2)
class _netG(nn.Module):
    def __init__(self, ngpu):
        super(_netG, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf*32, 4, 1, 0, bias=True),
            nn.BatchNorm2d(32*ngf),
            nn.ReLU(True),  # size: (32*ngf) x 4 x 4
            nn.ConvTranspose2d(ngf*32, ngf*20, 4, 2, 1, bias=True),
            nn.BatchNorm2d(20*ngf),
            nn.ReLU(True),  # size: (20*ngf) x 8 x 8
            nn.ConvTranspose2d(ngf*20, ngf*16, 4, 2, 1, bias=True),
            nn.BatchNorm2d(16*ngf),
            nn.ReLU(True),  # size: (16*ngf) x 16 x 16
            nn.ConvTranspose2d(ngf*16, ngf*16, 4, 2, 1, bias=True),
            nn.BatchNorm2d(16*ngf),
            nn.ReLU(True),  # size: (16*ngf) x 32 x 32
            nn.ConvTranspose2d(ngf*32, ngf*16, 4, 2, 1, bias=True),
            nn.BatchNorm2d(16*ngf),
            nn.ReLU(True),  # size: (16*ngf) x 64 x 64
            nn.ConvTranspose2d(ngf*16, ngf*8, 4, 2, 1, bias=True),
            nn.BatchNorm2d(8*ngf),
            nn.ReLU(True),  # size: (8*ngf) x 128 x 128
            nn.ConvTranspose2d(ngf*8, ngf*3, 4, 2, 1, bias=True),
            nn.BatchNorm2d(8*ngf),
            nn.ReLU(True),  # size: (3*ngf) x 256 x 256
        )
    def forward(self, input):
        gpu_ids = None
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            gpu_ids = range(self.ngpu)
        output = nn.parallel.data_parallel(self.main, input, gpu_ids)
        return output
class _netD(nn.Module):
    def __init__(self, ngpu):
        super(_netD, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf*32, 4, 1, 0, bias=True),
            nn.BatchNorm2d(32*ngf),
            nn.ReLU(True),  # size: (32*ngf) x 4 x 4
            nn.ConvTranspose2d(ngf*32, ngf*16, 4, 2, 1, bias=True),
            nn.BatchNorm2d(16*ngf),
            nn.ReLU(True),  # size: (16*ngf) x 8 x 8
            nn.ConvTranspose2d(ngf*16, ngf*12, 4, 2, 1, bias=True),
            nn.BatchNorm2d(12*ngf),
            nn.ReLU(True),  # size: (12*ngf) x 16 x 16
            nn.ConvTranspose2d(ngf*12, ngf*10, 4, 2, 1, bias=True),
            nn.BatchNorm2d(10*ngf),
            nn.ReLU(True),  # size: (10*ngf) x 32 x 32
            nn.ConvTranspose2d(ngf*10, ngf*8, 4, 2, 1, bias=True),
            nn.BatchNorm2d(8*ngf),
            nn.ReLU(True),  # size: (8*ngf) x 64 x 64
            nn.ConvTranspose2d(ngf*8, ngf*6, 4, 2, 1, bias=True),
            nn.BatchNorm2d(6*ngf),
            nn.ReLU(True),  # size: (6*ngf) x 128 x 128
            nn.ConvTranspose2d(ngf*6, 3, 4, 2, 1, bias=True),
            nn.BatchNorm2d(3),
            nn.ReLU(True),  # size: 3 x 256 x 256
        )
    def forward(self, input):
        gpu_ids = None
        if isinstance(input.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            gpu_ids = range(self.ngpu)
        output = nn.parallel.data_parallel(self.main, input, gpu_ids)
        return output.view(-1, 1)
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1, 0.02)
        m.bias.data.fill_(0)
netG = _netG(ngpu)
netG.apply(weights_init)
print(netG)
netD = _netD(ngpu)
netD.apply(weights_init)
print(netD)
criterion = nn.BCELoss() # Binary Cross Entropy Between Target and Output
input = torch.FloatTensor(batchSize, nc, imageWidth, imageHeight)
noise = torch.FloatTensor(batchSize, nz, 1, 1)
fixed_noise = torch.FloatTensor(batchSize, nz, 1, 1).normal_(0,1)
label = torch.FloatTensor(batchSize)
real_label, fake_label = 1, 0
if cuda:
    netG.cuda()
    netD.cuda()
    criterion.cuda()
    input, label = input.cuda(), label.cuda()
    noise, fixed_noise = noise.cuda(), fixed_noise.cuda()

input = V(input)
label = V(label)
noise = V(noise)
fixed_noise = V(fixed_noise)
optimizerD = optim.Adam(netD.parameters(), lr= learning_rate, betas = (beta1, beta2))
optimizerG = optim.Adam(netG.parameters(), lr= learning_rate, betas = (beta1, beta2))
for epoch in range(niter):
    for i, data in enumerate(dataLoader, 0):
        netD.zero_grad()
        real_cpu, _ = data
        batchSize = real_cpu.size(0)
        input.data.resize_(real_cpu.size()).copy_(real_cpu)
        label.data.resize_(batchSize).fill_(real_label)
        output = netD(input)
        errD_real = criterion(output, label)
        errD_real.backward()
        D_x = output.mean()
        noise.data.resize_(batchSize, nz, 1, 1)
        noise.data.normal_(0, 1)
        fake = netG(noise).detach()  # detach() blocks the gradient
        label.data.fill_(fake_label)
        output = netD(fake)
        errD_fake = criterion(output, label)
        errD_fake.backward()
        D_G_z1 = output.data.mean()
        errD = errD_fake + errD_real
        optimizerD.step()
        netG.zero_grad()
        label.data.fill_(real_label)
        noise.data.normal_(0, 1)
        fake = netG(noise)
        output = netD(fake)
        errG = criterion(output, label)
        errG.backward()
        D_G_z2 = output.data.mean()
        optimizerG.step()
        print('[%d/%d][%d/%d] Loss-D: %.4f loss-G: %.4f D(x): %.4f D(G(z)): %.4f / %.4f'
              % (epoch, niter, i, len(dataLoader),
                 errD.data[0], errG.data[0], D_x, D_G_z1, D_G_z2))
        if i % 100 == 0:
            vutils.save_image(real_cpu, '%s/real_sample.png' % (outf))
            fake = netG(fixed_noise)
            vutils.save_image(fake.data, '%s/fake_samples_%0.3d.png' % (outf, epoch))
    # do checkpointing
    torch.save(netG.state_dict(), '%s/netG_epoch_%d.pth' % (outf, epoch))
    torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (outf, epoch)) |
st104246 | The problem is exactly what the error says, you ran out of memory on the GPU. You can try reducing the batch size or image sizes. |
st104247 | To complement apaszke’s answer: in case you are using Jupyter and you run the cell which creates the network multiple times, always add a del net in that cell; otherwise you might be creating multiple neural networks, which will cause similar problems.
If not, just try to find the maximum batch size which works for your hardware. |
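A minimal sketch of that advice for a notebook cell (my illustration; empty_cache just returns cached blocks to the driver and is optional):

import torch

# free the previously built model before re-creating it in a re-run cell
try:
    del net
except NameError:
    pass
torch.cuda.empty_cache()

net = _netG(ngpu).cuda()  # rebuild whatever model this cell defines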
st104248 | I analysed the memory consumption using nvidia-smi and found out that the code consumes more memory with every iteration. I am using the DCGAN code adapted from the pytorch examples. |
st104250 | I faced the same problem as you, and I found that it results from the upsampling layer.
When I use nvidia-smi to monitor my GPU memory usage, I find that at some point the memory requirement of the ConvTranspose2d used for upsampling doubles, and it tells me out of memory…
Maybe consider reducing the batch size. |
st104251 | For some reason, torch.argmax is slower for me than transferring an array to CPU and then calling np.argmax. Any ideas why? Should I file a bug report?
https://gist.github.com/tbenst/de3f61ac9956778a2ca6d5db94b526ef |
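One common pitfall worth ruling out (my guess, not a confirmed answer from the thread): CUDA kernels launch asynchronously, so timing torch.argmax without synchronizing can attribute pending work to it, while the .cpu() transfer in the NumPy path forces a sync. A fairer benchmark looks like:

import time
import torch

x = torch.randn(10000000, device="cuda")

torch.cuda.synchronize()   # wait for pending work before starting the clock
start = time.time()
idx = torch.argmax(x)
torch.cuda.synchronize()   # wait for the argmax kernel to actually finish
print(time.time() - start)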
st104252 | I have a tensor A:
A = torch.rand(4,1,5,5)
with the size of:
torch.Size([4, 1, 5, 5])
and I have tensor B:
B = torch.rand(1,1,5,5)
With the size of:
torch.Size([1, 1, 5, 5])
Can I concatenate these two tensors so that I get a tensor (A, or a new one) of size:
torch.Size([4, 2, 5, 5])
In other words, I want to concatenate them along dim = 1 |
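A minimal sketch of one way to do it (my illustration): broadcast B along the batch dimension, then concatenate on dim=1:

import torch

A = torch.rand(4, 1, 5, 5)
B = torch.rand(1, 1, 5, 5)

# expand B to match A's batch size (a view, no copy), then concatenate on the channel dim
C = torch.cat([A, B.expand(A.size(0), -1, -1, -1)], dim=1)
print(C.shape)  # torch.Size([4, 2, 5, 5])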
st104253 | I have checked other posts on CUDA runtime error 59 and most of them are related to accessing out-of-bound indices, but my error seems to occur during backprop and not in forward prop (if it were an out-of-bound index, it would have happened in forward prop). The detailed error message is below:
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THCUNN/generic/Threshold.cu line=66 error=59 : device-side assert triggered
Traceback (most recent call last):
File "srnn/train.py", line 191, in <module>
main()
File "srnn/train.py", line 80, in main
train(args)
File "srnn/train.py", line 159, in train
loss.backward()
File "/home/anirudh/.virtualenvs/gp/local/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/anirudh/.virtualenvs/gp/local/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 174, in backward
update_grad_input_fn(self._backend.library_state, input, grad_output, grad_input, *gi_args)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THCUNN/generic/Threshold.cu:66 |
st104254 | Using CUDA_LAUNCH_BLOCKING=1, I get the following trace:
/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorIndex.cu:202: void indexFillSmallIndex(TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, IndexType, long, T) [with T = float, IndexType = unsigned int, DstDim = 2, IdxDim = -2]: block: [0,0,0], thread: [0,0,0] Assertion `dstIndex < dstFillDimSize` failed.
(the same assertion is repeated for threads [1,0,0] through [63,0,0])
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorCopy.cu line=100 error=59 : device-side assert triggered
Traceback (most recent call last):
File "srnn/train.py", line 190, in <module>
main()
File "srnn/train.py", line 80, in main
train(args)
File "srnn/train.py", line 158, in train
loss.backward()
File "/home/anirudh/.virtualenvs/gp/local/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/anirudh/.virtualenvs/gp/local/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 44, in backward
grad_value = grad_output.index(self.index).clone()
RuntimeError: cuda runtime error (59) : device-side assert triggered at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorCopy.cu:100 |
st104255 | Error 59 is notoriously non-informative. Are you, by chance, using a pre-trained model when you get this error? I had a similar issue until I adjusted the number of outputs in the last layer to match the number of classes in my dataset. |
st104256 | Hi,
From the error message it looks like you are using indexing with wrong indices.
A good way to debug these errors is to simply run the exact same code on CPU, that way you will get a proper stack trace with the exact line that crashed. |
st104257 | @albanD’s answer helped me a lot; when I moved my model and variables to CPU I got the exact line number that caused it:
model.cpu()
input_features = input_features.cpu()  # tensor .cpu() is not in-place, so reassign
result = model(input_features)
Thanks! |
st104258 | Hello. I have a problem with my small project.
I want to classify RGB images into two classes.
Image size = 100x85.
Training set:
class 1 = 1212 images
class 2 = 1695 images
Test set:
class 1 = 131 images
class 2 = 128 images
Problem: accuracy is always 49%
import torch
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import transforms, datasets
data = ImageFolder(root='img-train', transform= transforms.ToTensor())
data2 = ImageFolder(root='img-valid', transform= transforms.ToTensor())
trainloader = DataLoader(data)
testloader = DataLoader(data2)
classes = ('class1', 'class2')
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 320, kernel_size=5)
        self.conv2 = nn.Conv2d(320, 64, kernel_size=5)
        self.conv3 = nn.Conv2d(64, 1024, kernel_size=5)
        self.dropout = nn.Dropout2d()
        self.fc1 = nn.Linear(6, 500)
        self.fc2 = nn.Linear(500, 250)
        self.fc3 = nn.Linear(250, 2)
    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv2(x)), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv3(x)), 2))
        x = x.view(-1, 20480)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc2(x))
        # 50 -> 10
        x = self.fc3(x)
        # transform to logits
        return F.log_softmax(x)
net=Net()
net.cuda()
print(net)
import torch.optim as optim
criterion = nn.NLLLoss()
optimizer = optim.SGD(net.parameters(), lr=0.005, momentum=0.0022)
for epoch in range(1): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 20 == 19:    # print every 20 mini-batches
print('[%d, %5d] loss: %.20f' %
(epoch + 1, i + 1, running_loss / 20))
running_loss = 0.0
print('Finished Training')
dataiter = iter(testloader)
images, labels = dataiter.next()
net.eval()
correct = 0
total = 0
for data in testloader:
images, labels = data
images=images.cuda()
labels=labels.cuda()
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Correct: %d %%' % (
100 * correct / total)) |
st104259 | How is your training accuracy? Is it also at approx. 50%?
It's a bit strange to use 320 filters in the first conv layer, then go back to 64 and up again to 1024.
Did you want to use 640 instead?
If so, that also seems a bit high, but it would probably work better than the current model.
Could you plot your training loss and accuracy?
This would make it a bit easier to debug. |
st104260 | Thank you for the reply.
This is my code now:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 320, kernel_size=5)
self.conv2 = nn.Conv2d(320, 640, kernel_size=5)
self.conv3 = nn.Conv2d(640, 1024, kernel_size=5)
self.dropout = nn.Dropout2d()
self.fc1 = nn.Linear(64512, 500)
self.fc2 = nn.Linear(500, 250)
self.fc3 = nn.Linear(250, 2)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.dropout(self.conv2(x)), 2))
x = F.relu(F.max_pool2d(self.dropout(self.conv3(x)), 2))
x = x.view(-1, 64512 )
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = F.relu(self.fc2(x))
# 50 -> 10
x = self.fc3(x)
# transform to logits
return F.log_softmax(x)
and my output:
Net(
(conv1): Conv2d(3, 320, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(320, 640, kernel_size=(5, 5), stride=(1, 1))
(conv3): Conv2d(640, 1024, kernel_size=(5, 5), stride=(1, 1))
(dropout): Dropout2d(p=0.5)
(fc1): Linear(in_features=64512, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=250, bias=True)
(fc3): Linear(in_features=250, out_features=2, bias=True)
)
testcnn.py:50: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return F.log_softmax(x)
[1, 100] loss: 0.69665333777666094139
[1, 200] loss: 0.00708028078079223598
[1, 300] loss: 0.00437617301940917969
[1, 400] loss: 0.00268048048019409180
[1, 500] loss: 0.00068682432174682617
[1, 600] loss: 0.00131387710571289067
[1, 700] loss: 0.00132138729095458989
[1, 800] loss: 0.00043724775314331056
[1, 900] loss: 0.00070947408676147461
[1, 1000] loss: 0.00035747289657592771
[1, 1100] loss: 0.00042194128036499023
[1, 1200] loss: 0.00050359964370727539
[1, 1300] loss: 1.56638206690549841582
[1, 1400] loss: 0.00618010759353637695
[1, 1500] loss: 0.00646384358406066912
[1, 1600] loss: 0.00293265581130981463
[1, 1700] loss: 0.00214604139328002921
[1, 1800] loss: 0.00116072893142700191
[1, 1900] loss: 0.00060119628906249998
[1, 2000] loss: 0.00109928846359252930
[1, 2100] loss: 0.00142927765846252437
[1, 2200] loss: 0.00041127204895019531
[1, 2300] loss: 0.00112100839614868173
[1, 2400] loss: 0.00148788094520568848
[1, 2500] loss: 0.00093175768852233882
[1, 2600] loss: 0.00042861700057983398
[1, 2700] loss: 0.00033172369003295896
[1, 2800] loss: 0.00043708086013793945
[1, 2900] loss: 0.00052500963211059575
Finished Training
Correct: 49 % |
st104261 | I changed the code, but I still get only 49%
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from torchvision.datasets import ImageFolder
from collections import namedtuple
from torch.utils.data import DataLoader
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
Params = namedtuple('Params', ['batch_size', 'test_batch_size', 'epochs', 'lr', 'momentum', 'seed', 'cuda', 'log_interval'])
args = Params(batch_size=64, test_batch_size=1000, epochs=10, lr=0.01, momentum=0.5, seed=1, cuda=False, log_interval=200)
data = ImageFolder(root='img-train', transform= transforms.ToTensor())
data2 = ImageFolder(root='img-valid', transform= transforms.ToTensor())
train_loader = DataLoader(data)
test_loader = DataLoader(data2)
classes = ('before', 'after')
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(7920, 50)
self.fc2 = nn.Linear(50, 2)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 7920)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = Net()
model.share_memory() # gradients are allocated lazily, so they are not shared here
def train_epoch(epoch, args, model, data_loader, optimizer):
model.train()
for batch_idx, (data, target) in enumerate(data_loader):
if args.cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(data_loader.dataset),
100. * batch_idx / len(data_loader), loss.data[0]))
def test_epoch(model, data_loader):
model.eval()
test_loss = 0
correct = 0
for data, target in data_loader:
if args.cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss
pred = output.data.max(1)[1] # get the index of the max log-probability
correct += pred.eq(target.data).cpu().sum()
test_loss /= len(data_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(data_loader.dataset),
100. * correct / len(data_loader.dataset)))
# Run the training loop over the epochs (evaluate after each)
if args.cuda:
model = model.cuda()
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
train_epoch(epoch, args, model, train_loader, optimizer)
test_epoch(model, test_loader)
and output:
Train Epoch: 1 [0/2510 (0%)] Loss: 0.730912
Train Epoch: 1 [200/2510 (8%)] Loss: 0.000000
Train Epoch: 1 [400/2510 (16%)] Loss: 0.393666
Train Epoch: 1 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 1 [800/2510 (32%)] Loss: 0.000000
Train Epoch: 1 [1000/2510 (40%)] Loss: 0.000000
Train Epoch: 1 [1200/2510 (48%)] Loss: 0.000000
Train Epoch: 1 [1400/2510 (56%)] Loss: 0.000161
Train Epoch: 1 [1600/2510 (64%)] Loss: 0.000000
Train Epoch: 1 [1800/2510 (72%)] Loss: 0.000000
Train Epoch: 1 [2000/2510 (80%)] Loss: 0.000028
Train Epoch: 1 [2200/2510 (88%)] Loss: 0.002456
Train Epoch: 1 [2400/2510 (96%)] Loss: 0.000000
Test set: Average loss: 13.5654, Accuracy: 128/259 (49%) |
st104262 | Could you run additionally test_epoch with train_loader to see the resubstitution error? |
st104263 | Train Epoch: 1 [0/2510 (0%)] Loss: 0.763600
Train Epoch: 1 [200/2510 (8%)] Loss: 0.000000
Train Epoch: 1 [400/2510 (16%)] Loss: 0.000000
Train Epoch: 1 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 1 [800/2510 (32%)] Loss: 0.000000
Train Epoch: 1 [1000/2510 (40%)] Loss: 0.000000
Train Epoch: 1 [1200/2510 (48%)] Loss: 0.000000
Train Epoch: 1 [1400/2510 (56%)] Loss: 0.000004
Train Epoch: 1 [1600/2510 (64%)] Loss: 0.000000
Train Epoch: 1 [1800/2510 (72%)] Loss: 0.000000
Train Epoch: 1 [2000/2510 (80%)] Loss: 0.000000
Train Epoch: 1 [2200/2510 (88%)] Loss: 0.000000
Train Epoch: 1 [2400/2510 (96%)] Loss: 0.000000
Test set: Average loss: 22.8103, Accuracy: 1299/2510 (52%)
Train Epoch: 2 [0/2510 (0%)] Loss: 25.119247
Train Epoch: 2 [200/2510 (8%)] Loss: 0.000971
Train Epoch: 2 [400/2510 (16%)] Loss: 0.000031
Train Epoch: 2 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 2 [800/2510 (32%)] Loss: 0.003936
Train Epoch: 2 [1000/2510 (40%)] Loss: 0.000000
Train Epoch: 2 [1200/2510 (48%)] Loss: 0.070989
Train Epoch: 2 [1400/2510 (56%)] Loss: 0.073976
Train Epoch: 2 [1600/2510 (64%)] Loss: 0.000000
Train Epoch: 2 [1800/2510 (72%)] Loss: 0.005794
Train Epoch: 2 [2000/2510 (80%)] Loss: 0.000753
Train Epoch: 2 [2200/2510 (88%)] Loss: 0.000000
Train Epoch: 2 [2400/2510 (96%)] Loss: 0.000000
Test set: Average loss: 6.6127, Accuracy: 1299/2510 (52%)
Train Epoch: 3 [0/2510 (0%)] Loss: 17.105192
Train Epoch: 3 [200/2510 (8%)] Loss: 0.145598
Train Epoch: 3 [400/2510 (16%)] Loss: 0.018258
Train Epoch: 3 [600/2510 (24%)] Loss: 0.062921
Train Epoch: 3 [800/2510 (32%)] Loss: 0.054164
Train Epoch: 3 [1000/2510 (40%)] Loss: 0.037550
Train Epoch: 3 [1200/2510 (48%)] Loss: 0.008747
Train Epoch: 3 [1400/2510 (56%)] Loss: 0.051795
Train Epoch: 3 [1600/2510 (64%)] Loss: 0.003390
Train Epoch: 3 [1800/2510 (72%)] Loss: 0.016323
Train Epoch: 3 [2000/2510 (80%)] Loss: 0.022865
Train Epoch: 3 [2200/2510 (88%)] Loss: 0.004945
Train Epoch: 3 [2400/2510 (96%)] Loss: 0.001030
Test set: Average loss: 2.9158, Accuracy: 1299/2510 (52%)
Train Epoch: 4 [0/2510 (0%)] Loss: 7.015765
Train Epoch: 4 [200/2510 (8%)] Loss: 0.134741
Train Epoch: 4 [400/2510 (16%)] Loss: 0.049476
Train Epoch: 4 [600/2510 (24%)] Loss: 0.013536
Train Epoch: 4 [800/2510 (32%)] Loss: 0.008116
Train Epoch: 4 [1000/2510 (40%)] Loss: 0.041024
Train Epoch: 4 [1200/2510 (48%)] Loss: 0.001711
Train Epoch: 4 [1400/2510 (56%)] Loss: 0.087386
Train Epoch: 4 [1600/2510 (64%)] Loss: 0.019133
Train Epoch: 4 [1800/2510 (72%)] Loss: 0.002450
Train Epoch: 4 [2000/2510 (80%)] Loss: 0.134814
Train Epoch: 4 [2200/2510 (88%)] Loss: 0.001560
Train Epoch: 4 [2400/2510 (96%)] Loss: 0.009285
Test set: Average loss: 3.0047, Accuracy: 1299/2510 (52%)
Train Epoch: 5 [0/2510 (0%)] Loss: 2.197697
Train Epoch: 5 [200/2510 (8%)] Loss: 0.142283
Train Epoch: 5 [400/2510 (16%)] Loss: 0.062617
Train Epoch: 5 [600/2510 (24%)] Loss: 0.022220
Train Epoch: 5 [800/2510 (32%)] Loss: 0.016140
Train Epoch: 5 [1000/2510 (40%)] Loss: 0.009101
Train Epoch: 5 [1200/2510 (48%)] Loss: 0.011840
Train Epoch: 5 [1400/2510 (56%)] Loss: 0.233425
Train Epoch: 5 [1600/2510 (64%)] Loss: 0.041333
Train Epoch: 5 [1800/2510 (72%)] Loss: 0.080752
Train Epoch: 5 [2000/2510 (80%)] Loss: 0.010446
Train Epoch: 5 [2200/2510 (88%)] Loss: 0.001828
Train Epoch: 5 [2400/2510 (96%)] Loss: 0.018279
Test set: Average loss: 2.6521, Accuracy: 1299/2510 (52%)
Train Epoch: 6 [0/2510 (0%)] Loss: 5.629133
Train Epoch: 6 [200/2510 (8%)] Loss: 0.111943
Train Epoch: 6 [400/2510 (16%)] Loss: 0.078148
Train Epoch: 6 [600/2510 (24%)] Loss: 0.030254
Train Epoch: 6 [800/2510 (32%)] Loss: 0.018951
Train Epoch: 6 [1000/2510 (40%)] Loss: 0.007060
Train Epoch: 6 [1200/2510 (48%)] Loss: 0.006947
Train Epoch: 6 [1400/2510 (56%)] Loss: 0.146729
Train Epoch: 6 [1600/2510 (64%)] Loss: 0.037547
Train Epoch: 6 [1800/2510 (72%)] Loss: 0.008593
Train Epoch: 6 [2000/2510 (80%)] Loss: 0.061359
Train Epoch: 6 [2200/2510 (88%)] Loss: 0.010392
Train Epoch: 6 [2400/2510 (96%)] Loss: 0.003662
Test set: Average loss: 2.4896, Accuracy: 1299/2510 (52%)
Train Epoch: 7 [0/2510 (0%)] Loss: 5.354959
Train Epoch: 7 [200/2510 (8%)] Loss: 0.219744
Train Epoch: 7 [400/2510 (16%)] Loss: 0.047250
Train Epoch: 7 [600/2510 (24%)] Loss: 0.033134
Train Epoch: 7 [800/2510 (32%)] Loss: 0.006888
Train Epoch: 7 [1000/2510 (40%)] Loss: 0.041285
Train Epoch: 7 [1200/2510 (48%)] Loss: 0.010135
Train Epoch: 7 [1400/2510 (56%)] Loss: 0.185328
Train Epoch: 7 [1600/2510 (64%)] Loss: 0.030203
Train Epoch: 7 [1800/2510 (72%)] Loss: 0.068935
Train Epoch: 7 [2000/2510 (80%)] Loss: 0.021755
Train Epoch: 7 [2200/2510 (88%)] Loss: 0.013778
Train Epoch: 7 [2400/2510 (96%)] Loss: 0.001412
Test set: Average loss: 2.3966, Accuracy: 1299/2510 (52%)
Train Epoch: 8 [0/2510 (0%)] Loss: 5.587163
Train Epoch: 8 [200/2510 (8%)] Loss: 0.201555
Train Epoch: 8 [400/2510 (16%)] Loss: 0.057020
Train Epoch: 8 [600/2510 (24%)] Loss: 0.030785
Train Epoch: 8 [800/2510 (32%)] Loss: 0.045294
Train Epoch: 8 [1000/2510 (40%)] Loss: 0.009532
Train Epoch: 8 [1200/2510 (48%)] Loss: 0.033173
Train Epoch: 8 [1400/2510 (56%)] Loss: 0.242416
Train Epoch: 8 [1600/2510 (64%)] Loss: 0.054072
Train Epoch: 8 [1800/2510 (72%)] Loss: 0.014919
Train Epoch: 8 [2000/2510 (80%)] Loss: 0.028255
Train Epoch: 8 [2200/2510 (88%)] Loss: 0.014499
Train Epoch: 8 [2400/2510 (96%)] Loss: 0.004550
Test set: Average loss: 2.2503, Accuracy: 1299/2510 (52%)
Train Epoch: 9 [0/2510 (0%)] Loss: 4.205160
Train Epoch: 9 [200/2510 (8%)] Loss: 0.219553
Train Epoch: 9 [400/2510 (16%)] Loss: 0.076956
Train Epoch: 9 [600/2510 (24%)] Loss: 0.041252
Train Epoch: 9 [800/2510 (32%)] Loss: 0.011788
Train Epoch: 9 [1000/2510 (40%)] Loss: 0.019258
Train Epoch: 9 [1200/2510 (48%)] Loss: 0.025886
Train Epoch: 9 [1400/2510 (56%)] Loss: 0.264767
Train Epoch: 9 [1600/2510 (64%)] Loss: 0.064278
Train Epoch: 9 [1800/2510 (72%)] Loss: 0.035067
Train Epoch: 9 [2000/2510 (80%)] Loss: 0.014171
Train Epoch: 9 [2200/2510 (88%)] Loss: 0.017240
Train Epoch: 9 [2400/2510 (96%)] Loss: 0.010370
Test set: Average loss: 2.1646, Accuracy: 1299/2510 (52%)
Train Epoch: 10 [0/2510 (0%)] Loss: 3.983953
Train Epoch: 10 [200/2510 (8%)] Loss: 0.247857
Train Epoch: 10 [400/2510 (16%)] Loss: 0.079800
Train Epoch: 10 [600/2510 (24%)] Loss: 0.032828
Train Epoch: 10 [800/2510 (32%)] Loss: 0.014672
Train Epoch: 10 [1000/2510 (40%)] Loss: 0.015492
Train Epoch: 10 [1200/2510 (48%)] Loss: 0.006651
Train Epoch: 10 [1400/2510 (56%)] Loss: 0.244603
Train Epoch: 10 [1600/2510 (64%)] Loss: 0.065021
Train Epoch: 10 [1800/2510 (72%)] Loss: 0.027480
Train Epoch: 10 [2000/2510 (80%)] Loss: 0.019957
Train Epoch: 10 [2200/2510 (88%)] Loss: 0.013186
Train Epoch: 10 [2400/2510 (96%)] Loss: 0.006839
Test set: Average loss: 2.1206, Accuracy: 1299/2510 (52%) |
st104264 | If this is the training accuracy, your model doesn’t seem to learn anything useful.
In that case, scale the problem down to a single image (or batch) and try to overfit your model on it.
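Something along these lines might work as a quick sanity check (a rough sketch reusing the model, optimizer, and train_loader from your own script; nothing here is new API):
# take one fixed batch and train on it repeatedly; the loss should approach zero
data, target = next(iter(train_loader))
data, target = Variable(data), Variable(target)
for step in range(200):
    optimizer.zero_grad()
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(step, loss.data[0])
If the loss does not go towards zero even here, the data and label pipeline is the first thing to check. |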
st104265 | I'm trying to overfit, but my network doesn't learn. Maybe the challenge is too difficult: I want to classify boobs, before and after breast augmentation.
Here is my data set:
GitHub
pantrombka/Neural-boobs 10
Contribute to Neural-boobs development by creating an account on GitHub. |
st104266 | Hi,
can someone tell me where the dispatch function in this example is located? I simply want to reproduce the example…
GitHub
zdevito/ATen 1
ATen: A TENsor library for C++11
Best
Tim |
st104267 | HI,
I have changed the RNN module into another one, RNN1. Everything works fine if I disable cuDNN, but once cuDNN is on, there is an error. Does this mean I have to define it in the cuDNN file? How can I do that? |
st104268 | CUDNN only supports a handful of well-known RNN cell types. If you want to implement your own RNN cell, modifying or subclassing RNNBase doesn't really work. Instead, you should subclass RNNCellBase and write something that looks like nn.RNNCell (https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/rnn.py#L353 56). Then use a for-loop to unroll the RNN cell manually.
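A minimal sketch of the manual unrolling (sizes are placeholders; swap the built-in nn.RNNCell for your custom cell):
import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=20, hidden_size=128)  # stand-in for a custom cell
x = torch.randn(784, 32, 20)   # (seq_len, batch, input_size)
h = torch.zeros(32, 128)       # initial hidden state
outputs = []
for t in range(x.size(0)):
    h = cell(x[t], h)          # one step of the recurrence
    outputs.append(h)
outputs = torch.stack(outputs) # (seq_len, batch, hidden_size)
Autograd handles the unrolled graph automatically. |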
st104269 | Thanks.
I have checked the performance with and without cuDNN for the RNN. It seems that with cuDNN, the RNN runs about 10 times faster. Is this really the case?
I have also implemented the same RNN1 code in Lasagne, based on Theano. It seems to run at a similar speed to the PyTorch RNN with cuDNN.
What is the difference in using cuDNN between Theano and PyTorch? Is it automatically used in Theano? |
st104270 | What is the hidden size and sequence length of your RNN? We’ve seen a 10x gap before, but only for very small hidden size and very long sequences. Since CUDNN can’t run custom cells, what Theano is most likely doing is using its graph optimizer to fuse multiple pointwise CUDA kernels into one. PyTorch currently has inefficient kernel launches because of excess getDevice calls (https://github.com/pytorch/pytorch/issues/917 32) so it should be less bad after those are fixed; eventually we’ll want to provide a way to create custom fused pointwise kernels for things like user-defined RNN cells, at which point it’ll be as fast as Theano for your use case. |
st104271 | Following is the setting for running RNN.
hidden size: 128
length: 784 (pixel mnist)
batch size: 32
I expect those new features coming soon. That would be very helpful.
Thanks. |
st104272 | jekbradbury:
eventually we’ll want to provide a way to create custom fused pointwise kernels for things like user-defined RNN cells.
Any updates on this? |
st104273 | When running for example
torch.randn(64, 512, 1, 1).normal_(mean=0.0007720401627011597, std=0.01006225310266018)
there are sometimes NaNs in the tensor.
I'm working on PyTorch 0.1.12 with Python 3.5, and the bug is reproducible. Am I doing something wrong, or is it already fixed? |
st104274 | dzimm:
torch.randn(64, 512, 1, 1).normal_(mean=0.0007720401627011597, std=0.01006225310266018))
I am running this on PyTorch 0.4 and I haven't seen any NaNs yet. @isalirezag, what version of PyTorch are you on?
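If you want to check programmatically, a small sketch (the x != x trick works on old versions too, since NaN is the only value not equal to itself):
t = torch.randn(64, 512, 1, 1).normal_(mean=0.0007720401627011597, std=0.01006225310266018)
print((t != t).any())  # non-zero / True only if the tensor contains NaNs
This should make it easy to see whether the NaNs really appear. |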
st104275 | PyTorch itself has 8 different tensor types: float, double, half, uint8, int8, int16, int32, and int64. I looked at this blog post, where they explain how these tensor types are created:
A Tour of PyTorch Internals (Part I) 30
The type of tensor I want to create is a fixed-point tensor, instead of the floating-point representation PyTorch uses for its float, double, and half types.
I was wondering if there is an easier way to create a new tensor type without modifying the backend (C++) code? |
st104276 | In general, there isn’t an easy way now to create a new tensor type. Depending on the design of your fixed-point tensor, it might be possible to use the built-in tensors to implement it. What exactly do you mean by “fixed-point”? |
st104277 | PyTorch uses floating point to store decimal numbers. Due to the structure of floating point, many numbers cannot be stored exactly, which causes precision loss.
Structure of floating point 16
I was thinking about using integers to perform the math operations, and then using the new object to store those numbers so that the decimal digits are preserved without any precision loss. |
st104278 | if you're using ints, one thing you could do is use LongTensor and write custom functions of your own (e.g., add_fixed_point(LongTensor, LongTensor))
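For example, a rough sketch (the scale factor of 10**4 is an arbitrary choice for this illustration):
import torch

SCALE = 10 ** 4  # keep 4 decimal digits

def to_fixed(x):
    # round a float tensor into scaled int64 values
    return (x * SCALE).round().long()

def from_fixed(x):
    return x.double() / SCALE

def add_fixed_point(a, b):
    # addition is exact in fixed point, no rescaling needed
    return a + b

def mul_fixed_point(a, b):
    # rescale after multiplication to keep the same scale
    return (a * b) // SCALE

a = to_fixed(torch.tensor([0.1, 0.2]))
b = to_fixed(torch.tensor([0.3, 0.4]))
print(from_fixed(add_fixed_point(a, b)))  # tensor([0.4000, 0.6000], dtype=torch.float64)
Addition and subtraction stay exact this way; only the initial rounding loses information. |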
st104279 | Hello, I have been training my RNN model on 1897 * 0.75 examples and testing on the remaining 1897 * 0.25. I have two questions:
1- In the code I provided below, is that the right way to calculate the accuracy of the model?
2- I'm getting a classification accuracy of 90 to 97.5%; is this normal, or have I overfitted my model?
P.S.: features = 20 -> classes = 10
and Thank you
iter = 0
for epoch in range(num_epochs):
for i, (features, labels) in enumerate(train_loader):
features=features.type(torch.FloatTensor)
features = Variable(features)
#labels = labels.view(-1)
labels = labels.type(torch.LongTensor)
labels = Variable(labels)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output
# outputs.size() --> hidden_dim, uutput_dim [69,10]
outputs = model(features)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for features, labels in test_loader:
features=features.type(torch.FloatTensor)
features = Variable(features)
# Forward pass only to get logits/output
outputs = model(features)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
# Total correct predictions
correct += (predicted.type(torch.DoubleTensor) == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.data[0], accuracy)) |
st104280 | It all in the title:
torch.Tensor.cuda has an async parameter that allows asynchronous transfer from pinned memory to GPU. Does torch.Tensor.to automatically uses async when possible?
I wasn’t able to find the relevant piece of source code. |
st104281 | It was added in the current master branch as non_blocking, since asynch will be a keyword in Python 3.7.
See the docs 274.
If you want to use it, you would have to build PyTorch from source. You can find the build instructions here 45.
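A sketch of how it is typically combined with pinned memory (the loader setup here is just an assumption for illustration):
loader = torch.utils.data.DataLoader(dataset, batch_size=32, pin_memory=True)
for data, target in loader:
    # copies from pinned host memory can overlap with GPU compute
    data = data.to('cuda', non_blocking=True)
    target = target.to('cuda', non_blocking=True)
Without pin_memory=True the copy is synchronous anyway, so both pieces are needed. |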
st104282 | Hello ,
I have recently trained an RNN classifier and it gave me an overall accuracy of 80%. Now I want to know the F-score for each class. Is this possible in PyTorch?
Thank you |
st104283 | I do not believe there is any binding to directly compute it with pytorch. But considering that you computed the OA, I assume you have the confusion matrix. From there, it is pretty straightforward to compute the F-scores per class.
F1Score = np.zeros(len(classes))
for cls in range(len(classes)):
    # F1 = 2 * TP / (row sum + column sum); guard against classes with no samples
    denom = np.sum(confusion[cls, :]) + np.sum(confusion[:, cls])
    if denom > 0:
        F1Score[cls] = 2. * confusion[cls, cls] / denom
print("F1Score: ")
for cls, score in enumerate(F1Score):
    print("{}: {:.2f}".format(classes[cls], score))
Hope that helps. |
st104284 | I was surprised to get this error message this morning when running a training job:
Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
Although I did not change my scripts, I get this now, while it worked yesterday…
Any ideas what might cause this?
Also, I looked for a folder similar to the one in the solution given in Error: CUDNN_STATUS_INTERNAL_ERROR 20, but I'm not sure where it is on Windows 7…
Thanks ! |
st104285 | Hi,
I’m trying to play around with, and understand, the use of LSTMs with Caffe2.
I've read and run the rather short tutorial/example, but found it not very instructive; it doesn't really tell you anything beyond "run this script".
What I’d like to do now is try and add an LSTM layer at the end of a number of other layers, say ReLU layers.
Creating a model roughly like this:
arg_scope = {"order": "NCHW"}
model = model_helper.ModelHelper()
# Get the data
data, label = AddInput(model, batch_size=25,
db=train_data_path,
db_type='minidb')
# Define the layers
layer = brew.fc(model, data, 'dense_1', dim_in=13, dim_out=256)
layer = brew.relu(model, layer, 'relu_1')
layer = brew.fc(model, layer, 'dense_2', dim_in=256, dim_out=256)
layer = brew.relu(model, layer, 'relu_2')
layer = brew.fc(model, layer, 'dense_3', dim_in=256, dim_out=256)
layer = brew.relu(model, layer, 'relu_3')
workspace.FeedBlob(
"seq_lengths",
np.array([1] * 25, dtype=np.float32)
)
seq_lengths, target = \
model.net.AddExternalInputs(
'seq_lengths',
'target',
)
lstm_output, hidden_output, _, cell_state = LSTM(model,
layer,
seq_lengths,
None,
256,
256,
scope="LSTM",
forward_only=False)
output = brew.fc(model, lstm_output, 'lstm_out', dim_in=256, dim_out=2, axis=2)
softmax = model.net.Softmax(output, 'softmax', axis=2)
I.e. input to the LSTM is the output of the previous ReLU layer. This seems to work when initializing the model and creating it in memory (RunNetOnce - CreateNet), but once starting the training loop (RunNet) I get the following error:
WARNING:caffe2.python.workspace:Original python traceback for operator 9 in network model in exception above (most recent call last):
WARNING:caffe2.python.workspace: File "lstm_test.py", line 157, in <module>
WARNING:caffe2.python.workspace: File "lstm_test.py", line 54, in getModel
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/rnn_cell.py", line 1571, in _LSTM
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/rnn_cell.py", line 93, in apply_over_sequence
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/rnn_cell.py", line 491, in prepare_input
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/brew.py", line 107, in scope_wrapper
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/helpers/fc.py", line 58, in fc
WARNING:caffe2.python.workspace: File "/usr/local/lib/python2.7/dist-packages/caffe2/python/helpers/fc.py", line 54, in _FC_or_packed_FC
Traceback (most recent call last):
File "lstm_test.py", line 182, in <module>
workspace.RunNet(train_model.net)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 217, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at tensor.h:76] axis_index < ndims. 2 vs 2 Error from operator:
input: "relu_3" input: "LSTM/i2h_w" input: "LSTM/i2h_b" output: "LSTM/i2h" name: "" type: "FC" arg { name: "use_cudnn" i: 1 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "order" s: "NCHW" } arg { name: "axis" i: 2 }
I suspect I don't fully understand the interaction between sequence_lengths, axis dimensions, and what ndims is here, and that this causes the problem. Any help is appreciated! |
st104286 | Hi,
I have 16 threads and for every thread an 8x5 matrix. For every thread, only one of those 8 5x1 vectors interests me and the rest can be discarded.
I’ve tried to do:
action_prob.gather(1, self.current_options.view(-1, 1))
Where action_prob is a 16x8x5 tensor and current_options is a 16x1 tensor. I’ll attach their values at the bottom.
I’m getting this error:
RuntimeError: invalid argument 4: Index tensor must have same dimensions as input tensor at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.c:581
It seems to me entirely clear what I’m trying to achieve and I don’t understand why there is even a dimension issue. Assuming for example current_options[0] has a value of 2, and as can be seen below, the first matrix is:
[[ 0.1783, 0.2808, 0.2792, 0.1433, 0.1184],
[ 0.1529, 0.1557, 0.1721, 0.1805, 0.3387],
[ 0.1290, 0.2222, 0.1107, 0.3273, 0.2108],
[ 0.2075, 0.2130, 0.1273, 0.1836, 0.2686],
[ 0.1478, 0.2804, 0.1561, 0.1057, 0.3100],
[ 0.2466, 0.1288, 0.1199, 0.3007, 0.2041],
[ 0.1519, 0.2347, 0.1232, 0.3820, 0.1081],
[ 0.1655, 0.1894, 0.3634, 0.1587, 0.1230]]
Why won’t it gather this line: [ 0.1290, 0.2222, 0.1107, 0.3273, 0.2108] ? And do the same for the rest of the 16 matrices, so that in the end I’ll have a 16x5 tensor.
Thanks for the help.
For what it’s worth:
action_prob:
tensor([[[ 0.1783, 0.2808, 0.2792, 0.1433, 0.1184],
[ 0.1529, 0.1557, 0.1721, 0.1805, 0.3387],
[ 0.1290, 0.2222, 0.1107, 0.3273, 0.2108],
[ 0.2075, 0.2130, 0.1273, 0.1836, 0.2686],
[ 0.1478, 0.2804, 0.1561, 0.1057, 0.3100],
[ 0.2466, 0.1288, 0.1199, 0.3007, 0.2041],
[ 0.1519, 0.2347, 0.1232, 0.3820, 0.1081],
[ 0.1655, 0.1894, 0.3634, 0.1587, 0.1230]],
        ...the remaining 15 blocks are identical to the first...]])
self.current_options:
tensor([[ 5],
[ 0],
[ 2],
[ 6],
[ 3],
[ 4],
[ 1],
[ 5],
[ 6],
[ 5],
[ 2],
[ 6],
[ 7],
[ 7],
[ 7],
[ 1]]) |
st104287 | The docs for gather describe how its indexing works: https://pytorch.org/docs/stable/torch.html#torch.gather 128.
In particular, it sounds like you want something like:
out[i][j][k] = input[i][indices[i]][k]
We can massage this a little into the form that torch.gather wants, which is out[i][j][k] = input[i][index[i][j][k]][k].
What happens if you do the following:
indices = self.current_options.unsqueeze(-1)
# indices is now size (16, 1, 1). Expand it along the last dim of action_prob
indices = indices.expand(-1, -1, action_prob.size(2))  # (16, 1, 5)
action_prob.gather(1, indices).squeeze(1)              # (16, 5)
st104288 | An easier alternative to using gather is using python indexing.
Let’s say you have your input (size (16, 8, 5)), and indices of size (16, 1) that contain a number in the range [0, 8). Then one thing you can do is:
count = torch.arange(16).long()  # batch indices 0..15 (indexing needs a LongTensor)
indices = indices.squeeze() # indices is now size `(16,)`
input[count, indices, :] # gives an output of size (16, 5). |
st104289 | Hello ,
I'm training on my dataset using an RNN, and I got this error:
[screenshot of the error: Assertion `cur_target >= 0 && cur_target < n_classes` failed]
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torch.autograd import Variable
import numpy as np
from Data import Data
'''
STEP 1: LOADING DATASET
'''
features_train, labels_train = Data('s2-gap-12dates.csv', train=True)
features_test, labels_test = Data('s2-gap-12dates.csv', train=False)
features_train = features_train.transpose(0,2,1)
features_test = features_test.transpose(0,2,1)
class train_dataset(Dataset):
def __init__(self):
self.len = features_train.shape[0]
self.features = torch.from_numpy(features_train)
self.labels = torch.from_numpy(labels_train)
def __getitem__(self, index):
return self.features[index], self.labels[index]
def __len__(self):
return self.len
class test_dataset(Dataset):
def __init__(self):
self.len = features_test.shape[0]
self.features = torch.from_numpy(features_test)
self.labels = torch.from_numpy(labels_test)
def __getitem__(self, index):
return self.features[index], self.labels[index]
def __len__(self):
return self.len
train_dataset = train_dataset()
test_dataset = test_dataset()
# Training settings
batch_size = 20
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
#------------------------------------------------------------------------------
'''
STEP 2: MAKING DATASET ITERABLE
'''
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True,
drop_last=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False,
drop_last=True)
'''
STEP 3: CREATE MODEL CLASS
'''
class RNNModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(RNNModel, self).__init__()
# Hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.layer_dim = layer_dim
# Building your RNN
# batch_first=True causes input/output tensors to be of shape
# (batch_dim, seq_dim, feature_dim)
self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim, batch_first=True, nonlinearity='tanh')
# Readout layer
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Initialize hidden state with zeros
h0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim))
# One time step
out, hn = self.rnn(x, h0)
out = self.fc(out[:, -1, :])
# out.size() --> 100, 10
return out
'''
STEP 4: INSTANTIATE MODEL CLASS
'''
input_dim = 20
hidden_dim = 69
layer_dim = 1
output_dim = 10
model = RNNModel(input_dim, hidden_dim, layer_dim, output_dim)
'''
STEP 5: INSTANTIATE LOSS CLASS
'''
criterion = nn.CrossEntropyLoss()
'''
STEP 6: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.1
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 7: TRAIN THE MODEL
'''
# Number of steps to unroll
seq_dim = 12
for i, (features, labels) in enumerate(train_loader):
features=features.type(torch.FloatTensor)
features = Variable(features)
#labels = labels.view(-1)
labels = labels.type(torch.LongTensor)
labels = Variable(labels)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
# outputs.size() --> 100, 10
outputs = model(features)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step() |
st104290 | Solved by Naman-ntc in post #2
The file s2-gap-12dates.csv, does it contain labels from 1 to num_classes or from 0 to num_classes - 1?
Cross Entropy loss assumes targets from 0 to num_classes - 1.
And the error Assertion cur_target >= 0 && cur_target < n_classes failed that means there is some problem with your labels, and this is th… |
st104291 | The file s2-gap-12dates.csv: does it contain labels from 1 to num_classes, or from 0 to num_classes - 1?
Cross Entropy loss assumes targets from 0 to num_classes - 1.
And the error Assertion cur_target >= 0 && cur_target < n_classes failed means there is some problem with your labels; this is the most probable mistake.
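If your labels do start at 1, a quick fix before computing the loss might look like this (a sketch; output_dim is the number of classes from your script):
labels = labels - 1  # shift 1..num_classes down to 0..num_classes-1
# sanity check before feeding CrossEntropyLoss
assert labels.min() >= 0 and labels.max() < output_dim
The assert will fail fast instead of triggering the CUDA-side assertion. |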
st104292 | I am working on CRNN code where a pre-trained model is given in .t7 format, and I want to use it in Keras, which supports the hdf5 format.
Is there any way I can convert this model so I can use it in Keras? |
st104293 | I was reading on multiprocessing 2 and it seems like lots of problems are actually related to how bad Python’s multiprocessing is in general (worse in Python2).
Instead of trying to find all the quirks and workarounds for deadlocks (killed workers are tricky) PyTorch could switch to loky 9. It is an almost drop-in replacement for multiprocessing, but with consistent spawn behavior, and works fine with Python2.
What I like most is getting an exception when a worker dies, instead of a hard crash or, worse, a deadlock.
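For reference, usage is close to concurrent.futures; a sketch assuming loky's reusable executor API:
from loky import get_reusable_executor

def load_sample(index):
    # stand-in for an expensive worker task
    return index * 2

executor = get_reusable_executor(max_workers=4)
results = list(executor.map(load_sample, range(8)))
If a worker dies, the executor raises an exception instead of deadlocking. |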
st104294 | Hi,
I am confused about the output shape from STFT. Given
print (y.shape)
s = torch.stft(y, frame_length=128, hop=32)
print (s.shape)
we have
torch.Size([3, 7936])
torch.Size([3, 245, 65, 2])
According to the doc 30, "Returns the real and the imaginary parts together as one tensor of size (∗×N×2), where ∗ is the shape of input signal, N is the number of ω s considered depending on fft_size and return_onesided, and each pair in the last dimension represents a complex number as real part and imaginary part." Since * is the shape of the input signal, I would expect a returned tensor with shape [3, 7936, N, 2]. So how is "245" computed given the input length "7936"?
Thanks |
st104295 | It’s similar to the sliding windows of a convolution.
Have a look at the output shape formula for Conv2d 61.
Given your input size of 7936, your “kernel”, or in this case your frame, has a length of 128, with a stride or hop of 32:
((7936 - (128 - 1) - 1) / 32) + 1 = 245
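You can sanity-check it directly:
signal_len, frame_length, hop = 7936, 128, 32
n_frames = (signal_len - (frame_length - 1) - 1) // hop + 1
print(n_frames)  # 245
which matches the 245 frames you see in the output shape. |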
st104296 | Lets say I have a tensor named Test
Test = torch.rand(2,3,4,5)
How can I normalize my tensor between 0 and 1 for each channel? In other words, I want to get 0 and 1 as the min and max for each channel when I do this:
print('The size is: {}, \n \
1st channel is {}\n \
min and max are {} and {} \n'
.format(Test.size(),Test[0,0,:,:], torch.min(Test[0,0,:,:]), torch.max(Test[0,0,:,:])))
Please let me know if I need to use something other than transforms.
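This is the kind of thing I have in mind, done by hand (a rough sketch; probably not the idiomatic transforms way):
flat = Test.view(Test.size(0), Test.size(1), -1)        # (N, C, H*W)
mins = flat.min(dim=2, keepdim=True)[0]
maxs = flat.max(dim=2, keepdim=True)[0]
normed = ((flat - mins) / (maxs - mins)).view_as(Test)  # each channel now spans [0, 1]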
Thanks |
st104297 | I have a model that uses two preexisting models to extract representations. These representations are passed into a sequential model and the whole thing is trained end-to-end. Or should be, but the weights in the two representation extraction models aren’t updating. A simple example might look something like this:
class Net(nn.Module):
def __init__(self, model1, model2):
super(Net, self).__init__()
self.model1 = model1
self.model2 = model2
self.classifier = nn.Sequential(nn.Linear(h1, h1), nn.ReLU(), nn.Linear(h1, h2))
def forward(self, x):
x1 = self.model1(x)
x2 = self.model2(x)
out = self.classifier(torch.cat([x1, x2], dim=1))
return out
Am I doing something obviously wrong here? |
st104298 | I did the following:
model = Net(model1, model2)
optimizer = optim.SGD([{'params': model.model1.parameters(), 'lr': 1e-4},
{'params': model.model2.parameters(), 'lr': 1e-4},
{'params': model.classifier.parameters(), 'lr': 0.1}],) |
st104299 | Never mind! It was a false alarm. I made this mistake 8 checking the updates. Leaving it here in case it helps someone else.
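For anyone checking the same thing: compare cloned copies of the parameters, not the live references, since the live tensors are updated in place (a small sketch):
before = [p.clone() for p in model.model1.parameters()]
loss.backward()
optimizer.step()
changed = any((b != p).any() for b, p in zip(before, model.model1.parameters()))
print(changed)  # True once model1 actually receives updates
If this stays False after a few steps, check that the parameters were actually passed to the optimizer. |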