st104800 | Oh cool, totally missed that! So @ptrblck, what about inputting a 3D image for 3D convolutions? Do I treat the z dimension as a channel? |
st104801 | For 3d convolutions your input should have the dimensions [batch_size, channels, depth, height, width].
The convolution will be applied on the channels like in the two dimensional case.
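For illustration, a minimal sketch of the expected 5-D input (the sizes here are made up):
import torch
import torch.nn as nn
conv = nn.Conv3d(1, 8, kernel_size=3)
x = torch.randn(4, 1, 16, 95, 95)  # [batch_size, channels, depth, height, width]
print(conv(x).shape)  # torch.Size([4, 8, 14, 93, 93])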
I think it depends on your use case, what z exactly means. Are you working with medical images? |
st104802 | Well not exactly, just time series frames. I know this is asking a lot, I’m pretty new at this, but I have one more question. My current system works by inputting 64 x 8 x 95 x 95, but I’m getting an error that input and target shapes do not match, with 256 x 8 x 64 x 64 being the target size.
That is the encoder to my network up top. The decoder looks similar but opposite. Like so:
self.decoder = nn.Sequential(
nn.Linear(z_dim, 256), # B, 256
View((-1, 256, 1, 1)), # B, 256, 1, 1
nn.ReLU(True),
nn.ConvTranspose2d(256, 64, 4), # B, 64, 4, 4
nn.ReLU(True),
nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 8, 8
nn.ReLU(True),
nn.ConvTranspose2d(64, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 64, 64
)
Is there any way you could explain to me why the output reconstruction loss is being given to me in that dimension? Thanks in advance. |
st104803 | I don’t quite understand why your batch size is larger in the target than in your input.
From the shape calculation in the encoder code, it looks like you are passing a 64x64 tensor.
What kind of criterion are you using? BCELoss? |
st104804 | Well this is my recon_loss function, and I’m using MSE.
def reconstruction_loss(x, x_recon, distribution):
batch_size = x.size(0)
assert batch_size != 0
if distribution == 'bernoulli':
recon_loss = F.binary_cross_entropy_with_logits(x_recon, x, size_average=False).div(batch_size)
elif distribution == 'gaussian':
x_recon = F.sigmoid(x_recon)
recon_loss = F.mse_loss(x_recon, x, size_average=False).div(batch_size)
else:
recon_loss = None
return recon_loss
The Gaussian branch is being run in this case, but it’s not called until after x_recon is output from my autoencoder. During training I call:
x = Variable(cuda(x, self.use_cuda))
x_recon, mu, logvar = self.net(x)
This is based almost exactly on https://github.com/1Konny/Beta-VAE |
st104805 | Could you try to pass 64x64 images into your encoder, as this seems to be the height and width of your decoder? |
st104806 | If I resize my image to 64X64, I get the following: invalid argument 2: size ‘[-1 X 256]’ is invalid for input with 16256 elements at /pytorch… |
st104807 | I’ve added some debug info to your model and it seems to work now:
class View(nn.Module):
def __init__(self, size):
super(View, self).__init__()
self.size = size
def forward(self, x):
x = x.view(self.size)
return x
class Print(nn.Module):
def __init__(self):
super(Print, self).__init__()
def forward(self, x):
print(x.shape)
return x
class VAE(nn.Module):
def __init__(self, z_dim=10, nc=3):
super(VAE, self).__init__()
self.z_dim = z_dim
self.nc = nc
self.encoder = nn.Sequential(
nn.Conv2d(nc, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.Conv2d(32, 64, 4, 2, 1), # B, 64, 8, 8
nn.ReLU(True),
nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 4, 4
nn.ReLU(True),
nn.Conv2d(64, 256, 4, 1), # B, 256, 1, 1
nn.ReLU(True),
Print(),
View((-1, 256*1*1)), # B, 256
Print(),
nn.Linear(256, z_dim*2), # B, z_dim*2
)
self.decoder = nn.Sequential(
nn.Linear(z_dim*2, 256), # B, 256
Print(),
View((-1, 256, 1, 1)), # B, 256, 1, 1
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(256, 64, 4), # B, 64, 4, 4
nn.ReLU(True),
nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 8, 8
nn.ReLU(True),
nn.ConvTranspose2d(64, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 64, 64
)
def forward(self, x):
x = self.encoder(x)
print(x.shape)
x = self.decoder(x)
return x
model = VAE()
x = torch.randn(1, 3, 64, 64)
output = model(x)
print(output.shape) |
st104808 | Perfect, thanks so much for the help! I’m just curious: do you understand what I would need to do differently to get a 96x96 image in instead of 64x64? |
st104809 | The easiest way would be to add an nn.Upsample(96, mode='bilinear') layer at the end of your decoder.
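A rough sketch of what that layer does (shapes assumed from the thread):
up = nn.Upsample(96, mode='bilinear')
x = torch.randn(1, 3, 64, 64)  # decoder output
print(up(x).shape)             # torch.Size([1, 3, 96, 96]) |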
st104810 | @ptrblck, in the encoder when I call View, I actually go from a tensor of [64, 256, 3, 3] to [576, 256]. Then, from there on, 576 stays as the actual batch size, which messes up my final dimensionality: I get the error that input and target shapes do not match, the only difference being the batch.
Do you know why the View function is doing this? And how to keep the batch size constant?
Thanks |
st104811 | If you call your model with an image of size 64x64, it should work.
Generally, I would use view as:
x = x.view(x.size(0), -1)
This will keep the batch size constant and put all remaining values into dim1.
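For example:
x = torch.randn(64, 256, 3, 3)
x = x.view(x.size(0), -1)
print(x.shape)  # torch.Size([64, 2304]), batch dimension untouched |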
st104812 | Right, I’m trying to feed in a 96x96. The only real difference I made was to upsample at the end. But I’m getting mismatch errors when I treat the view in the way you mentioned; specifically, in the linear layer just following the view call I get the error: size mismatch, m1: [256 x 576], m2: [256 x 20] at
This is what I’m getting at every layer. Do you see what the problem is, or how I can fix it?
torch.Size([64, 32, 48, 48])
torch.Size([64, 32, 24, 24])
torch.Size([64, 64, 12, 12])
torch.Size([64, 64, 6, 6])
torch.Size([64, 256, 3, 3])
torch.Size([576, 256])
torch.Size([576, 20])
torch.Size([576, 10])
torch.Size([576, 256])
torch.Size([576, 256, 1, 1])
torch.Size([576, 64, 4, 4])
torch.Size([576, 64, 8, 8])
torch.Size([576, 32, 16, 16])
torch.Size([576, 32, 32, 32])
torch.Size([576, 8, 64, 64]) |
st104813 | Something is still strange with your View().
I have to say I’m not a huge fan of nn.Sequential, if you use a more complicated model, but this code should work:
class VAE(nn.Module):
def __init__(self, z_dim=10, nc=3):
super(VAE, self).__init__()
self.z_dim = z_dim
self.nc = nc
self.encoder = nn.Sequential(
nn.Conv2d(nc, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.Conv2d(32, 64, 4, 2, 1), # B, 64, 8, 8
nn.ReLU(True),
nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 4, 4
nn.ReLU(True),
nn.Conv2d(64, 256, 4, 1), # B, 256, 1, 1
nn.ReLU(True),
Print(),
View((1, -1)), # B, 256
Print(),
nn.Linear(256*3*3, z_dim*2), # B, z_dim*2
)
self.decoder = nn.Sequential(
nn.Linear(z_dim*2, 256*3*3), # B, 256
Print(),
View((1, -1, 3, 3)), # B, 256, 1, 1
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(256, 64, 4), # B, 64, 4, 4
nn.ReLU(True),
nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 8, 8
nn.ReLU(True),
nn.ConvTranspose2d(64, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 64, 64
)
def forward(self, x):
x = self.encoder(x)
print(x.shape)
x = self.decoder(x)
return x
model = VAE()
x = torch.randn(1, 3, 96, 96)
output = model(x)
print(output.shape)
> torch.Size([1, 3, 96, 96]) |
st104814 | Man, thank you for all your help and quick responses. But it’s still not working. I think it’s worth mentioning I’m using a channel size of 8. Even if I change the nn.Linear to nn.Linear(256*8*8, ...) it gives me: size mismatch, m1: [1 x 147456], m2: [16384 x 20] |
st104815 | The spatial dimensions shouldn’t be changed, if you use more or less channels. Could you post your code so that I could have a look? |
st104816 | Ok good to know. And here is the current state of my code:
import torch
import torch.nn as nn
import torch.nn.init as init
from torch.autograd import Variable
from torchvision import models
from torchsummary import summary
def reparametrize(mu, logvar):
std = logvar.div(2).exp()
eps = Variable(std.data.new(std.size()).normal_())
return mu + std*eps
class View(nn.Module):
def __init__(self, size):
super(View, self).__init__()
self.size = size
def forward(self, tensor):
return tensor.view(self.size)
class Print(nn.Module):
def __init__(self):
super(Print, self).__init__()
def forward(self, x):
print(x.shape)
return x
class BetaVAE_H(nn.Module):
"""Model proposed in original beta-VAE paper(Higgins et al, ICLR, 2017)."""
def __init__(self, z_dim=10, nc=8):
super(BetaVAE_H, self).__init__()
self.z_dim = z_dim
self.nc = nc
self.encoder = nn.Sequential(
nn.Conv2d(nc, 32, 4, 2, 1), # B, 32, 48, 48
Print(),
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 24, 24
Print(),
nn.ReLU(True),
nn.Conv2d(32, 64, 4, 2, 1), # B, 64, 12, 12
Print(),
nn.ReLU(True),
nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 6, 6
Print(),
nn.ReLU(True),
nn.Conv2d(64, 256, 4, 1), # B, 256, 3, 3
Print(),
nn.ReLU(True),
View((1,-1)), # B, 256
Print(),
nn.Linear(256*8*8,z_dim*2), # B, z_dim*2
Print(),
)
self.decoder = nn.Sequential(
Print(),
nn.Linear(z_dim, 256*3*3), # B, 256
Print(),
View((1, -1, 3, 3)), # B, 256, 1, 1
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(256, 64, 4), # B, 64, 4, 4
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 8, 8
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(64, 32, 4, 2, 1), # B, 32, 16, 16
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
Print(),
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 64, 64
Print(),
nn.Upsample(96,mode='bilinear'),
)
self.weight_init()
def weight_init(self):
for block in self._modules:
for m in self._modules[block]:
kaiming_init(m)
def forward(self, x):
#print("x val: " + str(x.size()))
distributions = self._encode(x)
mu = distributions[:, :self.z_dim]
logvar = distributions[:, self.z_dim:]
z = reparametrize(mu, logvar)
#print("z val: " + str(z.size()))
x_recon = self._decode(z)
return x_recon, mu, logvar
def _encode(self, x):
return self.encoder(x)
def _decode(self, z):
return self.decoder(z)
def kaiming_init(m):
if isinstance(m, (nn.Linear, nn.Conv2d)):
init.kaiming_normal(m.weight)
if m.bias is not None:
m.bias.data.fill_(0)
elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
m.weight.data.fill_(1)
if m.bias is not None:
m.bias.data.fill_(0) |
st104817 | I encountered a problem to unpack the packed padded sequence. It returned error:
Traceback (most recent call last):
File "Decoder.py", line 118, in <module>
main()
File "Decoder.py", line 115, in main
logSM = dec(x_emb, h_0, c_0, enc_out, mask, True)
File "/u/nieyifan/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "Decoder.py", line 77, in forward
h, c, mask) #dec_out=(1, BS, hid)
File "/u/nieyifan/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/u/nieyifan/projects/EncDec/LSTMLayer.py", line 79, in forward
out, _ = torch.nn.utils.rnn.pad_packed_sequence(out) # (seq_len, BS, hid_size)
File "/u/nieyifan/anaconda3/lib/python3.6/site-packages/torch/nn/utils/rnn.py", line 121, in pad_packed_sequence
output[prev_i:, :prev_batch_size] = var_data[data_offset:data_offset + l]
File "/u/nieyifan/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 87, in __setitem__
return SetItem.apply(self, key, value)
File "/u/nieyifan/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 117, in forward
i._set_index(ctx.index, value)
RuntimeError: invalid argument 2: sizes do not match at /opt/conda/conda-bld/pytorch_1512386481460/work/torch/lib/THC/THCTensorCopy.cu:31
I have 2 classes, one low-level layer called LSTMlayer
it’s a wrapper of the nn.LSTM with the management of padding
def forward(self, x, h_0, c_0, mask):
......
packed_x = torch.nn.utils.rnn.pack_padded_sequence(x,
sorted_len.long().data.tolist(), batch_first=False)
print("packed_x", packed_x.data.size())
# run Encoder
out, (h_n, c_n) = self.lstm(packed_x, (h_0, c_0)) # (n_layers*n_dir, BS, hid_size)
# unpack out
out, _ = torch.nn.utils.rnn.pad_packed_sequence(out) # (seq_len, BS, hid_size)
......
then there’s a Decoder class which calls this forward function
def forward(self, ...):
dec_out_i, h, c = self.rnn(
torch.cat((x_i.unsqueeze(0),
context.permute(1, 0, 2)), dim=2),
h, c, mask)
self.rnn is the LSTMlayer’s forward function
so basically the input for the low-level LSTM is a concatenated tensor of dim (seqlen=1, BS, 306). I run the decoder one time step at a time, so seqlen=1, and the packer could pack the concatenated tensor, but the unpacker could not unpack it into dec_out_i.
I tried my low-level LSTMLayer alone and there is no problem; I don’t know why the Decoder’s function is not working. Thanks |
st104818 | This is my entire code for a small classification problem. The problem is that when I run it, it consumes a lot of CPU, and I am not sure which part of the code is responsible.
Any ideas or suggestions will be really helpful.
Thanks in advance,
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import torch.optim as optim
import torchvision
from torch.optim import lr_scheduler
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
from tensorboardX import SummaryWriter
import setproctitle
from tensorboard_logger import log_value, configure
RUN_PATH = '/mnt/da5df9e4-cdc6-4d55-91e8-b2383e89165f/Ryan/1234/models'
configure(RUN_PATH, flush_secs=1)
setproctitle.setproctitle('train.py')
writer = SummaryWriter('/mnt/machine/Ryan/1234/models')
net = Net().cuda()
print (net)
transform = transforms.Compose([
transforms.Scale((224,224)),
transforms.ToTensor()
])
class WhaleData(Dataset):
def __init__(self, data_file, root_dir , transform = None):
self.csv_file = pd.read_csv(data_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(os.listdir(self.root_dir))
def __getitem__(self, index):
image = os.path.join(self.root_dir, self.csv_file['Image'][index])
image = Image.open(image).convert('RGB')
label = self.csv_file['Id'][index]
sample = {'image': image, 'label':label}
if self.transform:
sample['image'] = self.transform(sample['image'])
return sample
out_path = '/mnt/machine/Ryan/1234/models/'
trainset = WhaleData(data_file = '/mnt/machine/Ryan/1234/train_encoded.csv',
root_dir = '/mnt/machine/Ryan/1234/train',transform = transform)
train_loader = torch.utils.data.DataLoader(trainset,batch_size = 128)
test_loader = torch.utils.data.DataLoader(trainset)
model_ft = torchvision.models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 4251)
model_ft = model_ft
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
# criterion = nn.CrossEntropyLoss()
# optimizer = optim.SGD(net.parameters(), lr = 0.001, momentum = 0.9)
for epoch in range(100):
running_loss = 0.0
for index,info in enumerate(test_loader):
inputs = info['image']
inputs = Variable(inputs.float())#.cuda()
labels= Variable(info['label'].long())#.cuda()
# print (model_conv)
outputs = model_ft(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss+=loss
print ('Loss for this image ', loss.data[0])
writer.add_scalar('loss', loss.data[0])
log_value('train_loss', loss, epoch)
torch.save(model_ft.state_dict, out_path+'{}.pth'.format(epoch)) |
st104819 | First thing to try is to specify num_workers=1 or more in your loaders.
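For example (the worker count here is just an arbitrary starting point):
train_loader = torch.utils.data.DataLoader(trainset, batch_size=128,
                                           shuffle=True, num_workers=4)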
On a side note, if your model runs on CPU, it is normal to have high CPU usage |
st104820 | Hi @Latope2-150, giving the argument num_workers=1 does not change the CPU usage. Any other ideas? |
st104821 | I don’t know what to say. Thanks a lot, the consumption rate is much lower now. Thanks for your time!!! |
st104822 | I just have another question: my batch size is set to 128, and when I run the code it occupies about 750MB of my GPU memory. When I increase it to 256, it still occupies only 750MB of memory. Do you see anything wrong here? |
st104823 | From your code, you are training on your test_loader, so you probably want to change
for index,info in enumerate(test_loader):
to
for index,info in enumerate(train_loader): |
st104824 | Hi,
In my current project, I want to randomly drop some information before a custom classification module (MIL related) as my training set is relatively small. However, I do not want to set the activations to 0 because it might be picked as useful information by the last module which selects the top activations.
My solution to that is to select "by hand" the desired number of activations as follows:
self.drop = drop
def forward(self, input: torch.Tensor):
N, C, H, W = input.size()
device = input.device
activs = input.view(N, C, H * W)
if self.drop and self.training:
keep = round(H * W * (1 - self.drop))
selector = torch.rand_like(activs).sort(-1)[1][:,:,:keep]
activs = torch.masked_select(activs, torch.zeros_like(activs).scatter_(-1, selector, 1).byte()).view(N, C, -1)
However, this solution is a bit dirty and does not preserve the shape of the input. Any ideas for another solution?
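(For reference, the same selection can be written a bit more compactly with gather, though it still changes the shape, and the kept activations come out in random order; a sketch:)
idx = torch.rand_like(activs).sort(-1)[1][:, :, :keep]
activs = activs.gather(-1, idx)  # (N, C, keep)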
EDIT: randperm is not efficient as it is not implemented on GPU, rand with sorting works faster |
st104825 | Hello everyone,
I am still new to PyTorch and am trying to implement a multi-output neural network. To be more precise, I want to predict the mean and dispersion of two negative binomial distributions, therefore 4 outputs in total.
For this purpose, I have defined a network and a custom loss function, but I think something has gone awfully wrong because my loss can become arbitrarily small/negative and reaches NaN eventually.
Could someone give me a few hints?
EDIT: I found the mistake. The NLL loss was negative which is impossible for probability mass functions and this is because I mixed up plus/minus signs in my loss function… |
st104826 | Hi, I believe it is a very odd question to ask here, but I am still wondering if anyone could explain the flow difference between PyTorch and TensorFlow. There was never a concern about a detach method in TensorFlow, but in PyTorch I have used it many times. How is auto-differentiation in PyTorch different from TensorFlow, given that in a TensorFlow session gradients are computed automatically?
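(To make the question concrete, here is a minimal example of the kind of detach usage I mean; detach returns a tensor cut out of the autograd graph:)
import torch
x = torch.ones(2, requires_grad=True)
y = x * 2
z = y.detach()          # z shares data with y but carries no gradient history
y.sum().backward()      # works: gradients flow back to x
print(x.grad)           # tensor([2., 2.])
print(z.requires_grad)  # False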
Thank you,
Inspiring pokemon master |
st104827 | When you run the code below, the last line - print(test_output[0]) - shows that the model returns all 'nan' values. Also, when I added a print statement at the start of the forward method to print out a weight - print("preforward w: ", w[0][0][0]) - in the 4th round it throws the error RuntimeError: Overflow when unpacking long.
My main problem is that I don’t know why the nan values come. Any ideas?
import torch
import torchvision
import torch.nn as nn
import numpy as np
import torchvision.transforms as transforms
from torchvision import datasets
from torch.autograd import Variable
import pandas as pd
torch.manual_seed(1)
class MnistModel(nn.Module):
def __init__(self, w, b):
super(MnistModel, self).__init__()
self.w = torch.nn.Parameter(w)
self.b = torch.nn.Parameter(b)
self.w2 = torch.nn.Parameter(torch.rand(1000, 784, 10))
self.w3 = torch.nn.Parameter(torch.rand(1000, 784, 1))
def forward(self, x):
# print("preforward w: ", w[0][0][0])
layer = torch.mul(x , self.w)
layer = torch.bmm(layer, self.w2)
layer = layer.view(1000, 10, 784)
layer = torch.bmm(layer, self.w3)
output = layer.view(1000, 10)
return output
batch_size = 1000
classes = 10
train_data = datasets.MNIST('data', train=True,
download=True,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
shuffle=False)
test_data = datasets.MNIST('data', train=False, transform=transforms.ToTensor())
test_loader = torch.utils.data.DataLoader(test_data,batch_size=batch_size)
w = torch.randn(1, 784, requires_grad=True)
b = torch.randn(784, 1, requires_grad=True)
learning_rate = 0.001
model = MnistModel(w, b)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
rows = np.array(range(batch_size))
zeros = torch.zeros((batch_size, 10), dtype=torch.float64)
for i, (raw_data, raw_target) in enumerate(train_loader):
data = raw_data.view((batch_size, 784, 1))
logits = model(data)
zeros = pd.DataFrame(0, index=range(batch_size), columns=range(classes))
zeros.iloc[[rows, raw_target.numpy()]] = 1
zeros = torch.from_numpy(zeros.as_matrix())
target = zeros.float()
loss = criterion(logits, target)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i == 3:
break
for i, (raw_data, raw_target) in enumerate(test_loader):
if i == 1:
break
data = raw_data.view((batch_size, 784, 1))
test_output = model(data)
print(test_output[0]) |
st104828 | Hi guys,
New to ML and PyTorch, and I have thrown myself in the deep end trying to get a CIFAR network up and running using the LeNet architecture. I am confident in my binarization functions and in how I am loading the dataset. What I am having problems with is that when I run the network, it does not seem to improve.
A few notes: binarization requires I keep a copy of the full-precision weights for updates during backprop (code adapted from Courbariaux on binarization).
Here is the code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from Binarize import *
# ------------------------------------------------
EnableCUDAExecution = False
Batch_Size = 50
LearningRate = 0.01
Epochs = 3
Momentum = 0.8
WeightDecay = 0
logInterval = -1
seed = 10
# ------------------------------------------------
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = BinarizeConv2d(3, 6, 5)
self.conv2 = BinarizeConv2d(6, 16, 5)
self.fc1 = BinarizeLinear(16*5*5, 120)
self.fc2 = BinarizeLinear(120, 84)
self.fc3 = BinarizeLinear(84, 10)
torch.nn.init.xavier_uniform(self.fc1.weight)
torch.nn.init.xavier_uniform(self.fc2.weight)
torch.nn.init.xavier_uniform(self.fc3.weight)
def forward(self, x):
out = F.relu(self.conv1(x))
out = F.max_pool2d(out, 2)
out = F.relu(self.conv2(out))
out = F.max_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = F.relu(self.fc1(out))
out = F.relu(self.fc2(out))
out = self.fc3(out)
return out
def train(epoch):
model.train()
if EnableCUDAExecution is True:
model.cuda()
for BatchIndex, (Inputs, Labels) in enumerate(TrainingSetLoader):
if EnableCUDAExecution is True:
Inputs, Labels = Inputs.cuda(), Labels.cuda()
Inputs, Labels = Variable(Inputs), Variable(Labels)
Optimizer.zero_grad()
Output = model(Inputs)
Loss = LossFunction(Output, Labels)
Optimizer.zero_grad()
Loss.backward()
for p in list(model.parameters()):
if hasattr(p, 'org'):
p.data.copy_(p.org)
Optimizer.step()
for p in list(model.parameters()):
if hasattr(p, 'org'):
p.org.copy_(p.data.clamp_(-1, 1))
if logInterval != -1:
if (BatchIndex) % logInterval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(epoch, BatchIndex * len(Inputs),
len(TrainingSetLoader.dataset), 100. *
BatchIndex / len(TrainingSetLoader),
Loss.data[0]))
def test(epoch):
model.eval()
testLoss = 0
Correct = 0
for BatchIndex, (Data, Labels) in enumerate(TestSetLoader):
if EnableCUDAExecution is True:
Data, Labels = Data.cuda(), Labels.cuda()
Data, Labels = Variable(Data, volatile=True), Variable(Labels)
Output = model(Data)
testLoss += LossFunction(Output, Labels)
pred = Output.data.max(1, keepdim=True)[1]
Correct += pred.eq(Labels.data.view_as(pred)).cpu().sum()
testLoss /= len(TestSetLoader.dataset) # Determine the test loss and echo the respective output
print('Epoch ' + str(epoch) + ": ", end='')
# print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
# testLoss, Correct, len(TestSetLoader.dataset),
# 100. * Correct / len(TestSetLoader.dataset)))
validationErrorRate = 100 - (100. * Correct / len(TestSetLoader.dataset))
print('Epoch ' + str(epoch) + ": ", str(validationErrorRate))
return validationErrorRate, testLoss
TrainingSetLoader, TestSetLoader = loadCIFAR(Batch_Size)
model = LeNet()
if EnableCUDAExecution == True:
torch.cuda.set_device(1)
model.cuda()
Optimizer = torch.optim.SGD(model.parameters(), lr=LearningRate, momentum=Momentum)
LossFunction = torch.nn.CrossEntropyLoss()
print('Here we go...')
for Epoch in range(1, Epochs + 1):
train(Epoch)
validationErrorRate, test_loss = test(Epoch)
#print(Epoch, validationErrorRate, test_loss)
Definitely some redundant code in there, but I am mainly focused on simply getting this network to improve. If anyone thinks they can help but needs more detail, please let me know; I'm happy to post any other necessary code/files. Also, I'm a bit of a Python noob, so please forgive any ignorance on my part. Cheers guys! |
st104829 | I have trained a CNN and saved it, but when I load it and run prediction on some pictures, it shows the error: RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[2, 2].
But if there were an error in the network, why could I train it successfully? I checked the code, and I set the stride to 2, not [2, 2] |
st104830 | This is a misleading error message, which is thrown if your data is missing a batch dimension.
It should be fixed on master; here is the GitHub issue.
Your data for prediction might be a single image with dimensions: [channels, height, width].
Just add a batch dimension at dim0 and your code should run again:
x = torch.randn(3, 24, 24) # your image
x.unsqueeze_(0)
print(x.shape)
> torch.Size([1, 3, 24, 24]) |
st104831 | Tensors in PyTorch have new_zeros and new_empty, which make sure devices are consistent. While ATen has at::zeros_like and at::empty_like, it seems its tensors do not have new_zeros.
So if I call:
auto ious = at::zeros(xy.type(), {10, 3, 1, 1, 2});
It makes sure the type is same as xy, but what about the device?
If I want to create a tensor with a new type (e.g. int) I can call:
auto gt_target = at::empty(at::CUDA(at::kInt), {10, 3, 1, 1, 2});
Which device will this tensor be using? Can I multiply ious by gt_target? |
st104832 | Solved by dashesy in post #3
I think this PR and this issue are related to my question:
ATen currently does not have any concept of a “device” beyond CPU vs. CUDA (i.e. the “ordinal” part is missing from ATen’s model). I’m working on changing this right now, so you can expect this to be fixed soon |
st104833 | I got part of the answer I was looking for
Yes .type() also has the device information, so ious.device == xy.device
I am still not sure, though: it will be CUDA of course, but if there are multiple devices I do not know which index will be used |
st104834 | I think this PR and this issue are related to my question:
ATen currently does not have any concept of a “device” beyond CPU vs. CUDA (i.e. the “ordinal” part is missing from ATen’s model). I’m working on changing this right now, so you can expect this to be fixed soon |
st104835 | Hi guys,
I am doing image segmentation, and I applied classic U-Net to the training.
However, the results show that loss: nan, but accuracy could be 0.8-0.9 which sounds pretty good.
I have changed the batch_size and the learning rate many times, but it still does not work.
How can I fix this nan loss?
Many thanks |
st104836 | What is your data distribution? Do you have 2 classes where 1 is ~80% of data?
I think this is your situation (exactly 870k samples of the majority class and 170k of the second one). So when the model predicts just the majority class, it will have very nice accuracy.
In summary, your model doesn’t learn anything useful.
Also, there are much better metrics for your output, like precision, recall, and F1 score. Monitor those, not pure accuracy (or accuracy per class).
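A rough sketch of these metrics from raw counts (binary case; preds and targets are assumed to be 0/1 tensors):
tp = ((preds == 1) & (targets == 1)).sum().item()
fp = ((preds == 1) & (targets == 0)).sum().item()
fn = ((preds == 0) & (targets == 1)).sum().item()
precision = tp / max(tp + fp, 1)
recall = tp / max(tp + fn, 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-8) |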
st104837 | Thanks so much for your help.
Maybe my network is not written correctly. I will check my network first. |
st104838 | Solved by ptrblck in post #12
In the latest stable release (0.4.0) type() of a tensor no longer reflects the data type.
You should use tensor.type() and isinstance() instead.
Have a look at the Migration Guide for more information. |
st104839 | Simple: tensor.type(). Unfortunately the docs aren’t totally clear that this works so easily. |
st104840 | Actually tensor.type() without any arguments returns an error that it’s missing an unspecified positional argument. |
st104841 | Something unusual about the tensor that you’re working with? The following should work fine (note that the Python builtin type also gives what you want).
>>> import torch
>>> a = torch.ones(1)
>>> a.type()
'torch.FloatTensor'
>>> type(a)
<class 'torch.FloatTensor'> |
st104842 | Thanks. It seems that the issue has to do with whether the tensor is a Variable.
import torch
from torch.autograd import Variable
a = torch.ones(1)
a.type()
'torch.FloatTensor'
type(a)
<class 'torch.FloatTensor'>
b = Variable(a)
print(b.type())
TypeError: type() missing 1 required positional argument: 't'
print(b.data.type())
<class 'torch.FloatTensor'> |
st104843 | If the object is a torch Variable, not a plain torch Tensor, using the type() method of the object without any arguments will cause this error.
TypeError: type() missing 1 required positional argument: ‘t’
But it is not clear what this error message means. |
st104844 | If you have a torch Variable, say a, then simply do
type(a.data)
to find its type |
st104845 | In current builds, type(new Tensor(5)) now always returns 'Variable' instead of the actual tensor type… Is that a bug? |
st104846 | David_Bau:
type(new Tensor(5))
That’s not expected behavior I believe. Here is what I receive:
>>> type(torch.Tensor(5))
<class 'torch.FloatTensor'>
To summarize this thread:
To print tensor type use: print(type(tensor))
To print variable tensor type use: print(type(tensor.data)) |
st104847 | In the latest stable release (0.4.0) type() of a tensor no longer reflects the data type.
You should use tensor.type() and isinstance() instead.
Have a look at the Migration Guide for more information.
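For example, in 0.4:
x = torch.randn(2)
print(x.type())                     # 'torch.FloatTensor'
print(x.dtype)                      # torch.float32
print(isinstance(x, torch.Tensor))  # True |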
st104848 | Looking for some hyperparameter library recommendations for PyTorch. This one looks good, but it would be great if anyone could recommend one they have had success with or use regularly. |
st104849 | I made a 10-layer nn with MSELoss(); it works normally, but I have to add a regularization term. When I add it to the loss function directly, it reports an error:
TypeError: add() received an invalid combination of arguments - got (MSELoss), but expected one of:
(Tensor other, float alpha)
(float other, float alpha)
when I convert the regularization to numpy, it errors too:
TypeError: unsupported operand type(s) for +: 'MSELoss' and 'float'
I know why it happened, but I don't know how to correct it.
My question is: how do I write a regularization term?
Thank you!
here is my code:
params = model.state_dict()
for k, v in params.items():
print(k)
n1 = params['layer1.weight']
n2 = params['layer2.weight']
n3 = params['layer3.weight']
n4 = params['layer4.weight']
n5 = params['layer5.weight']
n6 = params['layer6.weight']
n7 = params['layer7.weight']
n8 = params['layer8.weight']
n9 = params['layer9.weight']
n = (
1**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.9**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.8**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.7**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.6**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.5**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.4**4 * (params['layer1.weight'] - params['layer1.weight'])**2 +
0.3**4 * (params['layer1.weight'] - params['layer1.weight'])**2
)
E=0.003*n
def train(epoch):
model.train()
criterion = torch.nn.MSELoss()+E |
st104850 | It looks like you would like to create a static computation graph, which is then executed (theano-style).
In PyTorch you create a dynamic computation graph, so that each operation is executed immediately (with some exceptions like asynchronous execution of GPU operations).
If you would like to add some value to your criterion, you have to calculate the loss first:
criterion = nn.MSELoss()
...
# In your training loop
output = model(data)
loss = criterion(output, target)
loss = loss + E
Also, have a look at this thread for more information regarding custom regularization. |
st104851 | I basically want to compute torch.where on outer dimensions. Here is the snippet from IPython:
In [599]: a.shape
Out[599]: torch.Size([32, 4, 13])
In [600]: b.shape
Out[600]: torch.Size([32, 4, 13])
In [601]: c.shape
Out[601]: torch.Size([32])
In [602]: torch.where(c, a, b)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-602-4767f96aa34a> in <module>()
----> 1 torch.where(c, a, b)
RuntimeError: The size of tensor a (32) must match the size of tensor b (13) at non-singleton dimension 2
Right now I am doing this with the following command:
out=torch.stack([a[i] if c[i] else b[i] for i, val in enumerate(c)])
Please let me know if there is a useful torch API I haven’t explored or a more Pythonic way to perform the desired task. |
st104852 | The tensors should be broadcastable, so this would work:
a = torch.randn(32, 4, 13)
b = torch.randn(32, 4, 13)
c = torch.empty((32, 1, 1), dtype=torch.uint8).random_(2)
d = torch.where(c, a, b)
In your case, just get a new view on c:
c = c.view(-1, 1, 1) |
st104853 | I’m having some trouble making the following tensor transformation with PyTorch ops rather than for loops. I want to transform x into target_x. I’ve tried several combinations of torch.cat and torch.transpose and haven’t been able to do it. Am I missing some operation?
import torch
x = torch.tensor([
[
[
[ 1, 2],
[11, 12]
],
[
[ 3, 4],
[13, 14]
]
],
[
[
[21, 22],
[31, 32]
],
[
[23, 24],
[33, 34]
]
]
])
target_x = torch.tensor([
[
[
[ 1, 2],
[ 3, 4]
],
[
[11, 12],
[13, 14]
]
],
[
[
[21, 22],
[23, 24]
],
[
[31, 32],
[33, 34]
]
]
])
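(For what it’s worth, the mapping here looks like a plain swap of dimensions 1 and 2, so a sketch would be:)
target = x.transpose(1, 2)
print(torch.equal(target, target_x))  # True |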
st104854 | I am working on loading data in PyTorch. I followed the official tutorial and wrote my own dataset; the code is shown below and it worked fine:
import os
import numpy as np
import time
from torch.utils.data import Dataset, DataLoader
data_dir = 'D:/lily_data/data'
label_dir = 'D:/lily_data/label'
data_list = os.listdir(data_dir)
label_list = os.listdir(label_dir)
dataset_list = []
for i in range(len(data_list)-15):
data_name = data_dir+'/'+data_list[i]
label_name = label_dir+'/'+label_list[i]
dataset_list.append([data_name, label_name])
class lily_dataset(Dataset):
def __init__(self, data_list):
self.data_list = data_list
def __len__(self):
return(len(self.data_list))
def __getitem__(self, idx):
data = np.load(self.data_list[idx][0])
label = np.load(self.data_list[idx][1])
return data, label
start = time.time()
dataset = lily_dataset(dataset_list)
#dataset = DataLoader(dataset, batch_size=2, shuffle=True, num_workers=2)
for idx, data in enumerate(dataset):
print(idx, data[0].shape, data[1].shape, time.time()-start)
start = time.time()
The result of this code is shown like this:
0 (512, 512, 800) (512, 512, 1) 0.8958175182342529
1 (512, 512, 800) (512, 512, 1) 1.065506935119629
2 (512, 512, 800) (512, 512, 1) 1.1634063720703125
3 (512, 512, 800) (512, 512, 1) 1.31596040725708
4 (512, 512, 800) (512, 512, 1) 14.15942931175232
5 (512, 512, 800) (512, 512, 1) 14.121654033660889
But when I uncomment this line
dataset = DataLoader(dataset, batch_size=2, shuffle=True, num_workers=2)
and run the code, it just gives no response. I run this script in Spyder; it seems to be running but yields no result or any response.
I think I just followed the tutorial and cannot figure out what’s wrong. Any help or ideas would be appreciated. |
st104855 | If you are using Windows, you should guard all of your code with:
def main():
# Your code
if __name__=='__main__':
main()
Since Windows uses spawn instead of fork to create processes, your code will be executed multiple times.
Have a look at the Windows FAQ. |
st104856 | I am new to PyTorch and was inspired by Jeremy Howard of fast.ai to learn it. His Deep Learning videos are simply great and completely free.
The tutorial says to uncomment this to run on GPU, but it assigns to dtype whereas it should assign to device. This typo is repeated in all the examples.
dtype = torch.float
device = torch.device("cpu")
# dtype = torch.device("cuda:0") # Uncomment this to run on GPU |
st104857 | Looks like a bug. Could you post a link to the tutorial where you have found it?
I just skimmed through them and couldn’t find the line. |
st104858 | Here is the link and it starts from here. Screenshot below.
Hope this helps!
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#warm-up-numpy
(screenshot of the tutorial code omitted) |
st104859 | So I have a stupid question, I apologize in advance!
I am following the steps here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html and they are all clear.
In training a classifier I defined the trainset and testset as explained in the tutorial.
Here is my question: why don’t we have a validation set defined?
Can anyone please tell me how I can do that. Thanks!
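One common approach (a sketch not covered by that tutorial, using a SubsetRandomSampler; the 90/10 split is arbitrary):
import torch
from torch.utils.data.sampler import SubsetRandomSampler
n = len(trainset)
indices = torch.randperm(n).tolist()
split = int(0.9 * n)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                           sampler=SubsetRandomSampler(indices[:split]))
val_loader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                         sampler=SubsetRandomSampler(indices[split:])) |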
st104860 | import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
#plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
plt.show()
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
The pictures can’t be shown on my computer when run from PyCharm. I use PyCharm to connect to the server (CentOS).
But the pictures can be shown when run from Jupyter Notebook. I am very confused. Please help me!
Error Message:
ssh://[email protected]:22/data/cong_pan/anaconda3/bin/python3.6 -u /data/cong_pan/jiaqi_wang/code/pytorch/showphoto.py
Files already downloaded and verified
Files already downloaded and verified
dog horse car horse
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7f7253d55eb8>>
Traceback (most recent call last):
File "/data/cong_pan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 333, in __del__
File "/data/cong_pan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 319, in _shutdown_workers
File "/data/cong_pan/anaconda3/lib/python3.6/multiprocessing/queues.py", line 345, in get
ImportError: sys.meta_path is None, Python is likely shutting down
Process finished with exit code 0 |
st104861 | Similar issue when run the tutorial.
The code is exactly the same code as shown in the tutorial.
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
Error message
Files already downloaded and verified
Files already downloaded and verified
plane deer truck cat
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7f8850688518>>
Traceback (most recent call last):
File "/data/apollo/a/Xiaoke/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 333, in __del__
File "/data/apollo/a/Xiaoke/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 319, in _shutdown_workers
File "/data/apollo/a/Xiaoke/anaconda/lib/python3.6/multiprocessing/queues.py", line 337, in get
ImportError: sys.meta_path is None, Python is likely shutting down
System info:
ubuntu 14.04
python3.6
anconda
torch gpu
I ssh into the server…
Thanks a lot if anyone can give the help. |
st104862 | Experiencing the same issue as well
macos 10.13.4
python 3.6.5
torch 0.3.1
torchvision 0.2.0 |
st104863 | Hi everyone! Thanks for reporting. I am aware of the issue and will have a fix for the next release. The error you saw was harmless and can be ignored. |
st104864 | I solved the problem by removing "dataiter = iter(trainloader)". Instead, I use iter directly, like this:
images, labels = iter(trainloader).next() |
st104865 | JinsongWei263:
images, labels = iter(trainloader).next()
this is extremely troublesome if you have shuffle=False… |
st104866 | I have tested it on my machine. It’s okay.
There is no problem, even if I change shuffle=False. |
st104867 | I see it as @SimonW does.
It won’t cause an error, but you will most likely get the same batch over and over again.
Have a look at this small example:
class MyDataset(Dataset):
def __init__(self):
self.data = torch.arange(100)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return len(self.data)
dataset = MyDataset()
loader = DataLoader(dataset,
batch_size=1,
shuffle=False)
data = iter(loader).next()
print(data)
data = iter(loader).next()
print(data)
# Prints 0s every time
data_iter = iter(loader)
data = data_iter.next()
print(data)
data = data_iter.next()
print(data)
# Gets new batch (0, 1, ...) |
st104868 | I have the same error, also using PyCharm, and no images are displayed.
Any help would be appreciated.
Thanks. |
st104869 | I solved the iterator issue by calling the following before the program exits:
data_iter._shutdown_workers()
I solved the display issue by calling the following immediately after plt.imshow:
plt.show() |
st104870 | I have searched for ensemble method in PyTorch, but fail to get what I want.
I have several models already trained, but these models have different transform methods. I want to use some ensemble method to combine these models. One method is to save all results into files for each model, and load the file for training. But this might be not very elegant.
What I desire is to get outputs given by each model in my code directly (without pre-saving), but I can only get outputs of a batch for each model, as each model has its own dataloader (because they have different transforms). I don’t quite know how can I combine the outputs of these models.
The code looks something like this:
def evaluate(model, device, dataloader):
...
for inputs, labels in dataloader:
...
outputs = model(inputs)
###How to combine outputs from different models...
for mdl_name in mdl_list:
mdl = ...
tf = mdl.tf
dataset = ImageLoader(img_path, tf)
dataloader = DataLoader(dataset, batch_size, shuffle=True)
mdl.load_state_dict(...)
###How to combine outputs from different models...
Thanks in advance. |
st104871 | You could add the different transformations into your Dataset and return the N transformed samples for each model.
Using this approach you would have to iterate your dataset only once. |
st104872 | Thanks @ptrblck. So do you mean I cannot use batch training, but must use the whole dataset as one batch? I think this is not what I want and it will raise an out-of-memory problem.
Sorry, I think I do not quite get your point. |
st104873 | Sorry for not explaining it clearly.
The use case would be to return N transformed samples instead of one:
def __getitem__(self, index):
x = self.data[index]
x1 = self.transform1(x)
x2 = self.transform2(x)
...
return x1, x2, ...
Now in your DataLoader you will get a batch of samples in all transformations. You could try to return a dict of transformed samples, if you find it easier to get the right one for the models. |
st104874 | Thanks @ptrblck. To be honest, I find this method a little cumbersome, as you need to carefully match each model to the corresponding portion of the input. What I would like is a concatenation of the output tensors from the different models, using the whole big tensor as input to the following ensemble classifier (for training). Anyway, thank you a lot for your kind help. |
st104875 | In my code, I have 2 models working on different GPUs, and I calculate the loss between their outputs. I have put the input on cuda(0), as well as the src_model, but an error occurred while training.
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 0 does not equal 1 (while checking arguments for cudnn_convolution) |
st104876 | Did you push the output of your first model to the GPU of the second model?
I tried to create an example in your other thread. Does this not work?
If so, could you provide a small executable code snippet? |
st104877 | The following is the code. The error occurred while calculating src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True)
dst_fc6, dst_fc7, dst_fc8, dst_scoremap = self.dst_model(dst_inputs, transfer=True)
src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True)
src_domain_outputs = self.discrinamitor([src_fc6.to(1), src_fc7.to(1), src_fc8.to(1)])
dst_domain_outputs = self.discrinamitor([dst_fc6, dst_fc7, dst_fc8]) |
st104878 | Which line throws this error?
Is self.discriminator on GPU1?
Could you check this by printing any layer of it? |
st104879 | src_fc6, src_fc7, src_fc8, src_scoremap = self.src_model(src_inputs, transfer=True) |
st104880 | Could you check the device of src_inputs and all the model parameters? Do you have any other parameters besides the layers in src_model?
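Something like this should reveal any mismatch (a quick sketch):
print(src_inputs.device)
for name, p in src_model.named_parameters():
    print(name, p.device) |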
st104881 | I want to create a minibatch based on the loss values for the samples. Loss based sampling simply means to sample data points based on loss values.
I want to recreate the results of this paper.
Here is the code I implemented, but I am not getting the desired results; I am getting around 75% test accuracy on MNIST.
class LossSampler(Sampler):
r"""Samples elements randomly, without replacement.
Arguments:
data_source (Dataset): dataset to sample from
"""
def __init__(self, model, data_source, train_data, train_target, batch_size):
self.model = model
self.data_source = data_source
self.batch_size = batch_size
self.data = train_data
self.data = torch.unsqueeze (self.data, 1)
self.data = self.data.type (torch.cuda.FloatTensor)
self.target = train_target
def get_scores(self):
output, feat = self.model.forward (self.data)
criterion = nn.CrossEntropyLoss (reduce=False)
loss = criterion (output, self.target)
return (loss, feat)
def __iter__(self):
num_batches = len (self.data_source) // self.batch_size
while num_batches > 0:
scores, feat = self.get_scores ()
sampled = []
sampled = torch.multinomial (scores, num_samples=self.batch_size)
yield sampled
num_batches -= 1
def __len__(self):
return len(self.data_source)
Is there any glaring error in the code that you can see? |
st104882 | Getting this error on this callback:
Traceback (most recent call last):
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/Documents/Internship Main/test/flaskintro.py", line 138, in upload
torch.save(model, "/Users/Documents/Internship Main/test/model/my_model.t7")
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/torch/serialization.py", line 161, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/torch/serialization.py", line 118, in _with_file_like
return body(f)
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/torch/serialization.py", line 161, in <lambda>
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/Users/Documents/Internship Main/test/venv/lib/python2.7/site-packages/torch/serialization.py", line 232, in _save
pickler.dump(obj)
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
And I don’t know why it won’t pickle correctly, or why it thinks that the model is of type instancemethod.
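A common workaround (a sketch; MyModel here is a stand-in for whatever class you actually use) is to save only the state_dict, which avoids pickling the module object and anything attached to it:
torch.save(model.state_dict(), "/Users/Documents/Internship Main/test/model/my_model.t7")
# later: rebuild the architecture, then load the weights
model = MyModel()  # hypothetical model class
model.load_state_dict(torch.load("/Users/Documents/Internship Main/test/model/my_model.t7")) |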
st104883 | Hi, I’m new to PyTorch. I have a simple RNN; during the forward() pass, at each step, I compute the loss (in a loop), which is very slow. Is there a way to speed things up?
I read that there is a way to avoid such loops using packed sequences, but I do not understand how.
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size, n_layers=1):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.encoder = nn.Embedding(input_size, hidden_size)
self.rnn = nn.RNN(hidden_size, hidden_size, n_layers)
self.decoder = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden):
input = self.encoder(input.view(1, -1))
output, hidden = self.rnn(input.view(1, 1, -1), hidden)
output = self.decoder(output.view(1, -1))
return output, hidden
def init_hidden(self):
return Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
And the loss function as
def loss(inputs, targets):
# reset the hidden layer
hidden = net.init_hidden()
loss = 0
chunk_len = inputs.shape[0]
for c in range(chunk_len):
output, hidden = net.forward(inputs[c], hidden)
output_ = output.view((1,-1))
target_ = targets[c].reshape((-1,1)).squeeze(1)
loss += criterion(output_, target_)
# mle loss
mle = loss.data / chunk_len
return mle |
st104884 | Solved by SimonW in post #4
it’s passed internally. you can still supply the starting hidden state if you want to |
st104885 | You are using nn.RNN, which supports taking in the entire sequence. So you don’t need to feed into it token-by-token. |
st104886 | Thanks, but what about the hidden state? How do we pass it, since we are passing the whole sequence? |
st104887 | It’s passed internally. You can still supply the starting hidden state if you want to.
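For example, a minimal sketch with the RNN module above (the sequence length is made up; criterion and targets are assumed, e.g. nn.CrossEntropyLoss and a LongTensor of next-token ids):
seq = torch.randint(0, net.input_size, (25,)).long()  # a whole sequence of token ids
hidden = net.init_hidden()
emb = net.encoder(seq).view(len(seq), 1, -1)     # (seq_len, batch=1, hidden)
output, hidden = net.rnn(emb, hidden)            # one call processes all time steps
output = net.decoder(output.view(len(seq), -1))  # (seq_len, output_size)
loss = criterion(output, targets)                # single loss over the whole sequence |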
st104888 | Hi,
I have a pre-trained model implemented and trained in TensorFlow and saved into the three common TF files ('.data', '.index' and '.meta').
Is there any way to transcode these files into another format that PyTorch can load to directly rebuild the same trained model?
I have heard about ONNX, but I still don’t know if it is relevant to this task.
Best Regards |
st104889 | I noticed a weird slowdown of the training phase when I accumulate the losses using .item() instead of .data[0] (note I am testing this code on a Google Colab GPU). The network is a relatively simple CNN:
import torch
import time
from torch.autograd import Variable
import torchvision
from torchvision import transforms, datasets
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
import torch.nn as nn
import torch.optim as optim
#these are the per channel mean and standard deviation of
#CIFAR10 image database. We will use these to normalize each
#channel to unit deviation with mean 0.
mean_CIFAR10=np.array([0.49139968, 0.48215841, 0.44653091])
std_CIFAR10=np.array([0.49139968, 0.48215841, 0.44653091])
#this transformation is used to transform the images to 0 mean and 1 std.
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize(mean_CIFAR10 , std_CIFAR10)])
#load the CIFAR10 training and test sets
training_set_CIFAR10 = datasets.CIFAR10(root = 'cifar10/',
transform = transform,
train = True,
download = True)
test_set_CIFAR10 = datasets.CIFAR10(root = 'cifar10/',
transform = transform,
train = False,
download = True)
print('Number of training examples:', len(training_set_CIFAR10))
print('Number of test examples:', len(test_set_CIFAR10))
#there are ten classes in the CIFAR10 database
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
#DataLoaders are used to iterate over the database images in batches rather
#one by one using for loops which is expensive in python since it is interpreted
training_loader_CIFAR10 = torch.utils.data.DataLoader(dataset=training_set_CIFAR10,
batch_size=512,
shuffle=True)
test_loader_CIFAR10 = torch.utils.data.DataLoader(dataset=test_set_CIFAR10,
batch_size=512,
shuffle=False)
#this function is used to test the accuracy of the model
#over the test set. The network cnn is defined later on in the code.
def test():
print('Started evaluating test accuracy...')
cnn.eval()
#calculate the accuracy of our model over the whole test set in batches
correct = 0
for x, y in test_loader_CIFAR10:
x, y = Variable(x).cuda(), y.cuda()
h = cnn.forward(x)
pred = h.data.max(1)[1]
correct += pred.eq(y).sum()
return correct/len(test_set_CIFAR10)
#Below we define the convolutional network class.
#See the beginning of the document for the architecture
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
#define the feature extraction layers
self.conv1 = torch.nn.Conv2d(3,16,kernel_size=3,stride=1,padding=1)
self.pool1 = nn.MaxPool2d(2, stride=2)
self.conv2 = torch.nn.Conv2d(16,32,kernel_size=3,stride=1,padding=1)
self.pool2 = nn.MaxPool2d(2, stride=2)
self.conv3 = torch.nn.Conv2d(32,64,kernel_size=3,stride=1,padding=1)
self.pool3 = nn.MaxPool2d(2, stride=2)
self.conv4 = torch.nn.Conv2d(64,128,kernel_size=3,stride=1,padding=1)
self.pool4 = nn.MaxPool2d(2, stride=2)
#define the categorization layers
self.full1=nn.Linear(512,512)
self.full2=nn.Linear(512, 256)
self.full3=nn.Linear(256,10)
#define the forward run for the input data x
def forward(self, x):
#convolutional feature extraction layers
x = F.relu(self.conv1(x))
x = self.pool1(x)
x = F.relu(self.conv2(x))
x = self.pool2(x)
x = F.relu(self.conv3(x))
x = self.pool3(x)
x = F.relu(self.conv4(x))
x = self.pool4(x)
#learning layers
x = x.view(-1,512)
x = F.relu(self.full1(x))
x = self.full2(x) #no relu here since we use crossentropyloss
return x
#this is the training function. cnn is the network that is defined later
#optimizer and learning rate lr are modified inside the function
def train(cycles,cost_criterion,cnn,optimizer):
average_cost=0 #cost function for the training
acc=0 #accuracy over the test set
for e in range(cycles): #cycle through the database many times
print('Cycle: ',e)
cnn.train()
loadt=0
cudat=0
forwardt=0
costt=0
stept=0
avcostt=0
        #the following for loop cycles over the training set in batches
        #of batch_size=512 using the training_loader object
s1 = time.clock()
t1 = time.clock()
for i, (x, y) in enumerate(training_loader_CIFAR10 ,0):
s2 = time.clock()
loadt=loadt+s2-s1
#here x,y will store data from the training set in batches
x, y = Variable(x).cuda(), Variable(y).cuda()
s3 = time.clock()
cudat=cudat+s3-s2
h = cnn.forward(x) #calculate hypothesis over the batch
s4 = time.clock()
forwardt=forwardt+s4-s3
cost = cost_criterion(h, y) #calculate cost the cost of the results
#print(type(cost))
s5 = time.clock()
costt=costt+s5-s4
optimizer.zero_grad() #set the gradients to 0
cost.backward() # calculate derivatives wrt parameters
optimizer.step() #update parameters
s6 = time.clock()
stept=stept+s6-s5
            average_cost+=cost.item() #accumulate the batch cost
s1 = time.clock()
avcostt=avcostt+s1-s6
t2 = time.clock()
print('total time %.2f loading time %.2f, cuda transfer time %.2f, forward time: %.2f, cost time %.2f, step time %.2f, average cost time %.2f'%(t2-t1,loadt,cudat,forwardt,costt,stept,avcostt))
average_cost=0
cycles = 50 #number of cycles that the training runs over the database
cost_criterion = torch.nn.CrossEntropyLoss() #cost function
cnn = ConvNet().cuda() #build the initial network (in the GPU)
optimizer=optim.Adam(cnn.parameters(), lr= 0.0001)
train(cycles,cost_criterion,cnn,optimizer)
torch.save(cnn.state_dict(), 'cnn_trained')
It happens when I try to accumulate the losses with
average_cost+=cost.item()
in PyTorch 0.4 (the full code is given above). The timing is as follows:
Cycle: 0
total time 16.31 loading time 10.85, cuda transfer time 0.11, forward time: 0.37, cost time 0.02, step time 0.80, average cost time 4.17
Cycle: 1
total time 16.32 loading time 10.84, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.80, average cost time 4.18
Cycle: 2
total time 16.32 loading time 10.84, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.80, average cost time 4.19
Cycle: 3
where total time is the time it takes the network to optimize over the whole dataset once, and average cost time is the time taken by the accumulation operation mentioned above. If I use .data[0] instead, I get:
Cycle: 0
total time 12.11 loading time 10.80, cuda transfer time 0.11, forward time: 0.38, cost time 0.02, step time 0.80, average cost time 0.01
Cycle: 1
total time 12.05 loading time 10.75, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.80, average cost time 0.01
Am I making a mistake elsewhere that affects this operation?
Even weirder is the following. I ran the same code with a more complicated network (a residual network). It shows the same behaviour, but something funny happens: when I replace .item() with .data[0], the time for accumulating the cost decreases, but the time for transferring the tensors to CUDA increases. The code is below:
import torch
import time
from torch.autograd import Variable
import torchvision
from torchvision import transforms, datasets
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
import torch.nn as nn
import torch.optim as optim
#these are the per-channel mean and standard deviation of
#the CIFAR10 image database. We will use these to normalize each
#channel to zero mean and unit standard deviation.
mean_CIFAR10=np.array([0.49139968, 0.48215841, 0.44653091])
std_CIFAR10=np.array([0.24703233, 0.24348505, 0.26158768]) #per-channel std (not the same values as the mean)
#this transformation is used to transform the images to 0 mean and 1 std.
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize(mean_CIFAR10 , std_CIFAR10)])
#load the CIFAR10 training and test sets
training_set_CIFAR10 = datasets.CIFAR10(root = 'cifar10/',
transform = transform,
train = True,
download = True)
test_set_CIFAR10 = datasets.CIFAR10(root = 'cifar10/',
transform = transform,
train = False,
download = True)
print('Number of training examples:', len(training_set_CIFAR10))
print('Number of test examples:', len(test_set_CIFAR10))
#there are ten classes in the CIFAR10 database
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
#DataLoaders are used to iterate over the database images in batches rather
#than one by one using for loops, which is expensive in Python since it is interpreted
training_loader_CIFAR10 = torch.utils.data.DataLoader(dataset=training_set_CIFAR10,
batch_size=512,
shuffle=True)
test_loader_CIFAR10 = torch.utils.data.DataLoader(dataset=test_set_CIFAR10,
batch_size=512,
shuffle=False)
#this function is used to test the accuracy of the model
#over the test set. The network cnn is defined later on in the code.
def test():
print('Started evaluating test accuracy...')
cnn.eval()
#calculate the accuracy of our model over the whole test set in batches
correct = 0
for x, y in test_loader_CIFAR10:
x, y = Variable(x).cuda(), y.cuda()
h = cnn.forward(x)
pred = h.data.max(1)[1]
        correct += pred.eq(y).sum().item() #.item() keeps correct a Python int
return correct/len(test_set_CIFAR10)
#These are the two types of the basic blocks in a residual network. The residual network
#in this code is built by concatenating several such blocks together.
#Basic blocks are of the form x -> D(x) + F(x), where D(x) is x downsampled
#to the same dimensions as F(x) by a single convolution and F(x) is collection of
#successive operations involving several convolutions and batchnorms.
class BasicResBlock1(nn.Module):
def __init__(self, input, output, downsample, stride=1):
super(BasicResBlock1, self).__init__()
self.conv1 = torch.nn.Conv2d(input,output,kernel_size=3,stride=stride,padding=1, bias=False)
self.batchNorm1 = torch.nn.BatchNorm2d(output)
self.conv2 = torch.nn.Conv2d(output,output,kernel_size=3,padding=1, stride=1, bias=False)
self.downsample=downsample
#applied to the residual to downsample
def forward(self,x1):
residual = self.downsample(x1)
x2 = self.conv1(x1)
x2 = self.batchNorm1(x2)
x2 = F.relu(x2,inplace=True)
x2 = self.conv2(x2)
x2+= residual
return x2
class BasicResBlock2(nn.Module):
def __init__(self, input, output):
super(BasicResBlock2, self).__init__()
self.conv1 = torch.nn.Conv2d(input,output,kernel_size=3,stride=1,padding=1, bias=False)
self.batchNorm1 = torch.nn.BatchNorm2d(input)
self.conv2 = torch.nn.Conv2d(output,output,kernel_size=3,padding=1, stride=1, bias=False)
self.batchNorm2 = torch.nn.BatchNorm2d(output)
self.batchNorm3 = torch.nn.BatchNorm2d(output)
def forward(self,x1):
residual = x1
x2 = self.batchNorm1(x1)
x2 = F.relu(x2,inplace=True)
        x2 = self.conv1(x2) #apply conv1 to the normalized, activated x2 rather than the raw input
x2 = self.batchNorm2(x2)
x2 = F.relu(x2,inplace=True)
x2 = self.conv2(x2)
x2+= residual
x2 = self.batchNorm3(x2)
x2 = F.relu(x2, inplace=True)
return x2
#Below we define the residual network class
class ResNet(nn.Module):
def __init__(self,width, number_of_blocks):
super(ResNet, self).__init__()
#these are the inital layers applied before basic blocks
self.conv1 = torch.nn.Conv2d(3,width,kernel_size=3,stride=1,padding=1, bias=False)
self.batchNorm1 = torch.nn.BatchNorm2d(width)
self.relu1 = nn.ReLU(inplace=True)
#resLayer1 is the basic block for the residual network that is formed by
#concatenating several basic blocks of increasing dimensions together.
self.downsample1=torch.nn.Conv2d(width,2*width,kernel_size=1,stride=1,bias=False)
self.downsample2=torch.nn.Conv2d(2*width,4*width,kernel_size=1,stride=2,bias=False)
self.downsample3=torch.nn.Conv2d(4*width,8*width,kernel_size=1,stride=2,bias=False)
self.resLayer1=[]
self.resLayer1.append(BasicResBlock1(width,2*width,self.downsample1,1))
for x in range (0, number_of_blocks[0]) : #stage1
self.resLayer1.append(BasicResBlock2(2*width,2*width))
self.resLayer1=nn.Sequential(*self.resLayer1)
self.resLayer2=[]
self.resLayer2.append(BasicResBlock1(2*width,4*width,self.downsample2,2)) #stage2
for x in range (0, number_of_blocks[1]) :
self.resLayer2.append(BasicResBlock2(4*width,4*width))
self.resLayer2=nn.Sequential(*self.resLayer2)
self.resLayer3=[]
self.resLayer3.append(BasicResBlock1(4*width,8*width,self.downsample3,2)) #stage3
for x in range (0, number_of_blocks[2]) :
self.resLayer3.append(BasicResBlock2(8*width,8*width))
self.resLayer3=nn.Sequential(*self.resLayer3)
self.avgpool1 = torch.nn.AvgPool2d(8,stride=1)
#define the final linear classifier layer
self.full1=nn.Linear(8*width,10)
#weight initializations
for m in self.modules():
if isinstance(m, nn.Conv2d):
                torch.nn.init.kaiming_normal_(m.weight, mode='fan_out')
            elif isinstance(m, nn.BatchNorm2d):
                torch.nn.init.constant_(m.weight, 1)
                torch.nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                torch.nn.init.kaiming_normal_(m.weight, mode='fan_out')
                torch.nn.init.constant_(m.bias, 0)
#define the forward run for the input data x
def forward(self, x):
#initial layers before basic blocks
x = self.conv1(x)
x = self.batchNorm1(x)
x = self.relu1(x)
#residual layers and then average pooling
        x = self.resLayer1(x)
        x = self.resLayer2(x)
        x = self.resLayer3(x)
x = self.avgpool1(x)
#linear classifier layer (since we
#use CrossEntropyLoss for the loss function
#which already has logsoftmax incorporated inside
#we dont have any activation function here.)
x = x.view(x.size(0), -1)
x = self.full1(x)
return x
#this is the training function. cnn is the network that is defined later;
#the cost criterion and optimizer are passed in as arguments
def train(cycles,cost_criterion,cnn,optimizer):
average_cost=0 #cost function for the training
acc=0 #accuracy over the test set
for e in range(cycles): #cycle through the database many times
print('Cycle: ',e)
cnn.train()
loadt=0
cudat=0
forwardt=0
costt=0
stept=0
avcostt=0
        #the following for loop cycles over the training set in batches
        #of batch_size=512 using the training_loader object
s1 = time.clock()
t1 = time.clock()
for i, (x, y) in enumerate(training_loader_CIFAR10 ,0):
s2 = time.clock()
loadt=loadt+s2-s1
#here x,y will store data from the training set in batches
x, y = Variable(x).cuda(), Variable(y).cuda()
s3 = time.clock()
cudat=cudat+s3-s2
h = cnn.forward(x) #calculate hypothesis over the batch
s4 = time.clock()
forwardt=forwardt+s4-s3
cost = cost_criterion(h, y) #calculate cost the cost of the results
#print(type(cost))
s5 = time.clock()
costt=costt+s5-s4
optimizer.zero_grad() #set the gradients to 0
cost.backward() # calculate derivatives wrt parameters
optimizer.step() #update parameters
s6 = time.clock()
stept=stept+s6-s5
            average_cost+=cost.data[0] #accumulate the batch cost
s1 = time.clock()
avcostt=avcostt+s1-s6
t2 = time.clock()
print('total time %.2f loading time %.2f, cuda transfer time %.2f, forward time: %.2f, cost time %.2f, step time %.2f, average cost time %.2f'%(t2-t1,loadt,cudat,forwardt,costt,stept,avcostt))
average_cost=0
cycles = 50 #number of cycles that the training runs over the database
cost_criterion = torch.nn.CrossEntropyLoss() #cost function
cnn = ResNet(16,[1, 1, 1]).cuda() #build the initial network (in the GPU)
optimizer=optim.Adam(cnn.parameters(), lr= 0.0001)
train(cycles,cost_criterion,cnn,optimizer)
torch.save(cnn.state_dict(), 'cnn_trained')
In this case, if I use .item() I get:
Cycle: 1
total time 51.80 loading time 10.91, cuda transfer time 0.10, forward time: 9.27, cost time 0.02, step time 2.63, average cost time 28.87
whereas if I use .data[0] I get:
Cycle: 1
total time 41.51 loading time 10.99, cuda transfer time 18.51, forward time: 9.34, cost time 0.02, step time 2.65, average cost time 0.01
I am completely confused. I checked the type of the cost, thinking maybe it is also a GPU tensor and I was somehow messing something up, but it is on the CPU.
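A minimal way to run that check is to inspect the loss tensor directly; a sketch, assuming PyTorch 0.4+, meant to be dropped into the training loop right after cost is computed:
print(type(cost))   # <class 'torch.Tensor'>
print(cost.device)  # cuda:0 when the model and inputs are on the GPU
print(cost.shape)   # torch.Size([]) -- the loss is a 0-dim scalar tensor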
st104890 | Solved by ptrblck in post #3
st104891 | That's kind of a lot of code to read through; it's hard to confirm visually things like whether it is on the CPU, for example. Question: are the costs you are getting similar? Or is, e.g., the .data[0] case giving weird, uninitialized values (or, say, always the same value)? What happens if you print both, trying first .data[0] and then .item(), and also the other way around?
st104892 | I think your timing might give weird results, because your synchronization points are different in both implementations.
Calling .item() on a tensor gives you a standard Python number, which is pushed to the CPU.
This line of code would add a synchronization to wait for the GPU to finish calculating.
I’m not completely sure, if that’s also the case calling .data[0] without e.g. a print statement or if the operation can run asynchronously.
If you would like to time certain operations, you should call torch.cuda.synchronize() before stopping the timer.
This could make your overall script slower, but will yield valid timing results. |
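A minimal sketch of that timing pattern (the helper name timed is hypothetical, and a CUDA device is assumed):
import time
import torch

def timed(fn):
    # drain any already-queued kernels before starting the clock
    torch.cuda.synchronize()
    start = time.perf_counter()
    result = fn()
    # wait until the kernels launched by fn have actually finished
    torch.cuda.synchronize()
    return result, time.perf_counter() - start

# usage, e.g.: out, secs = timed(lambda: cnn(x))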
st104893 | I see, so basically the code can pass to the next line of calculation before the GPU finishes computing, and I should put a sync wherever I either copy something to the GPU or do some calculation on it. I will update the code as you suggested and try again.
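A sketch of what that instrumented inner loop might look like (the names cnn, cost_criterion and optimizer are taken from the code above; time.perf_counter is substituted for time.clock here):
torch.cuda.synchronize(); t0 = time.perf_counter()
h = cnn(x)  # forward pass
torch.cuda.synchronize(); t1 = time.perf_counter()
cost = cost_criterion(h, y)
optimizer.zero_grad()
cost.backward()
optimizer.step()  # backward pass + parameter update
torch.cuda.synchronize(); t2 = time.perf_counter()
forward_time = t1 - t0  # forward only
step_time = t2 - t1     # loss + backward + update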
st104894 | Yes, the GPU operation can be performed in the background while your Python script continues its execution.
Once you get to a point where you push your GPU op result to the CPU or print it, the script has to wait for the GPU, so a sync point will be added automatically.
Timing is therefore a bit complicated, because it often doesn't show the true GPU op times.
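A minimal sketch of this effect (a CUDA device is assumed; the matrix size is arbitrary):
import time
import torch

x = torch.randn(4096, 4096, device='cuda')
torch.cuda.synchronize()

t0 = time.perf_counter()
y = x @ x           # the kernel is launched and control returns immediately
t1 = time.perf_counter()
s = y.sum().item()  # reading a Python number off the GPU forces an implicit sync
t2 = time.perf_counter()

print('launch: %.4fs, item(): %.4fs' % (t1 - t0, t2 - t1))
# typically the launch is near-instant and almost all the time shows up at .item()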
st104895 | So here is what happens when I sync (with the residual network)
total time 52.23 loading time 11.10, cuda transfer time 0.10, forward time: 12.89, cost time 0.03, step time 28.09, average cost time with data 0.02, average cost time with item 0.01
whereas previously it was (with .item())
total time 51.80 loading time 10.91, cuda transfer time 0.10, forward time: 9.27, cost time 0.02, step time 2.63, average cost time 28.87
and with .data[0]
total time 41.51 loading time 10.99, cuda transfer time 18.51, forward time: 9.34, cost time 0.02, step time 2.65, average cost time 0.01
So now there is no difference between .data[0] and .item(), and the step time (the time for the .backward + .step operations) increases. Now most of the time is spent on the backward and forward calculations, as expected.
I assume that when one uses .data[0] instead of .item(), some of the sync happens while loading the next batch of data, which seems to be what inflates the GPU transfer time (this is where the sync lands when one uses .data[0] instead of .item()). This mostly resolves the question, thanks. Still not sure why .data[0] decreases the overall time spent, though; I'm going to do some more tests to see whether it really decreases or my timers miss a sync step somewhere.
st104896 | iAvicenna:
Still not sure why .data[0] decreases the overall time spent though.
This is why I’d like to see a comparison of the results of both operations |
st104897 | I tried what you suggested with the simpler convolutional network, since that was where the total time difference appeared. Both .data[0] and .item() accumulate the losses correctly no matter what order they are in.
This is when .data[0] is before .item():
Cycle: 0
total time 16.52 loading time 11.07, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 3.51052 cost with .item() is 3.51052
Cycle: 1
total time 16.56 loading time 11.10, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 2.21125 cost with .item() is 2.21125
Cycle: 2
total time 16.61 loading time 11.17, cuda transfer time 0.11, forward time: 1.39, cost time 0.02, step time 3.89, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 2.01861 cost with .item() is 2.01861
This is when .item() is before .data[0]:
Cycle: 0
total time 16.52 loading time 11.08, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.89, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 3.51901 cost with .item() is 3.51901
Cycle: 1
total time 16.51 loading time 11.06, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 2.23629 cost with .item() is 2.23629
Cycle: 3
total time 16.47 loading time 11.03, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.01
cost with data[0] is 2.05838 cost with .item() is 2.05838
This is when I only use .data[0]
Cycle: 0
total time 16.55 loading time 11.10, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with data 0.01
cost with data[0] is 3.47288
Cycle: 1
total time 16.54 loading time 11.10, cuda transfer time 0.11, forward time: 1.40, cost time 0.02, step time 3.90, average cost time with data 0.01
cost with data[0] is 2.18770
Note that in the previous attempts, when I used only .data[0] with no sync, it was:
Cycle: 0
total time 12.11 loading time 10.80, cuda transfer time 0.11, forward time: 0.38, cost time 0.02, step time 0.80, average cost time 0.01
Cycle: 1
total time 12.05 loading time 10.75, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.80, average cost time 0.01
Does that mean that when you use .data[0] there is some synchronization that actually never happens unless you specifically ask for a manual sync? This behaviour only happens in smaller networks, so it might also be related to the amount of memory available on the GPU.
st104898 | Can you rerun the first two sections with no syncs please?
(edit: because the concern I have is that maybe you are getting the result earlier in the .data case because it's not waiting for the data to arrive first, and just giving you some garbage result)
st104899 | The results are below. It seems almost as if .item() forces a synchronization that is not needed when .data[0] is used. Most of the difference is definitely related to the synchronization of the calculations that happen at the update step.
When I put the sync before .data[0] the total time is about 12, whereas if I put it at the end of the cycle it is 16. If I remove all syncs inside the for loop over the database but put one after the whole loop, just before the final measurement is made, the time is again 12. What does this suggest? That there is some unnecessary sync done by .item() that is not done by .data[0]?
This is when I remove sync everywhere and .item() is before .data[0]:
Cycle: 0
total time 16.41 loading time 10.95, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.02, average cost time with .item() 4.19
cost with data[0] is 3.33067 cost with .item() is 3.33067
Cycle: 1
total time 16.56 loading time 11.09, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.79, average cost time with .data[0] 0.01, average cost time with .item() 4.19
cost with data[0] is 2.20410 cost with .item() is 2.20410
Cycle: 2
total time 16.61 loading time 11.12, cuda transfer time 0.11, forward time: 0.37, cost time 0.02, step time 0.80, average cost time with .data[0] 0.01, average cost time with .item() 4.18
cost with data[0] is 2.05243 cost with .item() is 2.05243
And when .data[0] is before .item()
Cycle: 0
total time 16.26 loading time 10.79, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 4.18
cost with data[0] is 3.50083 cost with .item() is 3.50083
Cycle: 1
total time 16.31 loading time 10.85, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 4.18
cost with data[0] is 2.17939 cost with .item() is 2.17939
Cycle: 2
total time 16.19 loading time 10.73, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 4.19
cost with data[0] is 2.01196 cost with .item() is 2.01196
When I comment out .item() (i.e. only .data[0]):
Cycle: 0
total time 11.96 loading time 10.69, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 3.57116 cost with .item() is 0.00000
Cycle: 1
total time 11.92 loading time 10.65, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.27203 cost with .item() is 0.00000
Cycle: 2
total time 11.95 loading time 10.68, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.08692 cost with .item() is 0.00000
Cycle: 3
total time 11.88 loading time 10.61, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.92515 cost with .item() is 0.00000
When I comment out .item() (i.e. only .data[0]) and put torch.cuda.synchronize() at the end of the cycle:
Cycle: 0
total time 16.23 loading time 10.77, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.78, average cost time with .data[0] 0.01, average cost time with .item() 4.18
cost with data[0] is 3.46457 cost with .item() is 0.00000
Cycle: 1
total time 16.06 loading time 10.62, cuda transfer time 0.11, forward time: 0.35, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 4.20
cost with data[0] is 2.20793 cost with .item() is 0.00000
When I comment out .item() (i.e. only .data[0]) and put torch.cuda.synchronize() BEFORE the accumulation of the cost with .data[0]:
Cycle: 0
total time 16.53 loading time 11.06, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 4.97, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 3.53346 cost with .item() is 0.00000
Cycle: 1
total time 16.47 loading time 11.00, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 4.96, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.23358 cost with .item() is 0.00000
Cycle: 2
total time 16.43 loading time 10.97, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 4.95, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.03888 cost with .item() is 0.00000
Cycle: 3
total time 16.42 loading time 10.95, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 4.97, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.92610 cost with .item() is 0.00000
When I comment out .item() (i.e. only .data[0]) and put torch.cuda.synchronize() BEFORE the accumulation of the cost with .data[0] and BEFORE the update phase (i.e. step time):
Cycle: 0
total time 16.11 loading time 10.69, cuda transfer time 0.11, forward time: 0.35, cost time 1.05, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 3.52324 cost with .item() is 0.00000
Cycle: 1
total time 16.10 loading time 10.67, cuda transfer time 0.11, forward time: 0.36, cost time 1.05, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.19925 cost with .item() is 0.00000
Cycle: 2
total time 16.13 loading time 10.70, cuda transfer time 0.11, forward time: 0.36, cost time 1.05, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.97843 cost with .item() is 0.00000
Cycle: 3
total time 16.10 loading time 10.68, cuda transfer time 0.11, forward time: 0.36, cost time 1.04, step time 3.90, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.86069 cost with .item() is 0.00000
Finally, when I remove all the syncs inside the loop that iterates over the database but put one just before the timer that is read once the loop is finished:
Cycle: 0
total time 11.96 loading time 10.67, cuda transfer time 0.11, forward time: 0.35, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 3.41078 cost with .item() is 0.00000
Cycle: 1
total time 12.03 loading time 10.73, cuda transfer time 0.11, forward time: 0.35, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 2.16014 cost with .item() is 0.00000
Cycle: 2
total time 12.03 loading time 10.73, cuda transfer time 0.11, forward time: 0.36, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.98470 cost with .item() is 0.00000
Cycle: 3
total time 12.00 loading time 10.71, cuda transfer time 0.11, forward time: 0.35, cost time 0.02, step time 0.77, average cost time with .data[0] 0.01, average cost time with .item() 0.00
cost with data[0] is 1.84018 cost with .item() is 0.00000
P.S.: In light of these findings I changed the title of the topic to be less misleading.