instruction | input | output | source
---|---|---|---|
Why does the CrossEntropy loss not go down during training of my network? | I am fairly new to Pytorch and I am currently trying to implement the network in this paper: https://arxiv.org/pdf/1811.06621.pdf?fbclid=IwAR3Ya9ZfBNN40UO0wct7dGupjlBFEpU47IRHK-wXmejI4U2UQGf03sXHMlw.
I have provided the class for this network and some training code that uses dummy data. The code compiles and runs but the loss that's printed each iteration is always the same (8.371). This leads me to believe that there is something wrong with the way I implemented my network. Is there anything glaringly wrong with my implementation?
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
torch.manual_seed(1)
# Hyper Parameters
sequence_length = 1
input_size = 320
hidden_size = 2048
recurrent_size = 640
num_layers = 8
num_classes = 10
batch_size = 10
num_epochs = 2
learning_rate = 0.01
# RNNT Model
class RNNTModel(nn.Module):
def __init__(self, input_size, hidden_size, recurrent_size, bias=True):
super(RNNTModel, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.recurrent_size = recurrent_size
self.bias = bias
self.downsample_fc = nn.Linear(self.recurrent_size * 2, self.recurrent_size)
self.joint_fc = nn.Linear(self.recurrent_size * 2, self.recurrent_size)
self.out_fc = nn.Linear(640, 4096)
self.softmax = nn.LogSoftmax(dim=1)
self.encoder_1 = nn.ModuleDict({
'lstm1': nn.LSTM(self.input_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj1': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm2': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj2': nn.Linear(self.hidden_size, self.recurrent_size)
})
self.encoder_2 = nn.ModuleDict({
'lstm3': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj3': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm4': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj4': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm5': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj5': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm6': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj6': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm7': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj7': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm8': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj8': nn.Linear(self.hidden_size, self.recurrent_size)
})
self.prediction_net = nn.ModuleDict({
'fc1': nn.Linear(4096, 76),
'lstm1': nn.LSTM(76, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj1': nn.Linear(self.hidden_size, self.recurrent_size),
'lstm2': nn.LSTM(self.recurrent_size, self.hidden_size, num_layers=1, bias=bias, batch_first=True),
'proj2': nn.Linear(self.hidden_size, self.recurrent_size)
})
def forward(self, x, ):
y = [torch.zeros(1, x.size(1), 4096)]
for i in range(x.size(0) // 2):
# Unrolled loop of encoder 1
enc_out, (h1, c1) = self.encoder_1['lstm1'](torch.stack([x[2 * i], x[2 * i + 1]]))
enc_out = self.encoder_1['proj1'](enc_out)
enc_out, _ = self.encoder_1['lstm2'](enc_out)
enc_out = self.encoder_1['proj2'](enc_out)
# Downsample by halving framerate
enc_out = enc_out.view(1, -1, 2 * self.recurrent_size)
enc_out = self.downsample_fc(enc_out)
# Unrolled loop of encoder 2
enc_out, _ = self.encoder_2['lstm3'](enc_out)
enc_out = self.encoder_2['proj3'](enc_out)
enc_out, _ = self.encoder_2['lstm4'](enc_out)
enc_out = self.encoder_2['proj4'](enc_out)
enc_out, _ = self.encoder_2['lstm5'](enc_out)
enc_out = self.encoder_2['proj5'](enc_out)
enc_out, _ = self.encoder_2['lstm6'](enc_out)
enc_out = self.encoder_2['proj6'](enc_out)
enc_out, _ = self.encoder_2['lstm7'](enc_out)
enc_out = self.encoder_2['proj3'](enc_out)
enc_out, _ = self.encoder_2['lstm7'](enc_out)
enc_out = self.encoder_2['proj3'](enc_out)
enc_out, _ = self.encoder_2['lstm8'](enc_out)
enc_out = self.encoder_2['proj8'](enc_out)
# Unrolled loop of prediction net
pred_out = self.prediction_net['fc1'](y[i])
pred_out, _ = self.prediction_net['lstm1'](pred_out)
pred_out = self.prediction_net['proj1'](pred_out)
pred_out, _ = self.prediction_net['lstm2'](pred_out)
pred_out = self.prediction_net['proj2'](pred_out)
# Unrolled loop of joint layers
joint_out = torch.cat([enc_out, pred_out], dim=-1)
joint_out = self.joint_fc(joint_out)
joint_out = self.out_fc(joint_out)
joint_out = self.softmax(joint_out)
y.append(joint_out)
return(torch.stack(y[1:]))
rnnt = RNNTModel(input_size, hidden_size, recurrent_size, bias=True)
# y = rnnt(torch.rand(batch_size, sequence_length, input_size))
training_data = [(torch.rand(batch_size, sequence_length, input_size), torch.ones(batch_size//2, 1, 4096).long()) for _ in range(100)]
# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnnt.parameters(), lr=learning_rate)
# Train the model
for epoch in range(num_epochs):
for i, (x, y) in enumerate(training_data):
x = Variable(x)
y = Variable(y)
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = rnnt(x).view(-1, 4096)
loss = criterion(outputs, torch.max(y, 2)[1].squeeze())
loss.backward()
optimizer.step
if (i+1) % 1 == 0:
print('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(training_data)//batch_size, loss.item()))
| I think you want optimizer.step() instead of optimizer.step.
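In other words, the end of the training loop should call the method rather than just reference it:

optimizer.zero_grad()
loss.backward()
optimizer.step()  # the parentheses matter: `optimizer.step` alone never updates the parameters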
| https://stackoverflow.com/questions/57081738/ |
Does Tensorflow or Pytorch work on RTX 20xx super series | I'm going to buy a new gpu to learn deep learning. The new Nvidia RTX 2060 Super seems to best fit my needs. But I wonder that is it compatible with CUDA and tensorflow or pytorch now?
| I bought the same VGA a few days ago and am using it.
I tested it; it works fine in Python (I used the numpy library) with CUDA 10.1.
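A quick sanity check you can run (a minimal sketch, assuming a CUDA-enabled build of PyTorch is installed):

import torch

print(torch.cuda.is_available())      # expected: True
print(torch.cuda.get_device_name(0))  # expected: the name of the card, e.g. the RTX 2060 Super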
| https://stackoverflow.com/questions/57088166/ |
A strange behavior about pytorch tensor. Any one can explain it clear? | When I create a PyTorch tensor and tried to explore its type, I found this:
>>> a = torch.rand(3,5)
>>> a
tensor([[0.3356, 0.0968, 0.2185, 0.9843, 0.7846],
[0.8523, 0.3300, 0.7181, 0.2692, 0.6523],
[0.0523, 0.9344, 0.3505, 0.8901, 0.6464]])
>>> type(a)
<class 'torch.Tensor'>
>>> a = a.cuda()
>>> a.is_cuda
True
>>> type(a)
<class 'torch.Tensor'>
>>> a.dtype
torch.float32
>>>
Why is type(a) still torch.Tensor rather than torch.cuda.Tensor, even though this tensor is already on GPU?
| You got me digging there, but apparently the built-in type() does not work for type detection since 0.4.0 (see this comment and this point in the migration guide).
To get the proper type, torch.Tensor has a type() member, which can simply be used:
import torch
a = torch.rand(3, 5)
print(a)
print(a.type())
a = a.to("cuda")
print(a.is_cuda)
print(a.type())
which yields, as expected:
tensor([[0.5060, 0.6998, 0.5054, 0.4822, 0.4408],
[0.7686, 0.4508, 0.4968, 0.4460, 0.7352],
[0.1810, 0.6953, 0.7499, 0.7105, 0.1776]])
torch.FloatTensor
True
torch.cuda.FloatTensor
However, I am unsure about the rationale behind this decision and could not find anything relevant beyond that.
| https://stackoverflow.com/questions/57104935/ |
Why do the images generated by a GAN get darker as the network trains more? | I created a simple DCGAN with 6 layers and trained it on the CelebA dataset (a portion of it containing 30K images).
I noticed that the images my network generates look dim, and as the network trains more, the bright colors fade into dim ones!
Here are some examples:
This is what CelebA images look like (real images used for training):
And these are the generated ones; the number shows the epoch number (they were trained for 30 epochs ultimately):
What is the cause for this phenomenon?
I tried all the general tricks concerning GANs, such as rescaling the input image to between -1 and 1, not using BatchNorm in the first layer of the Discriminator and the last layer of the Generator, or
using LeakyReLU(0.2) in the Discriminator and ReLU for the Generator. Yet I have no idea why the images are this dim/dark!
Is this simply caused by having fewer training images?
Or is it caused by the network's deficiencies? If so, what is the source of such deficiencies?
Here is how these networks are implemented:
def conv_batch(in_dim, out_dim, kernel_size, stride, padding, batch_norm=True):
layers = nn.ModuleList()
conv = nn.Conv2d(in_dim, out_dim, kernel_size, stride, padding, bias=False)
layers.append(conv)
if batch_norm:
layers.append(nn.BatchNorm2d(out_dim))
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim=32, act = nn.ReLU()):
super().__init__()
self.conv_dim = conv_dim
self.act = act
self.conv1 = conv_batch(3, conv_dim, 4, 2, 1, False)
self.conv2 = conv_batch(conv_dim, conv_dim*2, 4, 2, 1)
self.conv3 = conv_batch(conv_dim*2, conv_dim*4, 4, 2, 1)
self.conv4 = conv_batch(conv_dim*4, conv_dim*8, 4, 1, 1)
self.conv5 = conv_batch(conv_dim*8, conv_dim*10, 4, 2, 1)
self.conv6 = conv_batch(conv_dim*10, conv_dim*10, 3, 1, 1)
self.drp = nn.Dropout(0.5)
self.fc = nn.Linear(conv_dim*10*3*3, 1)
def forward(self, input):
batch = input.size(0)
output = self.act(self.conv1(input))
output = self.act(self.conv2(output))
output = self.act(self.conv3(output))
output = self.act(self.conv4(output))
output = self.act(self.conv5(output))
output = self.act(self.conv6(output))
output = output.view(batch, self.fc.in_features)
output = self.fc(output)
output = self.drp(output)
return output
def deconv_convtranspose(in_dim, out_dim, kernel_size, stride, padding, batchnorm=True):
layers = []
deconv = nn.ConvTranspose2d(in_dim, out_dim, kernel_size = kernel_size, stride=stride, padding=padding)
layers.append(deconv)
if batchnorm:
layers.append(nn.BatchNorm2d(out_dim))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size=100, conv_dim=32):
super().__init__()
self.conv_dim = conv_dim
# make the 1d input into a 3d output of shape (conv_dim*4, 4, 4 )
self.fc = nn.Linear(z_size, conv_dim*4*4*4)#4x4
# conv and deconv layer work on 3d volumes, so we now only need to pass the number of fmaps and the
# input volume size (its h,w which is 4x4!)
self.drp = nn.Dropout(0.5)
self.deconv1 = deconv_convtranspose(conv_dim*4, conv_dim*3, kernel_size =4, stride=2, padding=1)
self.deconv2 = deconv_convtranspose(conv_dim*3, conv_dim*2, kernel_size =4, stride=2, padding=1)
self.deconv3 = deconv_convtranspose(conv_dim*2, conv_dim, kernel_size =4, stride=2, padding=1)
self.deconv4 = deconv_convtranspose(conv_dim, conv_dim, kernel_size =3, stride=2, padding=1)
self.deconv5 = deconv_convtranspose(conv_dim, 3, kernel_size =4, stride=1, padding=1, batchnorm=False)
def forward(self, input):
output = self.fc(input)
output = self.drp(output)
output = output.view(-1, self.conv_dim*4, 4, 4)
output = F.relu(self.deconv1(output))
output = F.relu(self.deconv2(output))
output = F.relu(self.deconv3(output))
output = F.relu(self.deconv4(output))
# we create the image using tanh!
output = F.tanh(self.deconv5(output))
return output
# testing nets
dd = Discriminator()
zd = np.random.rand(2,3,64,64)
zd = torch.from_numpy(zd).float()
# print(dd)
print(dd(zd).shape)
gg = Generator()
z = np.random.uniform(-1,1,size=(2,100))
z = torch.from_numpy(z).float()
print(gg(z).shape)
| I think that the problem lies rather in the architecture itself and I would first consider the overall quality of generated images rather than their brightness or darkness. The generations clearly get better as you train for more epochs. I agree that the images get darker but even in the early epochs, the generated images are significantly darker than the ones in the training samples. (At least compared to ones that you posted.)
And now coming back to your architecture, 30k samples are actually enough to obtain very convincing results, as achieved by state-of-the-art models in face generation. The generations do get better, but they are still far away from being "very good".
I think the generator is definitely not strong enough and is the problematic part. (The fact that your generator loss skyrockets can also be a hint for this.) In the generator, all you do is just upsampling and upsampling. You should note that the transposed convolution is more like a heuristic and it does not provide much learnability. This is related to the nature of the problem. When you are doing convolutions, you have all the information and you are trying to learn to encode but in the decoder, you are trying to recover information that was previously lost :). So, in a way, it is harder to learn because the information taken as input is limited and lacking.
In fact, deterministic bilinear interpolation methods do perform similar or even better than transposed convolutions and these are purely based on scaling/extending with zero learnability. (https://arxiv.org/pdf/1707.05847.pdf)
To observe the transposed convolutions' limits, I suggest that you replace all the Transposedconv2d with UpSampling2D (https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D) and I claim that the results will not be much different. UpSampling2D is one of those deterministic methods that I mentioned.
To improve your generator, you can try to insert convolutional layers between upsampling layers. These layers would refine the features/images and correct some of the mistakes that occurred during the up-sampling. In addition to corrections, the next upsampling layer would take a more informative input. What I mean is to try a UNet like decoding that you can find in this link (https://arxiv.org/pdf/1505.04597.pdf). Of course, that would be a primary step to explore. There are many more GAN architectures that you can try and probably perform better.
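As a rough sketch of that idea in PyTorch (the answer links to Keras' UpSampling2D; torch.nn.Upsample plays the same role here, and the channel counts are made up rather than taken from the question's model), one decoder stage could look like:

import torch.nn as nn

# Deterministic nearest-neighbour upsampling followed by a convolution that
# refines the upsampled features before they reach the next stage.
decoder_stage = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)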
| https://stackoverflow.com/questions/57119171/ |
How do I compile a pytorch script into an exe file with small size? | I have a semantic segmentation model using PyTorch. In order to participate in a competition, I am compiling the test.py to an exe file with PyInstaller and UPX. Although the resulting executable file runs correctly, its size is nearly 800MB. How do I make it smaller?
This is my test.py:
from torch import nn
from torch.autograd import Variable as V
from torch import Tensor
from torch import cuda
from torch import load
import cv2
import os
import numpy as np
from time import time
from networks.unet import Unet
# from networks.dunet import Dunet
# from networks.dinknet import LinkNet34, DinkNet34, DinkNet50, DinkNet101, DinkNet34_less_pool
# from networks.dinkbranch import DinkBranch50, DinkBranch34
BATCHSIZE_PER_CARD = 2
class TTAFrame():
def __init__(self, net):
self.net = net().cuda()
self.net = nn.DataParallel(self.net, device_ids=range(cuda.device_count()))
def test_one_img_from_path(self, path, evalmode = True):
if evalmode:
self.net.eval()
batchsize = cuda.device_count() * BATCHSIZE_PER_CARD
if batchsize >= 8:
return self.test_one_img_from_path_1(path)
elif batchsize >= 4:
return self.test_one_img_from_path_2(path)
elif batchsize >= 2:
return self.test_one_img_from_path_4(path)
def test_one_img_from_path_8(self, path):
img = cv2.imread(path)#.transpose(2,0,1)[None]
img90 = np.array(np.rot90(img))
img1 = np.concatenate([img[None],img90[None]])
img2 = np.array(img1)[:,::-1]
img3 = np.array(img1)[:,:,::-1]
img4 = np.array(img2)[:,:,::-1]
img1 = img1.transpose(0,3,1,2)
img2 = img2.transpose(0,3,1,2)
img3 = img3.transpose(0,3,1,2)
img4 = img4.transpose(0,3,1,2)
img1 = V(Tensor(np.array(img1, np.float32)/255.0 * 3.2 -1.6).cuda())
img2 = V(Tensor(np.array(img2, np.float32)/255.0 * 3.2 -1.6).cuda())
img3 = V(Tensor(np.array(img3, np.float32)/255.0 * 3.2 -1.6).cuda())
img4 = V(Tensor(np.array(img4, np.float32)/255.0 * 3.2 -1.6).cuda())
maska = self.net.forward(img1).squeeze().cpu().data.numpy()
maskb = self.net.forward(img2).squeeze().cpu().data.numpy()
maskc = self.net.forward(img3).squeeze().cpu().data.numpy()
maskd = self.net.forward(img4).squeeze().cpu().data.numpy()
mask1 = maska + maskb[:,::-1] + maskc[:,:,::-1] + maskd[:,::-1,::-1]
mask2 = mask1[0] + np.rot90(mask1[1])[::-1,::-1]
return mask2
def test_one_img_from_path_4(self, path):
img = cv2.imread(path)#.transpose(2,0,1)[None]
img90 = np.array(np.rot90(img))
img1 = np.concatenate([img[None],img90[None]])
img2 = np.array(img1)[:,::-1]
img3 = np.array(img1)[:,:,::-1]
img4 = np.array(img2)[:,:,::-1]
img1 = img1.transpose(0,3,1,2)
img2 = img2.transpose(0,3,1,2)
img3 = img3.transpose(0,3,1,2)
img4 = img4.transpose(0,3,1,2)
img1 = V(Tensor(np.array(img1, np.float32)/255.0 * 3.2 -1.6).cuda())
img2 = V(Tensor(np.array(img2, np.float32)/255.0 * 3.2 -1.6).cuda())
img3 = V(Tensor(np.array(img3, np.float32)/255.0 * 3.2 -1.6).cuda())
img4 = V(Tensor(np.array(img4, np.float32)/255.0 * 3.2 -1.6).cuda())
maska = self.net.forward(img1).squeeze().cpu().data.numpy()
maskb = self.net.forward(img2).squeeze().cpu().data.numpy()
maskc = self.net.forward(img3).squeeze().cpu().data.numpy()
maskd = self.net.forward(img4).squeeze().cpu().data.numpy()
mask1 = maska + maskb[:,::-1] + maskc[:,:,::-1] + maskd[:,::-1,::-1]
mask2 = mask1[0] + np.rot90(mask1[1])[::-1,::-1]
return mask2
def test_one_img_from_path_2(self, path):
img = cv2.imread(path)#.transpose(2,0,1)[None]
img90 = np.array(np.rot90(img))
img1 = np.concatenate([img[None],img90[None]])
img2 = np.array(img1)[:,::-1]
img3 = np.concatenate([img1,img2])
img4 = np.array(img3)[:,:,::-1]
img5 = img3.transpose(0,3,1,2)
img5 = np.array(img5, np.float32)/255.0 * 3.2 -1.6
img5 = V(Tensor(img5).cuda())
img6 = img4.transpose(0,3,1,2)
img6 = np.array(img6, np.float32)/255.0 * 3.2 -1.6
img6 = V(Tensor(img6).cuda())
maska = self.net.forward(img5).squeeze().cpu().data.numpy()#.squeeze(1)
maskb = self.net.forward(img6).squeeze().cpu().data.numpy()
mask1 = maska + maskb[:,:,::-1]
mask2 = mask1[:2] + mask1[2:,::-1]
mask3 = mask2[0] + np.rot90(mask2[1])[::-1,::-1]
return mask3
def test_one_img_from_path_1(self, path):
img = cv2.imread(path)#.transpose(2,0,1)[None]
img90 = np.array(np.rot90(img))
img1 = np.concatenate([img[None],img90[None]])
img2 = np.array(img1)[:,::-1]
img3 = np.concatenate([img1,img2])
img4 = np.array(img3)[:,:,::-1]
img5 = np.concatenate([img3,img4]).transpose(0,3,1,2)
img5 = np.array(img5, np.float32)/255.0 * 3.2 -1.6
img5 = V(Tensor(img5).cuda())
mask = self.net.forward(img5).squeeze().cpu().data.numpy()#.squeeze(1)
mask1 = mask[:4] + mask[4:,:,::-1]
mask2 = mask1[:2] + mask1[2:,::-1]
mask3 = mask2[0] + np.rot90(mask2[1])[::-1,::-1]
return mask3
def load(self, path):
self.net.load_state_dict(load(path))
#source = 'dataset/test/'
import sys
if len(sys.argv) < 2:
arg1 = r'dataset/504/original'
else:
arg1 = sys.argv[1]
# source = r'dataset/504/original'
source = arg1
source_path = os.path.join(os.getcwd(), source)
val = os.listdir(source_path)
solver = TTAFrame(Unet)
model_path = '/'
model_path = r'weights/log02_Unet.th'
solver.load(os.path.join(os.getcwd(), model_path))
tic = time()
target = r'submits/log02_baseline504'
target_path = os.path.join(os.getcwd(), target)
if os.path.exists(target_path):
pass
else:
os.makedirs(target_path)
for i,name in enumerate(val):
if i%10 == 0:
print(i/10, ' ','%.2f'%(time()-tic))
mask = solver.test_one_img_from_path(os.path.join(source_path, name))
mask[mask>4.0] = 255
mask[mask<=4.0] = 0
mask = np.concatenate([mask[:,:,None],mask[:,:,None],mask[:,:,None]],axis=2)
cv2.imwrite(target_path+r'/'+name[:-7]+'mask.png', mask.astype(np.uint8))
This is the 'Unet' file:
from torch import autograd, cat
from torch import nn
class Unet(nn.Module):
def __init__(self):
super(Unet, self).__init__()
self.down1 = self.conv_stage(3, 8)
self.down2 = self.conv_stage(8, 16)
self.down3 = self.conv_stage(16, 32)
self.down4 = self.conv_stage(32, 64)
self.down5 = self.conv_stage(64, 128)
self.down6 = self.conv_stage(128, 256)
self.down7 = self.conv_stage(256, 512)
self.center = self.conv_stage(512, 1024)
#self.center_res = self.resblock(1024)
self.up7 = self.conv_stage(1024, 512)
self.up6 = self.conv_stage(512, 256)
self.up5 = self.conv_stage(256, 128)
self.up4 = self.conv_stage(128, 64)
self.up3 = self.conv_stage(64, 32)
self.up2 = self.conv_stage(32, 16)
self.up1 = self.conv_stage(16, 8)
self.trans7 = self.upsample(1024, 512)
self.trans6 = self.upsample(512, 256)
self.trans5 = self.upsample(256, 128)
self.trans4 = self.upsample(128, 64)
self.trans3 = self.upsample(64, 32)
self.trans2 = self.upsample(32, 16)
self.trans1 = self.upsample(16, 8)
self.conv_last = nn.Sequential(
nn.Conv2d(8, 1, 3, 1, 1),
nn.Sigmoid()
)
self.max_pool = nn.MaxPool2d(2)
for m in self.modules():
if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
if m.bias is not None:
m.bias.data.zero_()
def conv_stage(self, dim_in, dim_out, kernel_size=3, stride=1, padding=1, bias=True, useBN=False):
if useBN:
return nn.Sequential(
nn.Conv2d(dim_in, dim_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias),
nn.BatchNorm2d(dim_out),
#nn.LeakyReLU(0.1),
nn.ReLU(),
nn.Conv2d(dim_out, dim_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias),
nn.BatchNorm2d(dim_out),
#nn.LeakyReLU(0.1),
nn.ReLU(),
)
else:
return nn.Sequential(
nn.Conv2d(dim_in, dim_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias),
nn.ReLU(),
nn.Conv2d(dim_out, dim_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias),
nn.ReLU()
)
def upsample(self, ch_coarse, ch_fine):
return nn.Sequential(
nn.ConvTranspose2d(ch_coarse, ch_fine, 4, 2, 1, bias=False),
nn.ReLU()
)
def forward(self, x):
conv1_out = self.down1(x)
conv2_out = self.down2(self.max_pool(conv1_out))
conv3_out = self.down3(self.max_pool(conv2_out))
conv4_out = self.down4(self.max_pool(conv3_out))
conv5_out = self.down5(self.max_pool(conv4_out))
conv6_out = self.down6(self.max_pool(conv5_out))
conv7_out = self.down7(self.max_pool(conv6_out))
out = self.center(self.max_pool(conv7_out))
#out = self.center_res(out)
out = self.up7(cat((self.trans7(out), conv7_out), 1))
out = self.up6(cat((self.trans6(out), conv6_out), 1))
out = self.up5(cat((self.trans5(out), conv5_out), 1))
out = self.up4(cat((self.trans4(out), conv4_out), 1))
out = self.up3(cat((self.trans3(out), conv3_out), 1))
out = self.up2(cat((self.trans2(out), conv2_out), 1))
out = self.up1(cat((self.trans1(out), conv1_out), 1))
out = self.conv_last(out)
return out
| PyInstaller is kind of a cheated .exe: it does not compile the script, but bundles whatever is needed (including the Python interpreter) into one (or many) files.
To really be Python-agnostic you should convert your model using TorchScript (read about it here). You will be able to run your module using the C++ libtorch library without a Python interpreter.
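A minimal sketch of that conversion for the Unet above (the input size and file names are placeholders, and a checkpoint saved through a DataParallel wrapper may need its 'module.' key prefix stripped first):

import torch

model = Unet().cuda().eval()
model.load_state_dict(torch.load('weights/log02_Unet.th'))

# Trace the model with a dummy input and serialize it; the resulting file can be
# loaded from C++ with torch::jit::load(), no Python interpreter required.
example = torch.rand(1, 3, 256, 256).cuda()
traced = torch.jit.trace(model, example)
traced.save('unet_traced.pt')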
| https://stackoverflow.com/questions/57122734/ |
Undefined symbol on torchvision import | pip install torch==0.4.1 -f https://download.pytorch.org/whl/cu100/stable
cat /usr/local/cuda/version.txt
CUDA Version 10.1.168
python -V
Python 3.5.2
python -c "import torch; print(torch.__version__)"
0.4.1
Get error on :
python -c "import torchvision; print(torchvision.__version__)"
...
from torchvision import _C
ImportError: /my_env/lib/python3.5/site-packages/torchvision/_C.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail36_typeMetaDataInstance_preallocated_7E
Versions:
pip freeze | grep torch
torch==0.4.1
torchfile==0.1.0
torchvision==0.3.0
I have tried:
pip install torchvision==0.4.1 --no-dependencies
Collecting torchvision==0.4.1
ERROR: Could not find a version that satisfies the requirement torchvision==0.4.1 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.3.0)
ERROR: No matching distribution found for torchvision==0.4.1
How to install older version of torchvision?
| Seems pip install torchvision==0.2.0 --no-deps --no-cache-dir helped.
| https://stackoverflow.com/questions/57136161/ |
Extracting 3D patches from 3D images in both overlapping and nonoverlapping process and recovering the image back | I am working with 172x220x156 shaped 3D images. To feed the image into the network for output I need to extract patches of size 32x32x32 from the image and add those back to get the image again.
Since my image dimensions are not multiples of the patch size, I have to use overlapping patches.
I want to know how to do that.
I am working in PyTorch, there are some options like unfold and fold but I am not sure how they work.
| To extract (overlapping) patches and to reconstruct the input shape we can use torch.nn.functional.unfold and the inverse operation torch.nn.functional.fold. These methods only process 4D tensors (2D images); however, you can apply them to one dimension at a time.
Few notes:
This way requires fold/unfold methods from pytorch, unfortunately I have yet to find a similar method in the TF api.
We can extract patches in 2 ways; their output is the same. The methods are called extract_patches_Xd and extract_patches_Xds, where X is the number of dimensions (here 3). The latter uses torch.Tensor.unfold() and has fewer lines of code. (The output is the same, except it cannot use dilation.)
The methods extract_patches_Xd and combine_patches_Xd are inverse methods and the combiner reverses the steps from the extracter step by step.
The lines of code are followed by a comment stating the dimensionality such as (B, C, D, H, W). The following are used:
B: Batch size
C: Channels
D: Depth Dimension
H: Height Dimension
W: Width Dimension
x_dim_in: In the extraction method, this is the number of input pixels in dimension x. In the combining method, this is the number of sliding windows in dimension x.
x_dim_out: In the extraction method, this is the number of sliding windows in dimension x. In the combining method, this is the number of output pixels in dimension x.
I have a public notebook to try out the code
The get_dim_blocks() method is the function given on the pytorch docs website to compute the output shape of a convolutional layer.
Note that if you have overlapping patches and you combine them, the overlapping elements will be summed. If you would like to get the initial input again there is a way.
Create similar sized tensor of ones as the patches with torch.ones_like(patches_tensor).
Combine the patches into full image with same output shape. (this creates a counter for overlapping elements).
Divide the Combined image with the Combined ones, this should reverse any double summation of elements.
(3D):
We need two fold/unfold passes: during extraction we first unfold over the D dimension and leave W and H untouched by setting their kernel to 1, padding to 0, stride to 1 and dilation to 1. After that we view (reshape) the tensor and unfold over the H and W dimensions. The folding (combining) happens in reverse, starting with H and W, then D.
def extract_patches_3ds(x, kernel_size, padding=0, stride=1):
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size, kernel_size)
if isinstance(padding, int):
padding = (padding, padding, padding, padding, padding, padding)
if isinstance(stride, int):
stride = (stride, stride, stride)
channels = x.shape[1]
x = torch.nn.functional.pad(x, padding)
# (B, C, D, H, W)
x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2])
# (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])
x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])
# (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])
return x
def extract_patches_3d(x, kernel_size, padding=0, stride=1, dilation=1):
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size, kernel_size)
if isinstance(padding, int):
padding = (padding, padding, padding)
if isinstance(stride, int):
stride = (stride, stride, stride)
if isinstance(dilation, int):
dilation = (dilation, dilation, dilation)
def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
return dim_out
channels = x.shape[1]
d_dim_in = x.shape[2]
h_dim_in = x.shape[3]
w_dim_in = x.shape[4]
d_dim_out = get_dim_blocks(d_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])
h_dim_out = get_dim_blocks(h_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])
w_dim_out = get_dim_blocks(w_dim_in, kernel_size[2], padding[2], stride[2], dilation[2])
# print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)
# (B, C, D, H, W)
x = x.view(-1, channels, d_dim_in, h_dim_in * w_dim_in)
# (B, C, D, H * W)
x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
# (B, C * kernel_size[0], d_dim_out * H * W)
x = x.view(-1, channels * kernel_size[0] * d_dim_out, h_dim_in, w_dim_in)
# (B, C * kernel_size[0] * d_dim_out, H, W)
x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))
# (B, C * kernel_size[0] * d_dim_out * kernel_size[1] * kernel_size[2], h_dim_out, w_dim_out)
x = x.view(-1, channels, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)
# (B, C, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)
x = x.permute(0, 1, 3, 6, 7, 2, 4, 5)
# (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])
x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])
# (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])
return x
def combine_patches_3d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size, kernel_size)
if isinstance(padding, int):
padding = (padding, padding, padding)
if isinstance(stride, int):
stride = (stride, stride, stride)
if isinstance(dilation, int):
dilation = (dilation, dilation, dilation)
def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
return dim_out
channels = x.shape[1]
d_dim_out, h_dim_out, w_dim_out = output_shape[2:]
d_dim_in = get_dim_blocks(d_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])
h_dim_in = get_dim_blocks(h_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])
w_dim_in = get_dim_blocks(w_dim_out, kernel_size[2], padding[2], stride[2], dilation[2])
# print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)
x = x.view(-1, channels, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])
# (B, C, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])
x = x.permute(0, 1, 5, 2, 6, 7, 3, 4)
# (B, C, kernel_size[0], d_dim_in, kernel_size[1], kernel_size[2], h_dim_in, w_dim_in)
x = x.contiguous().view(-1, channels * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)
# (B, C * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)
x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))
# (B, C * kernel_size[0] * d_dim_in, H, W)
x = x.view(-1, channels * kernel_size[0], d_dim_in * h_dim_out * w_dim_out)
# (B, C * kernel_size[0], d_dim_in * H * W)
x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
# (B, C, D, H * W)
x = x.view(-1, channels, d_dim_out, h_dim_out, w_dim_out)
# (B, C, D, H, W)
return x
a = torch.arange(1, 129, dtype=torch.float).view(2,2,2,4,4)
print(a.shape)
print(a)
b = extract_patches_3d(a, 2, padding=1, stride=1)
bs = extract_patches_3ds(a, 2, padding=1, stride=1)
print(b.shape)
print(b)
c = combine_patches_3d(b, (2,2,2,4,4), kernel_size=2, padding=1, stride=1)
print(c.shape)
print(c)
ones = torch.ones_like(b)
ones = combine_patches_3d(ones, (2,2,2,4,4), kernel_size=2, padding=1, stride=1)
print(torch.all(a==c))
print(c.shape, ones.shape)
d = c / ones
print(d)
print(torch.all(a==d))
Output (3D)
torch.Size([2, 2, 2, 4, 4])
tensor([[[[[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.],
[ 9., 10., 11., 12.],
[ 13., 14., 15., 16.]],
[[ 17., 18., 19., 20.],
[ 21., 22., 23., 24.],
[ 25., 26., 27., 28.],
[ 29., 30., 31., 32.]]],
[[[ 33., 34., 35., 36.],
[ 37., 38., 39., 40.],
[ 41., 42., 43., 44.],
[ 45., 46., 47., 48.]],
[[ 49., 50., 51., 52.],
[ 53., 54., 55., 56.],
[ 57., 58., 59., 60.],
[ 61., 62., 63., 64.]]]],
[[[[ 65., 66., 67., 68.],
[ 69., 70., 71., 72.],
[ 73., 74., 75., 76.],
[ 77., 78., 79., 80.]],
[[ 81., 82., 83., 84.],
[ 85., 86., 87., 88.],
[ 89., 90., 91., 92.],
[ 93., 94., 95., 96.]]],
[[[ 97., 98., 99., 100.],
[101., 102., 103., 104.],
[105., 106., 107., 108.],
[109., 110., 111., 112.]],
[[113., 114., 115., 116.],
[117., 118., 119., 120.],
[121., 122., 123., 124.],
[125., 126., 127., 128.]]]]])
torch.Size([150, 2, 2, 2, 2])
tensor([[[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 1.]]],
[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 1., 2.]]]],
[[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 2., 3.]]],
[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 3., 4.]]]],
[[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 4., 0.]]],
[[[ 0., 0.],
[ 0., 0.]],
[[ 0., 1.],
[ 0., 5.]]]],
...,
[[[[124., 0.],
[128., 0.]],
[[ 0., 0.],
[ 0., 0.]]],
[[[ 0., 125.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 0.]]]],
[[[[125., 126.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 0.]]],
[[[126., 127.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 0.]]]],
[[[[127., 128.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 0.]]],
[[[128., 0.],
[ 0., 0.]],
[[ 0., 0.],
[ 0., 0.]]]]])
torch.Size([2, 2, 2, 4, 4])
tensor([[[[[ 8., 16., 24., 32.],
[ 40., 48., 56., 64.],
[ 72., 80., 88., 96.],
[ 104., 112., 120., 128.]],
[[ 136., 144., 152., 160.],
[ 168., 176., 184., 192.],
[ 200., 208., 216., 224.],
[ 232., 240., 248., 256.]]],
[[[ 264., 272., 280., 288.],
[ 296., 304., 312., 320.],
[ 328., 336., 344., 352.],
[ 360., 368., 376., 384.]],
[[ 392., 400., 408., 416.],
[ 424., 432., 440., 448.],
[ 456., 464., 472., 480.],
[ 488., 496., 504., 512.]]]],
[[[[ 520., 528., 536., 544.],
[ 552., 560., 568., 576.],
[ 584., 592., 600., 608.],
[ 616., 624., 632., 640.]],
[[ 648., 656., 664., 672.],
[ 680., 688., 696., 704.],
[ 712., 720., 728., 736.],
[ 744., 752., 760., 768.]]],
[[[ 776., 784., 792., 800.],
[ 808., 816., 824., 832.],
[ 840., 848., 856., 864.],
[ 872., 880., 888., 896.]],
[[ 904., 912., 920., 928.],
[ 936., 944., 952., 960.],
[ 968., 976., 984., 992.],
[1000., 1008., 1016., 1024.]]]]])
tensor(False)
torch.Size([2, 2, 2, 4, 4]) torch.Size([2, 2, 2, 4, 4])
tensor([[[[[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.],
[ 9., 10., 11., 12.],
[ 13., 14., 15., 16.]],
[[ 17., 18., 19., 20.],
[ 21., 22., 23., 24.],
[ 25., 26., 27., 28.],
[ 29., 30., 31., 32.]]],
[[[ 33., 34., 35., 36.],
[ 37., 38., 39., 40.],
[ 41., 42., 43., 44.],
[ 45., 46., 47., 48.]],
[[ 49., 50., 51., 52.],
[ 53., 54., 55., 56.],
[ 57., 58., 59., 60.],
[ 61., 62., 63., 64.]]]],
[[[[ 65., 66., 67., 68.],
[ 69., 70., 71., 72.],
[ 73., 74., 75., 76.],
[ 77., 78., 79., 80.]],
[[ 81., 82., 83., 84.],
[ 85., 86., 87., 88.],
[ 89., 90., 91., 92.],
[ 93., 94., 95., 96.]]],
[[[ 97., 98., 99., 100.],
[101., 102., 103., 104.],
[105., 106., 107., 108.],
[109., 110., 111., 112.]],
[[113., 114., 115., 116.],
[117., 118., 119., 120.],
[121., 122., 123., 124.],
[125., 126., 127., 128.]]]]])
tensor(True)
| https://stackoverflow.com/questions/57137505/ |
How to delete permanently from mounted Drive folder? | I wrote a script to upload my models and training examples to Google Drive after every iteration in case of crashes or anything that stops the notebook from running, which looks something like this:
drive_path = 'drive/My Drive/Colab Notebooks/models/'
if path.exists(drive_path):
shutil.rmtree(drive_path)
shutil.copytree('models', drive_path)
Whenever I check my Google Drive, a few GB are taken up by dozens of deleted models folders in the Trash, which I have to delete manually.
The only function in google.colab.drive seems to be mount and that's it.
According to this tutorial, shutil.rmtree() removes a directory permanently but apparently it doesn't work for Drive.
| It is possible to perform this action inside Google Colab by using the pydrive module. I suggest that you first move your unwanted files and folders to Trash (by ordinarily removing them in your code), and then, anytime you think it's necessary (e.g. you want to free up some space for saving weights of a new DL project), empty your trash by coding the following lines.
In order to permanently empty your Google Drive's Trash, code the following lines in your Google Colab notebook:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
my_drive = GoogleDrive(gauth)
After entering authentication code and creating a valid instance of GoogleDrive class, write:
for a_file in my_drive.ListFile({'q': "trashed = true"}).GetList():
# print the name of the file being deleted.
print(f'the file "{a_file["title"]}" is about to get deleted permanently.')
# delete the file permanently.
a_file.Delete()
If you don't want to use my suggestion and want to permanently delete a specific folder in your Drive, it is possible that you have to make more complex queries and deal with fileId, parentId, and the fact that a file or folder in your Drive may have multiple parent folders, when making queries to Google Drive API.
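For example, a sketch of restricting the permanent deletion to trashed files under one specific folder (the folder ID below is a placeholder you would have to look up yourself):

folder_id = 'YOUR_MODELS_FOLDER_ID'  # hypothetical placeholder, not a real ID
query = f"'{folder_id}' in parents and trashed = true"
for a_file in my_drive.ListFile({'q': query}).GetList():
    a_file.Delete()  # permanently deletes only the trashed files inside that folder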
For more information:
You can find examples of more complex (yet typical) queries, here.
You can find an example of Checking if a file is in a specific folder, here.
This statement that Files and folders in Google Drive can each have multiple parent folders may become better and more deeply understood, by reading this post.
| https://stackoverflow.com/questions/57151432/ |
How to get grads in pytorch after matrix multiplication? | I want to get the product of a matrix multiplication in the latent space and optimize the weight matrix via the optimizer. I have tried several different approaches, but the value of 'pi_' in the code below never changes. What should I do?
I've tried different functions to get the product, like torch.mm(), torch.matmul() and @. The weight matrix 'pi_' never changed.
import torch
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
#from torchvision import transforms
from torchvision.datasets import MNIST
def get_mnist(data_dir='./data/mnist/',batch_size=128):
train=MNIST(root=data_dir,train=True,download=True)
test=MNIST(root=data_dir,train=False,download=True)
X=torch.cat([train.data.float().view(-1,784)/255.,test.data.float().view(-1,784)/255.],0)
Y=torch.cat([train.targets,test.targets],0)
dataset=dict()
dataset['X']=X
dataset['Y']=Y
dataloader=DataLoader(TensorDataset(X,Y),batch_size=batch_size,shuffle=True)
return dataloader
class tests(torch.nn.Module):
def __init__(self):
super(tests, self).__init__()
self.pi_= torch.nn.Parameter(torch.FloatTensor(10, 1).fill_(1),requires_grad=True)
self.linear0 = torch.nn.Linear(784,10)
self.linear1 = torch.nn.Linear(1,784)
def forward(self, data):
data = torch.nn.functional.relu(self.linear0(data))
# data = data.mm(self.pi_)
# data = torch.mm(data, self.pi_)
# data = data @ self.pi_
data = torch.matmul(data, self.pi_)
data = torch.nn.functional.relu(self.linear1(data))
return data
if __name__ == '__main__':
DL=get_mnist()
t = tests().cuda()
optimizer = torch.optim.Adam(t.parameters(), lr = 2e-3)
for i in range(100):
for inputs, classes in DL:
inputs = inputs.cuda()
res = t(inputs)
loss = torch.nn.functional.mse_loss(res, inputs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch:", i,"pi:",t.pi_)
| TL;DR
You have too many parameters in your neural network; some of them become useless and therefore they are no longer being updated. Change your network architecture to reduce the number of useless parameters.
Full explanation:
The weight matrix pi_ does change. You initialize pi_ as all ones; after running the first epoch, the weight matrix pi_ becomes
output >>>
tensor([[0.9879],
[0.9874],
[0.9878],
[0.9880],
[0.9876],
[0.9878],
[0.9878],
[0.9873],
[0.9877],
[0.9871]], device='cuda:0', requires_grad=True)
So, it has changed once. The true reason behind this involves some mathematics, but to put it in non-mathematical terms: this layer doesn't contribute much to the loss, so the network decided not to update it. In other words, the existence of pi_ in this network is redundant.
If you want to observe the change in pi_, you should modify the neural network such that pi_ is not redundant anymore.
One possible modification is to change your reconstruction problem to a classification problem
import torch
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
#from torchvision import transforms
from torchvision.datasets import MNIST
def get_mnist(data_dir='./data/mnist/',batch_size=128):
train=MNIST(root=data_dir,train=True,download=True)
test=MNIST(root=data_dir,train=False,download=True)
X=torch.cat([train.data.float().view(-1,784)/255.,test.data.float().view(-1,784)/255.],0)
Y=torch.cat([train.targets,test.targets],0)
dataset=dict()
dataset['X']=X
dataset['Y']=Y
dataloader=DataLoader(TensorDataset(X,Y),batch_size=batch_size,shuffle=True)
return dataloader
class tests(torch.nn.Module):
def __init__(self):
super(tests, self).__init__()
# self.pi_= torch.nn.Parameter(torch.randn((10, 1),requires_grad=True))
self.pi_= torch.nn.Parameter(torch.FloatTensor(10, 1).fill_(1),requires_grad=True)
self.linear0 = torch.nn.Linear(784,10)
# self.linear1 = torch.nn.Linear(1,784)
def forward(self, data):
data = torch.nn.functional.relu(self.linear0(data))
# data = data.mm(self.pi_)
# data = torch.mm(data, self.pi_)
# data = data @ self.pi_
data = torch.matmul(data, self.pi_)
# data = torch.nn.functional.relu(self.linear1(data))
return data
if __name__ == '__main__':
DL=get_mnist()
t = tests().cuda()
optimizer = torch.optim.Adam(t.parameters(), lr = 2e-3)
for i in range(100):
for inputs, classes in DL:
inputs = inputs.cuda()
classes = classes.cuda().float()
output = t(inputs)
loss = torch.nn.functional.mse_loss(output.view(-1), classes)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# print("Epoch:", i, "pi_grad", t.pi_.grad)
print("Epoch:", i,"pi:",t.pi_)
Now, pi_ changes every single epoch.
output >>>
Epoch: 0 pi: Parameter containing:
tensor([[1.3429],
[1.0644],
[0.9817],
[0.9767],
[0.9715],
[1.1110],
[1.1139],
[0.9759],
[1.2424],
[1.2632]], device='cuda:0', requires_grad=True)
Epoch: 1 pi: Parameter containing:
tensor([[1.4413],
[1.1977],
[0.9588],
[1.0325],
[0.9241],
[1.1988],
[1.1690],
[0.9248],
[1.2892],
[1.3427]], device='cuda:0', requires_grad=True)
Epoch: 2 pi: Parameter containing:
tensor([[1.4653],
[1.2351],
[0.9539],
[1.1588],
[0.8670],
[1.2739],
[1.2058],
[0.8648],
[1.2848],
[1.3891]], device='cuda:0', requires_grad=True)
Epoch: 3 pi: Parameter containing:
tensor([[1.4375],
[1.2256],
[0.9580],
[1.2293],
[0.8174],
[1.3471],
[1.2035],
[0.8102],
[1.2505],
[1.4201]], device='cuda:0', requires_grad=True)
| https://stackoverflow.com/questions/57155691/ |
PyTorch loss decreases even if requires_grad = False for all variables | When I create a neural network with PyTorch, using the torch.nn.Sequential method for defining layers, it seems that the parameters have requires_grad = False by default. However, when I train this network, the loss decreases. How is this possible if the layers are not being updated via gradients?
For example, this is the code for defining my network:
class Network(torch.nn.Module):
def __init__(self):
super(Network, self).__init__()
self.layers = torch.nn.Sequential(
torch.nn.Linear(10, 5),
torch.nn.Linear(5, 2)
)
print('Network Parameters:')
model_dict = self.state_dict()
for param_name in model_dict:
param = model_dict[param_name]
print('Name: ' + str(param_name))
print('\tRequires Grad: ' + str(param.requires_grad))
def forward(self, input):
prediction = self.layers(input)
return prediction
And this prints out:
Network Parameters:
Name: layers.0.weight
Requires Grad: False
Name: layers.0.bias
Requires Grad: False
Name: layers.1.weight
Requires Grad: False
Name: layers.1.bias
Requires Grad: False
Then this is the code to train my network:
network = Network()
network.train()
optimiser = torch.optim.SGD(network.parameters(), lr=0.001)
criterion = torch.nn.MSELoss()
inputs = np.random.random([100, 10]).astype(np.float32)
inputs = torch.from_numpy(inputs)
labels = np.random.random([100, 2]).astype(np.float32)
labels = torch.from_numpy(labels)
while True:
prediction = network.forward(inputs)
loss = criterion(prediction, labels)
print('loss = ' + str(loss.item()))
optimiser.zero_grad()
loss.backward()
optimiser.step()
And this prints out:
loss = 0.284633219242
loss = 0.278225809336
loss = 0.271959483624
loss = 0.265835255384
loss = 0.259853869677
loss = 0.254015892744
loss = 0.248321473598
loss = 0.242770522833
loss = 0.237362638116
loss = 0.232097044587
loss = 0.226972639561
loss = 0.221987977624
loss = 0.217141270638
loss = 0.212430402637
loss = 0.207852959633
loss = 0.203406244516
loss = 0.199087426066
loss = 0.19489350915
loss = 0.190821439028
loss = 0.186868071556
loss = 0.183030322194
loss = 0.179305106401
loss = 0.175689414144
loss = 0.172180294991
loss = 0.168774917722
loss = 0.165470585227
loss = 0.162264674902
loss = 0.159154698253
Why is the loss decreasing if all of the parameters have requires_grad = False?
| This is interesting -- there seems to be a difference between state_dict() and parameters():
class Network(torch.nn.Module):
def __init__(self):
super(Network, self).__init__()
self.layers = torch.nn.Sequential(
torch.nn.Linear(10, 5),
torch.nn.Linear(5, 2)
)
print(self.layers[0].weight.requires_grad) # True
print(self.state_dict()['layers.0.weight'].requires_grad) # False
print(list(self.parameters())[0].requires_grad) # True
def forward(self, input):
prediction = self.layers(input)
return prediction
So it appears your loss is decreasing because the net is in fact learning, since requires_grad is True. (In general, for debugging, I prefer querying the actual objects, e.g. self.layers[0]....)
[EDIT] Ahah - found the problem: there's a keep_vars boolean option that you can pass into state_dict that does the following (among other things): (https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L665)
for name, param in self._parameters.items():
if param is not None:
destination[prefix + name] = param if keep_vars else param.data
so, if you want the actual param, use keep_vars=True -- if you want just the data, use the default keep_vars=False.
So:
print(self.layers[0].weight.requires_grad) # True
print(self.state_dict(keep_vars=True)['layers.0.weight'].requires_grad) # True
print(list(self.parameters())[0].requires_grad) # True
| https://stackoverflow.com/questions/57171426/ |
How to handle entrypoints nested in folders with amazon sagemaker pytorch estimator? | I am attempting to run a training job on amazon sagemaker using the python-sagemaker-sdk, estimator class.
I have the following
estimator = PyTorch(entry_point='training_scripts/train_MSCOCO.py',
source_dir='./',
role=#dummy_role,
train_instance_type='ml.p3.2xlarge',
train_instance_count=1,
framework_version='1.0.0',
output_path=#dummy_output_path,
hyperparameters={'lr': 0.001,
'batch_size': 32,
'num_workers': 4,
'description': description})
role and output_path hidden for privacy.
I get the following error, "No module named training_scripts\train_MSCOCO".
When I run python -m training_scripts.train_MSCOCO the script runs fine. However, when I pass entry_point='training_scripts.train_MSCOCO.py' it will not run: "No file named "training_scripts.train_MSCOCO.py" was found in directory "./"".
I am confused as to how to run a nested training script from the top level of my repository within AWS sagemaker, as they seem to have conflicting path needs, one in python module dot notation, the other in standard filepath slash notation.
| Either one of these will work:
estimator = PyTorch(entry_point='training_scripts/train_MSCOCO.py',
role=#dummy_role,
...
estimator = PyTorch(entry_point='train_MSCOCO.py',
source_dir='training_scripts',
role=#dummy_role,
...
| https://stackoverflow.com/questions/57187148/ |
Non-reproducible results in pytorch after saving and loading the model | I am unable to reproduce my results in PyTorch after saving and loading the model, whereas the in-memory model works as expected. Just for context, I am seeding my libraries and using model.eval to turn off the dropouts, but still the results are not reproducible. Any suggestions if I am missing something? Thanks in advance.
Libraries that I am using:-
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import numpy as np
import random
Libraries that I am seeding
manualSeed = 1
np.random.seed(manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
random.seed(manualSeed)
Below are the results for the in-memory and loaded models.
In Memory model Loss : 1.596395881312668, In Memory model Accuracy : tensor(0.3989)
Loaded model Loss : 1.597083057567572, Loaded model Accuracy : tensor(0.3983)
| Since the date that Szymon Maszke posted his response above (2019), a new API has been added, torch.use_deterministic_algorithms().
This new function does everything that torch.backends.cudnn.deterministic did (namely, makes CuDNN convolution operations deterministic), plus much more (makes every known normally-nondeterministic function either deterministic or throw an error if a deterministic implementation is not available). CuDNN convolution is only one of the many possible sources of nondeterminism in PyTorch, so torch.use_deterministic_algorithms() should now be used instead of the old torch.backends.cudnn.deterministic.
The link to the reproducibility documentation is still relevant. However, note that this page has been changed a fair bit since 2019.
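A minimal usage sketch (assuming a PyTorch release recent enough to have the function; on CUDA 10.2+ you may also need to set the CUBLAS_WORKSPACE_CONFIG environment variable as described in that documentation):

import torch

torch.manual_seed(1)
# Make every operation deterministic, or raise an error whenever an
# operation has no deterministic implementation.
torch.use_deterministic_algorithms(True)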
| https://stackoverflow.com/questions/57195650/ |
What is pixel-wise softmax loss? | What is the pixel-wise softmax loss? In my understanding, it's just a cross-entropy loss, but I couldn't find the formula. Can someone help me? It would be best to have the PyTorch code.
| You can read here all about it (there's also a link to source code there).
As you already observed, the "softmax loss" is basically a cross-entropy loss whose computation combines the softmax function and the loss for numerical stability and efficiency.
In your example, the loss is computed for a pixel-wise prediction so you have a per-pixel prediction, a per-pixel target and a per-pixel loss term.
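In PyTorch that per-pixel term is what nn.CrossEntropyLoss computes when given 4D logits; a minimal sketch with made-up shapes:

import torch
import torch.nn as nn

N, C, H, W = 2, 5, 4, 4                   # batch, classes, height, width (example numbers)
logits = torch.randn(N, C, H, W)          # raw per-pixel class scores from the network
target = torch.randint(0, C, (N, H, W))   # per-pixel ground-truth class indices

# Log-softmax over the class dimension plus negative log-likelihood,
# averaged over all pixels by default.
loss = nn.CrossEntropyLoss()(logits, target)
print(loss)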
| https://stackoverflow.com/questions/57199288/ |
How to implement my own ResNet with torch.nn.Sequential in Pytorch? | I want to implement a ResNet network (or rather, residual blocks) but I really want it to be in the sequential network form.
What I mean by sequential network form is the following:
## mdl5, from cifar10 tutorial
mdl5 = nn.Sequential(OrderedDict([
('pool1', nn.MaxPool2d(2, 2)),
('relu1', nn.ReLU()),
('conv1', nn.Conv2d(3, 6, 5)),
('pool1', nn.MaxPool2d(2, 2)),
('relu2', nn.ReLU()),
('conv2', nn.Conv2d(6, 16, 5)),
('relu2', nn.ReLU()),
('Flatten', Flatten()),
('fc1', nn.Linear(1024, 120)), # figure out equation properly
('relu4', nn.ReLU()),
('fc2', nn.Linear(120, 84)),
('relu5', nn.ReLU()),
('fc3', nn.Linear(84, 10))
]))
but of course with the NN lego blocks being “ResNet”.
I know the equation is something like out = f(x) + x,
but I am not sure how to do it in PyTorch AND Sequential. Sequential is key for me!
Bounty:
I'd like to see an example with a fully connected net and where the BN layers would have to go (and the drop out layers would go too). Ideally on a toy example/data if possible.
Cross-posted:
https://discuss.pytorch.org/t/how-to-have-residual-network-using-only-sequential-blocks/51541
https://www.quora.com/unanswered/How-does-one-implement-my-own-ResNet-with-torch-nn-Sequential-in-Pytorch
https://www.reddit.com/r/pytorch/comments/uyyr28/how_to_implement_my_own_resnet_with/
| You can't do it solely using torch.nn.Sequential as it requires operations to go, as the name suggests, sequentially, while yours are parallel.
You could, in principle, construct your own block really easily like this:
import torch
class ResNet(torch.nn.Module):
def __init__(self, module):
super().__init__()
self.module = module
def forward(self, inputs):
return self.module(inputs) + inputs
Which one can use something like this:
model = torch.nn.Sequential(
torch.nn.Conv2d(3, 32, kernel_size=7),
# 32 filters in and out, no max pooling so the shapes can be added
ResNet(
torch.nn.Sequential(
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
)
),
# Another ResNet block, you could make more of them
# Downsampling using maxpool and others could be done in between etc. etc.
ResNet(
torch.nn.Sequential(
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
torch.nn.Conv2d(32, 32, kernel_size=3),
torch.nn.ReLU(),
torch.nn.BatchNorm2d(32),
)
),
# Pool all the 32 filters to 1, you may need to use `torch.squeeze after this layer`
torch.nn.AdaptiveAvgPool2d(1),
# 32 10 classes
torch.nn.Linear(32, 10),
)
A fact that is usually overlooked (without real consequences when it comes to shallower networks) is that the skip connection should be left without any nonlinearities like ReLU or convolutional layers, and that's what you can see above (source: Identity Mappings in Deep Residual Networks).
| https://stackoverflow.com/questions/57229054/ |
How is it that torch is not installed by torchvision? | Somehow when I do the install it installs torchvision but not torch. This is the command I am running, as dictated by the main website:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
then I do conda list but look:
$ conda list
# packages in environment at /home/ubuntu/anaconda3/envs/pytorch_p36:
#
# Name Version Build Channel
alabaster 0.7.10 py36h306e16b_0
anaconda-client 1.6.14 py36_0
anaconda-project 0.8.2 py36h44fb852_0
argparse 1.4.0 <pip>
asn1crypto 0.24.0 py36_0
astroid 1.6.3 py36_0
astropy 3.0.2 py36h3010b51_1
attrs 18.1.0 py36_0
autovizwidget 0.12.7 <pip>
babel 2.5.3 py36_0
backcall 0.1.0 py36_0
backports 1.0 py36hfa02d7e_1
backports.shutil_get_terminal_size 1.0.0 py36hfea85ff_2
bcrypt 3.1.6 <pip>
beautifulsoup4 4.6.0 py36h49b8c8c_1
bitarray 0.8.1 py36h14c3975_1
bkcharts 0.2 py36h735825a_0
blas 1.0 mkl
blaze 0.11.3 py36h4e06776_0
bleach 2.1.3 py36_0
blosc 1.14.3 hdbcaa40_0
bokeh 1.0.4 py36_0
boto 2.48.0 py36h6e4cd66_1
boto3 1.9.146 <pip>
boto3 1.9.134 py_0
botocore 1.12.146 <pip>
botocore 1.12.134 py_0
bottleneck 1.2.1 py36haac1ea0_0
bzip2 1.0.6 h14c3975_5
ca-certificates 2019.1.23 0
cached-property 1.5.1 <pip>
cairo 1.14.12 h8948797_3
certifi 2019.3.9 py36_0
cffi 1.11.5 py36h9745a5d_0
chardet 3.0.4 py36h0f667ec_1
click 6.7 py36h5253387_0
cloudpickle 0.5.3 py36_0
clyent 1.2.2 py36h7e57e65_1
colorama 0.3.9 py36h489cec4_0
contextlib2 0.5.5 py36h6c84a62_0
cryptography 2.3.1 py36hc365091_0
cudatoolkit 10.0.130 0
curl 7.60.0 h84994c4_0
cycler 0.10.0 py36h93f1223_0
cymem 2.0.2 py36hfd86e86_0
cython 0.28.2 py36h14c3975_0
cytoolz 0.9.0.1 py36h14c3975_0
dask 0.17.5 py36_0
dask-core 0.17.5 py36_0
dataclasses 0.6 py_0 fastai
datashape 0.5.4 py36h3ad6b5c_0
dbus 1.13.2 h714fa37_1
decorator 4.3.0 py36_0
defusedxml 0.6.0 py_0
dill 0.2.9 py36_0
distributed 1.21.8 py36_0
docker 3.7.2 <pip>
docker-compose 1.24.0 <pip>
docker-pycreds 0.4.0 <pip>
dockerpty 0.4.1 <pip>
docopt 0.6.2 <pip>
docutils 0.14 py36hb0f60f5_0
entrypoints 0.2.3 py36h1aec115_2
environment-kernels 1.1.1 <pip>
et_xmlfile 1.0.1 py36hd6bccc3_0
expat 2.2.5 he0dffb1_0
fastai 1.0.52 1 fastai
fastcache 1.0.2 py36h14c3975_2
fastprogress 0.1.21 py_0 fastai
filelock 3.0.4 py36_0
flask 1.0.2 py36_1
flask-cors 3.0.4 py36_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
fribidi 1.0.5 h7b6447c_0
get_terminal_size 1.0.0 haa9412d_0
gevent 1.3.0 py36h14c3975_0
glib 2.56.1 h000015b_0
glob2 0.6 py36he249c77_0
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py36hc8893dd_2
graphite2 1.3.11 h16798f4_2
graphviz 2.40.1 h21bd128_2
greenlet 0.4.13 py36h14c3975_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.8.0 py36h989c5e5_3
harfbuzz 1.8.4 hec2c2bc_0
hdf5 1.10.2 hba1933b_1
hdijupyterutils 0.12.7 <pip>
heapdict 1.0.0 py36_2
html5lib 1.0.1 py36h2f9c1c0_0
icu 58.2 h9c2bf20_1
idna 2.6 py36h82fb2a8_1
imageio 2.3.0 py36_0
imagesize 1.0.0 py36_0
intel-openmp 2018.0.0 8
ipykernel 4.8.2 py36_0
ipyparallel 6.2.2 <pip>
ipython 6.4.0 py36_0
ipython_genutils 0.2.0 py36hb52b0d5_0
ipywidgets 7.2.1 py36_0
ipywidgets 7.4.0 <pip>
isort 4.3.4 py36_0
itsdangerous 0.24 py36h93cc618_1
jbig 2.1 hdba287a_0
jdcal 1.4 py36_0
jedi 0.12.0 py36_1
jinja2 2.10 py36ha16c418_0
jmespath 0.9.4 py_0
jpeg 9b h024ee3a_2
jsonschema 2.6.0 py36h006f8b5_0
jupyter 1.0.0 py36_4
jupyter_client 5.2.3 py36_0
jupyter_console 5.2.0 py36he59e554_1
jupyter_core 4.4.0 py36h7c827e3_0
jupyterlab 0.32.1 py36_0
jupyterlab_launcher 0.10.5 py36_0
kiwisolver 1.0.1 py36h764f252_0
krb5 1.14.2 hcdc1b81_6
lazy-object-proxy 1.3.1 py36h10fcdad_0
libcurl 7.60.0 h1ad7b7a_0
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran 3.0.0 1 conda-forge
libgfortran-ng 7.2.0 hdf63c60_3
libpng 1.6.37 hbc83047_0
libprotobuf 3.5.2 hd28b015_1 conda-forge
libsodium 1.0.16 h1bed415_0
libssh2 1.8.0 h9cfc8f7_4
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.9 he85c1e1_1
libtool 2.4.6 h544aabb_3
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.8 h26e45fe_1
libxslt 1.1.32 h1312cb7_0
llvmlite 0.23.1 py36hdbcaa40_0
locket 0.2.0 py36h787c0ad_1
lxml 4.2.1 py36h23eabaa_0
lzo 2.10 h49e0be7_2
markupsafe 1.0 py36hd9260cd_1
matplotlib 2.2.2 <pip>
matplotlib 3.0.3 py36h5429711_0
mccabe 0.6.1 py36h5ad9710_1
mistune 0.8.3 py36h14c3975_1
mkl 2018.0.3 1
mkl-service 1.1.2 py36h17a0993_4
mkl_fft 1.0.6 py36h7dd41cf_0
mkl_random 1.0.1 py36h629b387_0
mock 3.0.5 <pip>
more-itertools 4.1.0 py36_0
mpc 1.0.3 hec55b23_5
mpfr 3.1.5 h11a74b3_2
mpi 1.0 openmpi conda-forge
mpmath 1.0.0 py36hfeacd6b_2
msgpack 0.6.0 <pip>
msgpack-numpy 0.4.3.2 py36_0
msgpack-python 0.5.6 py36h6bb024c_0
multipledispatch 0.5.0 py36_0
murmurhash 1.0.2 py36he6710b0_0
nb_conda 2.2.1 py36_2 conda-forge
nb_conda_kernels 2.2.1 py36_0 conda-forge
nbconvert 5.4.1 py36_3
nbformat 4.4.0 py36h31c9010_0
ncurses 6.1 hf484d3e_0
networkx 2.1 py36_0
ninja 1.8.2 py36h6bb024c_1
nltk 3.3.0 py36_0
nose 1.3.7 py36hcdf7029_2
notebook 5.5.0 py36_0
numba 0.38.0 py36h637b7d7_0
numexpr 2.6.5 py36h7bf3b9c_0
numpy 1.15.4 py36h1d66e8a_0
numpy 1.15.4 <pip>
numpy-base 1.15.4 py36h81de0dd_0
numpydoc 0.8.0 py36_0
nvidia-ml-py3 7.352.0 py_0 fastai
odo 0.5.1 py36h90ed295_0
olefile 0.45.1 py36_0
onnx 1.4.1 <pip>
openmpi 3.1.0 h26a2512_3 conda-forge
openpyxl 2.5.3 py36_0
openssl 1.0.2r h7b6447c_0
packaging 17.1 py36_0
pandas 0.24.2 <pip>
pandas 0.23.0 py36h637b7d7_0
pandoc 1.19.2.1 hea2e7c5_1
pandocfilters 1.4.2 py36ha6701b7_1
pango 1.42.3 h8589676_0
paramiko 2.4.2 <pip>
parso 0.2.0 py36_0
partd 0.3.8 py36h36fd896_0
patchelf 0.9 hf79760b_2
path.py 11.0.1 py36_0
pathlib2 2.3.2 py36_0
patsy 0.5.0 py36_0
pcre 8.42 h439df22_0
pep8 1.7.1 py36_0
pexpect 4.5.0 py36_0
pickleshare 0.7.4 py36h63277f8_0
pillow 5.2.0 py36heded4f4_0
pip 10.0.1 py36_0
pixman 0.34.0 hceecf20_3
pkginfo 1.4.2 py36_1
plac 0.9.6 py36_0
plotly 2.7.0 <pip>
pluggy 0.6.0 py36hb689045_0
ply 3.11 py36_0
preshed 2.0.1 py36he6710b0_0
prompt_toolkit 1.0.15 py36h17d85b1_0
protobuf 3.5.2 py36hd28b015_0 conda-forge
protobuf3-to-dict 0.1.5 <pip>
psutil 5.4.5 py36h14c3975_0
psycopg2 2.7.5 <pip>
ptyprocess 0.5.2 py36h69acd42_0
py 1.5.3 py36_0
py4j 0.10.7 <pip>
pyasn1 0.4.5 <pip>
pycodestyle 2.4.0 py36_0
pycosat 0.6.3 py36h0a5515d_0
pycparser 2.18 py36hf9f622e_1
pycrypto 2.6.1 py36h14c3975_8
pycurl 7.43.0.1 py36hb7f436b_0
pyflakes 1.6.0 py36h7bd6a15_0
pygal 2.4.0 <pip>
pygments 2.2.0 py36h0d3125c_0
pykerberos 1.2.1 py36h14c3975_0
pylint 1.8.4 py36_0
PyNaCl 1.3.0 <pip>
pyodbc 4.0.23 py36hf484d3e_0
pyopenssl 18.0.0 py36_0
pyparsing 2.2.0 py36hee85983_1
pyqt 5.9.2 py36h751905a_0
pysocks 1.6.8 py36_0
pyspark 2.3.2 <pip>
pytables 3.4.3 py36h02b9ad4_2
pytest 3.5.1 py36_0
pytest-arraydiff 0.2 py36_0
pytest-astropy 0.3.0 py36_0
pytest-doctestplus 0.1.3 py36_0
pytest-openfiles 0.3.0 py36_0
pytest-remotedata 0.2.1 py36_0
python 3.6.5 hc3d631a_2
python-dateutil 2.7.3 py36_0
pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch
pytz 2018.4 py36_0
pywavelets 0.5.2 py36he602eb0_0
pyyaml 3.12 py36hafb9ca4_1
pyzmq 17.0.0 py36h14c3975_0
qt 5.9.6 h52aff34_0
qtawesome 0.4.4 py36h609ed8c_0
qtconsole 4.3.1 py36h8f73b5b_0
qtpy 1.4.1 py36_0
readline 7.0 ha6073c6_4
regex 2018.01.10 py36h14c3975_1000 fastai
requests 2.20.0 py36_1000 conda-forge
requests-kerberos 0.12.0 <pip>
rope 0.10.7 py36h147e2ec_0
ruamel_yaml 0.15.35 py36h14c3975_1
s3fs 0.1.5 py36_0
s3transfer 0.2.0 <pip>
s3transfer 0.2.0 py36_0
sagemaker 1.20.1 <pip>
sagemaker-pyspark 1.2.4 <pip>
scikit-image 0.13.1 py36h14c3975_1
scikit-learn 0.19.1 py36h7aa7ec6_0
scikit-learn 0.20.3 <pip>
scipy 1.1.0 py36hfc37229_0
seaborn 0.8.1 py36hfad7ec4_0
send2trash 1.5.0 py36_0
setuptools 39.1.0 py36_0
simplegeneric 0.8.1 py36_2
singledispatch 3.4.0.3 py36h7a266c3_0
sip 4.19.8 py36hf484d3e_0
six 1.11.0 py36h372c433_1
snappy 1.1.7 hbae5bb6_3
snowballstemmer 1.2.1 py36h6febd40_0
sortedcollections 0.6.1 py36_0
sortedcontainers 1.5.10 py36_0
spacy 2.0.18 py36hf484d3e_1000 fastai
sparkmagic 0.12.5 <pip>
sphinx 1.7.4 py36_0
sphinxcontrib 1.0 py36h6d0f590_1
sphinxcontrib-websupport 1.0.1 py36hb5cb234_1
spyder 3.2.8 py36_0
SQLAlchemy 1.2.11 <pip>
sqlalchemy 1.2.7 py36h6b74fdf_0
sqlite 3.23.1 he433501_0
statsmodels 0.9.0 py36h3010b51_0
sympy 1.1.1 py36hc6d1c1c_0
tblib 1.3.2 py36h34cf8b6_0
terminado 0.8.1 py36_1
testpath 0.3.1 py36h8cadb63_0
texttable 0.9.1 <pip>
thinc 6.12.1 py36h637b7d7_1000 fastai
tk 8.6.8 hbc83047_0
toolz 0.9.0 py36_0
torchvision 0.2.2 py_3 pytorch
tornado 5.0.2 py36_0
tqdm 4.31.1 py36_1
traitlets 4.3.2 py36h674d592_0
typing 3.6.4 py36_0
typing-extensions 3.7.2 <pip>
ujson 1.35 py36h14c3975_0
unicodecsv 0.14.1 py36ha668878_0
unixodbc 2.3.6 h1bed415_0
urllib3 1.23 py36_0
wcwidth 0.1.7 py36hdf4376a_0
webencodings 0.5.1 py36h800622e_1
websocket-client 0.56.0 <pip>
werkzeug 0.14.1 py36_0
wheel 0.31.1 py36_0
widgetsnbextension 3.2.1 py36_0
widgetsnbextension 3.4.2 <pip>
wrapt 1.10.11 py36h28b7045_0
xlrd 1.1.0 py36h1db9f0c_1
xlsxwriter 1.0.4 py36_0
xlwt 1.3.0 py36h7b00a1f_0
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zeromq 4.2.5 h439df22_0
zict 0.1.3 py36h3a3bf81_0
zlib 1.2.11 ha838bed_2
I was told I do have pytorch installed but my script keeps giving me this error:
$ cat nohup.out
Traceback (most recent call last):
File "high_performing_data_point_models_cifar10.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Does that mean that I need to install it as pytorch and not torch? Is this not weird?
Note: I am running this on an AWS p3.2xlarge instance. This keeps happening: when I log out and then log back in, my torch package goes missing...?!?! :/
original post: https://discuss.pytorch.org/t/torchvision-installed-but-not-torch/51758
The issue persists even if I open just a python interactive and try to import it:
(pytorch_p36) ubuntu@ip-123-12-21-123:~$ python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
>>>
It also happens running the script directly:
(pytorch_p36) ubuntu@ip-123-12-21-123:~/project/folder$ python high_performing_data_point_models_cifar10.py
Traceback (most recent call last):
File "high_performing_data_point_models_cifar10.py", line 5, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
I can't import torchvision either!
$ python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchvision'
>>>
| Your conda list command shows that it was run from the environment called automl:
# packages in environment at /home/ubuntu/anaconda3/envs/automl:
However, when you show the commands that you are trying to run, you are doing so from the (pytorch_p36) environment.
You should run your conda install command while inside this pytorch_p36 environment.
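For example, on the AWS instance that would look something like this (a sketch; the environment name is taken from your prompt):
source activate pytorch_p36   # or: conda activate pytorch_p36
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
python -c "import torch; print(torch.__version__)"   # quick sanity check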
| https://stackoverflow.com/questions/57233958/ |
What is the difference between .flatten() and .view(-1) in PyTorch? | Both .flatten() and .view(-1) flatten a tensor in PyTorch. What's the difference?
Does .flatten() copy the data of the tensor?
Is .view(-1) faster?
Is there any situation that .flatten() doesn't work?
| In addition to @adeelh's comment, there is another difference: torch.flatten() results in a .reshape(), and the differences between .reshape() and .view() are:
[...] torch.reshape may return a copy or a view of the original tensor. You can not count on that to return a view or a copy.
Another difference is that reshape() can operate on both contiguous and non-contiguous tensors, while view() can only operate on contiguous tensors. Also see here about the meaning of contiguous.
For context:
The community requested a flatten function for a while, and after Issue #7743, the feature was implemented in PR #8578.
You can see the implementation of flatten here, where a call to .reshape() can be seen in the return line.
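To see the practical difference around contiguity, a quick sketch:
import torch
x = torch.arange(6).reshape(2, 3).t()   # the transpose makes the tensor non-contiguous
print(x.is_contiguous())                # False
print(x.flatten())                      # works: falls back to a reshape (which may copy)
try:
    x.view(-1)                          # fails: view requires a contiguous tensor
except RuntimeError as e:
    print('view failed:', e)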
| https://stackoverflow.com/questions/57234095/ |
saving and loading RNN hidden states in PyTorch | I am trying to use an RNN network in PyTorch for a regression task. In the training phase the model is trained. I want to use the trained model in the testing phase. For this purpose I have saved the learned model by:
torch.save(learned_model, "model_path")
Then I can load the model again by:
loaded_model = torch.load("model_path")
For the testing phase I must use this loaded model, but I want to know what the value of the first hidden state of the model should be. I can initialize the first hidden state with zeros, but I think maybe this is not correct. Is there any function other than torch.save which can return the last hidden state of the learned model? Then I could restore that hidden state and use it as the first hidden state in the loaded model for the testing phase.
Thanks in advance.
| Your question is a bit unclear. As far as I understand, you want to know the weights of the last hidden layer in the trained model, i.e. loaded_model. In that case, you can simply use the model's state_dict, which is basically a Python dictionary object that maps each layer to its parameter tensor. Read more about it here.
for param in loaded_model.state_dict():
print(param)
Sample output:
rnn.weight_ih_l0
rnn.weight_hh_l0
rnn.bias_ih_l0
rnn.bias_hh_l0
out.weight
out.bias
After that, you can get the weights of the last hidden layer using below code:
out_weights, out_bias = loaded_model.state_dict()['out.weight'], loaded_model.state_dict()['out.bias']
| https://stackoverflow.com/questions/57238839/ |
Pytorch dataloader, too many threads, too much cpu memory allocation | I'm training a model using PyTorch. To load the data, I'm using torch.utils.data.DataLoader. The data loader is using a custom database I've implemented. A strange problem has occurred: every time the second for loop in the following code executes, the number of threads/processes increases and a huge amount of memory is allocated.
for epoch in range(start_epoch, opt.niter + opt.niter_decay + 1):
epoch_start_time = time.time()
if epoch != start_epoch:
epoch_iter = epoch_iter % dataset_size
for i, item in tqdm(enumerate(dataset, start=epoch_iter)):
I suspect the threads and memory of the previous iterators are not released after each __iter__() call to the data loader.
The allocated memory is close to the amount of memory used by the main thread/process when the threads are created. That is, in the initial epoch the main thread is using 2GB of memory, so 2 threads of size 2GB are created. In the next epochs, 5GB of memory is allocated by the main thread and two 5GB threads are constructed (num_workers is 2).
I suspect that the fork() function copies most of the context to the new threads.
The following is the Activity Monitor showing the processes created by Python; ZMQbg/1 are processes related to Python.
My dataset used by the data loader has 100 sub-datasets, the __getitem__ call randomly selects one (ignoring the index). (the sub-datasets are AlignedDataset from pix2pixHD GitHub repository):
| torch.utils.data.DataLoader prefetches 2 * num_workers batches, so that you will always have data ready to send to the GPU/CPU; this could be the reason you see the memory increase.
https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html
| https://stackoverflow.com/questions/57250275/ |
How can I use a PyTorch DataLoader for Reinforcement Learning? | I'm trying to set up a generalized Reinforcement Learning framework in PyTorch to take advantage of all the high-level utilities out there which leverage PyTorch DataSet and DataLoader, like Ignite or FastAI, but I've hit a blocker with the dynamic nature of Reinforcement Learning data:
Data Items are generated from code, not read from a file, and they are dependent on previous actions and model results, therefore each nextItem call needs access to the model state.
Training episodes are not fixed length so I need a dynamic batch size as well as a dynamic total data set size. My preference would be to use a terminating condition function instead of a number. I could "possibly" do this with padding, as in NLP sentence processing, but that's a real hack.
My Google and StackOverflow searches so far have yielded zilch. Does anyone here know of existing solutions or workarounds for using DataLoader or DataSet with Reinforcement Learning? I hate to lose access to all the existing libraries out there which depend on those.
| Here is one PyTorch-based framework and here is something from Facebook.
When it comes to your question (and noble quest, no doubt):
You could easily create a torch.utils.data.Dataset dependent on anything, including the model, something like this (pardon weak abstraction, it's just to prove a point):
import typing
import torch
from torch.utils.data import Dataset
class Environment(Dataset):
def __init__(self, initial_state, actor: torch.nn.Module, max_interactions: int):
self.current_state = initial_state
self.actor: torch.nn.Module = actor
self.max_interactions: int = max_interactions
# Just ignore the index
def __getitem__(self, _):
self.current_state = self.actor.update(self.current_state)
return self.current_state.get_data()
def __len__(self):
return self.max_interactions
Assuming the torch.nn.Module-like network has some kind of update method changing the state of the environment. All in all, it's just a Python structure, so you could model a lot of things with it.
You could specify max_interactions to be almost infinite, or you could change it on the fly if needed with some callbacks during training (as __len__ will probably be called multiple times throughout the code). The Environment could furthermore provide batches instead of samples.
torch.utils.data.DataLoader has a batch_sampler argument; there you could generate batches of varying length. As the network is not dependent on the first dimension, you could return any batch size you want from there as well.
BTW, padding should be used if each sample is of a different length; varying batch size has nothing to do with that.
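For completeness, a minimal usage sketch of the Environment above (initial_state and actor are placeholders for whatever your agent provides):
from torch.utils.data import DataLoader
env = Environment(initial_state, actor, max_interactions=10000)
loader = DataLoader(env, batch_size=32)   # or pass a custom batch_sampler for varying batch sizes
for batch in loader:
    ...   # run your training step on the generated batch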
| https://stackoverflow.com/questions/57258323/ |
How to Build a Simple LSTM Model for Anomaly Detection | I want to create an LSTM model in PyTorch that will be used for anomaly detection, but I'm having trouble understanding the details of doing so.
Note, my training-data consists of sets with 16 features in 80 time-steps. Here is what I've written for the model below:
class AutoEncoder(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim):
super(AutoEncoder, self).__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.fc1 = torch.nn.Linear(hidden_dim, hidden_dim)
self.fc2 = torch.nn.Linear(hidden_dim, input_dim)
def forward(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
out = self.fc1(out[:, -1, :])
out = self.fc2(out)
return out
input_dim = 16
hidden_dim = 8
layer_dim = 2
model = AutoEncoder(input_dim, hidden_dim, layer_dim)
I don't think I've built the model correctly. How does it know that I'm feeding it 80 time-steps of data? How will the auto-encoder reconstruct those 80 time-steps of data?
I'm having a hard time understanding the material online. What would the final layer have to be?
| If you check out the PyTorch LSTM documentation, you will see that the LSTM equations are applied to each timestep in your sequence. nn.LSTM will internally obtain the seq_len dimension and optimize from there, so you do not need to provide the number of time steps.
At the moment, the line
out = self.fc1(out[:, -1, :])
is selecting the final hidden state (corresponding to time step 80) and then this is being projected onto a space of size input_dim.
To output a sequence of length 80, you should have an output for each hidden state. All hidden states are stacked in out so you can simply use
out = self.fc1(out)
out = self.fc2(out)
I would also note that if you must have two fully connected layers after encoding your hidden state, you should use a non-linearity in between; otherwise this is equivalent to just one layer, but with more parameters.
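Putting both points together, a minimal sketch of the revised forward (the ReLU is the non-linearity mentioned above):
def forward(self, x):
    h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
    c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
    out, (hn, cn) = self.lstm(x, (h0, c0))   # out: [batch, 80, hidden_dim]
    out = torch.relu(self.fc1(out))          # projection applied to every time step
    out = self.fc2(out)                      # out: [batch, 80, input_dim]
    return out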
| https://stackoverflow.com/questions/57258882/ |
Object Detection inference using multi-gpu & multi threading, Pytorch | I am trying to detect objects in a video using multiple GPUs. I want to distribute frames to the GPUs for inference to reduce the total processing time. I succeeded in running inference on a single GPU, but failed to run on multiple GPUs.
I thought dividing the frames among the GPUs and running inference on them would decrease the time. If there is another way I can decrease the running time, I would be glad to receive suggestions.
I am using pre-trained model provided by Pytorch. What I tried is as follows:
1. I read the video and divide the frames by the number of GPUs I have (currently two NVIDIA GeForce GTX 1080 Ti).
2. Then, I distribute the frames to the GPUs and run object detection inference.
(Later I plan to use multiple threads to dynamically distribute frames across the GPUs, but currently it is static.)
The same method worked well in TensorFlow using with tf.device(), and I am trying to make it possible in PyTorch as well.
pytorch_multithread.py
...
def detection_gpu(frame_list, device_name, device, detect, model):
model.to(device)
model.eval()
for frame in frame_list:
start = time.time()
detect.bounding_box_rcnn(frame, model=model)
end = time.time()
cv2.putText(frame, '{:.2f}ms'.format((end - start) * 1000), (40, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.75,
(255, 0, 0),
2)
cv2.imshow(str(device_name), frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
def main():
args = arg_parse()
VIDEO_PATH = args.video
print("Loading network.....")
model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
print("Network successfully loaded")
num_gpus = torch.cuda.device_count()
if torch.cuda.is_available() and num_gpus > 1:
device = ["cuda:{}".format(i) for i in range(num_gpus)]
elif num_gpus == 1:
device = "cuda"
else:
device = "cpu"
# class names ex) person, car, truck, and etc.
PATH_TO_LABELS = "labels/mscoco_labels.names"
# load detection class, default confidence threshold is 0.5
if num_gpus>1:
detect = [DetectBoxes(PATH_TO_LABELS, device[i], conf_threshold=args.confidence) for i in range(num_gpus)]
else:
detect = [DetectBoxes(PATH_TO_LABELS, device, conf_threshold=args.confidence) for i in range(1)]
cap = cv2.VideoCapture(VIDEO_PATH)
# find number of gpus that is available
frame_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# TODO: consider the CPU-only environment
# divide frames of video by number of gpus
div = frame_length // num_gpus
divide_point = [i for i in range(frame_length) if i != 0 and i % div == 0]
divide_point.pop()
frame_list = []
fragments = []
count = 0
while cap.isOpened():
hasFrame, frame = cap.read()
if not hasFrame:
frame_list.append(fragments)
break
if count in divide_point:
frame_list.append(fragments)
fragments = []
fragments.append(frame)
count += 1
cap.release()
detection_gpu(frame_list[0], 0, device[0], detect[0], model)
detection_gpu(frame_list[1], 1, device[1], detect[1], model)
# Process object detection using threading
# thread_detection = [ThreadWithReturnValue(target=detection_gpu,
# args=(frame_list[i], i, detect, model))
# for i in range(num_gpus)]
#
#
# final_list = []
# # Begin operating threads
# for th in thread_detection:
# th.start()
#
# # Once tasks are completed get return value (frames) and put to new list
# for th in thread_detection:
# final_list.extend(th.join())
cv2.destroyAllWindows()
detection_boxes_pytorch.py
def bounding_box_rcnn(self, frame, model):
print(self.device)
# Image is converted to image Tensor
transform = transforms.Compose([transforms.ToTensor()])
img = transform(frame).to(self.device)
with torch.no_grad():
# The image is passed through model to get predictions
pred = model([img])
# classes, bounding boxes, confidence scores are gained
# only classes and bounding boxes > confThershold are passed to draw_boxes
pred_class = [self.classes[i] for i in list(pred[0]['labels'].cpu().clone().numpy())]
pred_boxes = [[(i[0], i[1]), (i[2], i[3])] for i in list(pred[0]['boxes'].detach().cpu().clone().numpy())]
pred_score = list(pred[0]['scores'].detach().cpu().clone().numpy())
pred_t = [pred_score.index(x) for x in pred_score if x > self.confThreshold][-1]
pred_colors = [i for i in list(pred[0]['labels'].cpu().clone().numpy())]
pred_boxes = pred_boxes[:pred_t + 1]
pred_class = pred_class[:pred_t + 1]
for i in range(len(pred_boxes)):
left = int(pred_boxes[i][0][0])
top = int(pred_boxes[i][0][1])
right = int(pred_boxes[i][1][0])
bottom = int(pred_boxes[i][1][1])
color = STANDARD_COLORS[pred_colors[i] % len(STANDARD_COLORS)]
self.draw_boxes(frame, pred_class[i], pred_score[i], left, top, right, bottom, color)
The error I get is as follows:
Traceback (most recent call last):
File "C:/Users/username/Desktop/Object_Detection_Video_AllInOne/pytorch_multithread.py", line 133, in <module>
main()
File "C:/Users/username/Desktop/Object_Detection_Video_AllInOne/pytorch_multithread.py", line 113, in main
detection_gpu(frame_list[1], 1, device[1], detect[1], model)
File "C:/Users/username/Desktop/Object_Detection_Video_AllInOne/pytorch_multithread.py", line 39, in detection_gpu
detect.bounding_box_rcnn(frame, model=model)
File "C:\Users\username\Desktop\Object_Detection_Video_AllInOne\p_utils\detection_boxes_pytorch.py", line 64, in bounding_box_rcnn
pred = model([img])
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\models\detection\generalized_rcnn.py", line 51, in forward
proposals, proposal_losses = self.rpn(images, features, targets)
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\models\detection\rpn.py", line 409, in forward
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\models\detection\_utils.py", line 168, in decode
rel_codes.reshape(sum(boxes_per_image), -1), concat_boxes
File "C:\Users\username\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\models\detection\_utils.py", line 199, in decode_single
pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
RuntimeError: binary_op(): expected both inputs to be on same device, but input a is on cuda:1 and input b is on cuda:0
| PyTorch provides the DataParallel module to run a model on multiple GPUs. Detailed documentation of DataParallel and a toy example can be found here and here.
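A minimal usage sketch (MyModel and input_batch are placeholders; DataParallel splits the batch dimension across all visible GPUs and gathers the outputs back on the default device):
import torch
import torch.nn as nn
model = MyModel()                 # placeholder for your detection model
model = nn.DataParallel(model)    # uses all visible GPUs by default
model.to('cuda')
model.eval()
with torch.no_grad():
    outputs = model(input_batch.to('cuda'))   # input_batch: placeholder tensor of stacked, preprocessed frames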
| https://stackoverflow.com/questions/57264800/ |
Fine Tuning Pretrained Model MobileNet_V2 in Pytorch | I am new to PyTorch and I am trying to create a classifier where I have around 10 kinds of images in a folder dataset. For this task I am using a pretrained model (MobileNet_v2), but the problem is that I am not able to change its FC layer. There is no model.fc attribute.
Can anyone help me to do this.
Thanks
| Do something like below:
import torch
model = torch.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
print(model.classifier)
model.classifier[1] = torch.nn.Linear(in_features=model.classifier[1].in_features, out_features=10)
print(model.classifier)
output:
Sequential(
(0): Dropout(p=0.2)
(1): Linear(in_features=1280, out_features=1000, bias=True)
)
Sequential(
(0): Dropout(p=0.2)
(1): Linear(in_features=1280, out_features=10, bias=True)
)
Note: you would need torch >= 1.1.0 to use torch.hub.
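If you prefer not to go through torch.hub, the same architecture is also available directly from torchvision (assuming torchvision >= 0.3):
import torch
from torchvision import models
model = models.mobilenet_v2(pretrained=True)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 10)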
| https://stackoverflow.com/questions/57285224/ |
How are the parentheses used right after the object's name? | I want to know about the parentheses right after an object's name.
I am learning AI and building an AI model. In the tutorial's code, the author has written a line containing parentheses right after the object's name: self.model(...)
Here self.model is an object of the Network class.
How can an object be followed by parentheses when it is an object, not a function?
That is what I want to understand about the parentheses after the object's name.
class Network(nn.Module):
def __init__(self, input_size, nb_action):
super(Network, self).__init__()
self.input_size = input_size
self.nb_action = nb_action
self.fc1 = nn.Linear(input_size, 30)
self.fc2 = nn.Linear(30, nb_action)
def forward(self, state):
x = F.relu(self.fc1(state))
q_values = self.fc2(x)
return q_values
class Dqn():
def __init__(self, input_size, nb_action, gamma):
self.gamma = gamma
self.reward_window = []
self.model = Network(input_size, nb_action)
self.memory = ReplayMemory(100000)
self.optimizer = optim.Adam(self.model.parameters(), lr = 0.001)
self.last_state = torch.Tensor(input_size).unsqueeze(0)
self.last_action = 0
self.last_reward = 0
def select_action(self, state):
probs = F.softmax(self.model(Variable(state, volatile = True))*100) # <-- The problem is here where the self.model object is CALLED with Parenthesis.
action = probs.multinomial(10)
return action.data[0,0]
| In python, everything is an object. The functions you create and call are also objects. Anything in python that can be called is a callable object.
However, if you want a class object in python to be a callable object, the __call__ method must be defined inside the class.
When the object is called, the __call__(self, ...) method gets called.
Here is an example.
class Foo:
def __init__(self, x=0):
self.x = x
def __str__(self):
return f'{self.__class__.__name__}({self.x})'
def __call__(self, *args):
print(f'{self} has been called with {args} as arguments')
f1 = Foo(5)
f2 = Foo()
f1() # f1.__call__()
f2() # f2.__call__()
f1(1, 2, 3) # f1.__call__(1, 2, 3)
f2(10, 20) # f2.__call__(10, 20)
Output:
Foo(5) has been called with () as arguments
Foo(0) has been called with () as arguments
Foo(5) has been called with (1, 2, 3) as arguments
Foo(0) has been called with (10, 20) as arguments
This is how you can make a python object a callable object.
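In PyTorch specifically, nn.Module implements __call__ so that calling a module runs its forward method (plus any registered hooks). That is why self.model(...) in the question ends up executing Network.forward; roughly:
q_values = self.model(state_tensor)   # state_tensor is a placeholder; this is ~ self.model.__call__(state_tensor), which calls Network.forward(state_tensor)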
| https://stackoverflow.com/questions/57296738/ |
Pytorch Grayscale input to Vgg | I am new to PyTorch and I want to use VGG for transfer learning.
I want to delete the fully connected layers and add some new fully connected layers. Also, rather than RGB input I want to use grayscale input. For this I will add up the weights of the input layer's three channels to get a single-channel weight.
I achieved deleting the fully connected layers, but I am having trouble with the grayscale part. I add the three channel weights together to form a new weight. Then I try to change the state dict of the VGG model, but this gives me an error. The network's code is below:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
vgg=models.vgg16(pretrained = True).features[:30]
w1=vgg.state_dict()['0.weight'][:,0,:,:] #first channel of first input layer's weight
w2=vgg.state_dict()['0.weight'][:,1,:,:]
w3=vgg.state_dict()['0.weight'][:,2,:,:]
w4=w1+w2+w3 # add the three weigths of the channels
w4=w4.unsqueeze(1) # make it 4 dimensional
a=vgg.state_dict()#create a new statedict
a['0.weight']=w4 #replace the new state dict's weigt
vgg.load_state_dict(a) # this line gives the error,load the new state dict
self.vgg =nn.Sequential(vgg)
self.fc1 = nn.Linear(14*14*512, 1000)
self.fc2 = nn.Linear(1000, 2)
def forward(self, x):
x = self.vgg(x)
x = x.view(-1, 14 * 14 * 512)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
This gives an error of:
RuntimeError: Error(s) in loading state_dict for Sequential: size
mismatch for 0.weight: copying a param with shape torch.Size([64, 1,
3, 3]) from checkpoint, the shape in current model is torch.Size([64,
3, 3, 3]).
So it doesn't allow me to replace the weight with a differently sized weight. Is there a solution for this problem, or is there anything else that I can try? All I want to do is use VGG's layers up to the fully connected layers and change the first layer's weights.
|
In short: the error is caused by a mismatch between the pretrained parameters and the VGG model's structure.
Reason: You modified the parameters in the pretrained state dict from [64,3,3,3] -> [64,1,3,3] by summing the channels, but you didn't change the structure of VGG, which still expects a [64,3,3,3]-shaped weight (i.e. a 3-channel input).
Resolution: Remove the first convolution layer of the VGG structure and add a new one that fits your input shape.
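A minimal sketch of that resolution, applied to the code in the question (it reuses the summed weights and the bias from the pretrained first layer; variable names follow the question):
vgg = models.vgg16(pretrained=True).features[:30]
w4 = vgg.state_dict()['0.weight'].sum(dim=1, keepdim=True)   # [64, 3, 3, 3] -> [64, 1, 3, 3]
new_first = nn.Conv2d(1, 64, kernel_size=3, padding=1)       # single-channel replacement for layer 0
new_first.weight.data = w4
new_first.bias.data = vgg.state_dict()['0.bias']
vgg[0] = new_first   # swap the first convolution in place; the rest of the weights stay as loaded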
| https://stackoverflow.com/questions/57296799/ |
Trouble Converting LSTM Pytorch Model to ONNX | I am trying to export my LSTM Anomally-Detection Pytorch model to ONNX, but I'm experiencing errors. Please take a look at my code below.
Note: My data is shaped as [2685, 5, 6].
Here is where I define my model:
class Model(torch.nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim):
super(Model, self).__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.fc1 = torch.nn.Linear(hidden_dim, hidden_dim)
self.fc2 = torch.nn.Linear(hidden_dim, input_dim)
def forward(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
out = self.fc1(out)
out = self.fc2(out)
return out
input_dim = 6
hidden_dim = 3
layer_dim = 2
model = Model(input_dim, hidden_dim, layer_dim)
I can train it and test with it fine. But the problem comes when exporting:
model.eval()
import torch.onnx
torch_out = torch.onnx.export(model,
torch.randn(2685, 5, 6),
"onnx_model.onnx",
export_params = True
)
But I have the following error:
LSTM(6, 3, num_layers=2, batch_first=True)
Linear(in_features=3, out_features=3, bias=True)
Linear(in_features=3, out_features=6, bias=True)
['input_1', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear']
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/symbolic.py:173: UserWarning: ONNX export failed on RNN/GRU/LSTM because batch_first not supported
warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported")
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-264-28c6c55537ab> in <module>()
10 torch.randn(2685, 5, 6),
11 "onnx_model.onnx",
---> 12 export_params = True
13 )
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/__init__.py in export(*args, **kwargs)
23 def export(*args, **kwargs):
24 from torch.onnx import utils
---> 25 return utils.export(*args, **kwargs)
26
27
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
129 operator_export_type=operator_export_type, opset_version=opset_version,
130 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
--> 131 strip_doc_string=strip_doc_string)
132
133
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
367 if export_params:
368 proto, export_map = graph._export_onnx(params_dict, opset_version, defer_weight_export, operator_export_type,
--> 369 strip_doc_string)
370 else:
371 proto, export_map = graph._export_onnx({}, opset_version, False, operator_export_type, strip_doc_string)
RuntimeError: ONNX export failed: Couldn't export operator aten::lstm
Defined at:
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(522): forward_impl
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(539): forward_tensor
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(559): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(481): _slow_forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(491): __call__
<ipython-input-255-468cef410a2c>(14): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(481): _slow_forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(491): __call__
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(294): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(493): __call__
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(231): get_trace_graph
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(225): _trace_and_get_graph_from_model
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(266): _model_to_graph
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(363): _export
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(131): export
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/__init__.py(25): export
<ipython-input-264-28c6c55537ab>(12): <module>
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2903): run_ast_nodes
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/events.py(145): _run
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/base_events.py(1432): _run_once
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/base_events.py(422): run_forever
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/__main__.py(3): <module>
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/runpy.py(85): _run_code
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/runpy.py(193): _run_module_as_main
Graph we tried to export:
graph(%input.1 : Float(2685, 5, 6),
%lstm.weight_ih_l0 : Float(12, 6),
%lstm.weight_hh_l0 : Float(12, 3),
%lstm.bias_ih_l0 : Float(12),
%lstm.bias_hh_l0 : Float(12),
%lstm.weight_ih_l1 : Float(12, 3),
%lstm.weight_hh_l1 : Float(12, 3),
%lstm.bias_ih_l1 : Float(12),
%lstm.bias_hh_l1 : Float(12),
%fc1.weight : Float(3, 3),
%fc1.bias : Float(3),
%fc2.weight : Float(6, 3),
%fc2.bias : Float(6)):
%13 : Long() = onnx::Constant[value={0}](), scope: Model
%14 : Tensor = onnx::Shape(%input.1), scope: Model
%15 : Long() = onnx::Gather[axis=0](%14, %13), scope: Model
%16 : Long() = onnx::Constant[value={2}](), scope: Model
%17 : Long() = onnx::Constant[value={3}](), scope: Model
%18 : Tensor = onnx::Unsqueeze[axes=[0]](%16)
%19 : Tensor = onnx::Unsqueeze[axes=[0]](%15)
%20 : Tensor = onnx::Unsqueeze[axes=[0]](%17)
%21 : Tensor = onnx::Concat[axis=0](%18, %19, %20)
%22 : Float(2, 2685, 3) = onnx::ConstantOfShape[value={0}](%21), scope: Model
%23 : Long() = onnx::Constant[value={0}](), scope: Model
%24 : Tensor = onnx::Shape(%input.1), scope: Model
%25 : Long() = onnx::Gather[axis=0](%24, %23), scope: Model
%26 : Long() = onnx::Constant[value={2}](), scope: Model
%27 : Long() = onnx::Constant[value={3}](), scope: Model
%28 : Tensor = onnx::Unsqueeze[axes=[0]](%26)
%29 : Tensor = onnx::Unsqueeze[axes=[0]](%25)
%30 : Tensor = onnx::Unsqueeze[axes=[0]](%27)
%31 : Tensor = onnx::Concat[axis=0](%28, %29, %30)
%32 : Float(2, 2685, 3) = onnx::ConstantOfShape[value={0}](%31), scope: Model
%33 : Long() = onnx::Constant[value={1}](), scope: Model/LSTM[lstm]
%34 : Long() = onnx::Constant[value={2}](), scope: Model/LSTM[lstm]
%35 : Double() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%36 : Long() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%37 : Long() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%38 : Long() = onnx::Constant[value={1}](), scope: Model/LSTM[lstm]
%input.2 : Float(2685!, 5!, 3), %40 : Float(2, 2685, 3), %41 : Float(2, 2685, 3) = aten::lstm(%input.1, %22, %32, %lstm.weight_ih_l0, %lstm.weight_hh_l0, %lstm.bias_ih_l0, %lstm.bias_hh_l0, %lstm.weight_ih_l1, %lstm.weight_hh_l1, %lstm.bias_ih_l1, %lstm.bias_hh_l1, %33, %34, %35, %36, %37, %38), scope: Model/LSTM[lstm]
%42 : Float(3!, 3!) = onnx::Transpose[perm=[1, 0]](%fc1.weight), scope: Model/Linear[fc1]
%43 : Float(2685, 5, 3) = onnx::MatMul(%input.2, %42), scope: Model/Linear[fc1]
%44 : Float(2685, 5, 3) = onnx::Add(%43, %fc1.bias), scope: Model/Linear[fc1]
%45 : Float(3!, 6!) = onnx::Transpose[perm=[1, 0]](%fc2.weight), scope: Model/Linear[fc2]
%46 : Float(2685, 5, 6) = onnx::MatMul(%44, %45), scope: Model/Linear[fc2]
%47 : Float(2685, 5, 6) = onnx::Add(%46, %fc2.bias), scope: Model/Linear[fc2]
return (%47)
What does this mean? What should I do to export properly?
| If you're coming here from Google the previous answers are no longer up to date. ONNX now supports an LSTM operator. Take care as exporting from PyTorch will fix the input sequence length by default unless you use the dynamic_axes parameter.
Below is a minimal LSTM export example I adapted from the torch.onnx FAQ
import torch
import onnx
from torch import nn
import numpy as np
import onnxruntime.backend as backend
torch.manual_seed(0)
layer_count = 4
model = nn.LSTM(10, 20, num_layers=layer_count, bidirectional=True)
model.eval()
with torch.no_grad():
input = torch.randn(1, 3, 10)
h0 = torch.randn(layer_count * 2, 3, 20)
c0 = torch.randn(layer_count * 2, 3, 20)
output, (hn, cn) = model(input, (h0, c0))
# default export
torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx')
onnx_model = onnx.load('lstm.onnx')
# input shape [5, 3, 10]
print(onnx_model.graph.input[0])
# export with `dynamic_axes`
torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx',
input_names=['input', 'h0', 'c0'],
output_names=['output', 'hn', 'cn'],
dynamic_axes={'input': {0: 'sequence'}, 'output': {0: 'sequence'}})
onnx_model = onnx.load('lstm.onnx')
# input shape ['sequence', 3, 10]
# Check export
y, (hn, cn) = model(input, (h0, c0))
y_onnx, hn_onnx, cn_onnx = backend.run(
onnx_model,
[input.numpy(), h0.numpy(), c0.numpy()],
device='CPU'
)
np.testing.assert_almost_equal(y_onnx, y.detach(), decimal=5)
np.testing.assert_almost_equal(hn_onnx, hn.detach(), decimal=5)
np.testing.assert_almost_equal(cn_onnx, cn.detach(), decimal=5)
I've tested this example with:
torch==1.4.0,
onnx=1.7.0
| https://stackoverflow.com/questions/57299674/ |
Should the embedding layer be changed while training a neural network? | I'm new to the field of deep learning and PyTorch.
Recently, while studying one of the PyTorch tutorial examples for an NER task, I found that the embeddings of nn.Embedding changed during training.
So my question is: should the embedding be changed while training the network?
And if I want to load a pre-trained embedding (for example, a trained word2vec embedding) into a PyTorch embedding layer, should the pre-trained embedding also be changed during the training process?
And how could I prevent the embeddings from being updated?
Thank you.
| One can either learn embeddings during the task, fine-tune them for the task at hand, or leave them as they are (provided they have been learned in some fashion before).
In the last case, with standard embeddings like word2vec one eventually fine-tunes (using a small learning rate), but uses the provided vocabulary and embeddings. When it comes to the current SOTA like BERT, fine-tuning on your data should always be done, but in an unsupervised way (as it was trained originally).
The easiest way to use them is the static method torch.nn.Embedding.from_pretrained (docs), providing a Tensor with your pretrained data.
If you want the layer to be trainable, pass freeze=False; by default freeze=True, so the embeddings are not updated, which is what you want here.
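A minimal sketch (the pretrained matrix here is random just to show the call; in practice it would hold your word2vec vectors):
import torch
import torch.nn as nn
pretrained = torch.randn(10000, 300)   # vocab_size x embedding_dim
emb_frozen = nn.Embedding.from_pretrained(pretrained)                 # freeze=True by default: not updated
emb_tuned = nn.Embedding.from_pretrained(pretrained, freeze=False)    # will be fine-tuned during training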
| https://stackoverflow.com/questions/57303955/ |
RuntimeError: size mismatch, m1: [192 x 68], m2: [1024 x 68] at /opt/conda/conda-bld/pytorch_/work/aten/src/THC/generic/THCTensorMathBlas.cu:268 | I'm getting a size mismatch error that I can't understand.
(Pdb) self.W_di
Linear(in_features=68, out_features=1024, bias=True)
(Pdb) indices.size()
torch.Size([32, 6, 68])
(Pdb) self.W_di(indices)
*** RuntimeError: size mismatch, m1: [192 x 68], m2: [1024 x 68] at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/generic/THCTensorMathBlas.cu:268
Why is there a mismatch?
Maybe because of the way I defined the weight in forward (instead of __init__)?
This is how I defined self.W_di:
def forward(self):
if self.W_di is None:
self.W_di_weight = nn.Parameter(torch.randn(mL_n * 2,1024).to(device))
self.W_di_bias = nn.Parameter(torch.ones(1024).to(device))
self.W_di = nn.Linear(mL_n * 2, 1024)
self.W_di.weight = self.W_di_weight
self.W_di.bias = self.W_di_bias
result = self.W_di(indices)
Any pointer would be highly appreciated!
| Check my answer here. In general, you may set
self.W_di = nn.Linear(mL_n * 2, 68)
Or increase the in_features so that they match the last dimension of your input.
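Note also that nn.Linear stores its weight as (out_features, in_features), so any manually assigned weight tensor has to follow that shape; a quick check:
import torch.nn as nn
layer = nn.Linear(68, 1024)
print(layer.weight.shape)   # torch.Size([1024, 68]) -- (out_features, in_features)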
| https://stackoverflow.com/questions/57321704/ |
Assign Keras/TF/PyTorch layer to hardware type | Suppose we have the following architecture:
Multiple CNN layers
RNN layer
(Time-distributed) Dense classification layer
We want to train this architecture now. Our fancy GPU is very fast at solving the CNN layers. Although using a lower clockrate, it can perform many convolutions in parallel, thus the speed. Our fancy CPU however is faster for the (very long) resulting time series, because the time steps cannot be parallelized, and the processing profits from the higher CPU clockrate. So the (supposedly) smart idea for execution would look like this:
Multiple CNN layers (run on GPU)
RNN layer (run on CPU)
(Time-distributed) Dense classification layer (run on GPU/CPU)
This lead me to two important questions:
Is it possible, with any of the frameworks mentioned in the title, to distribute certain layers to certain hardware, and how?
If it is possible, would the overhead for the additional memory operations, e.g. transferring between GPU-/CPU-RAM, render the whole idea useless?
| Basically, in PyTorch you can control the device on which variables/parameters reside. AFAIK, it is your responsibility to make sure that for each operation all the arguments reside on the same device: i.e., you cannot do conv(x, y) where x is on the GPU and y is on the CPU.
This is done via PyTorch's .to() method, which moves a module/variable with .to('cpu') or .to('cuda:0').
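A minimal sketch of the idea for the architecture in the question (requires a CUDA device; whether the extra CPU<->GPU copies pay off has to be measured):
import torch
import torch.nn as nn
class HybridNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Conv1d(1, 16, kernel_size=3).to('cuda:0')   # convolutions on the GPU
        self.rnn = nn.GRU(16, 32, batch_first=True).to('cpu')     # recurrence on the CPU
        self.fc = nn.Linear(32, 10).to('cpu')
    def forward(self, x):
        x = self.cnn(x.to('cuda:0'))         # run the convolutions on the GPU
        x = x.permute(0, 2, 1).to('cpu')     # explicit transfer of the feature sequence to CPU RAM
        out, _ = self.rnn(x)
        return self.fc(out[:, -1])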
| https://stackoverflow.com/questions/57323465/ |
Translating Pytorch program into Keras: different results | I have translated a pytorch program into keras.
A working Pytorch program:
import numpy as np
import cv2
import torch
import torch.nn as nn
from skimage import segmentation
np.random.seed(1)
torch.manual_seed(1)
fi = "in.jpg"
class MyNet(nn.Module):
def __init__(self, n_inChannel, n_outChannel):
super(MyNet, self).__init__()
self.seq = nn.Sequential(
nn.Conv2d(n_inChannel, n_outChannel, kernel_size=3, stride=1, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(n_outChannel),
nn.Conv2d(n_outChannel, n_outChannel, kernel_size=3, stride=1, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(n_outChannel),
nn.Conv2d(n_outChannel, n_outChannel, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(n_outChannel)
)
def forward(self, x):
return self.seq(x)
im = cv2.imread(fi)
data = torch.from_numpy(np.array([im.transpose((2, 0, 1)).astype('float32')/255.]))
data = data.cuda()
labels = segmentation.slic(im, compactness=100, n_segments=10000)
labels = labels.flatten()
u_labels = np.unique(labels)
label_indexes = np.array([np.where(labels == u_label)[0] for u_label in u_labels])
n_inChannel = 3
n_outChannel = 100
model = MyNet(n_inChannel, n_outChannel)
model.cuda()
model.train()
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
label_colours = np.random.randint(255,size=(100,3))
for batch_idx in range(100):
optimizer.zero_grad()
output = model( data )[ 0 ]
output = output.permute( 1, 2, 0 ).view(-1, n_outChannel)
ignore, target = torch.max( output, 1 )
im_target = target.data.cpu().numpy()
nLabels = len(np.unique(im_target))
im_target_rgb = np.array([label_colours[ c % 100 ] for c in im_target]) # correct position of "im_target"
im_target_rgb = im_target_rgb.reshape( im.shape ).astype( np.uint8 )
for inds in label_indexes:
u_labels_, hist = np.unique(im_target[inds], return_counts=True)
im_target[inds] = u_labels_[np.argmax(hist, 0)]
target = torch.from_numpy(im_target)
target = target.cuda()
loss = loss_fn(output, target)
loss.backward()
optimizer.step()
print (batch_idx, '/', 100, ':', nLabels, loss.item())
if nLabels <= 3:
break
fo = "out.jpg"
cv2.imwrite(fo, im_target_rgb)
(source: https://github.com/kanezaki/pytorch-unsupervised-segmentation/blob/master/demo.py)
My translation into Keras:
import cv2
import numpy as np
from skimage import segmentation
from keras.layers import Conv2D, BatchNormalization, Input, Reshape
from keras.models import Model
import keras.backend as k
from keras.optimizers import SGD, Adam
from skimage.util import img_as_float
from skimage import io
from keras.models import Sequential
np.random.seed(0)
fi = "in.jpg"
im = cv2.imread(fi).astype(float)/255.
labels = segmentation.slic(im, compactness=100, n_segments=10000)
labels = labels.flatten()
print (labels.shape)
u_labels = np.unique(labels)
label_indexes = [np.where(labels == u_label)[0] for u_label in np.unique(labels)]
n_channels = 100
model = Sequential()
model.add ( Conv2D(n_channels, kernel_size=3, activation='relu', input_shape=im.shape, padding='same'))
model.add( BatchNormalization())
model.add( Conv2D(n_channels, kernel_size=3, activation='relu', padding='same'))
model.add( BatchNormalization())
model.add( Conv2D(n_channels, kernel_size=1, padding='same'))
model.add( BatchNormalization())
model.add( Reshape((im.shape[0] * im.shape[1], n_channels)))
img = np.expand_dims(im,0)
print (img.shape)
output = model.predict(img)
print (output.shape)
im_target = np.argmax(output[0], 1)
print (im_target.shape)
for inds in label_indexes:
u_labels_, hist = np.unique(im_target[inds], return_counts=True)
im_target[inds] = u_labels_[np.argmax(hist, 0)]
def custom_loss(loss_target, loss_output):
return k.categorical_crossentropy(target=k.stack(loss_target), output=k.stack(loss_output), from_logits=True)
model.compile(optimizer=SGD(lr=0.1, momentum=0.9), loss=custom_loss)
model.fit(img, output, epochs=100, batch_size=1, verbose=1)
pred_result = model.predict(x=[img])[0]
print (pred_result.shape)
target = np.argmax(pred_result, 1)
print (target.shape)
nLabels = len(np.unique(target))
label_colours = np.random.randint(255, size=(100, 3))
im_target_rgb = np.array([label_colours[c % 100] for c in im_target])
im_target_rgb = im_target_rgb.reshape(im.shape).astype(np.uint8)
cv2.imwrite("out.jpg", im_target_rgb)
However, the Keras output is really different from that of PyTorch.
Input image:
Pytorch result:
Keras result:
Could someone help me with this translation?
Edit 1:
I corrected two errors as advised by @sebrockm
1. removed `relu` from last conv layer
2. added `from_logits = True` in the loss function
Also, changed the no. of conv layers from 4 to 3 to match with the original code.
However, the output image did not improve compared to before, and the `loss` came out negative:
Epoch 99/100
1/1 [==============================] - 0s 92ms/step - loss: -22.8380
Epoch 100/100
1/1 [==============================] - 0s 99ms/step - loss: -23.039
I think that the Keras code lacks a connection between the model and the output. However, I could not figure out how to make this connection.
| Two major mistakes that I see (likely related):
The last convolutional layer in the original model does not have an activation function, while your translation uses relu.
The original model uses CrossEntropyLoss as loss function, while your model uses categorical_crossentropy with from_logits=False (the default). Without mathematical background the difference is tricky to explain, but in short: CrossEntropyLoss has a softmax built in, and that's why the model doesn't have one on the last layer. To do the same in Keras, use k.categorical_crossentropy(..., from_logits=True). "from_logits" means the input values are expected not to be "softmaxed", i.e. all values can be arbitrary. Currently, your loss function expects the output values to be "softmaxed", i.e. all values must be between 0 and 1 (and sum up to 1).
Update:
One other mistake, likely a huge one: In Keras, you calculate the output once in the beginning and never change it from there on. Then you train your model to fit on this initially generated output.
In the original pytorch code, target (which is the variable being trained on) gets updated in every training loop.
So, you cannot use Keras' fit method which is designed for doing the entire training for you (given fixed training data). You will have to replicate the training loop manually, just as it is done in the pytorch code. I'm not sure if this is easily doable with the API Keras provides. train_on_batch is one method you surely will need in your manual loop. You will have to do some more work, I'm afraid...
| https://stackoverflow.com/questions/57342987/ |
What's the difference between tf.nn.ctc_loss with pytorch.nn.CTCLoss | For the same input and label:
the output of pytorch.nn.CTCLoss is 5.74,
the output of tf.nn.ctc_loss is 129.69,
but the output of math.log(tf ctc loss) is 4.86
So what's the difference between pytorch.nn.CTCLoss and tf.nn.ctc_loss?
tf: 1.13.1
pytorch: 1.1.0
I have tried these:
log_softmax the input, and then send it to pytorch.nn.CTCLoss,
tf.nn.log_softmax the input, and then send it to tf.nn.ctc_loss
directly send the input to tf.nn.ctc_loss
directly send the input to tf.nn.ctc_loss, and then math.log(output of tf.nn.ctc_loss)
In cases 2, 3, and 4, the result of the calculation differs from that of pytorch.nn.CTCLoss.
from torch import nn
import torch
import tensorflow as tf
import math
time_step = 50 # Input sequence length
vocab_size = 20 # Number of classes
batch_size = 16 # Batch size
target_sequence_length = 30 # Target sequence length
def dense_to_sparse(dense_tensor, sequence_length):
indices = tf.where(tf.sequence_mask(sequence_length))
values = tf.gather_nd(dense_tensor, indices)
shape = tf.shape(dense_tensor, out_type=tf.int64)
return tf.SparseTensor(indices, values, shape)
def compute_loss(x, y, x_len):
ctclosses = tf.nn.ctc_loss(
y,
tf.cast(x, dtype=tf.float32),
x_len,
preprocess_collapse_repeated=False,
ctc_merge_repeated=False,
ignore_longer_outputs_than_inputs=False
)
ctclosses = tf.reduce_mean(ctclosses)
with tf.Session() as sess:
ctclosses = sess.run(ctclosses)
print(f"tf ctc loss: {ctclosses}")
print(f"tf log(ctc loss): {math.log(ctclosses)}")
minimum_target_length = 10
ctc_loss = nn.CTCLoss(blank=vocab_size - 1)
x = torch.randn(time_step, batch_size, vocab_size) # [size] = T,N,C
y = torch.randint(0, vocab_size - 2, (batch_size, target_sequence_length), dtype=torch.long) # low, high, [size]
x_lengths = torch.full((batch_size,), time_step, dtype=torch.long) # Length of inputs
y_lengths = torch.randint(minimum_target_length, target_sequence_length, (batch_size,),
dtype=torch.long) # Length of targets can be variable (even if target sequences are constant length)
loss = ctc_loss(x.log_softmax(2).detach(), y, x_lengths, y_lengths)
print(f"torch ctc loss: {loss}")
x = x.numpy()
y = y.numpy()
x_lengths = x_lengths.numpy()
y_lengths = y_lengths.numpy()
x = tf.cast(x, dtype=tf.float32)
y = tf.cast(dense_to_sparse(y, y_lengths), dtype=tf.int32)
compute_loss(x, y, x_lengths)
I expect the output of tf.nn.ctc_loss to be the same as the output of pytorch.nn.CTCLoss, but it is not. How can I make them the same?
| The automatic mean reduction of the CTCLoss of pytorch is not the same as computing all the individual losses, and then doing the mean (as you are doing in the Tensorflow implementation). Indeed from the doc of CTCLoss (pytorch):
``'mean'``: the output losses will be divided by the target lengths and
then the mean over the batch is taken.
To obtain the same value:
1- Change the reduction method to sum:
ctc_loss = nn.CTCLoss(reduction='sum')
2- Divide the loss computed by the batch_size:
loss = ctc_loss(x.log_softmax(2).detach(), y, x_lengths, y_lengths)
loss = (loss.item())/batch_size
3- Change the parameter ctc_merge_repeated of Tensorflow to True (I am assuming it is the case in the pytorch CTC as well)
ctclosses = tf.nn.ctc_loss(
y,
tf.cast(x, dtype=tf.float32),
x_len,
preprocess_collapse_repeated=False,
ctc_merge_repeated=True,
ignore_longer_outputs_than_inputs=False
)
You will now get very close results between the PyTorch loss and the TensorFlow loss (without taking the log of the value). The small remaining difference probably comes from slight differences between the implementations.
In my last three runs, I got the following values:
pytorch loss : 113.33 vs tf loss = 113.52
pytorch loss : 116.30 vs tf loss = 115.57
pytorch loss : 115.67 vs tf loss = 114.54
| https://stackoverflow.com/questions/57362240/ |
How to get the full Jacobian of a derivative in PyTorch? | Let's consider a simple tensor x and let's define another one which depends on x and has multiple dimensions: y = (x, 2x, x^2).
How can I get the full gradient dy/dx = (1, 2, 2x)?
For example, let's take the code:
import torch
from torch.autograd import grad
x = 2 * torch.ones(1)
x.requires_grad = True
y = torch.cat((x, 2*x, x*x))
# dy_dx = ???
This is what I have unsuccessfully tried so far:
>>> dy_dx = grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)
(tensor([7.], grad_fn=<AddBackward0>),)
>>> dy_dx = grad(y, x, grad_outputs=torch.Tensor([1,0,0]), create_graph=True)
(tensor([1.], grad_fn=<AddBackward0>),)
>>> dy_dx = grad(y, [x,x,x], grad_outputs=torch.eye(3), create_graph=True)
(tensor([7.], grad_fn=<AddBackward0>),)
Each time I got only part of the gradient or an accumulated version...
I know I could use a for loop using the second expression like
dy_dx = torch.zeros_like(y)
coord = torch.zeros_like(y)
for i in range (y.size(0)):
coord[i] = 1
dy_dx[i], = grad(y, x, grad_outputs=coord, create_graph=True)
coord[i] = 0
However, as I am dealing with high-dimensional tensors, this for loop could take too much time to compute. Moreover, there must be a way to compute the full Jacobian without accumulating the gradient...
Does anyone have a solution? Or an alternative?
| torch.autograd.grad in PyTorch is aggregated. To have a vector auto-differentiated with respect to the input, use torch.autograd.functional.jacobian.
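For example, a minimal sketch (this needs a PyTorch version that ships torch.autograd.functional, i.e. 1.5 or later):
import torch
from torch.autograd.functional import jacobian

def f(x):
    return torch.cat((x, 2 * x, x * x))

x = 2 * torch.ones(1)
dy_dx = jacobian(f, x)  # tensor([[1.], [2.], [4.]]) -- one row per output element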
| https://stackoverflow.com/questions/57378143/ |
Why are Pytorch and Keras implementations giving vastly different results? | I am trying to train a 1-D ConvNet for time series classification as shown in this paper (refer to FCN in Fig. 1b) https://arxiv.org/pdf/1611.06455.pdf
The Keras implementation is giving me vastly superior performance. Could someone explain why is that the case?
The code for PyTorch is as follows:
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv1d(x_train.shape[1], 128, 8)
self.bnorm1 = nn.BatchNorm1d(128)
self.conv2 = nn.Conv1d(128, 256, 5)
self.bnorm2 = nn.BatchNorm1d(256)
self.conv3 = nn.Conv1d(256, 128, 3)
self.bnorm3 = nn.BatchNorm1d(128)
self.dense = nn.Linear(128, nb_classes)
def forward(self, x):
c1=self.conv1(x)
b1 = F.relu(self.bnorm1(c1))
c2=self.conv2(b1)
b2 = F.relu(self.bnorm2(c2))
c3=self.conv3(b2)
b3 = F.relu(self.bnorm3(c3))
output = torch.mean(b3, 2)
dense1=self.dense(output)
return F.softmax(dense1)
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5, momentum=0.99)
losses=[]
for t in range(1000):
y_pred_1= model(x_train.float())
loss_1 = criterion(y_pred_1, y_train.long())
print(t, loss_1.item())
optimizer.zero_grad()
loss_1.backward()
optimizer.step()
For comparison, I use the following code for Keras:
x = keras.layers.Input(x_train.shape[1:])
conv1 = keras.layers.Conv1D(128, 8, padding='valid')(x)
conv1 = keras.layers.BatchNormalization()(conv1)
conv1 = keras.layers.Activation('relu')(conv1)
conv2 = keras.layers.Conv1D(256, 5, padding='valid')(conv1)
conv2 = keras.layers.BatchNormalization()(conv2)
conv2 = keras.layers.Activation('relu')(conv2)
conv3 = keras.layers.Conv1D(128, 3, padding='valid')(conv2)
conv3 = keras.layers.BatchNormalization()(conv3)
conv3 = keras.layers.Activation('relu')(conv3)
full = keras.layers.GlobalAveragePooling1D()(conv3)
out = keras.layers.Dense(nb_classes, activation='softmax')(full)
model = keras.models.Model(inputs=x, outputs=out)
optimizer = keras.optimizers.SGD(lr=0.5, decay=0.0, momentum=0.99)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
hist = model.fit(x_train, Y_train, batch_size=x_train.shape[0], nb_epoch=2000)
The only difference I see between the two is the initialization; however, the results are vastly different. For reference, I use the same preprocessing for both datasets, with a subtle difference in input shapes: for PyTorch (Batch_Size, Channels, Length) and for Keras (Batch_Size, Length, Channels).
| The reason for the different results is the different default parameters of the layers and the optimizer. For example, in PyTorch the decay rate of batch norm is effectively 0.9, whereas in Keras it is 0.99. Similarly, there may be other variations in the default parameters.
If you use the same parameters and a fixed random seed for initialization, there won't be much difference in the results between the two libraries.
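For example, to make the batch-norm running statistics match, one option is the following sketch (note that PyTorch's momentum is the weight given to the new batch statistics, while Keras' momentum is the decay of the moving average, so Keras' default 0.99 corresponds to momentum=0.01 in PyTorch):
import torch.nn as nn

bnorm1 = nn.BatchNorm1d(128, momentum=0.01)  # behaves like Keras' BatchNormalization(momentum=0.99)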
| https://stackoverflow.com/questions/57380020/ |
How can I compute the mean of values selected from a vector A from an indexing vector B? | I have a vector of values, for example:
import torch
v = torch.rand(6)
tensor([0.0811, 0.9658, 0.1901, 0.0872, 0.8895, 0.9647])
and an index to select values from v:
index = torch.tensor([0, 1, 0, 2, 0, 2])
I want to produce a vector mean which would compute the mean values of v grouped by indexes from index.
In this example:
mean = torch.tensor([(0.0811 + 0.1901 + 0.8895) / 3, 0.9658, (0.0872 + 0.9647) / 2, 0, 0, 0])
tensor([0.3869, 0.9658, 0.5260, 0.0000, 0.0000, 0.0000])
| One possible solution using a combination of torch.bincount and Tensor.index_add():
v = torch.tensor([0.0811, 0.9658, 0.1901, 0.0872, 0.8895, 0.9647])
index = torch.tensor([0, 1, 0, 2, 0, 2])
bincount() counts how many times each index is used in index:
bincount = torch.bincount(index, minlength=6)
# --> tensor([3, 1, 2, 0, 0, 0])
index_add() adds from v in the order given by index:
numerator = torch.zeros(6)
numerator = numerator.index_add(0, index, v)
# --> tensor([1.1607, 0.9658, 1.0520, 0.0000, 0.0000, 0.0000])
Replace zeros with 1.0 in bincount to prevent division by 0
and convert from int to float:
div = bincount.float()
div[bincount == 0] = 1.0
# --> tensor([3., 1., 2., 1., 1., 1.])
mean = numerator / div
# --> tensor([0.3869, 0.9658, 0.5260, 0.0000, 0.0000, 0.0000])
| https://stackoverflow.com/questions/57386257/ |
How to get entire dataset from dataloader in PyTorch | How can I load the entire dataset from the DataLoader? I am getting only one batch of the dataset.
This is my code
dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=64)
images, labels = next(iter(dataloader))
| You can set batch_size=dataset.__len__() in case dataset is a torch Dataset; otherwise something like batch_size=len(dataset) should work.
Beware, this might require a lot of memory depending upon your dataset.
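For example (a minimal sketch reusing the dataset from the question):
dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=len(dataset))
images, labels = next(iter(dataloader))  # one batch containing the entire dataset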
| https://stackoverflow.com/questions/57386851/ |
How to fix the error "TypeError: forward()... - CUDA | For my bachelor thesis I need to train a network with some music similarity data using GPU and CUDA.
I have tried to fix the problem several times with different approaches, but none of them worked.
use_cuda = torch.cuda.is_available()
BSG_model = bayesian_skipgram(V, EMBEDDING_DIM)
if use_cuda:
BSG_model.cuda()
optimizer = torch.optim.Adam(BSG_model.parameters(), lr=0.005)
BSG_model.train()
loss_progress = []
iter_time = time.time()
dataloader = DataLoader(data, batch_size=16, shuffle=True)
print("N_batches", len(dataloader))
for epoch in range(1):
for i, batch in enumerate(dataloader):
batch_start = time.time()
main_word = batch[:,0]
context_word = batch[:,1:]
#print("Main word:,", main_word.shape, context_word.shape)
optimizer.zero_grad()
if use_cuda:
loss = BSG_model.forward(main_word.cuda(), context_word.cuda(), use_cuda=True)
else:
loss = BSG_model.forward(main_word, context_word)
loss.backward()
optimizer.step()
batch_end = time.time() - batch_start
if i % 10 == 0:
print("epoch, batch ", epoch, i)
loss_progress.append(loss.item())
print(time.time()-iter_time)
print(loss)
iter_time = time.time()
"The expected result should be that the model starts to train the embeddings..."
"The output is the following:"
TypeError Traceback (most recent call last)
<ipython-input-36-c69aba816e22> in <module>()
34
35 if use_cuda:
---> 36 loss = BSG_model.forward(main_word.cuda(), context_word.cuda(), use_cuda=True)
37 else:
38 loss = BSG_model.forward(main_word, context_word)
TypeError: forward() got an unexpected keyword argument 'use_cuda'
| The error message is quite self-explanatory:
TypeError: forward() got an unexpected keyword argument 'use_cuda'
You call the forward function like this:
loss = BSG_model.forward(main_word.cuda(), context_word.cuda(), use_cuda=True)
with two positional arguments (main_word.cuda(), context_word.cuda()) and one keyword argument: use_cuda=True.
A keyword argument means that when the function is declared/defined it has an argument with the same name. For instance:
def forward(self, word, context, use_cuda):
...
Is a declaration of forward function with use_cuda argument.
However, it seems like you are calling forward with a use_cuda keyword argument, but the forward function you are using does not have a use_cuda argument at all!
Please look carefully at the way your BSG_model.forward function is defined.
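If you do not actually need the flag, a simpler fix is to drop the keyword from the call (a minimal sketch; since the model was moved with .cuda() and the inputs are moved with .cuda(), no extra device flag should be needed):
loss = BSG_model(main_word.cuda(), context_word.cuda())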
| https://stackoverflow.com/questions/57389421/ |
How to rotate set of 3d points using angle axis to rotation matrix? | I am trying to rotate a set of 3d points and I am looking at this function from the kornia library. If I try to rotate a point around the z-axis by pi/2, my input (axis-angle representation) should be [0, 0, pi/2]. When I use this as input into the function, it returns a 4x4 rotation matrix. However, I don't know how to apply this 4x4 matrix to my data because it is Nx3. What do I do with the output matrix? Thanks!
| If you look at their source, they only update the top-left 3 rows and 3 columns of a torch.eye(4) tensor. So I think rotation_matrix[..., :3, :3] should give you the correct 3x3 rotation matrix.
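A minimal sketch of applying it to an (N, 3) point cloud (assuming rotation_matrix is the 4x4 output for a single angle-axis vector and points is your Nx3 tensor):
import torch

R = rotation_matrix[..., :3, :3].reshape(3, 3)  # extract the 3x3 rotation block
rotated_points = points @ R.T                   # rotate every row vector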
| https://stackoverflow.com/questions/57396424/ |
How to define several layers via a loop in __init__ for Pytorch? | I am trying to define a multi-task model in PyTorch where I need a different set of layers for different tasks. I face problems in defining the layers: if I use a for loop to store different layers in a plain Python list, I get an error from the optimizer stating that model.parameters() is an empty list, which in fact it is.
Following is the code:
x_trains=[]
y_trains=[]
num_tasks=2
for i in range(num_tasks):
x_trains.append(torch.from_numpy(np.random.rand(100,1,50)).float())
y_trains.append(torch.from_numpy(np.array([np.random.randint(10) for i in range(100)])).long())
nb_classes=10
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.all_task_layers=[]
for i in range(num_tasks):
self.all_task_layers.append(nn.Conv1d(1, 128, 8))
self.all_task_layers.append(nn.BatchNorm1d(128))
self.all_task_layers.append(nn.Conv1d(128, 256, 5))
self.all_task_layers.append(nn.BatchNorm1d(256))
self.all_task_layers.append(nn.Conv1d(256, 128, 3))
self.all_task_layers.append(nn.BatchNorm1d(128))
self.all_task_layers.append(nn.Linear(128, nb_classes))
#self.dict_layers_for_tasks[i][1]
self.all_b1s=[]
self.all_b2s=[]
self.all_b3s=[]
self.all_dense1s=[]
def forward(self, x_num_tasks):
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b1s.append(F.relu(self.all_task_layers[i+1](self.all_task_layers[i+0](x_num_tasks[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b2s.append(F.relu(self.all_task_layers[i+3](self.all_task_layers[i+2](self.all_b1s[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_b3s.append(F.relu(self.all_task_layers[i+5](self.all_task_layers[i+4](self.all_b2s[i]))))
for i in range(0,len(self.all_task_layers),num_tasks):
self.all_dense1s.append(self.all_task_layers[i+6](self.all_b3s[i]))
return self.all_dense1s
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
losses=[]
for t in range(50):
y_preds= model(x_trains)
optimizer.zero_grad()
for i in range(num_tasks):
loss=criterion(y_preds[i], y_trains[i])
losses.append(loss)
print(t,i,loss.item())
loss.backward(retain_graph=True)
optimizer.step()
If I initialize the layers for the two tasks as two explicit sets of Conv->BatchNorm->Conv->BatchNorm->Conv->BatchNorm->GlobalAveragePooling->Linear, the same model works. But if I have, let's say, 5 tasks, then it can get pretty messy, and that is why I created a list of layers and indexed them. For example, the first 7 layers are for task 1 and the last 7 for task 2. But then model.parameters() gives me an empty list. How else could I do it? Or is there a simple fix I am overlooking?
| You should use nn.ModuleList() to wrap the list of layers so that they are registered as submodules and their parameters become visible to model.parameters(). For example:
self.all_task_layers = nn.ModuleList(self.all_task_layers)
see PyTorch : How to properly create a list of nn.Linear()
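A minimal sketch of the idea applied to the model in the question (it reuses torch.nn as nn and num_tasks from the question's code):
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.all_task_layers = nn.ModuleList()  # registered as a submodule, so its parameters are visible
        for i in range(num_tasks):
            self.all_task_layers.append(nn.Conv1d(1, 128, 8))
            self.all_task_layers.append(nn.BatchNorm1d(128))
            # ... append the remaining layers for task i exactly as before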
| https://stackoverflow.com/questions/57396854/ |
Colab not recognizing local gpu | I'm trying to train a neural network that I wrote, but it seems that Colab is not recognizing the GTX 1050 on my laptop. I can't use their cloud GPUs for this task, because I run into memory constraints.
print(cuda.is_available())
is returning False
| Indeed, you have to select the runtime accelerator to use GPUs or TPUs: go to Runtime, then Change runtime type, and set the hardware accelerator to GPU (it takes a few seconds to connect).
| https://stackoverflow.com/questions/57403572/ |
Best practices for generating a random seeds to seed Pytorch? | What I really want is to seed the dataset and dataloader. I am adapting code from:
https://gist.github.com/kevinzakka/d33bf8d6c7f06a9d8c76d97a7879f5cb
Anyone know how to seed this properly? What are the best practices for seeding things in PyTorch?
Honestly, I have no idea if there is an algorithm-specific way for GPU vs CPU. I care mostly about general PyTorch usage and making sure my code is "truly random", especially when it uses the GPU, I guess...
related:
https://discuss.pytorch.org/t/best-practices-for-seeding-random-numbers-on-gpu/18751
https://discuss.pytorch.org/t/the-random-seed/19516/4
https://discuss.pytorch.org/t/best-practices-for-generating-a-random-seed-to-seed-pytorch/52894/2
My answer was deleted and here is its content:
I don't know if this is the best for pytorch but this is what seems the best for any programming language:
Usually the best random sample you could get in any programming language is generated through the operating system. In Python, you can use the os module:
random_data = os.urandom(4)
In this way you get a cryptographic safe random byte sequence which you may convert in a numeric data type for using as a seed.
seed = int.from_bytes(random_data, byteorder="big")
EDIT: the snippets of code work only on Python 3
'''
Greater than 4 I get this error:
ValueError: Seed must be between 0 and 2**32 - 1
'''
RAND_SIZE = 4
| Have a look at https://pytorch.org/docs/stable/notes/randomness.html
This is what I use
import os
import random
import numpy as np
import torch

def seed_everything(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
The last two parameters (cudnn) are for the GPU.
| https://stackoverflow.com/questions/57416925/ |
How to use non-square padding for deconvnet in PyTorch | Thank you for your attention. I hope to use nn.ConvTranspose2d to expand the dimensions of a tensor in PyTorch (from (N,C,4,4) to (N,C,8,8)). However, I find that if I want to keep the kernel size at 3 and the stride at 2, I need to set the padding to [[0,0],[0,0],[0,1],[0,1]] (only one side of H and W), which is not square.
I learnt that in tensorflow we can set the padding like [[0,0],[0,0],[0,1],[0,1]], but in torch, the padding can only be a value for each dimension, which means the padding is on both sides.
So, I am wondering is there any way to do this in PyTorch?
Here's some code of details
import torch
import torch.nn as nn
a = torch.rand([100, 80, 4, 4])
a.shape
nn.ConvTranspose2d(80, 40, kernel_size=3, stride=2, padding=0)(a).shape
nn.ConvTranspose2d(80, 40, kernel_size=3, stride=2, padding=1)(a).shape
nn.ConvTranspose2d(80, 40, kernel_size=3, stride=2, padding=(0,1,0,1))(a).shape
>>> torch.Size([100, 80, 4, 4])
>>> torch.Size([100, 40, 9, 9])
>>> torch.Size([100, 40, 7, 7])
>>> RuntimeError: expected padding to be a single integer value or a list of 2 values to match the convolution dimensions, but got padding=[0, 1, 0, 1]
| You can use F.pad from the functional API:
import torch.nn.functional as F

b = nn.ConvTranspose2d(80, 40, kernel_size=3, stride=2, padding=1)(a)
c = F.pad(b, (0, 1, 0, 1), "constant", 0)
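A quick sanity check of the resulting shape (a is the [100, 80, 4, 4] tensor from the question):
print(c.shape)  # torch.Size([100, 40, 8, 8])
F.pad reads the tuple from the last dimension backwards as (left, right, top, bottom), so (0, 1, 0, 1) adds one column on the right and one row at the bottom of the 7x7 ConvTranspose2d output.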
| https://stackoverflow.com/questions/57422132/ |
outputs are different between ONNX and pytorch | I am trying to convert my PyTorch ResNet50 model to ONNX and do inference. The conversion procedure produces no errors, but the final result of the ONNX model from onnxruntime differs considerably from the result of the original PyTorch model.
What is a possible solution?
Version of ONNX: 1.5.0
Version of pytorch: 1.1.0
CUDA: 9.0
System: Ubuntu 18.06
Python: 3.5
Here is the code of conversion
import torch
import models
from collections import OrderedDict
state_dict = "/home/yx-wan/newhome/workspace/filter-pruning-geometric-median/scripts/snapshots/resnet50-rate-0.7/best.resnet50.GM_0.7_76.82.pth.tar"
arch = 'resnet50'
def import_sparse(model,state_dict):
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
print("sparse_model_loaded")
return model
# initialize model
model = models.__dict__[arch](pretrained=False).cuda()
checkpoint = torch.load(state_dict)
model = import_sparse(model, checkpoint['state_dict'])
print("Top 1 precise of model: {}".format(checkpoint['best_prec1']))
dummy_input =torch.randn(1, 3, 224, 224).cuda()
torch.onnx.export(model, dummy_input, "{}.onnx".format(arch), verbose=True)
Here is the result checking code
import sys
from onnxruntime.datasets import get_example
import onnxruntime
import cv2
import numpy as np
import torch
import models
import onnxruntime
from collections import OrderedDict
from my_tools import resize_img
def import_sparse(model,checkpoint):
new_state_dict = OrderedDict()
for k, v in checkpoint['state_dict'].items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
return model
image_path = "./img652.jpg"
onnx_model_path = "/workplace/workspace/filter-pruning-geometric-median/resnet50.onnx"
ckpt="./scripts/snapshots/resnet50-rate-0.7/best.resnet50.GM_0.7_76.82.pth.tar"
img_ori = cv2.imread(image_path) # BGR
img = cv2.cvtColor(img_ori, cv2.COLOR_BGR2RGB)
img, ratio_h, ratio_w = resize_img(img,224,224)
img = img - np.array([123.68, 116.78, 103.94],dtype=np.float32)
img_batch = np.expand_dims(img, 0)
# NHWC -> NCHW
img_batch = np.transpose(img_batch,[0,3,1,2])
example_model = get_example(onnx_model_path)
sess = onnxruntime.InferenceSession(example_model)
input_name = sess.get_inputs()[0].name
print("Input name :", input_name)
input_shape = sess.get_inputs()[0].shape
print("Input shape :", input_shape)
input_type = sess.get_inputs()[0].type
print("Input type :", input_type)
output_name = sess.get_outputs()[0].name
print("Output name :", output_name)
output_shape = sess.get_outputs()[0].shape
print("Output shape :", output_shape)
output_type = sess.get_outputs()[0].type
print("Output type :", output_type)
print("Input data shape{}".format(img_batch.shape))
assert(list(input_shape) == list(img_batch.shape))
result_onnx = sess.run([output_name], {input_name: img_batch})
# initialize model
model = models.__dict__["resnet50"]()
checkpoint = torch.load(ckpt,map_location='cpu')
best_prec1 = checkpoint['best_prec1']
model = import_sparse(model,checkpoint)
img_batch = torch.FloatTensor(img_batch)
with torch.no_grad():
result_torch = model(img_batch)
result_torch = result_torch.numpy()
print("max onnx-torch:{}".format(np.max(result_onnx-result_torch)))
And the output (with some warnings, but I think they don't matter) of the checking code is
2019-08-09 02:59:21.378599853 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'layer4.2.bn3.num_batches_tracked'. It is not used by any node and should be removed from the model.
[... the same CleanUnusedInitializers warning is repeated for every other 'num_batches_tracked' initializer in the network ...]
Input name : 0
Input shape : [1, 3, 224, 224]
Input type : tensor(float)
Output name : 503
Output shape : [1, 1000]
Output type : tensor(float)
Input data shape(1, 3, 224, 224)
max onnx-torch:104.89282989501953
| Problem solved by adding model.eval() before running inference with the PyTorch model in the test code (without it, layers such as BatchNorm and Dropout stay in training mode, which changes the outputs). The solution is from the link.
model = models.__dict__["resnet50"]()
checkpoint = torch.load(ckpt,map_location='cpu')
best_prec1 = checkpoint['best_prec1']
model = import_sparse(model,checkpoint)
model.eval()
img_batch = torch.FloatTensor(img_batch)
with torch.no_grad():
result_torch = model(img_batch)
result_torch = result_torch.numpy()
| https://stackoverflow.com/questions/57423150/ |
pytorch: inception v3 parameter empty error | I am using inception_v3 from torchvision.models as my base model and adding an FC layer at the end to get features. However, I am getting an empty parameter error.
import torch
import torch.nn as nn
import torchvision.models as models
class Baseline(nn.Module):
def __init__(self, out_size):
super().__init__()
model = models.inception_v3(pretrained=True)
model.fc = nn.Linear(2048, out_size)
model.aux_logits = False
# Freeze model weights
for param in model.parameters():
param.requires_grad = False
self.parameters = nn.ParameterList()
def forward(self, image):
x = model(image)
x = x.view(x.size(0), -1)
x = model.fc = (x)
return x
| My understanding is that you are overwriting self.parameters with an empty nn.ParameterList, which is not required here.
model.parameters() would already return all the parameters your Baseline class has, including those of inception_v3 and nn.Linear. When you overwrite self.parameters with an empty list at the end, you essentially shadow that method, so the optimizer sees no parameters.
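A hedged sketch of one way to restructure the class (it assumes the same imports as in the question; freezing the backbone before replacing fc keeps the new head trainable):
class Baseline(nn.Module):
    def __init__(self, out_size):
        super().__init__()
        self.model = models.inception_v3(pretrained=True)
        self.model.aux_logits = False
        for param in self.model.parameters():  # freeze the pretrained backbone
            param.requires_grad = False
        self.model.fc = nn.Linear(2048, out_size)  # new, trainable head

    def forward(self, image):
        return self.model(image)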
| https://stackoverflow.com/questions/57423185/ |
How to get unique value along 1 dim of a 2d array in pytorch? | I have a 2d tensor now, which may have repeated elements along a dim, like
tmp = torch.tensor([[1,2,3,2,4],[0,5,6,7,2],[3,4,5,3,5],[7,5,6,7,7]])
I hope to get unique elements along dim=1, the result should be like this
result = [[1,2,3,4],[0,5,6,7,2],[3,4,5],[5,6,7]]
Is there a way to get the result without using a for-loop?
I tried using torch.unique, like this,
result=[]
for i in range(tmp.shape[0]):
t = tmp[i,:]
result.append(torch.unique(t))
It works, but it is time-consuming.
| You cannot get a per-row unique result as a single tensor when the rank n >= 2.
This is because in PyTorch there are no jagged (ragged) tensors.
tmp = torch.tensor([[1,2,3,2,4],[0,5,6,7,2],[3,4,5,3,5],[7,5,6,7,7]])
%timeit tmpt = torch.unbind(tmp); [torch.unique(t) for t in tmpt]
This returned 39.3 µs, while your original loop took
tmp = torch.tensor([[1,2,3,2,4],[0,5,6,7,2],[3,4,5,3,5],[7,5,6,7,7]])
result=[]
%timeit for i in range(tmp.shape[0]): t = tmp[i,:] ; result.append(torch.unique(t))
54 µs on average
| https://stackoverflow.com/questions/57425873/ |
Pytorch-Implement the same model in pytorch and keras but got different results | I am learning PyTorch and want to practice it with a Keras example (https://keras.io/examples/lstm_seq2seq/). This is a seq2seq 101 example which translates English to French on char-level features (no embedding).
Keras code is below:
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'fra-eng/fra.txt'
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
input_text, target_text = line.split('\t')
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = '\t' + target_text + '\n'
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
input_token_index = dict(
[(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
[(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
# Save model
model.save('s2s.h5')
# Next: inference mode (sampling).
# Here's the drill:
# 1) encode input and retrieve initial decoder state
# 2) run one step of decoder with this initial state
# and a "start of sequence" token as target.
# Output will be the next target token
# 3) Repeat with the current target token and current states
# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index['\t']] = 1.
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, h, c = decoder_model.predict(
[target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.
# Update states
states_value = [h, c]
return decoded_sentence
for seq_index in range(100):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index: seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Input sentence:', input_texts[seq_index])
print('Decoded sentence:', decoded_sentence)
I want to implement this exact same model using pytorch, below is my code:
from __future__ import unicode_literals, print_function, division
from io import open
import unicodedata
import string
import re
import random
import numpy as np
import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
data_path = './eng_fra.txt'
num_samples = 10000  # Number of samples to train on (same as in the Keras version; needed below)
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
#print('line:',line)
input_text, target_text = line.split('\t')
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = '\t' + target_text + '\n' # why?
# print('input_text and target_text:',input_text, target_text)
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
print('input_characters',input_characters)
num_decoder_tokens = len(target_characters)
print('target_characters',target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('max_encoder_seq_length and max_decoder_seq_length',max_encoder_seq_length,max_decoder_seq_length)
input_token_index = dict(
[(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
[(char, i) for i, char in enumerate(target_characters)])
# define the shapes
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
# one hot encoding for each word in each sentence
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
encoder_input_data=torch.Tensor(encoder_input_data).to(device)
decoder_input_data=torch.Tensor(decoder_input_data).to(device)
decoder_target_data=torch.Tensor(decoder_target_data).to(device)
class encoder(nn.Module):
def __init__(self):
super(encoder,self).__init__()
self.LSTM=nn.LSTM(input_size=num_encoder_tokens,hidden_size=256,batch_first=True)
def forward(self,x):
out,(h,c)=self.LSTM(x)
return h,c
class decoder(nn.Module):
def __init__(self):
super(decoder,self).__init__()
self.LSTM=nn.LSTM(input_size=num_decoder_tokens,hidden_size=256,batch_first=True)
self.FC=nn.Linear(256,num_decoder_tokens)
def forward(self,x, hidden):
out,(h,c)=self.LSTM(x,hidden)
out=self.FC(out)
return out,(h,c)
class seq2seq(nn.Module):
def __init__(self,encoder,decoder):
super(seq2seq,self).__init__()
self.encoder=encoder
self.decoder=decoder
def forward(self,encode_input_data,decode_input_data):
hidden, cell = self.encoder(encode_input_data)
output, (hidden, cell) = self.decoder(decode_input_data, (hidden, cell))
return output
encoder=encoder().to(device)
# encoder_loss = nn.CrossEntropyLoss() # CrossEntropyLoss compute softmax internally in pytorch
# encoder_optimizer = torch.optim.Adam(encoder.parameters(), lr=0.001)
decoder=decoder().to(device)
# decoder_loss = nn.CrossEntropyLoss() # CrossEntropyLoss compute softmax internally in pytorch
# decoder_optimizer = torch.optim.Adam(decoder.parameters(), lr=0.001)
model=seq2seq(encoder,decoder).to(device)
optimizer = optim.RMSprop(model.parameters(),lr=0.01)
loss_fun=nn.CrossEntropyLoss()
# model.train()
num_epochs=50
batches=np.array_split(range(decoder_target_data.shape[0]),100)
total_step=len(batches)
for epoch in range(num_epochs):
for i,batch_ids in enumerate(batches):
encoder_input=encoder_input_data[batch_ids]
decoder_input=decoder_input_data[batch_ids]
decoder_target=decoder_target_data[batch_ids]
output = model(encoder_input, decoder_input)
loss=loss_fun(output.view(-1,93).to(device),decoder_target.view(-1,93).max(dim=1)[1].to(device))
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 20 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
h,c=model.encoder(input_seq)
# Generate empty target sequence of length 1.
# Populate the first character of target sequence with the start character.
target_seq = torch.zeros((1, 1, num_decoder_tokens)).to(device)
target_seq[0, 0, target_token_index['\t']] = 1.
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, (h_t, c_t) = model.decoder(target_seq,(h,c))
# Sample a token
sampled_token_index = output_tokens.view(-1,93).squeeze(0).max(dim=0)[1].item()
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = torch.zeros((1, 1, num_decoder_tokens)).to(device)
target_seq[0, 0, sampled_token_index] = 1.
# Update states
h,c=h_t,c_t
return decoded_sentence
for seq_index in range(100):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index: seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Input sentence:', input_texts[seq_index])
print('Decoded sentence:', decoded_sentence)
As you can see, I used exactly the same data processing and model structure. My PyTorch version runs without error, but the performance seems worse than the original Keras version when comparing the translation results.
One thing that might cause an issue is the loss function (cross_entropy). In PyTorch, the cross_entropy loss function does not seem to support one-hot labels directly, so I need to convert the labels to integers. However, I don't think this should make a big difference.
If you want to run the models, the data can be downloaded from:
https://github.com/jinfagang/pytorch_chatbot/blob/master/datasets/eng-fra.txt
Did I do something wrong in my code? Many thanks
| One way to look at the issue would be:
Fixing the seeds to the same value in both PyTorch and Keras, although this cannot really guarantee the same output.
Weight initialization in PyTorch is different from Keras. Make sure they use the same weight initialization functions (see the sketch at the end of this answer).
I've been doing this for a problem of mine, and I can say that even with 1 and 2 set up identically, there is a high probability of getting the same results (any remaining gap could be due to the way PyTorch is implemented).
Hope that helps! Please update us if you managed to resolve your issue.
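For point 2, a hedged sketch of re-initializing the PyTorch model to be closer to the Keras defaults (Glorot/Xavier uniform for kernels, orthogonal for LSTM recurrent kernels, zeros for biases); model here is the seq2seq instance from the question:
import torch.nn as nn

def keras_like_init(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)
    elif isinstance(m, nn.LSTM):
        for name, param in m.named_parameters():
            if "weight_ih" in name:
                nn.init.xavier_uniform_(param)
            elif "weight_hh" in name:
                nn.init.orthogonal_(param)
            elif "bias" in name:
                nn.init.zeros_(param)

model.apply(keras_like_init)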
| https://stackoverflow.com/questions/57438562/ |
One hot encoding a segmented image using pytorch | I have a segmented image as a tensor of size [1,1,256,256]. The image is a binary segmented image. I want to one hot encode it to get an image of size [1,2,256,256].
I tried torch.nn.functional.one_hot(img, 2). But it gave me an image of size [1,256,256,2]. How do I get the desired tensor?
|
Try to use transpose():
img_one_hot = torch.nn.functional.one_hot(img, 2).transpose(1, 4).squeeze(-1)
transpose(1, 4) swaps dimensions 1 and 4, returning a tensor of shape [1, 2, 256, 256, 1]; squeeze(-1) removes the last dim, resulting in a [1, 2, 256, 256] shaped tensor.
| https://stackoverflow.com/questions/57448795/ |
How to run Tensorboard in parallel | https://github.com/NVIDIA/DeepRecommender
According to the above page, I tried to run NVIDIA's DeepRecommender program. After I activated the pytorch environment, I ran the program as below, but it failed.
[I run this Command]
$ python run.py --gpu_ids 0 \
--path_to_train_data Netflix/NF_TRAIN \
--path_to_eval_data Netflix/NF_VALID \
--hidden_layers 512,512,1024 \
--non_linearity_type selu \
--batch_size 128 \
--logdir model_save \
--drop_prob 0.8 \
--optimizer momentum \
--lr 0.005 \
--weight_decay 0 \
--aug_step 1 \
--noise_prob 0 \
--num_epochs 12 \
--summary_frequency 1000
[The comments of the Guide.]
Note that you can run Tensorboard in parallel
$ tensorboard --logdir=model_save
[My Question]
The guide says as above. I don't know how to run it in parallel. Please tell me the way. Should I open 2 terminal windows?
[Enviroment]
The details of the environment are as follows.
---> Ubuntu 18.04 LTS, python 3.6, Pytorch 1.2.0, CUDA V10.1.168
[The 1st trial]
After I activated the pytorch,
$source activate pytorch
$python run.py --gpu_ids 0 \ (The long parameters are abbreviated here.)
[The Error messages of the 1st trial]
Traceback (most recent call last):
File "run.py", line 13, in
from logger import Logger
File "/home/user/NVIDIA_DeepRecommender/DeepRecommender-mp_branch/logger.py", line 4, in
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
[The 2nd trial]
After I activated the tensorflow-gpu,
$ source activate tensorflow-gpu
$python run.py --gpu_ids 0 \ (The long parameters are abbreviated here.)
[The Error messages of the 2nd trial.]
Traceback (most recent call last):
File "run.py", line 2, in
import torch
ModuleNotFoundError: No module named 'torch'
[Expected result]
$ python run.py --gpu_ids 0 \
The program can run with no error and finish training the model.
| Try either installing tensorflow-gpu in your pytorch environment, or pytorch in your tensorflow-gpu environment, and use that single environment to run your program. To run TensorBoard in parallel, you can then open a second terminal window and start tensorboard --logdir=model_save there while the training runs in the first one.
| https://stackoverflow.com/questions/57455390/ |
Matplotlib Pylot - Images are being displayed in low resolution (pixel to pixel) | When I display some sample photos from the dataset I use, the previews of the images are displayed in low resolution (they look like very low-resolution photos). How can I display the images without losing their resolution?
Here are my transformations which are used to move the data to the tensor and apply some transformations using PyTorch functions:
data_transforms = transforms.Compose([
transforms.Resize((50, 50)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
Then I load the data through DataLoader:
train_loader = DataLoader(face_train_dataset,
batch_size=train_batch_size, shuffle=False,
num_workers=4)
Finally, I display some previews for the sample photos which are retrieved using the DataLoader object:
example_data = example_data.cpu()
example_targets = example_targets.cpu()
for i in range(6):
plt.subplot(2, 3, i + 1)
plt.tight_layout()
plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
plt.title('{}'.format(folders[example_targets[i]]))
plt.show()
p.s. Images are in tiff format.
| What resolution are you expecting?
One of the transformations you are applying is
transforms.Resize((50, 50))
That is, you are reducing the input images' resolution to 50 by 50 pixels. This is the resolution you get when you plot the images.
In order to have a more graceful display of the low-res images you might want to consider changing the interpolation method of imshow to
plt.imshow(example_data[i][0], cmap='gray', interpolation='bicubic')
| https://stackoverflow.com/questions/57472090/ |
Keyword arguments in torch.nn.Sequential (pytorch) | A question regarding keyword arguments in torch.nn.Sequential: is it possible in some way to forward keyword arguments to specific models in a sequence?
model = torch.nn.Sequential(model_0, MaxPoolingChannel(1))
res = model(input_ids_2, keyword_test=mask)
here, keyword_test should be forwarded only to the first model.
Thanks a lot and best regards!
my duplicate from - https://discuss.pytorch.org/t/keyword-arguments-in-torch-nn-sequential/53282
| No; you cannot. This is only possible if all models passed to nn.Sequential expect the argument you are trying to pass in their forward method (at least at the time of writing this).
Two workarounds could be (I'm not aware of the whole use case, just anticipating from the question):
If your value is static, why not initialize your first model with that value and access it during computation with self.keyword_test.
In case the value is dynamic, you could attach it as a property of the input; then you can also access it during computation via input_ids_2.keyword_test.
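If neither workaround fits, a small custom container can forward the extra keyword arguments only to its first submodule. This is just an illustrative sketch (the class name is made up, not part of PyTorch), reusing model_0, MaxPoolingChannel, input_ids_2 and mask from the question:
import torch.nn as nn

class SequentialFirstKwargs(nn.Module):
    def __init__(self, first, *rest):
        super().__init__()
        self.first = first
        self.rest = nn.ModuleList(rest)

    def forward(self, x, **kwargs):
        out = self.first(x, **kwargs)   # only the first module receives the keywords
        for module in self.rest:
            out = module(out)
        return out

model = SequentialFirstKwargs(model_0, MaxPoolingChannel(1))
res = model(input_ids_2, keyword_test=mask)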
| https://stackoverflow.com/questions/57481612/ |
Getting distributed package doesn't have mpi built in error | I have been trying to write a distributed application using pytorch. I have been following tutorial here. Over there, I am using the "MPI Backend" option. According to that, I need to follow the basic steps to install pytorch and then install openmpi as conda install -c conda-forge openmpi
Unfortunately, whenever I try to run a script using mpirun mpiexec -n 2 python ptdist.py, I get the following error: RuntimeError: Distributed package doesn't have MPI built in. I believe this is happening because of an error in the import ProcessGroupMPI code here in Python.
I have tried to install openmpi from their source code as well as sudo apt-get install python-mpi4py, but am still facing the same error.
I also tried pip install mpi4py but that also does not help
Does anyone know what is the problem?
| From https://medium.com/@esaliya/pytorch-distributed-with-mpi-acb84b3ae5fd
The MPI backend, though supported, is not available unless you compile PyTorch from its source
This suggests you should first install your favorite MPI library (and possibly mpi4py built on top of it), and then build PyTorch from source.
| https://stackoverflow.com/questions/57483933/ |
How does one use 3D convolutions on standard 3 channel images? | I am trying to use 3d conv on cifar10 data set (just for fun). I see the docs that we usually have the input be 5d tensors (N,C,D,H,W). Am I really forced to pass 5 dimensional data necessarily?
The reason I am skeptical is that 3D convolutions simply mean my convolution moves across 3 dimensions/directions. So technically I could have 3D, 4D, 5D or even 100D tensors, and they should all work as long as the input is at least a 3D tensor. Is that not right?
I tried it real quick and it did give an error:
import torch
def conv3d_example():
N,C,H,W = 1,3,7,7
img = torch.randn(N,C,H,W)
##
in_channels, out_channels = 1, 4
kernel_size = (2,3,3)
conv = torch.nn.Conv3d(in_channels, out_channels, kernel_size)
##
out = conv(img)
print(out)
print(out.size())
##
conv3d_example()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-29c73923cc64> in <module>
15
16 ##
---> 17 conv3d_example()
<ipython-input-3-29c73923cc64> in conv3d_example()
10 conv = torch.nn.Conv3d(in_channels, out_channels, kernel_size)
11 ##
---> 12 out = conv(img)
13 print(out)
14 print(out.size())
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
474 self.dilation, self.groups)
475 return F.conv3d(input, self.weight, self.bias, self.stride,
--> 476 self.padding, self.dilation, self.groups)
477
478
RuntimeError: Expected 5-dimensional input for 5-dimensional weight 4 1 2 3, but got 4-dimensional input of size [1, 3, 7, 7] instead
cross posted:
https://discuss.pytorch.org/t/how-does-one-use-3d-convolutions-on-standard-3-channel-images/53330
How does one use 3D convolutions on standard 3 channel images?
| Consider the following scenario. You have a 3 channel NxN image. This image will have size of 3xNxN in pytorch (ignoring the batch dimension for now).
Say you pass this image to a 2D convolution layer with no bias, kernel size 5x5, padding of 2, and input/output channels of 3 and 10 respectively.
What's actually happening when we apply this layer to the input image?
You can think of it like this...
For each of the 10 output channels there is a kernel of size 3x5x5. A 3D convolution is applied to the 3xNxN input image using this kernel, which can be thought of as unpadded in the first dimension. The result of this convolution is a 1xNxN feature map.
Since there are 10 output channels, there are 10 of these 3x5x5 kernels. After all kernels have been applied, the outputs are stacked into a single 10xNxN tensor.
So really, in the classical sense, a 2D convolution layer is already performing a 3D convolution.
Similarly, a 3D convolution layer is really doing a 4D convolution, which is why you need a 5-dimensional input.
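As a small sketch (my own example, not from the original answer), you can make the code from the question run by adding an explicit channel dimension so the 3 color channels act as the depth of a 5D input:
import torch

img = torch.randn(1, 3, 7, 7)        # (N, C, H, W) image with 3 channels
vol = img.unsqueeze(1)               # (N, 1, 3, 7, 7): channel dim of 1, depth of 3
conv = torch.nn.Conv3d(in_channels=1, out_channels=4, kernel_size=(2, 3, 3))
out = conv(vol)
print(out.size())                    # torch.Size([1, 4, 2, 5, 5])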
| https://stackoverflow.com/questions/57484508/ |
Why did I get 2 different results from two models with same parameters and inputs? | I loaded resnet18 into my two models (model1 and model2), with pretrained weights.
I want to use them as feature extractors
For model1: I freezed the parameters except the last linear layer model1.fc, then train it. After training, I set model1.fc into torch.nn.Identity()
For model2: I directly set model2.fc into torch.nn.Identity()
Then these 2 models should be the same, but I get different forward result from the same inputs.
If the training of model1 is not done, they produce the same result, so maybe something is wrong with the parameter freezing.
However, I checked their parameters after the training of model1 and after setting the last layer of both models to an identity layer, and they seem to be the same.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models
# Load weights pretrained on ImageNet
def load_weights(model):
model_dir = "....."
    model.load_state_dict(torch.utils.model_zoo.load_url("https://download.pytorch.org/models/resnet18-5c106cde.pth", model_dir=model_dir))
return model
model1=models.resnet18()
model1=load_weights(model1)
for param in model1.parameters():
param.requires_grad = False
model1.fc=nn.Linear(512, 2)
model1.cuda()
optimizer = optim.SGD(model1.fc.parameters(), lr=1e-2, momentum=0.9)
result_freeze = \
run_training(model1, optimizer, device, train_loader, val_loader,num_epochs=10)
model2=models.resnet18()
model2=load_weights(model2)
model2.fc=nn.Identity()
model2.cuda()
model1.fc=nn.Identity()
model1.cuda()
# checking forward results(extracting features)
# The batch size is one here
for batch_idx, (data, target) in enumerate(X_train):
data, target = data.to(device), target.to(device)
d=data
X_train_feature[batch_idx]=model1(data).cpu().detach().numpy()
y_train[batch_idx]=target.cpu().detach().numpy()
X_train2_feature[batch_idx]=model2(d).cpu().detach().numpy()
y_train2[batch_idx]=target.cpu().detach().numpy()
print(sum(X_train_feature[batch_idx]==X_train2_feature[batch_idx]))
print(sum(y_train[batch_idx]==y_train2[batch_idx]))
print(torch.sum(d==data))
for batch_idx, (data, target) in enumerate(X_test):
data, target = data.to(device), target.to(device)
d=data
X_test_feature[batch_idx]=model1(data).cpu().detach().numpy()
y_test[batch_idx]=target.cpu().detach().numpy()
X_test2_feature[batch_idx]=model2(d).cpu().detach().numpy()
y_test2[batch_idx]=target.cpu().detach().numpy()
print(sum(X_test_feature[batch_idx]==X_test2_feature[batch_idx]))
print(sum(y_test[batch_idx]==y_test2[batch_idx]))
print(torch.sum(d==data))
# checking parameters
for a,b in zip(model1.parameters(),model2.parameters()):
print(torch.sum(a!=b))
Expect to get the same forward results from model1 and model2, but they are different. And If they produce different forward results, why do they have exactly the same parameters?
| Have you taken into account changes that might occur to BatchNorm layers?
Batch norm layers do not behave like normal layers: their internal running statistics (mean and std) are computed from the data passing through them, not learned by gradient descent.
Try setting model1.eval() before the finetune and then check.
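For example, a quick sketch of my own (not from the original answer) to confirm that the BatchNorm buffers diverged; note that model.parameters() does not include them:
# running_mean / running_var are buffers, not parameters
for (name1, buf1), (name2, buf2) in zip(model1.named_buffers(), model2.named_buffers()):
    if not torch.equal(buf1, buf2):
        print('buffer differs:', name1)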
| https://stackoverflow.com/questions/57486370/ |
Pyinstaller unable to find dlls for the project dependencies while creating exe | Pyinstaller failed to find certain dlls that are required for binding in dependencies into one exe.
Please find the error logs below.
We have tried installing these libraries:
pip3 install intel-openmp mkl
Tried adding --paths to the command, but as there are no dlls in the system, pyinstaller is unable to find them:
pyinstaller --onefile --paths <Paths-where-dll-could-be> -c main.py
These libs are missing and show up in the logs as WARNINGS.
364427 WARNING: lib not found: impi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_intelmpi_ilp64.dll
365396 WARNING: lib not found: mpich2mpi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_mpich2_lp64.dll
366241 WARNING: lib not found: msmpi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_msmpi_lp64.dll
368089 WARNING: lib not found: msmpi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_msmpi_ilp64.dll
369270 WARNING: lib not found: pgf90.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_pgi_thread.dll
369997 WARNING: lib not found: pgc14.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_pgi_thread.dll
370791 WARNING: lib not found: pgf90rtl.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_pgi_thread.dll
373039 WARNING: lib not found: mpich2mpi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_mpich2_ilp64.dll
374289 WARNING: lib not found: impi.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\Library\bin\mkl_blacs_intelmpi_lp64.dll
377030 WARNING: lib not found: torch_python.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torch\_C.cp36-win_amd64.pyd
378792 WARNING: lib not found: c10_cuda.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torchvision\_C.cp36-win_amd64.pyd
379568 WARNING: lib not found: torch.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torchvision\_C.cp36-win_amd64.pyd
380290 WARNING: lib not found: caffe2.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torchvision\_C.cp36-win_amd64.pyd
381126 WARNING: lib not found: c10.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torchvision\_C.cp36-win_amd64.pyd
382053 WARNING: lib not found: torch_python.dll dependency of c:\users\1311654\appdata\local\programs\python\python36\lib\site-packages\torchvision\_C.cp36-win_amd64.pyd
As the missing DLLs are not on the system, kindly suggest an efficient way to build the exe.
| Your log of warnings is very similar to mine: PyInstaller .exe file terminates early without an error message
I am therefore assuming, despite these warnings, PyInstaller still successfully builds your executable? These steps (as per above link) worked for me:
1. Use PyInstaller to generate a one-folder bundle.
2. From within your python environment that you used to write your program, search for all "dll" files, and copy & paste them into the dist folder of your bundle.
3. In my case, this enabled my executable file to run successfully.
4. You can then experiment by systematically removing the DLL files you pasted into your dist folder, in order to identify which are essential and which are redundant.
5. (optional) If you prefer a one-file bundle, you will need to do step 4 to identify the essential DLL files that are required for your executable file to run, so that you can add just these DLLs as data files to your one-file build (in my case, it was just the one: "libiomp5md.dll").
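A sketch of step 5 (the DLL name comes from my case above; the path is illustrative and will differ on your machine):
pyinstaller --onefile --add-binary "C:\path\to\libiomp5md.dll;." main.py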
Appreciate this is not a very elegant solution, it's more about perseverance! Good luck :)
| https://stackoverflow.com/questions/57491610/ |
Pytorch: Visualize model while training | I am training a neural network by regression but it is predicting a constant value during testing. Which is why I want to visualize the weights of the neural network change during training and see the weights change dynamically in the jupyter notebook.
Currently, my model looks like this:
import torch
from torch import nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.inp = nn.Linear(2, 40)
self.act1 = nn.Tanh()
self.h1 = nn.Linear(40, 40)
self.act2 = nn.Tanh()
self.h2 = nn.Linear(40, 2)
self.act3 = nn.Tanh()
#self.h3 = nn.Linear(20, 20)
#self.act4=nn.Tanh()
self.h4 = nn.Linear(2, 1)
def forward_one_pt(self, x):
out = self.inp(x)
out = self.act1(out)
out = self.h1(out)
out = self.act2(out)
out = self.h2(out)
out = self.act3(out)
#out = self.h3(out)
#out = self.act4(out)
out = self.h4(out)
return out
def forward(self, config):
E = torch.zeros([config.shape[0], 1])
for i in range(config.shape[0]):
E[i] = self.forward_one_pt(config[i])
# print("config[",i,"] = ",config[i],"E[",i,"] = ",E[i])
return torch.sum(E, 0)
and my main function looks like this:
def main() :
learning_rate = 0.5
n_pts = 1000
t_pts = 100
epochs = 15
coords,E = load_data(n_pts,t_pts)
#generating my data to NN
G = get_symm(coords,save,load_symmetry,symmtery_pickle_file,eeta1,eeta2,Rs,ex,lambdaa,zeta,boxl,Rc,pi,E,scale)
net = Net()
if(cuda_flag):
net.cuda()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
net_trained = train(save,text_output,epochs,n_pts,G,E,net,t_pts,optimizer,criterion,out,cuda_flag)
test(save,n_pts,t_pts,G,E,net_trained,out,criterion,cuda_flag)
torch.save(net,save_model)
Any tutorials or answers would be helpful.
| You can use model.state_dict() to see if your weights are updating across epochs:
old_state_dict = {}
for key in model.state_dict():
old_state_dict[key] = model.state_dict()[key].clone()
output = model(input)
new_state_dict = {}
for key in model.state_dict():
new_state_dict[key] = model.state_dict()[key].clone()
for key in old_state_dict:
if not (old_state_dict[key] == new_state_dict[key]).all():
print('Diff in {}'.format(key))
else:
print('NO Diff in {}'.format(key))
On a side note, you can vectorize your forward function instead of looping over it. Following would do the same job as your original forward function but much faster:
def forward(self, config):
out= self.forward_one_pt(config)
return torch.sum(out, 0)
| https://stackoverflow.com/questions/57494217/ |
Why is the memory in GPU still in use after clearing the object? | Starting with zero usage:
>>> import gc
>>> import GPUtil
>>> import torch
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 0% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Then I create a big enough tensor and hog the memory:
>>> x = torch.rand(10000,300,200).cuda()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Then I tried several ways to see if the tensor disappears.
Attempt 1: Detach, send to CPU and overwrite the variable
No, doesn't work.
>>> x = x.detach().cpu()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Attempt 2: Delete the variable
No, this doesn't work either
>>> del x
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Attempt 3: Use the torch.cuda.empty_cache() function
Seems to work, but it seems that there are some lingering overheads...
>>> torch.cuda.empty_cache()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Attempt 4: Maybe clear the garbage collector.
No, 5% is still being hogged
>>> gc.collect()
0
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
Attempt 5: Try deleting torch altogether (as if that would work when del x didn't work -_- )
No, it doesn't...*
>>> del torch
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
And then I tried to check gc.get_objects() and it looks like there's still quite a lot of odd THCTensor stuff inside...
Any idea why is the memory still in use after clearing the cache?
| It looks like PyTorch's caching allocator reserves some fixed amount of memory even if there are no tensors, and this allocation is triggered by the first CUDA memory access
(torch.cuda.empty_cache() deletes unused tensors from the cache, but the cache itself still uses some memory).
Even with a tiny 1-element tensor, after del and torch.cuda.empty_cache(), GPUtil.showUtilization(all=True) reports exactly the same amount of GPU memory used as for a huge tensor (and both torch.cuda.memory_cached() and torch.cuda.memory_allocated() return zero).
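For reference, a small sketch of my own (assuming a CUDA device is available) showing the distinction with PyTorch's own counters:
import torch

x = torch.rand(10000, 300, 200, device='cuda')
print(torch.cuda.memory_allocated())   # bytes held by live tensors
print(torch.cuda.memory_cached())      # bytes held by the caching allocator
del x
torch.cuda.empty_cache()
print(torch.cuda.memory_allocated())   # 0
print(torch.cuda.memory_cached())      # 0, yet GPUtil / nvidia-smi still report the CUDA context overhead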
| https://stackoverflow.com/questions/57496285/ |
How to customize pytorch data | I am trying to make a customized Dataloader using pytorch.
I've seen some codes like (omitted the class sorry.)
def __init__(self, data_root, transform=None, training=True, return_id=False):
super().__init__()
self.mode = 'train' if training else 'test'
self.data_root = Path(data_root)
csv_fname = 'train.csv' if training else 'sample_submission.csv'
self.csv_file = pd.read_csv(self.data_root / csv_fname)
self.transform = transform
self.return_id = return_id
def __getitem__():
""" TODO
"""
def __len__():
""" TODO
"""
The problem here is that the data I've dealt with before contains all the training data in one csv file and all the testing data in another csv file, 2 csv files in total for training and testing. (For example, as in MNIST, the last column is the label and all the previous columns are the different features.)
However, the problem I've been facing is that I've got very many (about 200,000) csv files for training, each one smaller than the 60,000-sample MNIST, but still quite big. All these csv files contain different numbers of rows.
To inherit from torch.utils.data, how can I make a customized class? The MNIST dataset is quite small, so it can be loaded into RAM at once. However, the data I'm dealing with is super big, so I need some help.
Any ideas? Thank you in advance.
| First, you want to customize (overload) data.Dataset and not data.DataLoader which is perfectly fine for your use case.
What you can do, instead of loading all data to RAM, is to read and store "meta data" on __init__ and read one relevant csv file whenever you need to __getitem__ a specific entry.
A pseudo-code of your Dataset will look something like:
class ManyCSVsDataset(data.Dataset):
def __init__(self, ...):
super(ManyCSVsDataset, self).__init__()
# store the paths for all csvs and the number of items in each one
self.metadata = ...
self.num_items = total_number_of_items
def __len__(self):
return self.num_items
def __getitem__(self, index):
# based on the index, use self.metadata to determine what csv file to open
with open(relevant_csv_file, 'r') as R:
# read from R the specific line matching item index
return item
This implementation is not efficient in the sense that it reads the same csv file over and over and does not cache anything. On the other hand, you can take advantage of data.DataLoader's multiprocessing support to have many parallel sub-processes doing all this file access in the background while you actually use the data for training.
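For example (a sketch, with the '...' placeholders above left to be filled in), the dataset could then be consumed with several worker processes:
dataset = ManyCSVsDataset(...)  # fill in your own constructor arguments
loader = data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)
for batch in loader:
    pass  # training step goes here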
| https://stackoverflow.com/questions/57504262/ |
How can I deploy my Pytorch model into IOS? | I have a deep learning neural network I built on Pytorch I am seeking to deploy onto IOS.
| Native support still doesn't exist, I think, but what some do is export the ONNX model and then open it in Caffe2, which has support for iOS devices (and Android as well).
So use ONNX export tutorial and this mobile integration helper.
There is also a path converting ONNX to CoreML but depending on your project it may not be super fast.
There is also an option to port the ONNX to TensorFlow Lite and to integrate with your Swift or Objective C app from there on.
| https://stackoverflow.com/questions/57511185/ |
Pytorch custom dataset: ValueError: some of the strides of a given numpy array are negative | I wrote a custom pytorch dataset, but ran into an error thhat seems quite unintelligible.
My custom dataset,
class data_from_xlsx(Dataset):
def __init__(self, xlsx_fp, path_col, class_cols_list):
self.xlsx_file = pd.read_excel(xlsx_fp)
self.path_col = path_col
self.class_cols_list = class_cols_list
def __len__(self):
return get_xlsx_length(self.xlsx_file)
def __getitem__(self, index):
file_path = cols_from_xlsx(self.xlsx_file, index, 1, self.path_col)
feature = load_nii_file(file_path) # get 3D volume (x, y, z)
feature = np.expand_dims(feature, axis=0) # add channel (c, x, y, z)
label = cols_from_xlsx(self.xlsx_file, index, 1, self.class_cols_list) # get label
return feature, label.astype(np.bool)
def main():
dataset = data_from_xlsx("train.xlsx", "file_path", ["pos", "neg"], transformations, aug=True)
data_loader = DataLoader(dataset, batch_size=4, shuffle=True)
for (f, l) in data_loader:
print("f shape", f.shape)
print("l shape", l.shape)
An error is reported when I ran main(),
File "d:\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
data = self.dataset_fetcher.fetch(index) # may raise StopIteration
File "d:\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "d:\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 80, in default_collate
return [default_collate(samples) for samples in transposed]
File "d:\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 80, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "d:\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 65, in default_collate
return default_collate([torch.as_tensor(b) for b in batch])
File "d:\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 65, in <listcomp>
return default_collate([torch.as_tensor(b) for b in batch])
ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future release
The reported error doesn't make sense to me, so I googled it. At first I thought I didn't change the feature from numpy.array to tensor, so I tried feature = torch.from_array(feature.copy()) and also tried transforms.ToTensor() but both attempts failed.
| Thanks to the advice from @jodag and @UsmanAli, I solved this by returning torch.from_numpy(feature.copy()) and torch.tensor(label.astype(np.bool))
So the whole thing should be,
class data_from_xlsx(Dataset):
def __init__(self, xlsx_fp, path_col, class_cols_list):
self.xlsx_file = pd.read_excel(xlsx_fp)
self.path_col = path_col
self.class_cols_list = class_cols_list
def __len__(self):
return get_xlsx_length(self.xlsx_file)
def __getitem__(self, index):
file_path = cols_from_xlsx(self.xlsx_file, index, 1, self.path_col)
feature = load_nii_file(file_path) # get 3D volume (x, y, z)
feature = np.expand_dims(feature, axis=0) # add channel (c, x, y, z)
label = cols_from_xlsx(self.xlsx_file, index, 1, self.class_cols_list) # get label
return torch.from_numpy(feature.copy()), torch.tensor(label.astype(np.bool))
| https://stackoverflow.com/questions/57517740/ |
constants in Pytorch Linear Module Class Definition | What is __constants__ in pytorch class Linear(Module): defined in https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html?
What is its functionality and why is it used?
I have been searching around, but did not find any documentation. Please note that this does not mean the __constants__ in torch script.
| The __constants__ you're talking about is, in fact, the one related to TorchScript. You can confirm it by using git blame (when it was added and by who) on GitHub. For example, for torch/nn/modules/linear.py, check its git blame.
TorchScript also provides a way to use constants that are defined in Python. These can be used to hard-code hyper-parameters into the function, or to define universal constants.
-- Attributes of a ScriptModule can be marked constant by listing them as a member of the constants property of the class:
class Foo(torch.jit.ScriptModule):
__constants__ = ['a']
def __init__(self):
super(Foo, self).__init__(False)
self.a = 1 + 4
@torch.jit.script_method
def forward(self, input):
return self.a + input
| https://stackoverflow.com/questions/57522806/ |
What is the meaning of keep_vars in state_dict? | state_dict(destination=None, prefix='', keep_vars=False)
what does changing keep_vars to True do?
| In PyTorch >=0.4, it has no use.
keep_vars was added in the commit: Add keep_vars parameter to state_dict stating that
When keep_vars is true, it returns a Variable for each parameter
(rather than a Tensor).
In state_dict function, _save_to_state_dict is called internally, which contains the following code
for name, param in self._parameters.items():
if param is not None:
destination[prefix + name] = param if keep_vars else param.data
for name, buf in self._buffers.items():
if buf is not None:
destination[prefix + name] = buf if keep_vars else buf.data
The portion param if keep_vars else param.data made difference prior to PyTorch 0.4.0 when Variable and Tensor were separate, but now as they are merged, keep_vars is probably present only for backward compatibility. Check Is .data still useful in pytorch?
| https://stackoverflow.com/questions/57534801/ |
What is the difference between register_parameter and register_buffer in PyTorch? | Module's parameters get changed during training, that is, they are what is learnt during training of a neural network, but what is a buffer?
and is it learnt during neural network training?
| Pytorch doc for register_buffer() method reads
This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the persistent state.
As you already observed, model parameters are learned and updated using SGD during the training process.
However, sometimes there are other quantities that are part of a model's "state" and should be
- saved as part of state_dict.
- moved to cuda() or cpu() with the rest of the model's parameters.
- cast to float/half/double with the rest of the model's parameters.
Registering these "arguments" as the model's buffer allows pytorch to track them and save them like regular parameters, but prevents pytorch from updating them using SGD mechanism.
An example for a buffer can be found in _BatchNorm module where the running_mean , running_var and num_batches_tracked are registered as buffers and updated by accumulating statistics of data forwarded through the layer. This is in contrast to weight and bias parameters that learns an affine transformation of the data using regular SGD optimization.
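A minimal sketch of my own (not from the original answer) illustrating the difference:
import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, dim):
        super(RunningMean, self).__init__()
        # learned by the optimizer, shows up in model.parameters()
        self.weight = nn.Parameter(torch.ones(dim))
        # tracked state: saved in state_dict, moved by .to()/.cuda(), but never updated by SGD
        self.register_buffer('running_mean', torch.zeros(dim))

    def forward(self, x):
        if self.training:
            self.running_mean.mul_(0.9).add_(0.1 * x.mean(0))
        return (x - self.running_mean) * self.weight

m = RunningMean(4)
print([name for name, _ in m.named_parameters()])  # ['weight']
print([name for name, _ in m.named_buffers()])     # ['running_mean']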
| https://stackoverflow.com/questions/57540745/ |
Filter data in pytorch tensor | I have a tensor X like [0.1, 0.5, -1.0, 0, 1.2, 0], and I want to implement a function called filter_positive(), it can filter the positive data into a new tensor and return the index of the original tensor. For example:
new_tensor, index = filter_positive(X)
new_tensor = [0.1, 0.5, 1.2]
index = [0, 1, 4]
How can I implement this function most efficiently in pytorch?
| Take a look at torch.nonzero which is roughly equivalent to np.where. It translates a binary mask to indices:
>>> X = torch.tensor([0.1, 0.5, -1.0, 0, 1.2, 0])
>>> mask = X >= 0
>>> mask
tensor([1, 1, 0, 1, 1, 1], dtype=torch.uint8)
>>> indices = torch.nonzero(mask)
>>> indices
tensor([[0],
[1],
[3],
[4],
[5]])
>>> X[indices]
tensor([[0.1000],
[0.5000],
[0.0000],
[1.2000],
[0.0000]])
A solution would then be to write:
mask = X >= 0
new_tensor = X[mask]
indices = torch.nonzero(mask)
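If you want strictly positive values, as in the expected output above, a small wrapper (my own sketch, not part of the original answer) could be:
def filter_positive(X):
    mask = X > 0
    return X[mask], torch.nonzero(mask).squeeze(1)

new_tensor, index = filter_positive(torch.tensor([0.1, 0.5, -1.0, 0, 1.2, 0]))
# new_tensor: tensor([0.1000, 0.5000, 1.2000]), index: tensor([0, 1, 4])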
| https://stackoverflow.com/questions/57570043/ |
Difference in Model Performance when using Validation set/ Testing set | I have implemented a PyTorch NN code for classification and regression.
Classification:
a) Use stratifiedKfolds for cross-validation (K=10- means 10 fold-cross validation)
I divided the data: as follows:
Suppose I have 100 data: 10 for testing, 18 for validation, 72 for training.
b) Loss function = CrossEntropy
c) Optimization = SGD
d) Early Stopping where waittime = 100 epochs.
Problem is:
Baseline Accuracy = 51%
Accuracy on Training set = 100%
Accuracy on validation set = 90%
Accuracy on testing set = 72%
I don’t understand what are the reasons behind the huge performance difference in Testing data/ Validation data?
How can I solve this problem?
Regression:
a) use the same network structure
b) loss function = MSELoss
c) Optimization = SGD
d) Early Stopping where wait-time = 100 epochs.
e) Use K-fold for cross-validation.
I divided the data: as follows:
Suppose I have 100 data: 10 for testing, 18 for validation, 72 for training.
Problem is:
Baseline MSE= 14.0
Accuracy on Training set = 0.0012
Accuracy on validation set = 6.45
Accuracy on testing set = 17.12
I don’t understand what are the reasons behind the huge performance difference in Testing data/ Validation data?
How can I solve these problems? or Is this an obvious thing for NN/ depend on particular dataset?
| You have a large gap between training and validation performance, and between validation and test performance. There are two issues to explore:
Differences in the distribution. We assume that train / val / test sets are all drawn from the same distribution, and so have similar characteristics. A well trained model should perform equally well on the val and test datasets. If your dataset really has only 10 samples for test and 18 for val, there is a high chance that the samples selected will skew one/both of these datasets, so that they no longer have similar characteristics. Therefore the difference between your val and test performance could just be chance: Your test set just happens to be much harder. You could test this by manual inspection.
Overfitting to val: However, I think it is more likely that you have experimented with different architectures, training regimes, etc, and have tweaked parameters to get the best performance on your validation set. This means that you have overfit your model to your val set. The test set is a truer reflection of your model's accuracy.
Your training accuracy is very high for both problems, and there is a large gap between training and validation performance. You are therefore overfitting to the training data, so need to train less, or introduce more stringent regularisation.
| https://stackoverflow.com/questions/57581436/ |
ModuleNotFoundError: No module named 'past' when installing tensorboard with pytorch 1.2 | I'm trying out tensorboard with pytorch by following this: https://pytorch.org/docs/stable/tensorboard.html
I've installed tensorboard with pip install tb-nightly
The command tensorboard --logdir=runs starts ok.
But the line self.writer = SummaryWriter()
Gives the following error:
ModuleNotFoundError: No module named 'past'
How do I fix this?
| Following this issue: https://github.com/pytorch/pytorch/issues/22389,
Adding future to the list of requirements solved the problem
# requirements.txt:
tb-nightly
future
pip install -r requirements.txt
| https://stackoverflow.com/questions/57599555/ |
Difference between PyTorch, PyTorchModel in sagemaker.pytorch | I am trying to create a model using pytorch in sagemaker. I tried deploying using - PyTorch module in sagemaker.pytorch [from sagemaker.pytorch import PyTorch].
But I want to understand what PyTorchModel in sagemaker.pytorch [from sagemaker.pytorch import PyTorchModel] is. They both have deploy(). I followed the link https://sagemaker.readthedocs.io/en/stable/using_pytorch.html to create and deploy the model, where I don't see "PyTorchModel" being used anywhere. I would like to know the difference and when to use what.
I tried the following so far.
Step 1 : I called pytorch estimator
pytorch_model = PyTorch(entry_point='entry_v1.py',
train_instance_type='ml.m5.4xlarge',
role = role,
train_instance_count=1,
output_path = "s3://model-output-bucket/test",
framework_version='1.1',
hyperparameters = {'epochs': 10,'learning-rate': 0.01})
Step2 : I called fit method
pytorch_model.fit({'train': 's3://training-data/train_data.csv',
'test':'s3://testing-data/test_data.csv'})
Step3: I called deploy method.
predictor = pytorch_model.deploy(instance_type='ml.m4.xlarge', initial_instance_count=1)
I would like to know when to call create_model() here.
I got some understanding over here. We use [from sagemaker.pytorch import PyTorch] for end to end process where we train a model with .fit() and then we can deploy the model with .deploy()
But, with [from sagemaker.pytorch import PyTorchModel] we can use the model which is already trained.
Step1:
pytorch_model = PyTorchModel(model_data='s3://model-output-bucket/sagemaker-pytorch-2019-08-20-16-54-32-500/output/model.tar.gz', role=role,entry_point=entry_v1.py,sagemaker_session=sagemaker_session)
Step2:
predictor = pytorch_model.deploy(instance_type='ml.c4.xlarge', initial_instance_count=1)
Also, .create_model() of PyTorch Estimator will return an object of PyTorchModel.
Please correct me if I am wrong anywhere.
| PyTorch class is inherited from the Framework class whereas PyTorchModel is inherited from the FrameworkModel class.
The difference between these two is that:
Framework is used to perform the end to end training and deployment of a model
FrameworkModel is used to create an Estimator from a pretrained model and then use it to deploy an endpoint using the deploy() method. This does not involve training of the model.
In the PyTorch class, you cannot directly call the deploy() method. You'll first have to invoke the fit() method, which can then be followed by the deploy() method.
You can read the following blog on how you can bring your own Pretrained model to Sagemaker
https://aws.amazon.com/blogs/machine-learning/bring-your-own-pre-trained-mxnet-or-tensorflow-models-into-amazon-sagemaker/
Regarding the create_model() method, you need not invoke it in your script if you'd like to directly deploy an endpoint after training.
It is generally used in scenarios where you need to create a pipeline of inferences through multiple models
| https://stackoverflow.com/questions/57604157/ |
pytorch TypeError: '<' not supported between instances of 'Example' and 'Example' when referring to iterator | I am trying to use my own dataset to classify text according to https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/5%20-%20Multi-class%20Sentiment%20Analysis.ipynb. My dataset is a csv of sentences and a class associated with it. there are 6 different classes:
sent class
'the fox is brown' animal
'the house is big' object
'one water is drinkable' water
...
When running:
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
print(start_time)
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
#print(train_loss.type())
#print(train_acc.type())
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut5-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
, I obtained the following error
1566460706.977012
epoch_loss
<torchtext.data.iterator.BucketIterator object at 0x000001FABE907E80>
TypeError: '<' not supported between instances of 'Example' and 'Example'
pointing to:
TypeError Traceback (most recent call last)
<ipython-input-22-19e8a7eb204e> in <module>()
10 #print(train_loss.type())
11 #print(train_acc.type())
---> 12 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
13
14 end_time = time.time()
<ipython-input-21-83b02f99bca7> in evaluate(model, iterator, criterion)
9 print('epoch_loss')
10 print(iterator)
---> 11 for batch in iterator:
12 print('batch')
13 predictions = model(batch.text)
I am new to pytorch, so only added a line to idenify iterator data type and got:
<torchtext.data.iterator.BucketIterator object at 0x000001FABE907E80>
I tried to determine specific attributes following https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py to no avail.
Any suggestions are appreciated.
The code for the evaluate (where the iterator is) method is:
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
print('epoch_loss')
print(iterator)
for batch in iterator:
print('batch')
predictions = model(batch.text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
| I faced a similar issue and got it resolved by using sort_key and sort_within_batch while creating iterators.
train_iterator, valid_iterator = BucketIterator.splits(
(train, valid),
batch_size = BATCH_SIZE,
sort_key = lambda x: len(x.sent),
sort_within_batch=True,
device = device)
| https://stackoverflow.com/questions/57605217/ |
Taking a norm of matrix rows/cols in pytorch | The norm of a vector can be taken by
torch.norm(vec)
However, how to take a norm of a set of vectors grouped as a matrix (either as rows or columns)?
For example, if a matrix size is (5,8), then the rows norms should return a vector of norms of size (5).
| torch.norm without extra arguments performs what is called a Frobenius norm which is effectively reshaping the matrix into one long vector and returning the 2-norm of that. To take the norm along a particular dimension provide the optional dim argument.
For example torch.norm(mat, dim=1) will compute the 2-norm along the columns (i.e. this will compute the 2-norm of each row) thus converting a mat of size [N,M] to a vector of norms of size [N].
To compute the norm of the columns use dim=0.
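For example, a small sketch matching the (5, 8) case from the question:
import torch

mat = torch.randn(5, 8)
row_norms = torch.norm(mat, dim=1)  # 2-norm of each row    -> shape [5]
col_norms = torch.norm(mat, dim=0)  # 2-norm of each column -> shape [8]
print(row_norms.shape, col_norms.shape)  # torch.Size([5]) torch.Size([8])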
| https://stackoverflow.com/questions/57627833/ |
PyTorch not downloading | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Than it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I have already tried to mix around the the different options but none of them has worked.
ERROR: ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0+cpu
I tried to do pip install pytorch but pytorch doesn't support pypi
| I've been in the same situation.
My problem was the Python version, specifically whether it was 32-bit or 64-bit.
The Python I had installed was 32-bit.
You should check which version (32-bit or 64-bit) of Python you installed.
You can check it in the Apps section of Windows Settings: search for Python and you will see which version you've installed.
After I installed the 64-bit version of Python, the problem was solved.
I hope you figure it out!
environment : win 10
| https://stackoverflow.com/questions/57642019/ |
How to set the output size of an RNN? | I want to have an RNN with input size 7, hidden size 10 and output size 2.
So for an input of, say, shape 99x1x7 I expect an output of shape 99x1x2.
For an RNN alone, I get:
model = nn.RNN(input_size=7, hidden_size=10, num_layers=1)
output,hn=model(torch.rand(99,1,7))
print(output.shape) #torch.Size([99, 1, 10])
print(hn.shape) #torch.Size([ 1, 1, 10])
So I assume I still have to put a Linear behind it:
model = nn.Sequential(nn.RNN(input_size=7, hidden_size=10, num_layers=1),
nn.Linear(in_features=10, out_features=2))
model(torch.rand(99,1,7))
Traceback (most recent call last):
File "train_rnn.py", line 80, in <module>
main()
File "train_rnn.py", line 25, in main
model(torch.rand(99,1,7))
File "/home/.../virtual-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/.../virtual-env/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/.../virtual-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/.../virtual-env/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "/home/.../virtual-env/lib/python3.6/site-packages/torch/nn/functional.py", line 1404, in linear
if input.dim() == 2 and bias is not None:
AttributeError: 'tuple' object has no attribute 'dim'
I guess this is because Linear receives the tuple that RNN.forward yields. But how am I supposed to combine the two?
| From the pytorch doc https://pytorch.org/docs/stable/nn.html?highlight=rnn#torch.nn.RNN
the output is of shape seq_len, batch, num_directions * hidden_size
So depending on what you want you might add a fc layer to get an output of size 2.
Basically, a Sequential applies each module to the output of the previous one, so you must either not use a Sequential or create a special Linear layer that works on a sequence; the following should work:
class seq_Linear(nn.Module):
    def __init__(self, linear):
        super(seq_Linear, self).__init__()
        self.linear = linear
# To apply on every hidden state
def forward(self, x):
return torch.stack([self.linear(hs) for hs in x])
    # To apply on the last hidden state (alternative: keep only one of the two forward methods)
def forward(self, x):
return self.linear(x[-1])
and replace your nn.Linear by a seq_Linear(nn.Linear) in your code.
Edit: If you want to create a sequence of outputs of size 2, perhaps the best way would be to stack another RNN on top of your first one with input_size 10 and hidden_size 2; they should be stackable inside a Sequential without any trouble.
| https://stackoverflow.com/questions/57642161/ |
What is the difference between these two neural network structures? | first using nn.Parameter
class ModelOne(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(300, 10))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, x):
return x @ self.weights + self.bias
when I do
mo = ModelOne()
[len(param) for param in mo.parameters()]
it gives
[300, 10]
second using nn.Linear
class ModelTwo(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(300, 10)
def forward(self, x):
return self.linear(x)
same thing here gives
[10, 10]
| The difference lies in how nn.Linear initializes weights and bias:
class Linear(Module):
def __init__(self, in_features, out_features, bias=True):
super(Linear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = Parameter(torch.Tensor(out_features))
...
So, when you write nn.Linear(300, 10) the weight is (10, 300) and bias is (10). But, in the ModelOne, weights has dimension (300, 10).
You can confirm it using
for name, param in mo.named_parameters():
print(name, param.shape)
The output in ModelOne:
weights torch.Size([300, 10])
bias torch.Size([10])
In ModelTwo:
linear.weight torch.Size([10, 300])
linear.bias torch.Size([10])
Now, the reason you're getting [300, 10] in first case and [10, 10] in second case is because if you print the length of a 2d Tensor, then it'll only give its first dimension i.e.
a = torch.Tensor(10, 300)
b = torch.Tensor(10)
print(len(a), len(b))
(10, 10)
| https://stackoverflow.com/questions/57660053/ |
DataLoader crashes when shuffling | I'm using DataLoader to read from a custom Dataset object based on numpy memmap.
As long as I read the data without shuffling everything works fine but, as I set shuffle=True, the runtime crash.
I tried implementing the shuffling mechanism in the Dataset class by using a permutation vector and setting shuffle=False in the DataLoader but the issue persists.
I also noticed that, when shuffling, the __getitem__() function of the Dataset object is called n times, where n is the batch_size.
Here's the Dataset code:
class CustomDataset(Dataset):
num_pattern = 60112
base_folder = 'dataset'
def __init__(self, root):
self.root = os.path.expanduser(root)
self.output_ = np.memmap('{0}/output'.format(root), 'int64', 'r', shape=(60112, 62))
self.out_len = np.memmap('{0}/output-lengths'.format(root), 'int32', 'r', shape=(60112))
self.input_ = np.memmap('{0}/input'.format(root), 'float32', 'r', shape=(60112, 512, 1024))
self.in_len = np.memmap('{0}/input-lengths'.format(root), 'int32', 'r', shape=(60112))
def __len__(self):
return self.num_pattern
def __getitem__(self, index):
return (self.in_len[index], torch.from_numpy(self.input_[index])), (self.out_len[index], torch.from_numpy(self.output_[index]))
if __name__ == '__main__':
dataset = CustomDataset(root='/content/')
data_loader = data.DataLoader(dataset, batch_size=32, shuffle=False, num_workers=1)
for i, data in enumerate(data_loader, 0):
# training
The error stack is the following:
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _try_get_batch(self, timeout)
510 try:
--> 511 data = self.data_queue.get(timeout=timeout)
512 return (True, data)
9 frames
/usr/lib/python3.6/multiprocessing/queues.py in get(self, block, timeout)
103 timeout = deadline - time.monotonic()
--> 104 if not self._poll(timeout):
105 raise Empty
/usr/lib/python3.6/multiprocessing/connection.py in poll(self, timeout)
256 self._check_readable()
--> 257 return self._poll(timeout)
258
/usr/lib/python3.6/multiprocessing/connection.py in _poll(self, timeout)
413 def _poll(self, timeout):
--> 414 r = wait([self], timeout)
415 return bool(r)
/usr/lib/python3.6/multiprocessing/connection.py in wait(object_list, timeout)
910 while True:
--> 911 ready = selector.select(timeout)
912 if ready:
/usr/lib/python3.6/selectors.py in select(self, timeout)
375 try:
--> 376 fd_event_list = self._poll.poll(timeout)
377 except InterruptedError:
/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/signal_handling.py in handler(signum, frame)
62 # Python can still get and update the process status successfully.
---> 63 _error_if_any_worker_fails()
64 if previous_handler is not None:
RuntimeError: DataLoader worker (pid 3978) is killed by signal: Bus error.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-8-b407a8532808> in <module>()
5 data_loader = data.DataLoader(dataset, batch_size=4, shuffle=True, num_workers=1)
6
----> 7 for i, data in enumerate(data_loader, 0):
8 print(i)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
574 while True:
575 assert (not self.shutdown and self.batches_outstanding > 0)
--> 576 idx, batch = self._get_batch()
577 self.batches_outstanding -= 1
578 if idx != self.rcvd_idx:
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _get_batch(self)
551 else:
552 while True:
--> 553 success, data = self._try_get_batch()
554 if success:
555 return data
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _try_get_batch(self, timeout)
517 if not all(w.is_alive() for w in self.workers):
518 pids_str = ', '.join(str(w.pid) for w in self.workers if not w.is_alive())
--> 519 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
520 if isinstance(e, queue.Empty):
521 return (False, None)
RuntimeError: DataLoader worker (pid(s) 3978) exited unexpectedly
| RuntimeError: DataLoader worker (pid(s) 3978) exited unexpectedly
This error comes from the worker subprocesses. In data.DataLoader(dataset, batch_size=32, shuffle=False, num_workers=1), set num_workers=0 so that the data is loaded in the main process and no worker subprocesses are spawned.
| https://stackoverflow.com/questions/57664289/ |
What is the difference between feature and classifier? | I saw code like this
self.feature = model_func()
if loss_type == 'softmax':
self.classifier = nn.Linear(self.feature.final_feat_dim, num_class)
self.classifier.bias.data.fill_(0)
elif loss_type == 'dist': #Baseline ++
self.classifier = backbone.distLinear(self.feature.final_feat_dim, num_class)
where model_func is a ConvNet 4/6 or ResNet 10/18/34/101
What here is classifier?
I know that in neural networks we have parameters that we learn, buffers that are used to store something that gets updated during training, activations that are the results after each layer.
Is feature same as activation, and what is a classifier, where is the end of features, and the beginning of classifier in a neural network? Is the result of a classifier also activation?
| I find the question a little messy, but I'll do my best based on what I understand you're asking.
What here is classifier?
The classifier would be the model itself. The model is the one who will, after being trained, be able to classify new data.
Is feature same as activation
I don't know what kind of feature you have in mind. In data science context, a feature is understood to be one of the variables of the data one has. For example, if you have a dataset about houses, you may have features such as latitude, long., if it has a pool, how many bedrooms it has, etc.
Activation functions are mathematical equations that determine the output of a neural network. The function is attached to each neuron in the network, and determines whether it should be activated (“fired”) or not, based on whether each neuron’s input is relevant for the model’s prediction. [1]
I'm not sure I'm truly understanding what you're asking.
Is the result of a classifier also activation?
The result of a classifier is the label, the class to which each data point belongs. Activation functions are used by neural networks in the process of classifying.
Hope this helps!
[1] https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/
| https://stackoverflow.com/questions/57667304/ |
Not able to load the saved graph using torch.utils.tensorboard.SummaryWriter.add_graph method | I am saving the scalar summary along with model graph using add_scalar and add_graph methods from torch.utils.tensorboard.SummaryWriter.
While running tensorboard on the summary file, it doesn't show the model graph, just 2 small rectangles at the bottom right. However, it is able to show the scalar variables and images.
Sample code from the pytorch documentation attached
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms
# Writer will output to ./runs/ directory by default
writer = SummaryWriter()
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
model = torchvision.models.resnet50(False)
print(model)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
images, labels = next(iter(trainloader))
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
Any pointer to the solution would be appreciated
PyTorch version - 1.2.0 Tensorboard version - 1.15.0a20190828
| I had the same problem.
Check out this thread:
https://github.com/pytorch/pytorch/issues/24157
TLDR: Update PyTorch to PyTorch-nightly and the problem should be solved.
https://pytorch.org/get-started/locally/
| https://stackoverflow.com/questions/57706256/ |
How can I find a solution for the "FileNotFoundError" | I'm currently working on an image classifier project. During the testing of the predict function, I receive the error: FileNotFoundError: [Errno 2] No such file or directory: 'flowers/test/1/image_06760'
the path of the file is correct.
You can find the whole notebook here:
https://github.com/MartinTschendel/image-classifier-1/blob/master/190829-0510-Image%20Classifier%20Project.ipynb
Here is the respective predict function and the test of this function:
def predict(image_path, model, topk=5):
''' Predict the class (or classes) of an image using a trained deep learning model.
'''
#loading model
loaded_model = load_checkpoint(model)
# implement the code to predict the class from an image file
img = Image.open(image_name)
img = process_image(img)
# convert 2D image to 1D vector
img = np.expand_dims(img, 0)
img = torch.from_numpy(img)
#set model to evaluation mode and turn off gradients
loaded_model.eval()
with torch.no_grad():
#run image through network
output = loaded_model.forward(img)
#calculate probabilities
probs = torch.exp(output)
probs_top = probs.topk(topk)[0]
index_top = probs.topk(topk)[1]
# Convert probabilities and outputs to lists
probs_top_list = np.array(probs_top)[0]
index_top_list = np.array(index_top[0])
#load index and class mapping
loaded_model.class_to_idx = train_data.class_to_idx
#invert index-class dictionary
idx_to_class = {x: y for y, x in class_to_idx.items()}
#convert index list to class list
classes_top_list = []
for index in index_top_list:
classes_top_list += [idx_to_class[index]]
return probs_top_list, classes_top_list
# test predict function
# inputs equal paths to saved model and test image
model_path = '190824_checkpoint.pth'
image_name = data_dir + '/test' + '/1/' + 'image_06760'
probs, classes = predict(image_name, model_path, topk=5)
print(probs)
print(classes)
This is the error I receive:
FileNotFoundError: [Errno 2] No such file or directory: 'flowers/test/1/image_06760'
| These are the images you have
You should have it as
image_name = data_dir + '/test' + '/1/' + 'image_06760.jpg'
for it to work as you were not specifying the image extension.
| https://stackoverflow.com/questions/57710920/ |
Getting truth value of array with more than one element in ambigous for albuementation transform | I am using albumentations for applying transform to a Pytorch model but getting this error and I m not getting any clue of what this error is about. Only thing I know is this is occuring due to transform which is being applied but not sure what is wrong with that.
ValueError: Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 99, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "<ipython-input-23-119ea6bc360e>", line 24, in __getitem__
image = self.transform(image)
File "/opt/conda/lib/python3.6/site-packages/albumentations/core/composition.py", line 164, in __call__
need_to_run = force_apply or random.random() < self.p
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
This is are the snippets of code.
Dataloader getitem() method:
image = cv2.imread(p_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = crop_image_from_gray(image)
image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
image = cv2.addWeighted ( image,4, cv2.GaussianBlur( image , (0,0) , 10) ,-4 ,128)
print(image.shape)
image = self.transform(image)
transforms applied :
val_transform = albumentations.Compose([
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensor()
])
The class is called by:
valset = MyDataset(val_df, transform = val_transform)
| Following the official albumentations documentation, you can apply the transformation to an image like this:
from PIL import Image
import cv2
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from albumentations import Compose, RandomCrop, Normalize, HorizontalFlip, Resize
from albumentations.pytorch import ToTensor
class AlbumentationsDataset(Dataset):
"""__init__ and __len__ functions are the same as in TorchvisionDataset"""
def __init__(self, file_paths, labels, transform=None):
self.file_paths = file_paths
self.labels = labels
self.transform = transform
def __len__(self):
return len(self.file_paths)
def __getitem__(self, idx):
label = self.labels[idx]
file_path = self.file_paths[idx]
# Read an image with OpenCV
image = cv2.imread(file_path)
# By default OpenCV uses BGR color space for color images,
# so we need to convert the image to RGB color space.
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = crop_image_from_gray(image)
image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
image = cv2.addWeighted ( image,4, cv2.GaussianBlur( image , (0,0) , 10) ,-4 ,128)
        image = Image.fromarray(image, mode='RGB')
if self.transform:
augmented = self.transform(image=np.array(image))
image = augmented['image']
image = np.transpose(image, (2, 0, 1))
return image, label
albumentations_transform = Compose([
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensor()
])
albumentations_dataset = AlbumentationsDataset(
file_paths=['./images/image_1.jpg', './images/image_2.jpg', './images/image_3.jpg'],
labels=[1, 2, 3],
transform=albumentations_transform,
)
test_loader = DataLoader(dataset=albumentations_dataset, batch_size=4, drop_last=False, shuffle=False)
| https://stackoverflow.com/questions/57718447/ |
TypeError although same shape: if not (target.size() == input.size()): 'int' object is not callable | This is the error message I get. In the first line, I output the shapes of predicted and target. From my understanding, the error arises from those shapes not being the same but here they clearly are.
torch.Size([6890, 3]) torch.Size([6890, 3])
Traceback (most recent call last):
File "train.py", line 251, in <module>
main()
File "train.py", line 230, in main
train(net, training_dataset, targets, device, criterion, optimizer, epoch, args.epochs)
File "train.py", line 101, in train
loss = criterion(predicted, target.detach().cpu().numpy())
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 443, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2244, in mse_loss
if not (target.size() == input.size()):
TypeError: 'int' object is not callable
I hope all the relevant context information is provided and if not, please let me know. Thanks for any suggestions!
EDIT: This is the part of the code where this error occurs:
target = torch.from_numpy(np.load(file_dir + '/points/points{:03}.npy'.format(i))).to(device)
rv = torch.zeros(12 * outputs.shape[0])
for j in [x for x in range(10) if x != i]:
source = torch.from_numpy(np.load(file_dir + '/points/points{:03}.npy'.format(j))).to(device)
rv = factor.ransac(source, target, prob, n_iter, tol, device) # some self-written RANSAC-like method
predicted = factor.predict(source, rv, outputs)
print(target.shape, predicted.shape)
loss = criterion(predicted, target.detach().cpu().numpy()) ## error occurs here
criterion is nn.MSELoss().
| A little bit late but maybe it will help someone else. Just solved the same problem for myself.
As Alpha said in his answer, we cannot call .size() on a numpy array.
But we can call .size() on a tensor.
Therefore, we need to make our target a tensor. You can do it like this:
target = torch.from_numpy(target)
I'm using GPU, so I also needed to send my target to GPU. You can do it like this:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
target = target.to(device)
The loss function should then work as expected.
| https://stackoverflow.com/questions/57724134/ |
Binary cross entropy Vs categorical cross entropy with 2 classes | When considering the problem of classifying an input to one of 2 classes, 99% of the examples I saw used a NN with a single output and sigmoid as their activation followed by a binary cross-entropy loss. Another option that I thought of is having the last layer produce 2 outputs and use a categorical cross-entropy with C=2 classes, but I never saw it in any example.
Is there any reason for that?
Thanks
| If you are using softmax on top of the two output network you get an output that is mathematically equivalent to using a single output with sigmoid on top.
Do the math and you'll see.
In practice, from my experience, if you look at the raw "logits" of the two-output net (before softmax) you'll see that one is exactly the negative of the other. This is a result of the gradients pulling each neuron in exactly the opposite direction.
Therefore, since both approaches are equivalent, the single-output configuration has fewer parameters and requires less computation, so it is more advantageous to use a single output with a sigmoid on top.
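A quick numerical sketch of that equivalence (made-up logits, not tied to any particular model): a 2-output softmax whose logits are (0, z) gives exactly the same class-1 probability as a single-output sigmoid on z.
import torch
import torch.nn.functional as F

z = torch.randn(8)                                         # single-output logits
two_logits = torch.stack([torch.zeros_like(z), z], dim=1)  # equivalent 2-output logits
p_sigmoid = torch.sigmoid(z)                               # P(class 1), single output
p_softmax = F.softmax(two_logits, dim=1)[:, 1]             # P(class 1), two outputs
print(torch.allclose(p_sigmoid, p_softmax))                # True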
| https://stackoverflow.com/questions/57726064/ |
torch.cat but create a new dimension | I would like to concatenate tensors, not along a dimension, but by creating a new dimension.
For example:
x = torch.randn(2, 3)
x.shape # (2, 3)
torch.cat([x,x,x,x], 0).shape # (8, 3)
# This concats along dim 0, not what I want
torch.cat([x,x,x,x], -1).shape # (2, 10)
# This concats along dim 1, not what I want
torch.cat([x[None, :, :],x[None, :, :],x[None, :, :],x[None, :, :]], 0).shape
# => (4, 2, 3)
# This is what I want, but unwieldy
Is there a simpler way?
| Just use torch.stack:
torch.stack([x,x,x,x]).shape # (4, 2, 3)
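For completeness, torch.stack also accepts a dim argument if you want the new dimension somewhere other than the front:
torch.stack([x,x,x,x], dim=1).shape # (2, 4, 3)
torch.stack([x,x,x,x], dim=2).shape # (2, 3, 4)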
| https://stackoverflow.com/questions/57727618/ |
How to ensure that a batch contains samples from all workers with PyTorch's DataLoader? | I want to know how to use torch.utils.data.DataLoader in PyTorch, especially in a multi-worker case.
I found that one batch output from DataLoader always comes from a single worker.
I expected that there is a queue in the DataLoader which stores data from all of the workers and DataLoader shuffles them in the queue to output the random batch data. I think this is the way in tf.data.Dataset in Tensorflow.
Can we implement a similar function in PyTorch? I want to load a dataset from big serialized files (like Tfrecord) by using multi workers. In this case, mixing the source file in one batch, which means mixing the source of the worker, is important.
Please refer to following code:
import random
import time
import torch
class MyDataset(torch.utils.data.Dataset):
def __len__(self):
return 50
def __getitem__(self, idx):
info = torch.utils.data.get_worker_info()
time.sleep(random.uniform(0, 1))
print("[{}]:{}".format(info.id, idx))
return idx, info.id
if __name__ == '__main__':
dataset = MyDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=5, shuffle=False, num_workers=2)
for batch in dataloader:
print(batch)
Output:
[0]:0
[1]:5
[0]:1
[1]:6
[0]:2
[0]:3
[1]:7
[0]:4
[tensor([0, 1, 2, 3, 4]), tensor([0, 0, 0, 0, 0])]
[1]:8
[1]:9
[tensor([5, 6, 7, 8, 9]), tensor([1, 1, 1, 1, 1])]
[0]:10
[0]:11
[1]:15
[1]:16
[0]:12
[1]:17
...
Here, [0, 1, 2, 3, 4] and [0, 0, 0, 0, 0] in [tensor([0, 1, 2, 3, 4]), tensor([0, 0, 0, 0, 0])] mean that this batch includes index 0-th to 4-th data came from worker id 0.
Note that shuffle=True does not solve this problem which only change the indices of data.
In this case, I want to get a batch like: [tensor([0, 5, 1, 6, 2]), tensor([0, 1, 0, 1, 0])].
| I've implemented something simple to solve a similar problem, where I have large video files as training data and each worker is responsible for loading and preprocessing a single file and then yielding samples from it. The problem is that, as the OP describes, with PyTorch's default data loading mechanism each batch contains samples only from a single video file.
First, let's review the problem. In this simplified code example each worker yields a single Tensor containing its zero-indexed worker id. With a batch size of 32 and 4 workers, we want each batch to contain 8 zeros, 8 ones, 8 twos and 8 threes.
from collections import defaultdict
import torch as T
import torch.utils.data as tdata
class Dataset(tdata.IterableDataset):
def __init__(self, batch_size: int):
self._bs = batch_size
def __iter__(self):
worker_info = tdata.get_worker_info()
if not worker_info:
raise NotImplementedError('Not implemented for num_workers=0')
for _ in range(self._bs):
yield T.tensor([worker_info.id])
batch_size = 32
num_workers = 4
dataset = Dataset(batch_size)
loader = tdata.DataLoader(dataset,
batch_size=batch_size,
num_workers=num_workers)
for batch in loader:
counts = defaultdict(int)
for n in batch.numpy().flatten():
counts[n] += 1
print(dict(counts))
Instead the code prints:
{0: 32}
{1: 32}
{2: 32}
{3: 32}
Meaning that the first batch contains samples only from worker 0, the second only from worker 1, etc. To remedy this, we will set the batch size in the DataLoader to batch_size // num_workers and use a simple wrapper over the DataLoader to pool samples from each worker for our batch:
def pooled_batches(loader):
loader_it = iter(loader)
while True:
samples = []
for _ in range(loader.num_workers):
try:
samples.append(next(loader_it))
except StopIteration:
pass
if len(samples) == 0:
break
else:
yield T.cat(samples, dim=0)
batch_size = 32
num_workers = 4
dataset = Dataset(batch_size)
per_worker = batch_size // num_workers
loader = tdata.DataLoader(dataset,
batch_size=per_worker,
num_workers=num_workers)
for batch in pooled_batches(loader):
counts = defaultdict(int)
for n in batch.numpy().flatten():
counts[n] += 1
print(dict(counts))
And the code now prints
{0: 8, 1: 8, 2: 8, 3: 8}
{0: 8, 1: 8, 2: 8, 3: 8}
{0: 8, 1: 8, 2: 8, 3: 8}
{0: 8, 1: 8, 2: 8, 3: 8}
as expected.
| https://stackoverflow.com/questions/57729279/ |
Can't import torch in jupyter notebook | System: macOS 10.13.6
Python: 3.7
Anaconda3
I have trouble when import torch in jupyter notebook.
ModuleNotFoundError: No module named 'torch'
Here is how I install pytorch:
conda install pytorch torchvision -c pytorch
I've checked PyTorch is installed in my anaconda environment:
When I command python3 in my terminal and import torch, it works. But not work in jupyter notebook
I've tried:
conda update conda
conda install mkl=2018
But still the same error.
| You have to install jupyter in addition to pytorch inside your activated conda env. Here are the installation steps:
1. Create conda env
for example: pytorch_p37 with python 3.7:
user@pc:~$ conda create -n pytorch_p37 python=3.7
2. Activate it
user@pc:~$ conda activate pytorch_p37
Or with (for older conda versions):
user@pc:~$ source activate pytorch_p37
Now you should see (pytorch_p37) before your shell prompt:
(pytorch_p37) user@pc:~$
3. Go to PyTorch website and choose appropriate installation command via conda. Run it in your shell, for example
(pytorch_p37) user@pc:~$ conda install pytorch torchvision -c pytorch
4. Install jupyter inside your activated env as well
(pytorch_p37) user@pc:~$ conda install jupyter
5. Verify the installation
(pytorch_p37) user@pc:~$ conda list
# packages in environment at /home/user/anaconda3/envs/pytorch_p37:
#
# Name
...
jupyter 1.0.0
jupyter_client 5.3.1
jupyter_console 6.0.0
jupyter_core 4.5.0
...
python 3.7.4
pytorch 1.2.0
...
6. Run jupyter
(pytorch_p37) user@pc:~$ jupyter notebook
| https://stackoverflow.com/questions/57735701/ |
What does the `model.parameters()` include? | In Pytorch,
What will be registered into the model.parameters().
As far as now, what I know are as belows:
1. Conv layer: weight bias
2. BN layers: weight(gamma) bias(beta)
3. nn.Parameter()
such as: self.alpha = nn.Parameter(torch.rand(10)) defined in the model.
My question is:
Are there any other parameters that are registered in model.parameters()?
PS. The most common case for the model.parameters() is in the optimizer,
e.g. pytorch resnet example
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
Thank you in advance.
| As you listed, model.parameters() returns an iterator over all registered parameters of the model: the weights and biases (when bias=True) of layers such as Conv and BatchNorm, plus anything you assign as an nn.Parameter.
It is given as an argument to an optimizer to update the weight and bias values of the model with one line of code optimizer.step(), which you then use when next you go over your dataset.
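A quick way to see exactly what gets registered is to iterate named_parameters() — here a hypothetical tiny model just for illustration (no forward is needed merely to list parameters):
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super(Tiny, self).__init__()
        self.conv = nn.Conv2d(3, 8, 3)             # registers conv.weight and conv.bias
        self.bn = nn.BatchNorm2d(8)                # registers bn.weight (gamma) and bn.bias (beta)
        self.alpha = nn.Parameter(torch.rand(10))  # a plain nn.Parameter attribute is registered too

for name, p in Tiny().named_parameters():
    print(name, tuple(p.shape))
# conv.weight (8, 3, 3, 3), conv.bias (8,), bn.weight (8,), bn.bias (8,), alpha (10,)
Note that buffers such as BatchNorm's running_mean/running_var are not part of parameters(); they are returned by model.buffers() instead.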
| https://stackoverflow.com/questions/57738199/ |
Data Normalization using Pytorch | I'm starting to work on the classification of images dataset, as many tutorials I followed; it starts by normalizing the data (train and test data)
My question is: if I want to normalize the data by shifting and scaling it with a factor of 0.5
What does this mean 'the factor of something x'?
I know that it will be used within .Normalize():
transform_train = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(),
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(),
])
But I'm a little bit confused about the meaning shift and scale (maybe it's like resize?) by the factor of X
Thanks.
| Shifting and scaling refer to the pixel values, not to the image geometry. What you do is subtract the mean (shifting the mean of the pixel values of the whole dataset to 0) and divide by the standard deviation (scaling the values to unit variance).
It has nothing to do with modifying the size of the image or the like.
In numy you would do something like:
mean, std = np.mean(image), np.std(image)
image = image - mean
image = image / std
Note: you usually wouldn't normalize the data by a fixed 0.5 but by its mean and standard deviation.
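If the exercise literally asks for a shift and scale by a factor of 0.5 (a common recipe to map [0, 1] tensors to [-1, 1]), it would look like this in the question's snippet — assuming 3-channel RGB input; the constants here are fixed, not dataset statistics:
transform_train = transforms.Compose([
    transforms.ToTensor(),                                    # scales pixel values to [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),   # (x - 0.5) / 0.5 -> [-1, 1]
])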
| https://stackoverflow.com/questions/57744270/ |
Multiple outputs in Pytorch, Keras style | How could you implement these 2 Keras models (inspired by the Datacamp course 'Advanced Deep Learning with Keras in Python') in Pytorch:
Classification with 1 input, 2 outputs:
from keras.layers import Input, Concatenate, Dense
from keras.models import Model
input_tensor = Input(shape=(1,))
output_tensor = Dense(2)(input_tensor)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='adam', loss='categorical_crossentropy')
X = ... # e.g. a pandas series
y = ... # e.g. a pandas df with 2 columns
model.fit(X, y, epochs=100)
A model with classification and regression:
from keras.layers import Input, Dense
from keras.models import Model
input_tensor = Input(shape=(1,))
output_tensor_reg = Dense(1)(input_tensor)
output_tensor_class = Dense(1, activation='sigmoid')(output_tensor_reg)
model.compile(loss=['mean_absolute_error','binary_crossentropy']
X = ...
y_reg = ...
y_class = ...
model.fit(X, [y_reg, y_class], epochs=100)
| This resource was particularly helpful.
Basically, the idea is that, contrary to Keras, you have to explicitly say where you're going to compute each output in your forward function and how the global loss is gonna be computed from them.
For example, regarding the 1st example:
def __init__(self, ...):
... # define your model elements
def forward(self, x):
# Do your stuff here
...
x1 = F.sigmoid(x) # class probabilities
x2 = F.sigmoid(x) # bounding box calculation
return x1, x2
Then you compute the losses:
out1, out2 = model(data)
loss1 = criterion1(out1, target1)
loss2 = criterion2(out2, target2)
alpha = ... # define the weights of each sub-loss in the global loss
loss = alpha * loss1 + (1-alpha) * loss2
loss.backward()
For the 2nd one, it's almost the same but you compute the loss at different point in the forward pass:
def __init__(self, ...):
... # define your model elements
def forward(self, main_input, aux_input):
aux = F.relu(self.dense_1(main_input))
x = F.relu(self.input_layer(aux))
x = F.sigmoid(x)
return x, aux
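To make the second (regression + classification) case concrete, here is a minimal runnable sketch; layer sizes are made up for illustration and only mirror the Keras snippet in the question:
import torch
import torch.nn as nn

class RegClassNet(nn.Module):
    def __init__(self):
        super(RegClassNet, self).__init__()
        self.reg_head = nn.Linear(1, 1)    # regression output
        self.class_head = nn.Linear(1, 1)  # classification logit built on top of the regression output

    def forward(self, x):
        y_reg = self.reg_head(x)
        y_class = torch.sigmoid(self.class_head(y_reg))
        return y_reg, y_class

model = RegClassNet()
criterion_reg, criterion_class = nn.L1Loss(), nn.BCELoss()  # mean_absolute_error / binary_crossentropy
x = torch.randn(16, 1)
y_reg, y_class = torch.randn(16, 1), torch.randint(0, 2, (16, 1)).float()
out_reg, out_class = model(x)
loss = criterion_reg(out_reg, y_reg) + criterion_class(out_class, y_class)
loss.backward()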
| https://stackoverflow.com/questions/57753687/ |
TypeError: hook() takes 2 positional arguments but 3 were given | I'm new to pytorch and I'm trying to use hook() and register_forward_pre_hook in my project
What I've tried is
def get_features_hook(module,input):
print(input)
handle_feat = alexnet.features[0].register_forward_pre_hook(get_features_hook)
a = alexnet(input_data)
And I got belows error at a = alexnet(input_data)
TypeError: get_features_hook() takes 2 positional arguments but 3 were given
I've lost few hours on this problem and I just can't able to figure it out.
Anyone likes to help me?
With Shai's help, I tried his codes, and I got this
Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
get_features_hook called with 2 args:
arg of type Conv2d
arg of type tuple
File "<input>", line 2, in get_features_hook
NameError: name 'args' is not defined
| If get_features_hook is defined inside your torch.nn.Module, it should be annotated as @staticmethod; otherwise self is implicitly passed to it as the first argument.
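A minimal sketch of both working variants (alexnet here is the model from the question):
# Option 1: a free function with the (module, input) signature expected by register_forward_pre_hook
def get_features_hook(module, input):
    print(type(module).__name__, [t.shape for t in input])

handle = alexnet.features[0].register_forward_pre_hook(get_features_hook)

# Option 2: defined inside an nn.Module -- mark it as a staticmethod so `self` is not added
import torch.nn as nn

class Wrapper(nn.Module):
    @staticmethod
    def get_features_hook(module, input):
        print(type(module).__name__, [t.shape for t in input])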
| https://stackoverflow.com/questions/57765751/ |
Specify network to not backpropagate the gradients in a given function | In my neural network, in order to compute the loss, I need to do some intermediate computations during training to first obtain some transformations rv.
rv = factor.ransac(source, target, prob, device)
predicted = factor.predict(source, rv, outputs, device)
loss = criterion(predicted, target)
I want to backpropagate the gradients only through predicted but not through factor.ransac. How do I do this?
| You can use detach().
rv = factor.ransac(source, target, prob, device).detach()
That way, no gradients will flow back through the computations inside factor.ransac.
| https://stackoverflow.com/questions/57774456/ |
Is indexing with a bool array or index array faster in numpy/pytorch? | I can index my numpy array / pytorch tensor with a boolean array/tensor of the same shape or an array/tensor containing integer indexes of the elements I'm after. Which is faster?
| The following tests indicate that it's generally 3x to 20x faster with an index array in both numpy and pytorch:
In [1]: a = torch.arange(int(1e5))
idxs = torch.randint(len(a), (int(1e4),))
ind = torch.zeros_like(a, dtype=torch.uint8)
ind[idxs] = 1
ac, idxsc, indc = a.cuda(), idxs.cuda(), ind.cuda()
In [2]: %timeit a[idxs]
73.4 µs ± 1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [3]: %timeit a[ind]
622 µs ± 8.99 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [4]: %timeit ac[idxsc]
9.51 µs ± 475 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [5]: %timeit ac[indc]
59.6 µs ± 313 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [6]: idxs = torch.arange(len(a)-1, dtype=torch.long)
ind = torch.zeros_like(a, dtype=torch.uint8)
ind[idxs] = 1
ac, idxsc, indc = a.cuda(), idxs.cuda(), ind.cuda()
In [7]: %timeit a[idxs]
146 µs ± 14.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [8]: %timeit a[ind]
4.59 ms ± 106 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [9]: %timeit ac[idxsc]
33 µs ± 15.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [10]: %timeit ac[indc]
85.9 µs ± 56.9 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
| https://stackoverflow.com/questions/57783029/ |
mask first k elements in a 3D tensor in PyTorch (different k for each row) | I have a tensor M of dimensions [NxQxD] and a 1d tensor of indices idx (of size N). I want to efficiently create a tensor mask of dimensions [NxQxD] such that mask[i,j,k] = 1 iff j <= idx[i], i.e. I want to keep only the idx[i] first dimensions out of Q in the second dimension (dim=1) of M, for every row i.
Thanks!
| It turns out this can be done via a broadcasting trick:
mask_2d = torch.arange(Q)[None, :] < idx[:, None] #(N,Q)
mask_3d = mask_2d[..., None] #(N,Q,1)
masked = mask_3d.float() * M #(N,Q,D)
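A tiny sanity check of the trick with made-up sizes:
import torch

N, Q, D = 3, 5, 2
M = torch.randn(N, Q, D)
idx = torch.tensor([1, 3, 5])                       # keep the first idx[i] entries along dim 1
mask_2d = torch.arange(Q)[None, :] < idx[:, None]   # (N, Q)
masked = mask_2d[..., None].float() * M             # (N, Q, D)
print(mask_2d.int())
# tensor([[1, 0, 0, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1]], dtype=torch.int32)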
| https://stackoverflow.com/questions/57788412/ |
torch.utils.data.random_split() is not splitting the data | I’m not getting split when I use torch.utils.data.random_split.
I get correct numbers for train_size and val_size, but when I do random_split, both train_data and val_data get full_data. There is no split happening.
Please help me with this issue.
class DeviceLoader(Dataset):
def __init__(self, root_dir, train=True, transform=None):
self.file_path = root_dir
self.train = train
self.transform = transform
self.file_names = ['%s/%s'%(root,file) for root,_,files in os.walk(root_dir) for file in files]
self.len = len(self.file_names)
self.labels = {'BP_Raw_Images':0, 'DT_Raw_Images':1, 'GL_Raw_Images':2, 'PO_Raw_Images':3, 'WS_Raw_Images':4}
def __len__(self):
return(len(self.file_names))
def __getitem__(self, idx):
file_name = self.file_names[idx]
device = file_name.split('/')[5]
img = self.pil_loader(file_name)
if(self.transform):
img = self.transform(img)
cat = self.labels[device]
if(self.train):
return(img, cat)
else:
return(img, file_name)
full_data = DeviceLoader(root_dir=’/kaggle/input/devices/dataset/’, transform=transforms, train=True)
train_size = int(0.7*len(full_data))
val_size = len(full_data) - train_size
train_data, val_data = torch.utils.data.random_split(full_data,[train_size,val_size])
Expected result is to split the full_data into train_data(2000) and val_data(500). But instead, I get full_data(2500) in both train and val.
| From the output below you can see that it actually creates subsets of the data, while the original dataset object is still there, which might be confusing. I did the following on the MNIST dataset:
train, validate, test = data.random_split(training_set, [50000, 10000, 10000])
print(len(train))
print(len(validate))
print(len(test))
output:
50000
10000
10000
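So with the dataset from the question, a sketch like the following (batch size is just an example) gives Subset objects whose lengths reflect the split, even though full_data itself keeps its original length:
train_data, val_data = torch.utils.data.random_split(full_data, [train_size, val_size])
print(len(full_data), len(train_data), len(val_data))  # full length vs. the two split lengths

train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=32, shuffle=False)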
| https://stackoverflow.com/questions/57789645/ |
Pytorch : how to run code on several machines in cluster | I am using a cluster to train a recurrent neural network developed using PyTorch. PyTorch automatically threads, which allows to use all the cores of a machine in parallel without having to explicitly program for it. This is great !
Now when I try to use several nodes at the same time using a script like this one :
#$ -S /bin/bash
#$ -N comparison_of_architecture
#$ -pe mvapich2-rostam 32
#4 -tc 4
#$ -o /scratch04.local/cnelias/Deep-Jazz/logs/out_comparison_training.txt
#$ -e /scratch04.local/cnelias/Deep-Jazz/logs/err_comparison_training.txt
#$ -t 1
#$ -cwd
I see that 4 nodes are being used but only one is actually doing work, so "only" 32 cores are in use.
I have no knowledge of parallel programming and I don't understand a thing in the tutorial provided on PyTorch's website, I am afraid this is completely out of my scope.
Are you aware of a simple way to let a PyTorch program run on several machines without having to explicitly program the exchanges of the messages and computation between these machines ?
PS : I unfortunately don't have a GPU and the cluster I am using also doesn't, otherwise I would have tried it.
| tl;dr There is no easy solution.
There are two ways how you can parallelize training of a deep learning model. The most commonly used is data parallelism (as opposed to model parallelism). In that case, you have a copy of the model on each device, run the model and back-propagation on each device independently and get the weight gradients. Now, the tricky part begins. You need to collect all gradients at a single place, sum them (differentiation is linear w.r.t. summation) and do the optimizer step. The optimizer computes the weight updates and you need to tell each copy of your model how to update the weights.
PyTorch can somehow do this for multiple GPUs on a single machine, but as far as I know, there is no ready-made solution to do this on multiple machines.
| https://stackoverflow.com/questions/57803005/ |
"AssertionError: Torch not compiled with CUDA enabled" in spite upgrading to CUDA version | I figured out this is a popular question, but still I couldn't find a solution for that.
I'm trying to run a simple repo Here which uses PyTorch. Although I just upgraded my Pytorch to the latest CUDA version from pytorch.org (1.2.0), it still throws the same error. I'm on Windows 10 and use conda with python 3.7.
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
How to fix the problem?
Here is my conda list:
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py37_0 anaconda
_pytorch_select 1.1.0 cpu anaconda
_tflow_select 2.3.0 mkl anaconda
absl-py 0.7.1 pypi_0 pypi
alabaster 0.7.12 py37_0 anaconda
anaconda 2019.07 py37_0 anaconda
anaconda-client 1.7.2 py37_0 anaconda
anaconda-navigator 1.9.7 py37_0 anaconda
anaconda-project 0.8.3 py_0 anaconda
argparse 1.4.0 pypi_0 pypi
asn1crypto 0.24.0 py37_0 anaconda
astor 0.8.0 pypi_0 pypi
astroid 2.2.5 py37_0 anaconda
astropy 3.2.1 py37he774522_0 anaconda
atomicwrites 1.3.0 py37_1 anaconda
attrs 19.1.0 py37_1 anaconda
babel 2.7.0 py_0 anaconda
backcall 0.1.0 py37_0 anaconda
backports 1.0 py_2 anaconda
backports-csv 1.0.7 pypi_0 pypi
backports-functools-lru-cache 1.5 pypi_0 pypi
backports.functools_lru_cache 1.5 py_2 anaconda
backports.os 0.1.1 py37_0 anaconda
backports.shutil_get_terminal_size 1.0.0 py37_2 anaconda
backports.tempfile 1.0 py_1 anaconda
backports.weakref 1.0.post1 py_1 anaconda
beautifulsoup4 4.7.1 py37_1 anaconda
bitarray 0.9.3 py37he774522_0 anaconda
bkcharts 0.2 py37_0 anaconda
blas 1.0 mkl anaconda
bleach 3.1.0 py37_0 anaconda
blosc 1.16.3 h7bd577a_0 anaconda
bokeh 1.2.0 py37_0 anaconda
boto 2.49.0 py37_0 anaconda
bottleneck 1.2.1 py37h452e1ab_1 anaconda
bzip2 1.0.8 he774522_0 anaconda
ca-certificates 2019.5.15 0 anaconda
certifi 2019.6.16 py37_0 anaconda
cffi 1.12.3 py37h7a1dbc1_0 anaconda
chainer 6.2.0 pypi_0 pypi
chardet 3.0.4 py37_1 anaconda
cheroot 6.5.5 pypi_0 pypi
cherrypy 18.1.2 pypi_0 pypi
click 7.0 py37_0 anaconda
cloudpickle 1.2.1 py_0 anaconda
clyent 1.2.2 py37_1 anaconda
colorama 0.4.1 py37_0 anaconda
comtypes 1.1.7 py37_0 anaconda
conda 4.7.11 py37_0 anaconda
conda-build 3.18.9 py37_3 anaconda
conda-env 2.6.0 1 anaconda
conda-package-handling 1.3.11 py37_0 anaconda
conda-verify 3.4.2 py_1 anaconda
console_shortcut 0.1.1 3 anaconda
constants 0.6.0 pypi_0 pypi
contextlib2 0.5.5 py37_0 anaconda
cpuonly 1.0 0 pytorch
cryptography 2.7 py37h7a1dbc1_0 anaconda
cudatoolkit 10.0.130 0 anaconda
curl 7.65.2 h2a8f88b_0 anaconda
cycler 0.10.0 py37_0 anaconda
cython 0.29.12 py37ha925a31_0 anaconda
cytoolz 0.10.0 py37he774522_0 anaconda
dask 2.1.0 py_0 anaconda
dask-core 2.1.0 py_0 anaconda
decorator 4.4.0 py37_1 anaconda
defusedxml 0.6.0 py_0 anaconda
distributed 2.1.0 py_0 anaconda
docutils 0.14 py37_0 anaconda
entrypoints 0.3 py37_0 anaconda
et_xmlfile 1.0.1 py37_0 anaconda
ez-setup 0.9 pypi_0 pypi
fastcache 1.1.0 py37he774522_0 anaconda
fasttext 0.9.1 pypi_0 pypi
feedparser 5.2.1 pypi_0 pypi
ffmpeg 4.1.3 h6538335_0 conda-forge
filelock 3.0.12 py_0 anaconda
first 2.0.2 pypi_0 pypi
flask 1.1.1 py_0 anaconda
freetype 2.9.1 ha9979f8_1 anaconda
future 0.17.1 py37_0 anaconda
gast 0.2.2 py37_0 anaconda
get 2019.4.13 pypi_0 pypi
get_terminal_size 1.0.0 h38e98db_0 anaconda
gevent 1.4.0 py37he774522_0 anaconda
glob2 0.7 py_0 anaconda
google-pasta 0.1.7 pypi_0 pypi
graphviz 2.38.0 4 anaconda
greenlet 0.4.15 py37hfa6e2cd_0 anaconda
grpcio 1.22.0 pypi_0 pypi
h5py 2.9.0 py37h5e291fa_0 anaconda
hdf5 1.10.4 h7ebc959_0 anaconda
heapdict 1.0.0 py37_2 anaconda
html5lib 1.0.1 py37_0 anaconda
http-client 0.1.22 pypi_0 pypi
hypothesis 4.34.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1 anaconda
icu 58.2 ha66f8fd_1 anaconda
idna 2.8 py37_0 anaconda
imageio 2.4.1 pypi_0 pypi
imageio-ffmpeg 0.3.0 pypi_0 pypi
imagesize 1.1.0 py37_0 anaconda
importlib_metadata 0.17 py37_1 anaconda
imutils 0.5.2 pypi_0 pypi
intel-openmp 2019.0 pypi_0 pypi
ipykernel 5.1.1 py37h39e3cac_0 anaconda
ipython 7.6.1 py37h39e3cac_0 anaconda
ipython_genutils 0.2.0 py37_0 anaconda
ipywidgets 7.5.0 py_0 anaconda
isort 4.3.21 py37_0 anaconda
itsdangerous 1.1.0 py37_0 anaconda
jaraco-functools 2.0 pypi_0 pypi
jdcal 1.4.1 py_0 anaconda
jedi 0.13.3 py37_0 anaconda
jinja2 2.10.1 py37_0 anaconda
joblib 0.13.2 py37_0 anaconda
jpeg 9b hb83a4c4_2 anaconda
json5 0.8.4 py_0 anaconda
jsonschema 3.0.1 py37_0 anaconda
jupyter 1.0.0 py37_7 anaconda
jupyter_client 5.3.1 py_0 anaconda
jupyter_console 6.0.0 py37_0 anaconda
jupyter_core 4.5.0 py_0 anaconda
jupyterlab 1.0.2 py37hf63ae98_0 anaconda
jupyterlab_server 1.0.0 py_0 anaconda
keras 2.2.4 0 anaconda
keras-applications 1.0.8 py_0 anaconda
keras-base 2.2.4 py37_0 anaconda
keras-preprocessing 1.1.0 py_1 anaconda
keyring 18.0.0 py37_0 anaconda
kiwisolver 1.1.0 py37ha925a31_0 anaconda
krb5 1.16.1 hc04afaa_7
lazy-object-proxy 1.4.1 py37he774522_0 anaconda
libarchive 3.3.3 h0643e63_5 anaconda
libcurl 7.65.2 h2a8f88b_0 anaconda
libiconv 1.15 h1df5818_7 anaconda
liblief 0.9.0 ha925a31_2 anaconda
libmklml 2019.0.5 0 anaconda
libpng 1.6.37 h2a8f88b_0 anaconda
libprotobuf 3.8.0 h7bd577a_0 anaconda
libsodium 1.0.16 h9d3ae62_0 anaconda
libssh2 1.8.2 h7a1dbc1_0 anaconda
libtiff 4.0.10 hb898794_2 anaconda
libxml2 2.9.9 h464c3ec_0 anaconda
libxslt 1.1.33 h579f668_0 anaconda
llvmlite 0.29.0 py37ha925a31_0 anaconda
locket 0.2.0 py37_1 anaconda
lxml 4.3.4 py37h1350720_0 anaconda
lz4-c 1.8.1.2 h2fa13f4_0 anaconda
lzo 2.10 h6df0209_2 anaconda
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
make-dataset 1.0 pypi_0 pypi
markdown 3.1.1 py37_0 anaconda
markupsafe 1.1.1 py37he774522_0 anaconda
matplotlib 3.1.0 py37hc8f65d3_0 anaconda
mccabe 0.6.1 py37_1 anaconda
menuinst 1.4.16 py37he774522_0 anaconda
mistune 0.8.4 py37he774522_0 anaconda
mkl 2019.0 pypi_0 pypi
mkl-service 2.0.2 py37he774522_0 anaconda
mkl_fft 1.0.12 py37h14836fe_0 anaconda
mkl_random 1.0.2 py37h343c172_0 anaconda
mock 3.0.5 py37_0 anaconda
more-itertools 7.0.0 py37_0 anaconda
moviepy 1.0.0 pypi_0 pypi
mpmath 1.1.0 py37_0 anaconda
msgpack-python 0.6.1 py37h74a9793_1 anaconda
msys2-conda-epoch 20160418 1
multipledispatch 0.6.0 py37_0 anaconda
mysqlclient 1.4.2.post1 pypi_0 pypi
navigator-updater 0.2.1 py37_0 anaconda
nbconvert 5.5.0 py_0 anaconda
nbformat 4.4.0 py37_0 anaconda
networkx 2.3 py_0 anaconda
ninja 1.9.0 py37h74a9793_0 anaconda
nltk 3.4.4 py37_0 anaconda
nose 1.3.7 py37_2 anaconda
notebook 6.0.0 py37_0 anaconda
numba 0.44.1 py37hf9181ef_0 anaconda
numexpr 2.6.9 py37hdce8814_0 anaconda
numpy 1.16.4 pypi_0 pypi
numpy-base 1.16.4 py37hc3f5095_0 anaconda
numpydoc 0.9.1 py_0 anaconda
olefile 0.46 py37_0 anaconda
opencv-contrib-python 4.1.0.25 pypi_0 pypi
opencv-python 4.1.0.25 pypi_0 pypi
openpyxl 2.6.2 py_0 anaconda
openssl 1.1.1c he774522_1 anaconda
packaging 19.0 py37_0 anaconda
pandas 0.24.2 py37ha925a31_0 anaconda
pandoc 2.2.3.2 0 anaconda
pandocfilters 1.4.2 py37_1 anaconda
parso 0.5.0 py_0 anaconda
partd 1.0.0 py_0 anaconda
path.py 12.0.1 py_0 anaconda
pathlib2 2.3.4 py37_0 anaconda
patsy 0.5.1 py37_0 anaconda
pattern 3.6 pypi_0 pypi
pdfminer-six 20181108 pypi_0 pypi
pep8 1.7.1 py37_0 anaconda
pickleshare 0.7.5 py37_0 anaconda
pillow 6.1.0 py37hdc69c19_0 anaconda
pip 19.1.1 py37_0 anaconda
pkginfo 1.5.0.1 py37_0 anaconda
pluggy 0.12.0 py_0 anaconda
ply 3.11 py37_0 anaconda
portend 2.5 pypi_0 pypi
post 2019.4.13 pypi_0 pypi
powershell_shortcut 0.0.1 2 anaconda
proglog 0.1.9 pypi_0 pypi
prometheus_client 0.7.1 py_0 anaconda
prompt_toolkit 2.0.9 py37_0 anaconda
protobuf 3.7.1 pypi_0 pypi
psutil 5.6.3 py37he774522_0 anaconda
public 2019.4.13 pypi_0 pypi
py 1.8.0 py37_0 anaconda
py-lief 0.9.0 py37ha925a31_2 anaconda
pybind11 2.3.0 pypi_0 pypi
pycodestyle 2.5.0 py37_0 anaconda
pycosat 0.6.3 py37hfa6e2cd_0 anaconda
pycparser 2.19 py37_0 anaconda
pycrypto 2.6.1 py37hfa6e2cd_9 anaconda
pycryptodome 3.8.2 pypi_0 pypi
pycurl 7.43.0.3 py37h7a1dbc1_0 anaconda
pydot 1.4.1 pypi_0 pypi
pyflakes 2.1.1 py37_0 anaconda
pygments 2.4.2 py_0 anaconda
pylint 2.3.1 py37_0 anaconda
pyodbc 4.0.26 py37ha925a31_0 anaconda
pyopenssl 19.0.0 py37_0 anaconda
pyparsing 2.4.0 py_0 anaconda
pyqt 5.9.2 py37h6538335_2 anaconda
pyreadline 2.1 py37_1 anaconda
pyrsistent 0.14.11 py37he774522_0 anaconda
pysocks 1.7.0 py37_0 anaconda
pytables 3.5.2 py37h1da0976_1 anaconda
pytest 5.0.1 py37_0 anaconda
pytest-arraydiff 0.3 py37h39e3cac_0 anaconda
pytest-astropy 0.5.0 py37_0 anaconda
pytest-doctestplus 0.3.0 py37_0 anaconda
pytest-openfiles 0.3.2 py37_0 anaconda
pytest-remotedata 0.3.1 py37_0 anaconda
python 3.7.3 h8c8aaf0_1 anaconda
python-dateutil 2.8.0 py37_0 anaconda
python-docx 0.8.10 pypi_0 pypi
python-graphviz 0.11.1 pypi_0 pypi
python-libarchive-c 2.8 py37_11 anaconda
pytorch 1.2.0 py3.7_cpu_1 [cpuonly] pytorch
pytube 9.5.1 pypi_0 pypi
pytz 2019.1 py_0 anaconda
pywavelets 1.0.3 py37h8c2d366_1 anaconda
pywin32 223 py37hfa6e2cd_1 anaconda
pywinpty 0.5.5 py37_1000 anaconda
pyyaml 5.1.1 py37he774522_0 anaconda
pyzmq 18.0.0 py37ha925a31_0 anaconda
qt 5.9.7 vc14h73c81de_0 [vc14] anaconda
qtawesome 0.5.7 py37_1 anaconda
qtconsole 4.5.1 py_0 anaconda
qtpy 1.8.0 py_0 anaconda
query-string 2019.4.13 pypi_0 pypi
request 2019.4.13 pypi_0 pypi
requests 2.22.0 py37_0 anaconda
rope 0.14.0 py_0 anaconda
ruamel_yaml 0.15.46 py37hfa6e2cd_0 anaconda
scikit-image 0.15.0 py37ha925a31_0 anaconda
scikit-learn 0.21.2 py37h6288b17_0 anaconda
scipy 1.3.0 pypi_0 pypi
scipy-stack 0.0.5 pypi_0 pypi
seaborn 0.9.0 py37_0 anaconda
send2trash 1.5.0 py37_0 anaconda
setuptools 41.1.0 pypi_0 pypi
simplegeneric 0.8.1 py37_2 anaconda
singledispatch 3.4.0.3 py37_0 anaconda
sip 4.19.8 py37h6538335_0 anaconda
six 1.12.0 py37_0 anaconda
snappy 1.1.7 h777316e_3 anaconda
snowballstemmer 1.9.0 py_0 anaconda
sortedcollections 1.1.2 py37_0 anaconda
sortedcontainers 2.1.0 py37_0 anaconda
soupsieve 1.8 py37_0 anaconda
sphinx 2.1.2 py_0 anaconda
sphinxcontrib 1.0 py37_1 anaconda
sphinxcontrib-applehelp 1.0.1 py_0 anaconda
sphinxcontrib-devhelp 1.0.1 py_0 anaconda
sphinxcontrib-htmlhelp 1.0.2 py_0 anaconda
sphinxcontrib-jsmath 1.0.1 py_0 anaconda
sphinxcontrib-qthelp 1.0.2 py_0 anaconda
sphinxcontrib-serializinghtml 1.1.3 py_0 anaconda
sphinxcontrib-websupport 1.1.2 py_0 anaconda
spyder 3.3.6 py37_0 anaconda
spyder-kernels 0.5.1 py37_0 anaconda
sqlalchemy 1.3.5 py37he774522_0 anaconda
sqlite 3.29.0 he774522_0 anaconda
statsmodels 0.10.0 py37h8c2d366_0 anaconda
summa 1.2.0 pypi_0 pypi
sympy 1.4 py37_0 anaconda
tbb 2019.4 h74a9793_0 anaconda
tblib 1.4.0 py_0 anaconda
tempora 1.14.1 pypi_0 pypi
tensorboard 1.14.0 py37he3c9ec2_0 anaconda
tensorboardx 1.8 pypi_0 pypi
tensorflow 1.14.0 mkl_py37h7908ca0_0 anaconda
tensorflow-base 1.14.0 mkl_py37ha978198_0 anaconda
tensorflow-estimator 1.14.0 py_0 anaconda
tensorflow-mkl 1.14.0 h4fcabd2_0 anaconda
termcolor 1.1.0 pypi_0 pypi
terminado 0.8.2 py37_0 anaconda
testpath 0.4.2 py37_0 anaconda
tk 8.6.8 hfa6e2cd_0 anaconda
toolz 0.10.0 py_0 anaconda
torchvision 0.4.0 py37_cpu [cpuonly] pytorch
tornado 6.0.3 py37he774522_0 anaconda
tqdm 4.32.1 py_0 anaconda
traitlets 4.3.2 py37_0 anaconda
typing 3.6.6 pypi_0 pypi
typing-extensions 3.6.6 pypi_0 pypi
unicodecsv 0.14.1 py37_0 anaconda
urllib3 1.24.2 py37_0 anaconda
validators 0.13.0 pypi_0 pypi
vc 14.1 h0510ff6_4 anaconda
vs2015_runtime 14.15.26706 h3a45250_4 anaconda
wcwidth 0.1.7 py37_0 anaconda
webencodings 0.5.1 py37_1 anaconda
werkzeug 0.15.4 py_0 anaconda
wheel 0.33.4 py37_0 anaconda
widgetsnbextension 3.5.0 py37_0 anaconda
win_inet_pton 1.1.0 py37_0 anaconda
win_unicode_console 0.5 py37_0 anaconda
wincertstore 0.2 py37_0 anaconda
winpty 0.4.3 4 anaconda
wrapt 1.11.2 py37he774522_0 anaconda
xlrd 1.2.0 py37_0 anaconda
xlsxwriter 1.1.8 py_0 anaconda
xlwings 0.15.8 py37_0 anaconda
xlwt 1.3.0 py37_0 anaconda
xz 5.2.4 h2fa13f4_4 anaconda
yaml 0.1.7 hc54c509_2 anaconda
youtube-dl 2019.8.2 pypi_0 pypi
zc-lockfile 1.4 pypi_0 pypi
zeromq 4.3.1 h33f27b4_3 anaconda
zict 1.0.0 py_0 anaconda
zipp 0.5.1 py_0 anaconda
zlib 1.2.11 h62dcd97_3 anaconda
zstd 1.3.7 h508b16e_0 anaconda
| You don't have to install it via Anaconda; you can install CUDA from NVIDIA's website. After the install finishes, open a new terminal and check your CUDA version with:
>>> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:52:33_Pacific_Standard_Time_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
Mine is V11.5.
Then go here (the PyTorch get-started page), select your OS, preferred package manager (pip or anaconda) and the CUDA version you installed, and copy the generated install command. I got:
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
Note that in my case Python 3.10 was installed but my project ran on 3.9, so either use a virtual environment or explicitly run the pip of the interpreter you actually want (for example C:\Software\Python\Python39\python.exe -m pip install .....),
otherwise you will be stuck with "Could not find a version that satisfies the requirement torch" errors.
Afterwards, open a Python console and check for CUDA availability:
>>> import torch
>>> torch.cuda.is_available()
True
| https://stackoverflow.com/questions/57814535/ |
pytorch collate_fn reject sample and yield another | I have built a Dataset, where I'm doing various checks on the images I'm loading. I'm then passing this DataSet to a DataLoader.
In my DataSet class I'm returning the sample as None if a picture fails my checks and i have a custom collate_fn function which removes all Nones from the retrieved batch and returns the remaining valid samples.
However at this point the returned batch can be of varying size. Is there a way to tell the collate_fn to keep sourcing data until the batch size meets a certain length?
class DataSet():
def __init__(self, example):
# initialise dataset
# load csv file and image directory
self.example = example
def __getitem__(self,idx):
# load one sample
# if image is too dark return None
# else
# return one image and its equivalent label
dataset = Dataset(csv_file='../', image_dir='../../')
dataloader = DataLoader(dataset , batch_size=4,
shuffle=True, num_workers=1, collate_fn = my_collate )
def my_collate(batch): # batch size 4 [{tensor image, tensor label},{},{},{}] could return something like G = [None, {},{},{}]
batch = list(filter (lambda x:x is not None, batch)) # this gets rid of nones in batch. For example above it would result to G = [{},{},{}]
# I want len(G) = 4
# so how to sample another dataset entry?
return torch.utils.data.dataloader.default_collate(batch)
| There are 2 hacks that can be used to sort out the problem, choose one way:
By re-using samples from the original batch (fast option):
def my_collate(batch):
len_batch = len(batch) # original batch length
batch = list(filter(lambda x: x is not None, batch)) # filter out all the Nones
if len_batch > len(batch): # if samples are missing, just duplicate existing members; doesn't work if you reject every sample in a batch
db_len = len(batch)
diff = len_batch - db_len
for i in range(diff):
batch.append(batch[i % db_len])
return torch.utils.data.dataloader.default_collate(batch)
Otherwise, just load another sample from the dataset at random (better option):
def my_collate(batch):
len_batch = len(batch) # original batch length
batch = list(filter (lambda x:x is not None, batch)) # filter out all the Nones
if len_batch > len(batch): # source all the required samples from the original dataset at random
diff = len_batch - len(batch)
for i in range(diff):
batch.append(dataset[np.random.randint(0, len(dataset))])
return torch.utils.data.dataloader.default_collate(batch)
| https://stackoverflow.com/questions/57815001/ |
how to install pytorch in python2.7? | i am using python2.7 in virtual environment. i tried to install pytorch in python2.7 but i got error belows:
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
- pytorch-cpu -> python[version='3.5.*|3.6.*']
- pytorch-cpu -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Your current python version
is (python=2.7). Note that conda will not change your python version to a different minor version
unless you explicitly specify that.
The following specifications were found to be incompatible with each other:
Package wheel conflicts for:
python=2.7 -> pip -> wheel
pytorch-cpu -> python[version='>=3.6,<3.7.0a0'] -> pip -> wheel
Package vc conflicts for:
python=2.7 -> sqlite[version='>=3.27.2,<4.0a0'] -> vc[version='14.*|>=14,<15.0a0|>=14.1,<15.0a0']
python=2.7 -> vc[version='9.*|>=9,<10.0a0']
pytorch-cpu -> numpy[version='>=1.11'] -> vc[version='14|14.*|>=14,<15.0a0']
pytorch-cpu -> vc[version='>=14.1,<15.0a0']
Package cffi conflicts for:
pytorch-cpu -> cffi
pytorch-cpu -> python[version='>=3.6,<3.7.0a0'] -> pip -> requests -> urllib3[version='>=1.21.1,<1.25'] -> cryptography[version='>=1.3.4'] -> cffi[version='>=1.7']
python=2.7 -> pip -> requests -> urllib3[version='>=1.21.1,<1.25'] -> cryptography[version='>=1.3.4'] -> cffi[version='>=1.7']
Package pip conflicts for:
python=2.7 -> pip
pytorch-cpu -> python[version='>=3.6,<3.7.0a0'] -> pip
Package setuptools conflicts for:
python=2.7 -> pip -> setuptools
pytorch-cpu -> python[version='>=3.6,<3.7.0a0'] -> pip -> setuptools
Package msgpack-python conflicts for:
python=2.7 -> pip -> cachecontrol -> msgpack-python
pytorch-cpu -> python[version='>=3.6,<3.7.0a0'] -> pip -> cachecontrol -> msgpack-python
I tried conda install pytorch-cpu -c pytorch and the link (https://pytorch.org/get-started/locally/), but it did not work. So what should I do to install torch for Python 2.7? I want to install the PyTorch CPU version.
plz help:)
| Here's the link to the PyTorch official download page
From here, you can choose the python version (2.7) and CUDA (None) and other relevant details based on your environment and OS.
Other helpful links:
windows
windows
mac
ubuntu
all
| https://stackoverflow.com/questions/57835948/ |
TypeError: 'int' object is not callable in loss.backward() | While trying to set up a pytorch model, I am getting the error that the loss object is not callable when trying to do Pytorch autograd. (Relevant code shown below)
optimizer = torch.optim.Adam(model.parameters(), lr=lr,
betas(0.0,0.9))
def train(epoch, shuffle, wisdom_model, optim, loss):
print('train')
accuracy = 0
batch_num = 0
wisdom_model.train()
for batch in data.train_dl:
optim.zero_grad()
result = model(batch[0])
loss = nn.CrossEntropyLoss()(result, batch[1].long())
loss.backward()
accuracy += accuracy(result, batch[1])
print(accuracy)
pdb.set_trace()
batch_num += 1
return accuracy / batch_num
TypeError Traceback (most recent call last)
<ipython-input-28-5b9c9fe3b320> in <module>
----> 1 run(1, False)
<ipython-input-27-d0d67dbf6eb2> in run(num_models, dropout)
71
72 for epoch in range(10):
---> 73 train_accuracy = train(epoch, False, model, optimizer, loss)
74 accuracy.append(validate(epoch, model))
75
<ipython-input-27-d0d67dbf6eb2> in train(epoch, shuffle, model, optim, loss)
24 pdb.set_trace()
25
---> 26 loss.backward()
27 optim.step()
28
TypeError: 'int' object is not callable
| The problem is in this line:
loss = nn.CrossEntropyLoss()(result, batch[1].long())
Check out the nn.CrossEntropyLoss.
Should not look like this:
nn.CrossEntropyLoss()()
Should look like this:
nn.CrossEntropyLoss()
| https://stackoverflow.com/questions/57838192/ |
Cleaner way to use "with torch.no_grad()" conditioned on an expression | My code looks like:
if no_grad_condition:
with torch.no_grad():
out=network(input)
else:
out=network(input)
Is there a cleaner way to do it, without duplicating the line out=network(input)?
I am looking for something in the spirit of:
with torch.no_grad(no_grad_condition):
out=network(input)
| OP here: By writing down the question, I understood where to look for the answer. According to pytorch docs, we can use set_grad_enabled:
with torch.set_grad_enabled(not no_grad_condition):
out=network(input)
| https://stackoverflow.com/questions/57852236/ |
I have a trouble after trying to parallelize data using nn.Dataparallel | I didn't have any probelm without data parallelization, but after I put JUST A ONE LINE "model = nn.DataParallel(model)" the error message "TypeError: 'list' object is not callable" comes. If I push out that damn line the source works clean. plz help me
I can't do anything but just mad. I search that google and there are some ways to solve that error message but I can't do anything. Because nn.Dataparallel is already used by other coders. Sorry English is not my mothertongue.
if use_cuda:
[[[model = nn.DataParallel(model)]]]
model = model.cuda()
criterion = criterion.cuda()
print('cuda is used')
I just put model = nn.DataParallel(model) and error comes right after.
Traceback (most recent call last):
File "/home/scrcdeep2/YBJ/espnet_myself/Main.py", line 119, in
train(loader, model_, criterion_, optimizer_, use_cuda_, pretrained=None)
File "/home/scrcdeep2/YBJ/espnet_myself/Main.py", line 83, in train
outputs = model(inputs)
File "/home/scrcdeep2/YBJ/lib/python3.5/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/scrcdeep2/YBJ/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 151, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/home/scrcdeep2/YBJ/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 156, in replicate
return replicate(module, device_ids)
File "/home/scrcdeep2/YBJ/lib/python3.5/site-packages/torch/nn/parallel/replicate.py", line 114, in replicate
modules = list(network.modules())
TypeError: 'list' object is not callable
| I met the same problem, because I had set self.modules to a list. Assigning a list to self.modules shadows nn.Module's modules() method, which is exactly what replicate() tries to call, hence the "'list' object is not callable" error.
I changed my code like this:
def __init__(self, embedding_size, activation_function='relu'):
super().__init__()
self.act_fn = getattr(F, activation_function)
self.embedding_size = embedding_size
self.conv1 = nn.Conv2d(3, 32, 4, stride=2)
self.conv2 = nn.Conv2d(32, 64, 4, stride=2)
self.conv3 = nn.Conv2d(64, 128, 4, stride=2)
self.conv4 = nn.Conv2d(128, 256, 4, stride=2)
self.fc = nn.Identity() if embedding_size == 1024 else nn.Linear(1024, embedding_size)
if not args.MultiGPU:
self.modules = [self.conv1, self.conv2, self.conv3, self.conv4]
| https://stackoverflow.com/questions/57857939/ |
How to clear GPU memory after PyTorch model training without restarting kernel | I am training PyTorch deep learning models on a Jupyter-Lab notebook, using CUDA on a Tesla K80 GPU to train. While doing training iterations, the 12 GB of GPU memory are used. I finish training by saving the model checkpoint, but want to continue using the notebook for further analysis (analyze intermediate results, etc.).
However, these 12 GB continue being occupied (as seen from nvtop) after finishing training. I would like to free up this memory so that I can use it for other notebooks.
My solution so far is to restart this notebook's kernel, but that is not solving my issue because I can't continue using the same notebook and its respective output computed so far.
| The answers so far are correct for the Cuda side of things, but there's also an issue on the ipython side of things.
When you have an error in a notebook environment, the ipython shell stores the traceback of the exception so you can access the error state with %debug. The issue is that this requires holding all variables that caused the error to be held in memory, and they aren't reclaimed by methods like gc.collect(). Basically all your variables get stuck and the memory is leaked.
Usually, causing a new exception will free up the state of the old exception. So trying something like 1/0 may help. However things can get weird with Cuda variables and sometimes there's no way to clear your GPU memory without restarting the kernel.
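Besides clearing the stored exception state, the usual manual cleanup is a sketch like the following (the names model and optimizer are assumptions for whatever variables hold your GPU tensors):
import gc
import torch

del model, optimizer                   # drop the Python references first
gc.collect()                           # collect anything that is now unreachable
torch.cuda.empty_cache()               # release cached blocks back to the driver
print(torch.cuda.memory_allocated())   # should be (close to) 0 if nothing else holds GPU tensors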
For more detail see these references:
https://github.com/ipython/ipython/pull/11572
How to save traceback / sys.exc_info() values in a variable?
| https://stackoverflow.com/questions/57858433/ |
PyTorch equivalent of `numpy.unpackbits`? | I am training a neural net on GPU. It uses a lot of binary input features.
Since moving data to/from GPU is expensive, I am looking for ways to make the initial representation more compact. Now, I encode my features as int8, move them over to GPU and then expand as float32:
# create int8
features = torch.zeros(*dims, dtype=torch.int8)
# fill in some data (set some features to 1.)
…
# move int8 to GPU
features = features.to(device=cuda, non_blocking=True)
# expand int8 as float32
features = features.to(dtype=float32)
Now, I am looking for ways to compress those binary features to bits instead of bytes.
NumPy has functions packbits and unpackbits
>>> a = np.array([[2], [7], [23]], dtype=np.uint8)
>>> b = np.unpackbits(a, axis=1)
>>> b
array([[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8)
Is there any way to unpack bits in PyTorch on GPU?
| There are no similar functions at the time of writing this answer. However, a workaround is to unpack with numpy and use torch.from_numpy, as in:
In[2]: import numpy as np
In[3]: a = np.array([[2], [7], [23]], dtype=np.uint8)
In[4]: b = np.unpackbits(a, axis=1)
In[5]: b
Out[5]:
array([[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8)
In[6]: import torch
In[7]: torch.from_numpy(b)
Out[7]:
tensor([[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 1, 0, 1, 1, 1]], dtype=torch.uint8)
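If the unpacking itself has to run on the GPU, one possible sketch is to do it with bitwise ops directly on the uint8 tensor (this assumes a PyTorch version where & works on uint8 tensors; it unpacks a new trailing dimension MSB-first, like numpy's default):
import torch

def unpackbits_torch(x):
    # x: uint8 tensor on any device; returns shape (*x.shape, 8) with 0/1 values
    masks = torch.tensor([128, 64, 32, 16, 8, 4, 2, 1], dtype=torch.uint8, device=x.device)
    return (x.unsqueeze(-1) & masks).ne(0).to(torch.uint8)

a = torch.tensor([[2], [7], [23]], dtype=torch.uint8)
print(unpackbits_torch(a).squeeze(1))
# tensor([[0, 0, 0, 0, 0, 0, 1, 0],
#         [0, 0, 0, 0, 0, 1, 1, 1],
#         [0, 0, 0, 1, 0, 1, 1, 1]], dtype=torch.uint8)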
| https://stackoverflow.com/questions/57871287/ |
Unable to Install Torch or torch vision in pycharm I am running python 3.6 | Installing torch from PyCharm interpreter but error occurs. Python 3.6
Collecting torch==0.4.1.post2
Could not find a version that satisfies the requirement torch==0.4.1.post2 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
No matching distribution found for torch==0.4.1.post2
| Try installing a lower version of torch. Go to:
File->Setting->Project[project_name]-> Project Interpreter -> + ->
search for torch, in right lower corner is check box 'Specify version'
check it and try sevral starting from top. From what i see current is 1.2.0
| https://stackoverflow.com/questions/57896633/ |
ImportError: cannot import name 'parameter_parser' from 'parser' (unknown location) | I am trying to Import Parameter_parser from Parser. but it is showing the error below:
ImportError: cannot import name 'parameter_parser' from 'parser'
In the line below that I also get:
ModuleNotFoundError: No module named 'load_data'
This is my code:
import matplotlib
matplotlib.use('agg')
import numpy as np
import time
import os
import torch.utils.data
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.utils.data import DataLoader
from os.path import join as pjoin
from parser import parameter_parser
from load_data import split_ids, GraphData, collate_batch
from models.gcn_modify import GCN_MODIFY
from models.gcn_origin import GCN_ORIGIN
from models.gat import GAT
from models.mgcn import MGCN
from sklearn import metrics`
| When I try the same thing in my python console I get this:
>>> from parser import parameter_parser
File "<stdin>", line 1
from parser import parameter_parser
^
IndentationError: unexpected indent
>>> from parser import parameter_parser
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name parameter_parser
Is this the same problem for you? This is because you don't have the module installed via pip (pip install PACKAGE_NAME) or whatever you use to install your packages. Another idea is that you have set up a virtual env, installed it there and did not activate it.
In any case, although I didn't downvote your question (I think there are no wrong questions!), I assume the person who did could not find enough information to help you solve your problem. Next time try adding what OS you are using, which package is causing the problem and what solutions you have already tried (did you find other answers on Stack Overflow? Did you google the problem? Did you try importing the package by itself in the console?).
| https://stackoverflow.com/questions/57905600/ |
Interpret GAN loss | I am currently training the standard DCGAN network on my dataset. After 40 epochs, the loss of both generator and discriminator is 45-50. Can someone please explain the reason and possible solution for this?
| Interpreting GAN losses is, to some extent, an unsolved problem.
You cannot read the generator and discriminator losses in isolation, because when one improves the task becomes harder for the other: when the generator improves it gets harder for the critic, and when the critic improves it gets harder for the generator.
The absolute values depend entirely on your loss function; you can roughly expect the numbers to stay "about the same" over time.
| https://stackoverflow.com/questions/57919973/ |
DataLoader messing up transformed data | I am testing the MNIST dataset in Pytorch, and after I apply a transformation to the X data, it seems the DataLoader puts all values out of the original order, potentially messing up the training step.
My transformation is to divide all values by 255. One should notice that the transformation itself does not change positions, as shown by the first scatterplots. But after the data is passed to the DataLoader and I retrieve it back, they are out of order. If I make no transformation, everything is fine (not shown). The distribution of the values is the same among before, after1 (divided by 255/before DataLoader) and after2 (divided by 255/after DataLoader) (also not shown), only the order seems to be affected.
import torch
from torchvision import datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
transform = transforms.ToTensor()
train = datasets.MNIST(root = '.', train = True, download = True, transform = transform)
test = datasets.MNIST(root = '.', train = False, download = True, transform = transform)
before = train.data[0]
train.data = train.data.float()/255
after1 = train.data[0]
train_loader = torch.utils.data.DataLoader(train, batch_size = 128)
test_loader = torch.utils.data.DataLoader(test, batch_size = 128)
fig, ax = plt.subplots(1, 2)
ax[0].scatter(range(len(before.view(-1))), before.view(-1))
ax[0].set_title('Before')
ax[1].scatter(range(len(after1.view(-1))), after1.view(-1))
ax[1].set_title('After1')
after2 = next(iter(train_loader))[0][0]
fig, ax = plt.subplots(1, 2)
ax[0].scatter(range(len(before.view(-1))), before.view(-1))
ax[0].set_title('Before')
ax[1].scatter(range(len(after2.view(-1))), after2.view(-1))
ax[1].set_title('After2')
fig, ax = plt.subplots(1, 3)
ax[0].imshow(before, cmap = 'gray')
ax[1].imshow(after1, cmap = 'gray')
ax[2].imshow(after2.view(28, 28), cmap = 'gray')
I know that this might not be the best way to deal with this data (transforms.Normalize should solve it), but I would really like to understand what is happening.
Thank you!
| So... I posted this same question at Pytorch's GitHub page, and they answered the following:
It's unrelated to data loader. You are messing with an attribute of
the particular dataset object, however, the actual __getitem__ of that
object does much more:
https://github.com/pytorch/vision/blob/6de158c473b83cf43344a0651d7c01128c7850e6/torchvision/datasets/mnist.py#L92
In particular this line (mode='L') assumes uint8 input. Since you
replaced it with float, it is wrong.
Then I guess the preferred approach would be to apply a transform when preparing the dataset at the very beginning of my code.
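As a concrete sketch of that: ToTensor already divides the uint8 pixel values by 255, and any further shift/scale can go into a Normalize step, so train.data itself is never touched (the 0.1307/0.3081 constants are the commonly quoted MNIST mean/std):
transform = transforms.Compose([
    transforms.ToTensor(),                        # uint8 HxW -> float32 in [0, 1]
    transforms.Normalize((0.1307,), (0.3081,)),   # optional: standardize with dataset statistics
])
train = datasets.MNIST(root = '.', train = True, download = True, transform = transform)
train_loader = torch.utils.data.DataLoader(train, batch_size = 128)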
| https://stackoverflow.com/questions/57924724/ |