id | text |
---|---|
st103300 | Thanks for confirming this. I also used clone() to solve this. But it wasted a lot of my time and work before I found it… I was really misled by the documentation. |
st103301 | Hi there, I am trying to get index from dataloader, which loads from torch.sparse.FloatTensor.
The code is like:
sparseTensor = torch.sparse.FloatTensor(i, v)
train_data = DataLoader(sparseTensor)
for input in train_data:
# Need the index of the input here.
The input is a sparse matrix, and I need the row number for each row trained.
Thanks for your time! |
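A minimal sketch of one way to get the row number back from the loader, assuming each row can be indexed as a dense tensor (e.g. after densifying the sparse data); the IndexedDataset wrapper and the toy shapes below are made up for illustration:
import torch
from torch.utils.data import Dataset, DataLoader

class IndexedDataset(Dataset):
    # hypothetical wrapper: returns (row_index, row) pairs
    def __init__(self, data):
        self.data = data
    def __len__(self):
        return self.data.size(0)
    def __getitem__(self, idx):
        return idx, self.data[idx]

dense = torch.randn(8, 4)                                # stand-in for the densified data
loader = DataLoader(IndexedDataset(dense), batch_size=2)
for idx, rows in loader:
    pass                                                 # idx holds the original row numbers of the batch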
st103302 | I am under the impression that if I have some data that is a torch Tensor and I put it into a multiprocessing queue (from torch.multiprocessing), then that tensor will be copied into shared memory. If so, how can I prevent the queue from making shared-memory copies? In other words, how do I reuse that shared memory? |
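One pattern that may help, sketched under the assumption that a pre-shared buffer fits the use case: allocate the tensor in shared memory once with share_memory_(), hand it to the worker up front, and copy new data into it in place, so only a small signal goes through the queue (buffer size and worker below are illustrative):
import torch
import torch.multiprocessing as mp

def worker(shared_buf, q):
    q.get()                           # wait until the parent signals that new data is ready
    print(shared_buf.sum())           # reads the parent's data without an extra copy

if __name__ == '__main__':
    buf = torch.zeros(4, 4)
    buf.share_memory_()               # move the storage into shared memory once
    q = mp.Queue()
    p = mp.Process(target=worker, args=(buf, q))
    p.start()
    buf.copy_(torch.randn(4, 4))      # reuse the same shared storage for new data
    q.put(True)                       # only a lightweight signal goes through the queue
    p.join()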
st103303 | Hello,
Does anyone know what a pr_curve is in tensorboardX? I have trouble figuring it out.
Thanks a lot! |
st103304 | I am not familiar with tensorboardX, but I guess it is the same Precision-Recall 9 curve as in tensorboard, to visualize tradeoffs between precision and recall in binary classification tasks. |
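A small sketch of how it is typically logged, assuming tensorboardX exposes the same add_pr_curve call as TensorBoard (labels are binary ground truth, predictions are probabilities; the shapes are arbitrary):
import torch
from tensorboardX import SummaryWriter

writer = SummaryWriter()
labels = torch.randint(0, 2, (100,))       # binary ground truth
predictions = torch.rand(100)              # predicted probabilities in [0, 1]
writer.add_pr_curve('my_pr_curve', labels, predictions, global_step=0)
writer.close()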
st103305 | Does anyone have access to the pre-trained features of MSDNet 3 for ImageNet?
Thanks,
Tahereh |
st103306 | Hello!
To work with a remote GPU server running on CentOS 6.9, I had to install PyTorch from source (the packaged version has an incompatibility with glibc). I ran 20 batches of my training process with the autograd profiler, and I looked at the trace with chrome://tracing. My local computer, which has a GeForce 1080 Ti, processes a batch 10x faster than the remote server does (which uses one Tesla K80 GPU). The GPUs don't have the same specs, but I would expect only a 2x slowdown for the Tesla.
I singled out the backward conv operation which is extremely slow on the server. Weirdly, the trace does not show the same function name. Here is the comparison:
Tesla K80 | ThnnConv2DBackward | 62 ms
GeForce 1080 Ti | CudnnConvolutionBackward | 0.0000037 ms
From the function name, one could deduce that CUDA is not used on the server. But nvidia-smi shows 75-99% utilization during the process. Do I have a problem with my PyTorch installation?
Thanks for your time! |
st103307 | Solved by ptrblck in post #6
If cuDNN isn’t detected, you could try to specify CUDNN_LIB_DIR (libcudnn.so or something similar should be in the dir) and CUDNN_INCLUDE_DIR (cudnn.h should be in the dir). |
st103308 | Did you see the cuDNN version during the build on your server?
Could you check the cuDNN version on the server (if available)?
torch.backends.cudnn.version() |
st103309 | Hello ptrblck_de!
torch.backends.cudnn.version() returns None, while torch.cuda.is_available() returns True!
What does this mean? |
st103310 | It means during the build process cuDNN wasn’t found.
The logs should have shown it as well.
Could you try to install cuDNN and re-build PyTorch? |
st103311 | I should say this is a shared server, provided by a national organization. cuDNN is in fact already installed on this server, but:
It has to be loaded with module load cuda/8.0.44 libs/cuDNN/6
The cuda installation is located in /software-gpu/cuda/8.0.44, and cuDNN is in /software-gpu/libs/cuDNN/6_cuda8.0
Should I do something special to build pytorch correctly? Maybe change $CUDA_HOME?
I don’t have the build logs, I built pytorch about a month ago. |
st103312 | If cuDNN isn’t detected, you could try to specify CUDNN_LIB_DIR (libcudnn.so or something similar should be in the dir) and CUDNN_INCLUDE_DIR (cudnn.h should be in the dir). |
st103313 | Thanks a lot ptrblck, I will try this. Congrats on being a great person.
EDIT:
Here is what I’ve done to compile pytorch
Load python, cuda and cudnn modules
Activate conda environment (torchvision should be already installed)
Set CUDA_HOME, CUDNN_LIB_DIR and CUDNN_INCLUDE_DIR
python setup.py clean && python setup.py install |
st103314 | Hello,
I have studied 2nd-order optimization methods like BFGS and Newton's method, but can anyone help me understand how we backpropagate the Hessian matrix in, let's say, a feed-forward neural network? Thanks!! |
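In case it helps, a rough sketch of the usual building block: instead of forming the full Hessian, one backpropagates twice to get a Hessian-vector product with torch.autograd.grad and create_graph=True (the tiny least-squares model here is only illustrative):
import torch

x = torch.randn(16, 4)
w = torch.randn(4, 1, requires_grad=True)
loss = ((x @ w) ** 2).mean()

grad, = torch.autograd.grad(loss, w, create_graph=True)   # first backward, graph kept
v = torch.randn_like(w)                                    # arbitrary direction
hvp, = torch.autograd.grad(grad, w, grad_outputs=v)        # second backward: H @ v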
st103315 | I have a baseline (e.g. VGG) and I want to connect several small models to different places of the baseline.
For example for simplicity I am going to do it at the end of my baseline here. Then I want to train and share the losses.
I did the following, but I am not sure why I got this error.
Can you please help/guide me?
Thanks
So I have the feature part of vgg like this:
vgg16 = models.vgg16(pretrained=True).to(device)
vgg_feature = vgg16.features
if I print it I will have:
print("vgg_feature:\n",vgg_feature)
print(type(vgg_feature))
vgg_feature:
Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
...
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
<class 'torch.nn.modules.container.Sequential'>
Now I have one of the new models that I want to attach to the baseline:
Attch1 = nn.ModuleList([])
Attch1.append(nn.Conv2d(512, 4, 16))
Attch1 = nn.Sequential(*Attch1)
print(Attch1)
print(type(Attch1))
Sequential(
(0): Conv2d(512, 4, kernel_size=(16, 16), stride=(1, 1))
)
<class 'torch.nn.modules.container.Sequential'>
Then I thought I can do it like this:
import itertools
def forward(x):
xs = []
for name, m in itertools.chain(vgg_feature,Loc2):
m(x)
print(name, m)
and call the forward function:
forward(torch.rand(1,3,512,512))
But I get this error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-19-09f6aa1136a0> in <module>()
----> 1 forward(torch.rand(1,3,512,512))
<ipython-input-14-2f8da09fc583> in forward(x)
3 xs = []
4 for name in itertools.chain(vgg_feature,Attch1):
----> 5 name(x)
6 print(name)
7
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
299 def forward(self, input):
300 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301 self.padding, self.dilation, self.groups)
302
303
RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead
So if I give a 3x512x512 input to the vgg base I expect to get an output of size 512x16x16, and then I send that to the other model (Attach1), but I'm not sure why the above error happens:
Out = vgg_feature[:](torch.rand(1,3,512,512))
print(Out.size())
torch.Size([1, 512, 16, 16])
Update:
I also tried this:
import itertools
def forward(x):
xs = []
for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
m(x)
print(name)
But got the same error:/
forward(torch.rand(1,3,512,512))
0
1
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-27-09f6aa1136a0> in <module>()
----> 1 forward(torch.rand(1,3,512,512))
<ipython-input-26-0e90031a3b93> in forward(x)
3 xs = []
4 for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
----> 5 m(x)
6 print(name)
7
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
299 def forward(self, input):
300 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301 self.padding, self.dilation, self.groups)
302
303
RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead
Update 2:
I am not sure where the problem comes from, but it happens even when I do this:
import itertools
def forward(x):
xs = []
for name, m in itertools.chain(vgg_feature._modules.items()):
print(name,m)
m(x)
forward(torch.rand(1,3,512,512))
I get the error, which confuses me:
0 Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
1 ReLU(inplace)
2 Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-42-bb552d02b5df> in <module>()
6 m(x)
7
----> 8 forward(torch.rand(1,3,512,512))
<ipython-input-42-bb552d02b5df> in forward(x)
4 for name, m in itertools.chain(vgg_feature._modules.items()):
5 print(name,m)
----> 6 m(x)
7
8 forward(torch.rand(1,3,512,512))
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
299 def forward(self, input):
300 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301 self.padding, self.dilation, self.groups)
302
303
RuntimeError: Given groups=1, weight[64, 64, 3, 3], so expected input[1, 3, 512, 512] to have 64 channels, but got 3 channels instead |
st103316 | Solved by ptrblck in post #2
Try to assign the result back to x so that the result from m(x) will be fed to the next module. |
st103317 | Try to assign the result back to x so that the result from m(x) will be fed to the next module. |
st103318 | @ptrblck
Interesting!!
def forward(x):
xs = []
for name, m in itertools.chain(vgg_feature._modules.items(),Attch1._modules.items()):
print(name,m)
x = m(x)
return x
a = forward(torch.rand(1,3,512,512))
it solved the problem! But why should I do that?
Also, can you please let me know whether the way I did it is a good approach in general?
If I want to take the output of different layers at the same time
(e.g. the last layer, after (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
and the layer
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)) and send them to two different networks (e.g. Attach1 and Attach2),
how should I do that in this case?
Thanks a lot |
st103319 | You have to assign the return value, because otherwise the calculated result will be lost. Your model does not store the result in-place in x, but returns it’s output.
I would rather create a new nn.Sequential module using your sub-modules than iterating the parts.
Another approach would be to create your own nn.Module and implement forward yourself. That would give you more flexibility, especially regarding your use case of using activations from different layers. |
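A rough sketch of that second suggestion for the use case above (tap the activation after layer 23 of vgg_feature as well as the final output after layer 30, and feed them to two attach networks); the MultiTap name is made up and the layer indices follow the thread:
import torch.nn as nn

class MultiTap(nn.Module):
    def __init__(self, vgg_feature, attach1, attach2):
        super(MultiTap, self).__init__()
        self.features = vgg_feature
        self.attach1 = attach1
        self.attach2 = attach2
    def forward(self, x):
        mid = None
        for idx, layer in enumerate(self.features):
            x = layer(x)
            if idx == 23:              # keep the feature map after the (23) MaxPool2d
                mid = x
        out1 = self.attach1(x)         # final feature map, after layer (30)
        out2 = self.attach2(mid)       # intermediate feature map
        return out1, out2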
st103320 | Thanks for the suggestions.
Are there any examples of those two suggested methods, so I can read them and try to learn from them? |
st103321 | Oh i see what you mean.
I will give it a try and come back if i face issues 8-|
Thanks a lot Boss! |
st103322 | I am using DataParallel in my network to train my model, but for testing, if I use the same model it gives me an error (KeyError: 'unexpected key "module.Base.conv1_1.conv.0.weight" in state_dict').
To fix this I had to add model = torch.nn.DataParallel(model.cuda()) again in my code, and I don't understand why that is necessary.
Also, if I uncomment the part that I need for testing and then want to train in that situation, I get the error: AttributeError: 'DataParallel' object has no attribute 'init_parameters'
Can anyone explain this to me, or, if I'm doing something wrong, I would appreciate it if someone could tell me what it is.
I provide the code for the training/testing part here; sorry, it is kind of long. I commented the part that I have to uncomment for testing, and it has a comment ("Uncomment for test") on top of it.
Thanks for your help in advance
import os
import time
import argparse
import random
import numpy as np
from numpy.random import RandomState
import cv2
import torch.backends.cudnn as cudnn
import torch
import torch.optim as optim
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
from torch.utils.data import DataLoader
from pascal_voc import VOCDetection, VOC, Viz
from transforms import *
from evaluation import eval_voc_detection
import sys
sys.path.append('./models')
from SSD import SSD300
from loss import MultiBoxLoss
from multibox import MultiBox
from tensorboardX import SummaryWriter
summary = SummaryWriter()
# Setup
parser = argparse.ArgumentParser(description='PyTorch SSD variants implementation')
parser.add_argument('--checkpoint', help='resume from checkpoint', default='')
parser.add_argument('--voc_root', help='PASCAL VOC dataset root path', default='')
parser.add_argument('--batch_size', type=int, help='input data batch size', default=32)
parser.add_argument('--lr', '--learning_rate', type=float, help='initial learning rate', default=1e-3)
parser.add_argument('--start_iter', type=int, help='start iteration', default=0)
parser.add_argument('--backbone', help='pretrained backbone net weights file', default='vgg16_reducedfc.pth')
parser.add_argument('--cuda', action='store_true', help='enable cuda')
parser.add_argument('--test', action='store_true', help='test mode')
parser.add_argument('--demo', action='store_true', help='show detection result')
parser.add_argument('--seed', type=int, help='random seed', default=233)
parser.add_argument('--threads', type=int, help='number of data loader workers.', default=4)
parser.add_argument('--max_iter', type=int, help='max_iter number.', default=100000)
parser.add_argument('--width_mult', type=int, help='width_mult number.', default=1)
opt = parser.parse_args()
print('argparser:', opt)
# random seed
random.seed(opt.seed)
np.random.seed(opt.seed)
torch.manual_seed(opt.seed)
if opt.cuda:
assert torch.cuda.is_available(), 'No GPU found, please run without --cuda'
torch.cuda.manual_seed_all(opt.seed)
cudnn.benchmark = True
# model
model = SSD300(VOC.N_CLASSES)
cfg = model.config
# Uncomment for test
# model = torch.nn.DataParallel(model.cuda())
if opt.checkpoint:
model.load_state_dict(torch.load(opt.checkpoint))
else:
model.init_parameters(opt.backbone)
encoder = MultiBox(cfg)
criterion = MultiBoxLoss()
# cuda
if opt.cuda:
model.cuda()
criterion.cuda()
cudnn.benchmark = True
model = torch.nn.DataParallel(model, device_ids=[0,1])
# optimizer
optimizer = optim.SGD(model.parameters(), lr=opt.lr, momentum=0.9, weight_decay=5e-4)
# learning rate / iterations
init_lr = cfg.get('init_lr', 1e-3)
stepvalues = cfg.get('stepvalues', (80000, 100000))
# max_iter = cfg.get('max_iter', 120000)
max_iter = opt.max_iter
def adjust_learning_rate(optimizer, stage):
lr = init_lr * (0.1 ** stage)
for param_group in optimizer.param_groups:
param_group['lr'] = lr
# def warm_up(iteration):
# warmup_steps = [0, 300, 600, 1000]
# if iteration in warmup_steps:
# i = warmup_steps.index(iteration)
# lr = 10 ** np.linspace(-6, np.log10(init_lr), len(warmup_steps))[i]
# for param_group in optimizer.param_groups:
# param_group['lr'] = lr
def learning_rate_schedule(iteration):
if iteration in stepvalues:
adjust_learning_rate(optimizer, stepvalues.index(iteration) + 1)
def train():
model.train()
PRNG = RandomState(opt.seed)
# augmentation / transforms
transform = Compose([
[ColorJitter(prob=0.5)], # or write [ColorJitter(), None]
BoxesToCoords(),
Expand((1, 4), prob=0.5),
ObjectRandomCrop(),
HorizontalFlip(),
Resize(300),
CoordsToBoxes(),
[SubtractMean(mean=VOC.MEAN)],
[RGB2BGR()],
[ToTensor()],
], PRNG, mode=None, fillval=VOC.MEAN)
target_transform = encoder.encode
dataset = VOCDetection(
root=opt.voc_root,
image_set=[('2007', 'trainval'), ('2012', 'trainval'),],
keep_difficult=True,
transform=transform,
target_transform=target_transform)
dataloader = DataLoader(dataset=dataset, batch_size=opt.batch_size, shuffle=True,
num_workers=opt.threads, pin_memory=True)
iteration = opt.start_iter
while True:
for input, loc, label in dataloader:
t0 = time.time()
learning_rate_schedule(iteration)
input, loc, label = Variable(input), Variable(loc), Variable(label)
if opt.cuda:
input, loc, label = input.cuda(), loc.cuda(), label.cuda()
xloc, xconf = model(input)
loc_loss, conf_loss = criterion(xloc, xconf, loc, label)
loss = loc_loss + conf_loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
t1 = time.time()
if iteration % 10 == 0:
summary.add_scalar('loss/loc_loss', loc_loss.data[0], iteration)
summary.add_scalar('loss/conf_loss', conf_loss.data[0], iteration)
summary.add_scalars('loss/loss', {"loc_loss": loc_loss.data[0],
"conf_loss": conf_loss.data[0],
"loss": loss.data[0]}, iteration)
width_mult=opt.width_mult
if iteration % 20 == 0 and iteration != opt.start_iter:
print('iter: {}, Loss:{}, timer :{}'.format(iteration, loss.data[0], t1 - t0))
if iteration % 5000 == 0 and iteration != opt.start_iter:
print('Save state, iter: {}, Loss:{}'.format(iteration, loss.data[0]))
if not os.path.isdir('weights'):
os.mkdir('weights')
torch.save(model.state_dict(), 'weights/{}_width_mult_0.25_{}.pth'.format(cfg.get('name', 'SSD'), iteration))
iteration += 1
if iteration > max_iter:
return 0
def test():
PRNG = RandomState()
dataset = VOCDetection(
root=opt.voc_root,
image_set=[('2007', 'test')],
transform=Compose([
BoxesToCoords(),
Resize(300),
CoordsToBoxes(),
[SubtractMean(mean=VOC.MEAN)],
[RGB2BGR()],
[ToTensor()]]),
target_transform=None)
print(len(dataset))
pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels = [], [], [], [], []
for i in range(len(dataset)):
num_images = len(dataset)
t0 = time.time()
img, loc, label = dataset[i]
gt_bboxes.append(loc)
gt_labels.append(label)
input = Variable(img.unsqueeze(0), volatile=True)
if opt.cuda:
input = input.cuda()
xloc, xconf = model(input)
xloc = xloc.data.cpu().numpy()[0]
xconf = xconf.data.cpu().numpy()[0]
boxes, labels, scores = encoder.decode(xloc, xconf, nms_thresh=0.5, conf_thresh=0.01)
pred_bboxes.append(boxes)
pred_labels.append(labels)
pred_scores.append(scores)
t1 = time.time()
print('Testing image {:d}/{:d}...., time:{}'.format(i+1, num_images,t1-t0))
print(eval_voc_detection(pred_bboxes, pred_labels, pred_scores, gt_bboxes, gt_labels, iou_thresh=0.5, use_07_metric=True))
def demo():
PRNG = RandomState()
voc_vis = Viz()
dataset = VOCDetection(
root=opt.voc_root,
image_set=[('2007', 'test')],
transform=Compose([
BoxesToCoords(),
Resize(300),
CoordsToBoxes(),
[SubtractMean(mean=VOC.MEAN)],
[RGB2BGR()],
[ToTensor()]]),
target_transform=None)
print(len(dataset))
i = PRNG.choice(len(dataset))
for _ in range(1000):
img, _, _ = dataset[i]
input = Variable(img.unsqueeze(0), volatile=True)
if opt.cuda:
input = input.cuda()
xloc, xconf = model(input)
imgs = input.data.cpu().numpy().transpose(0, 2, 3, 1)
xloc = xloc.data.cpu().numpy()
xconf = xconf.data.cpu().numpy()
for img, loc, conf in zip(imgs, xloc, xconf):
#print(img.mean(axis=(0,1)))
img = ((img[:, :, ::-1] + VOC.MEAN)).astype('uint8')
boxes, labels, scores = encoder.decode(loc, conf, conf_thresh=0.5)
img = voc_vis.draw_bbox(img, boxes, labels, True)
cv2.imshow('0', img[:, :, ::-1])
c = cv2.waitKey(0)
if c == 27 or c == ord('q'): # ESC / 'q'
return
else:
i = PRNG.choice(len(dataset))
if __name__ == '__main__':
if opt.test:
test()
elif opt.demo:
demo()
else:
train() |
st103323 | It’s because you save the weights of the model as a state_dict of cuda tensors. This means that you have to load the state_dict in a model that has the same keys in the state_dict and has the same type of device (i.e. cuda).
To solve your problem, you should move the loading of your checkpoint after the if opt.cuda: as follows:
# model
model = SSD300(VOC.N_CLASSES)
cfg = model.config
model.init_parameters(opt.backbone)
encoder = MultiBox(cfg)
criterion = MultiBoxLoss()
# cuda
if opt.cuda:
model.cuda()
criterion.cuda()
cudnn.benchmark = True
model = torch.nn.DataParallel(model, device_ids=[0,1])
if opt.checkpoint:
model.load_state_dict(torch.load(opt.checkpoint))
That way, you still initialize with your backbone of cpu weights, and then you load your saved state_dict if you test. |
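Another common workaround, sketched here as an alternative and not part of the answer above, is to strip the "module." prefix from the saved keys so the checkpoint can be loaded into the model before it is wrapped in DataParallel:
state_dict = torch.load(opt.checkpoint)
stripped = {}
for k, v in state_dict.items():
    stripped[k[len('module.'):] if k.startswith('module.') else k] = v
model.load_state_dict(stripped)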
st103324 | I'm trying to port a piece of TensorFlow code originally written by VinceMarron 12 to PyTorch but have yet to replicate the same results. The TF code is as follows:
d=tf.range(kerneldim, dtype=tf.float32)
d=tf.expand_dims((d-d[kerneldim//2])**2,0)
dist=d+tf.transpose(d)
kernel=tf.exp(-dist*(1./(2.*sigma_sq)))
kernel = kernel/tf.reduce_sum(kernel)
kernel = tf.reshape(kernel, (kerneldim, kerneldim,1,1))
kernel = tf.tile(kernel, (1,1,3,1))
return tf.nn.depthwise_conv2d(x, kernel, [1,1,1,1], padding='SAME')
My attempt at doing the same operation in pytorch:
d = torch.arange(0.,float(kerneldim))
d = torch.unsqueeze((d-d[kernel_size//2])**2,0)
dist = d+d.transpose(0,1)
kernel = torch.exp(-dist*(1./(2.*sigma_sq)))
kernel /= torch.sum(kernel)
kernel = kernel.view(1,kerneldim,kerneldim)
kernel = kernel.repeat(3,1,1,1)
conv = nn.Conv2d(3,3,kerneldim,padding=7,groups=3,bias=False)
conv.weight = torch.nn.Parameter(kernel)
return conv(Variable(x,volatile=True)).data
I am getting very different results. It runs but does not replicate the tensorflow results.
I have not had the time to dive deeper into depthwise convolutions. I am also really terrible at tensorflow.
As I do not have enough time to research these things more in depth I just tried to port the code directly.
This is something I'm doing in my free time for fun, so if you want to help me please don't waste too much of your time on me, as this is in no way critical.
Thanks in advance! |
st103325 | In depthwise conv, you just need to transpose the weights by the axes [2, 3, 0, 1]. For me, the problem was not the depthwise conv; I had missed the dilation. |
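For reference, that transpose would look roughly like this in PyTorch, assuming w_tf holds TensorFlow-layout depthwise weights of shape (H, W, in_channels, channel_multiplier); the shapes are illustrative:
import torch

w_tf = torch.randn(15, 15, 3, 1)                  # (H, W, in_channels, multiplier)
w_pt = w_tf.permute(2, 3, 0, 1).contiguous()      # (in_channels, multiplier, H, W), matching a
                                                  # grouped nn.Conv2d with groups=in_channels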
st103326 | Hey! Sorry for the late reply, I did actually return to this problem today with much more experience in pytorch and solved it. I wanted to perform gaussian smoothing on a [BATCH x C x H x W] tensor separately for each channel. I created a [kernel_size x kernel_size] gaussian kernel and used the following to fit it as a weight for a depthwise conv2d:
gaussian_kernel = gaussian_kernel.view(1, 1, kernel_size, kernel_size)
gaussian_kernel = gaussian_kernel.repeat(channels, 1, 1, 1)
self.gaussian_filter = nn.Conv2d(in_channels=channels, out_channels=channels,
kernel_size=kernel_size, groups=channels, bias=False)
self.gaussian_filter.weight.data = gaussian_kernel
I know there are some threads on here where people want to perform gaussian smoothing so I will try to post my entire solution in one of those. |
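For completeness, the 2-D Gaussian kernel itself can be built much like in the TensorFlow snippet above; this is a sketch that assumes kernel_size and sigma_sq are already defined:
import torch

d = torch.arange(kernel_size, dtype=torch.float32)
d = ((d - d[kernel_size // 2]) ** 2).unsqueeze(0)          # squared distance to the centre, shape (1, K)
dist = d + d.t()                                           # broadcast to a (K, K) distance grid
gaussian_kernel = torch.exp(-dist / (2. * sigma_sq))
gaussian_kernel = gaussian_kernel / gaussian_kernel.sum()  # normalize so the weights sum to 1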
st103327 | I am not sure how to represent the training data. For multiple labels, should I convert the labels to multi-hot form? That way all images have label vectors of the same length, corresponding to the classes to be classified. Or should I only use the label sequence? In that case the length of the labels varies across images.
Could anybody give me some suggestions? |
st103328 | Or, BCEWithLogitsLoss
which combines a Sigmoid layer and the BCELoss in one single class. |
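A small sketch of the multi-hot setup this implies, with made-up shapes: each target row has a fixed length equal to the number of classes, with 1s at the positive labels, and the raw logits go straight into the loss:
import torch
import torch.nn as nn

num_classes = 5
logits = torch.randn(4, num_classes)        # raw model outputs, no sigmoid applied
targets = torch.zeros(4, num_classes)
targets[0, [1, 3]] = 1.                     # image 0 has labels 1 and 3
targets[2, 4] = 1.                          # image 2 has label 4

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, targets)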
st103329 | image.png395×629 25.6 KB
This image is from the tutorial at https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py 23.
When deriving the attention weights, the input to "attn" is the concatenation of the previous decoder hidden state and the current decoder input, and the output is the attention weights that will be applied to the encoder outputs. The shape of this output is (batch_size, FIXED_length_of_encoder_input, 1).
BUT according to “Neural Machine Translation by Jointly Learning to Align and Translate”, when deriving attention weights we should use the concatenation of previous decoder hidden state and each of the encoder outputs to get the corresponding energy e_ij. Output size is (batch_size, ACTUAL_length_of_encoder_input, 1)
Could anyone justify this official tutorial implementation? (I mean using the decoder input in deriving the attention weights.) A reference paper would also help.
There is another question:
In every forward function of the tutorial, it reshapes the embedding:
embedded = self.embedding(input).view(1, 1, -1)
Why is that? |
st103330 | Solved by tom in post #4
I think that that is just a simplification in the implementation. The blue box note below the AttnDecoderRNN alludes to the fact that this approach is more limited than others.
What happens with the overly long attention weights is that they will get multiplied by the 0s in decoder_outputs.
Having… |
st103331 | Where do you get the fixed length from?
On first sight, it looks like the training is done with a minibatch size of 1 2 and so the input_length in the train function’s for loop for ei in range(input_length): is just the length of the training input.
Best regards
Thomas |
st103332 | Thanks for your reply!
tom:
Where do you get the fixed length from?
In class AttnDecoderRNN, you can find:
def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_LENGTH):
self.max_length = max_length
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
So the sequence length of attention weights is determined by the neural network architecture.
The time complexity of the attention weights computations in the tutorial is:
length_of_target_sequence * 1
William_Aptom:
BUT according to “Neural Machine Translation by Jointly Learning to Align and Translate”
We should compare each of the previous decoder hidden states at every time stamp to each of the encoder outputs at every time stamp. The time complexity is:
length_of_target_sequence * length_of_input_sequence
Please notice the time complexity is reduced from n square to n. Could anyone give me the reference paper of this implementation in the tutorial. Using the decoder input and previous decoder hidden state to predict the attention weights does not make too much sense to me. |
st103333 | I think that that is just a simplification in the implementation. The blue box note below the AttnDecoderRNN alludes to the fact that this approach is more limited than others.
What happens with the overly long attention weights is that they will get multiplied by the 0s in decoder_outputs.
Having the previous state and decoder input to predict weights seems not too uncommon to me - it sounds vaguely similar to what I have seen elsewhere. (I like to advocate Graves’ Handwriting RNN 14 for a gentle first contact with attention.)
I don’t think the computational complexity is the main limitation here.
Best regards
Thomas |
st103334 | Hi, I have a problem with how to tie weights between different neurons. Suppose we have a 10 x 10 weight matrix w, and the weights in w[1,:] and w[2,:] are the same and equal to w_0. When training this tiny model, I want to update w_0 instead of updating w[1,:] and w[2,:] separately. The gradient is given by g(w_0) = (g(w[1,:])+g(w[2,:]))/2.
I am not sure how to perform these operations properly. Thanks in advance for any help. |
st103335 | I come up with a simple solution:
def weights_sharing(weights, weights_grad, group_info):
#share weights
group_len = group_info.size(0)
weights_size = weights.size(1)
weights_collection = weights.gather(0,group_info.expand(weights_size,group_len).transpose(0,1))
averge_weight = weights_collection.mean(dim=0)
for i in group_info.numpy():
weights[i] = averge_weight
#share gradient
grad_collection = weights_grad.gather(0, group_info.expand(weights_size, group_len).transpose(0,1))
averge_grad = grad_collection.mean(dim=0)
for i in group_info.numpy():
weights_grad[i] = averge_grad
sample usage:
linear = nn.Linear(10,5)
x=torch.randn(5,10)
y=torch.randn(5,5)
y_c = linear(x)
loss = (y_c-y).pow(2).mean()
loss.backward()
weights_sharing(linear.weight.data, linear.weight.grad.data, group_info)
#then update the parameter as usual
I still have a question, since the weight sharing happens after backward pass, I think it won’t affect computation graph. I think this is true, but not sure about it.
Any discussion is welcome. |
st103336 | You can just assign the same Parameter to two different modules. You’ll get the sum of gradients, but that should be OK.
Best regards
Thomas |
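For reference, a tiny sketch of that idea with made-up layer sizes: assigning one Parameter to a second module makes both layers use, and accumulate gradients into, the same weight.
import torch
import torch.nn as nn

lin1 = nn.Linear(10, 10, bias=False)
lin2 = nn.Linear(10, 10, bias=False)
lin2.weight = lin1.weight                    # both modules now share the same Parameter

out = lin2(lin1(torch.randn(2, 10))).sum()
out.backward()
# lin1.weight.grad holds the summed gradient from both uses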
st103337 | Ah, sorry, I misunderstood.
I’d probably do something like
w_raw = nn.Parameter(torch.randn(9, 10))
in __init__ and then in forward:
w = torch.cat([w_raw[:1], w_raw], 0)
Here is a little demo for what this does:
w_raw = torch.randn(9, 10, requires_grad=True)
w = torch.cat([w_raw[:1], w_raw], 0)
w.retain_grad()
og = torch.randn(10, 10)
w.backward(og)
print (w.grad, w_raw.grad)
You see that w_raw.grad[0] is the sum of the first two rows in w.grad.
If you really need the average rather than the sum, you could multiply the grad with 0.5 before processing it further (or use a backward hook), but I’d mostly expect the sum to work for you in most use cases…
Best regards
Thomas |
st103338 | Greetings,
I really would like to know how GRU layers work in PyTorch.
Why are there no neurons defined?
If it would take too much of your time, please just direct me to a paper or any resource.
Appreciated |
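For reference, a minimal nn.GRU instantiation with arbitrary sizes; hidden_size plays the role of the number of units ("neurons") per layer:
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=32, num_layers=2, batch_first=True)
x = torch.randn(4, 10, 8)        # (batch, seq_len, input_size)
output, h_n = gru(x)             # output: (4, 10, 32), h_n: (2, 4, 32)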
st103339 | I use different sizes of inputs to forward through torchvision.models.resnet18.
It is OK when the size is smaller than 400, but when I change the size to 430*430, it raises an error:
'RuntimeError: size mismatch, m1: [1 x 2048], m2: [512 x 1000] at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1229'
How can I use a larger input with torchvision.models? |
st103340 | Can you try different input sizes (e.g. 410, 412, 420, 426, …)? Sometimes only multiples of a specific resolution are compatible else there are mismatches between layers for example because of padding. |
st103341 | Hi @bodokaiser
Yes, I have tried many input sizes. When the size is smaller than 416 it produces an output; when the size is larger than 415 it gives the error 'RuntimeError: size mismatch, m1: [1 x 2048], m2: [512 x 1000] at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1229'
And when I tried a much larger size like 1000, the error changed to 'RuntimeError: size mismatch, m1: [1 x 8192], m2: [512 x 1000] at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1229'
It also happens with other models like resnet34. |
st103342 | The problem is probably in the fully-connected layers, as it was expecting some feature map of a specific size and now it’s bigger.
You can bypass that by adding a Adaptive[Max/Average]Pooling2d layer just before the classifier, so that the output size is the same and the classifier can work. |
st103343 | @bodokaiser,
my code is
transforms = transforms.Compose([
transforms.CenterCrop(418),
transforms.ToTensor(),
])
dset = datasets.ImageFolder(os.path.join('./data/'), transforms)
dset_loader = torch.utils.data.DataLoader(dset, batch_size=1)
inputs, classes = next(iter(dset_loader))
model_resnet18 = resnet18(pretrained=False)
input_test = Variable(inputs)
print(input_test.size(), input_test)
out = model_resnet18(input_test)
output of input_test.size() is torch.Size([1, 3, 418, 418])
And the complete error stack trace is:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-b764430b1113> in <module>()
2 input_test = Variable(torch.Tensor(input_np))
3 print(input_test.size(), input_test)
----> 4 out = model_resnet18(input_test)
5 print(out)
/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/usr/share/Anaconda2/lib/python2.7/site-packages/torchvision/models/resnet.pyc in forward(self, x)
148 x = self.avgpool(x)
149 x = x.view(x.size(0), -1)
--> 150 x = self.fc(x)
151
152 return x
/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/modules/linear.pyc in forward(self, input)
52 return self._backend.Linear()(input, self.weight)
53 else:
---> 54 return self._backend.Linear()(input, self.weight, self.bias)
55
56 def __repr__(self):
/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.pyc in forward(self, input, weight, bias)
8 self.save_for_backward(input, weight, bias)
9 output = input.new(input.size(0), weight.size(0))
---> 10 output.addmm_(0, 1, input, weight.t())
11 if bias is not None:
12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand
RuntimeError: size mismatch, m1: [1 x 2048], m2: [512 x 1000] at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1229 |
st103344 | @bodokaiser, @fmassa thanks a lot!
I found the problem.
It is because the kernel size of the avgpool layer in the resnet code is fixed:
self.avgpool = nn.AvgPool2d(7)
so I think I should reimplement it with nn.AdaptiveAvgPool2d(1).
I think I could also redefine the avgpool in the forward function. I want to know whether it is feasible to redefine it in forward, and why we should define all layers during init of the module like most code does.
Is there another way to define it in the init to have a changeable avgpool kernel size? |
st103345 | Facing the same problem here, just wondering how you modified the original resnet to take in the avgpool operation? |
st103346 | As @XavierLinNow suggests, you just need to use:
self.pool2 = nn.AdaptiveAvgPool2d(1)
after all your conv layers; that way you will always get a fixed-size output. |
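On a stock torchvision ResNet this can be done without touching the source by overwriting the pooling module after construction; a sketch, relying on the attribute being called avgpool as in the code quoted above:
import torch
import torch.nn as nn
from torchvision.models import resnet18

model_resnet18 = resnet18(pretrained=False)
model_resnet18.avgpool = nn.AdaptiveAvgPool2d(1)     # output is always 1x1 per channel
out = model_resnet18(torch.rand(1, 3, 418, 418))     # larger inputs now pass through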
st103347 | Currently I am trying to convert a model from Caffe to PyTorch and hope that I can train with the same parameters and get the same (within 1% error) result. However, right now I am unable to: the same parameters give vastly different results. I am not sure if there is any particular backend implementation difference between the two frameworks that I should take note of so that I can replicate the result.
Or is it not possible? Does so much difference add up that I must use different parameters?
Currently I only know that the SGD is slightly different. I modified it and I still couldn't get the same result with the same parameters. |
st103348 | I have the following error:
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
53
54 def forward(self, input):
---> 55 return F.linear(input, self.weight, self.bias)
56
57 def extra_repr(self):
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
990 if input.dim() == 2 and bias is not None:
991 # fused op is marginally faster
--> 992 return torch.addmm(bias, input, weight.t())
993
994 output = input.matmul(weight.t())
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.cuda.ByteTensor for argument #4 'mat1' |
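The message usually means the tensor fed into the Linear layer is still uint8 (for example an image loaded as raw bytes); a likely fix, sketched with hypothetical variable names, is to cast it to float before the forward pass:
input = input.float()        # or input.to(torch.float32); divide by 255. as well if it is raw image data
output = model(input)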
st103349 | I’m using a sequence of legacy modules because I have loaded a model by means of torch.utils.serialization.load_lua(), and that is what it gave me.
I have sundry modules inside of a torch.legacy.nn.Sequential. I’m performing inference on a number of images; the first image runs through the Sequential fine, but if I try to pass a second image through, the following error is thrown:
RuntimeError: calling resize_ on a tensor that has non-resizable storage. Clone it first or create a new tensor instead
This occurs in the first torch.legacy.nn.Linear module in the Sequential, at torch/legacy/nn/Linear.py line 47 1:
self.output.resize_(nframe, self.weight.size(0))
I’m at a loss. (All of the images are the same size, in case that matters.) Can anyone advise me? Why does this error arise? What can I do to avoid it? |
st103350 | Solved by richard in post #2
I’m not sure, but that error doesn’t exist any more on master. What version of pytorch are you using? If you’re using 0.4.0, a build from source might make the problem go away. |
st103351 | vaselinessa:
that has non-resizable storage
I’m not sure, but that error doesn’t exist any more on master. What version of pytorch are you using? If you’re using 0.4.0, a build from source might make the problem go away. |
st103352 | Hey all
I am currently experimenting with different LSTM models for time-series prediction and am noticing a strange behaviour difference between two implementations that should be (from my understanding) fundamentally the same.
For my first implementation I used two LSTMCells and avoided pytorch’s built in batching. I initially did this because my training data has variable sequence lengths and I was averse to dealing with the variable input length when it comes to batching, which is why I kind of cheated with the batch dimension in the forward function. Thus to handle batches, (as shown below) I run each sample in the batch through the network individually, calculate the loss and the associated gradients, caching the gradients and updating the network using the accumulated gradients.
For my second model I tried to utilize pytorch optimizations and instead of LSTMCell I used actual LSTM layers and the built-in batching, which should result in a considerable speed up. Note that for this model, I always use num_layers=2.
I then tried to compare how these two models train and found that for implementation 1, the model exhibits two phases, one where it quickly decreases, and another where it has noisy oscillations around a slowly decreasing average (much like an L shape). For model 2 I expected the same behaviour but found that while it also has the same initial phase where it quickly decreases and leads into the same second phase seen in implementation 1, once in this phase the MSELoss occasionally spikes up really high, and then the model enters another phase where it quickly decreases until it levels out again.
Is there any obvious reason why I may be seeing this difference? For both of these models I am using the Adam optimizer with its default parameters, and torch.nn.MSELoss() for the loss function. Ultimately I’d like to use Implementation 2 because it does lead to a considerable speed up in the training process, but I’m concerned that I am doing something wrong.
Update: I’ve realized that since I am not dividing the loss function by the batch size for implementation 1, the gradients may differ significantly in magnitude but this would mean that I’d have larger gradients for the first implementation, would it not? Meaning that if I was to see spiking behaviour, it should be for Imp 1 instead of Imp 2.
Implementation 1: using naive batching
Model
class PyLSTMCells(nn.Module):
# init layers
def __init__(self, input_size, hidden_size, output_size):
super(PyLSTMCells, self).__init__()
# attach hidden layer size to class
self.hidden_size = hidden_size
# define layers
self.l1 = nn.LSTMCell(input_size, hidden_size)
self.l2 = nn.LSTMCell(hidden_size, hidden_size)
self.o = nn.Linear(hidden_size, output_size)
# define forward pass over single time-series
def forward(self, input):
# get tensor type of input
Ttype = input.type()
# add an additional "first" dimension to work
# with the LSTMCell interface
pad_input = input.unsqueeze(0)
# define hidden/cell states
h1 = torch.zeros(1, self.hidden_size).type(Ttype)
c1 = torch.zeros(1, self.hidden_size).type(Ttype)
h2 = torch.zeros(1, self.hidden_size).type(Ttype)
c2 = torch.zeros(1, self.hidden_size).type(Ttype)
# loop over time series and get final output
TS_len = pad_input.shape[1]
for i in range(TS_len):
h1, c1 = self.l1(pad_input[:,:,i], (h1,c1))
h2, c2 = self.l2(h1, (h2,c2))
output = self.o(h2)
# return output, ignoring the false "first" dim
# that was added
return output[0,:]
Training Method
opt.zero_grad() # Adam optimizer with default parameters
batch_loss = 0
for i in range(batch_size):
X, lbl = get_example()
output = model.forward(X) # where model is the network defined above
l = loss(output, lbl) # MSELoss
l.backward()
batch_loss += l
avg_batch_loss = batch_loss/batch_size
opt.step()
Implementation 2: LSTM layers and Pytorch batching
Model
class BatchPyLSTM(nn.Module):
# init network
def __init__(self, input_size, hidden_size, output_size, num_layers=2):
super(BatchPyLSTM, self).__init__()
# attach important vars to class
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
# define layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
self.lin = nn.Linear(hidden_size, output_size)
# init hidden and cell states
def init_hidden(self, Ttype, batch_size):
h0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).type(Ttype) # hidden states
c0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).type(Ttype) # cell states
return h0,c0
# forward pass over entire batch
def forward(self, input, seq_lens):
# get tensor type of input and batch size
Ttype = input.type()
batch_size = input.shape[0]
# init hidden and cell states
h0, c0 = self.init_hidden(Ttype, batch_size)
# reshape before packing to match pytorch interface
pckd_input = input.permute(0,2,1)
# pack input so the LSTM knows now to consider the padded values
pckd_input = nn.utils.rnn.pack_padded_sequence(pckd_input, seq_lens,
batch_first=True)
# feed through LSTM
lstm_otp,_ = self.lstm(pckd_input,(h0,c0))
# repad lstm_otp into normal tensor object so we can actually work with it
lstm_otp,_ = nn.utils.rnn.pad_packed_sequence(lstm_otp, batch_first=True,
padding_value=float('nan'))
# extract lstm output for final time-step of each series
fin_lstm_otp = torch.stack([ o[l-1,:] for o,l in zip(lstm_otp, seq_lens) ])
# feed lstm output tensors
otp = self.lin(fin_lstm_otp)
return otp
Training Method
opt.zero_grad() # Adam optimizer with default parameters
X_batch, lbl_batch, seq_lens = get_batch(batch_size)
output = model.forward(X_batch, seq_lens) # implementation 2 network
avg_batch_loss = loss(output, lbl_batch) # MSELoss
avg_batch_loss.backward()
opt.step() |
st103353 | Hello,
is it expected that for cpp-extensions with JIT compilation the compiler will "merge" files of different modules if they have the same filename?
For example with this structure
module1/my_kernel.cu
module2/my_kernel.cu
every function used in module2 will be overwritten by functions from module1!
Edit: Actually the file name does not even matter; it also happens for files with different names. |
st103354 | Solved by tjoseph in post #2
Found the issue: when using load(name="my_module_name") you cannot import two modules with the same name! |
st103355 | Found the issue: when using load(name="my_module_name") you cannot import two modules with the same name! |
st103356 | HI. I’m trying weight-decay for the first time. Is 5e-4 a good value? (with the Adam optimizer) Does anyone have good links that I might use as a resource? I have a special problem. My model works with the babi data set. My code is here: https://github.com/radiodee1/awesome-chatbot/blob/master/model/babi_iv.py 1 . My model trains to a 95 - 99 % accuracy but when I validate it on data that it has not seen in training it performs to the 50% level. Watching it train, the first 50% is fine and if I validate at that stage the accuracy at each point matches the training. IOW I think it is very strange that my validation keeps pace with my training and then suddenly stops at the 50% mark.
I am using dropout in most of my pytorch components at about 0.5. I have just started using weight decay. I read somewhere that 5e-4 is a good value. Is this true? I still have this underlying problem with the validation not keeping up with the training after 50%.
I’m trying anything to fix this. I have an open stack overflow question here if anyone is interested.
stackoverflow.com: "DMN neural network with poor validation results -- only 50%" (python-3.x, recurrent-neural-network, pytorch), asked by D Liebman at 12:43PM - 03 Jul 18 |
st103357 | Hello,
I’m training a recurrent net with 31 features which can be dived into 3 categories (20+6+5) , in some example it’s better to focus the 20 first features not on the 6 next and in some other it’s better to focus on the 6 over the first 20 , is there a way to tell to neural net of this depending on some extra condition/variable, I can add an extra 32th features in a binary form to address the issue.
And Thank you. |
st103358 | Hi all
I’m currently trying to train a NN where a second network is used as kind of cost function. So I tried to chain the two networks and back-propagate the errors of the second network to the first one.
The second net should not update its weights so i have set require_grad to false and just pass the parameters of the first one to the optimizer. I also set the second network to evaluation mode so the batchnorm layers do not calculate values.
There are only linear layers with ReLU, Batchnorm and BCE loss.
Now i get a strange result of the loss over a few epochs. It increases directly and then converges to some “high” value. Did i miss something or what could be the problem?
Thanks for any help and best regards
Matthias |
st103359 | I cannot post the full code, but I tried to extract the most important parts.
First net:
class Transformer(nn.Module):
def __init__(self):
super(Transformer, self).__init__()
self.net = nn.Sequential()
self.net.add_module('linear_0', nn.Linear(1152, 460))
self.net.add_module('activation_0', nn.ReLU())
self.net.add_module('bnorm_0', nn.BatchNorm1d(460))
self.net.add_module('linear_1', nn.Linear(460, 115))
self.net.add_module('activation_1', nn.ReLU())
self.net.add_module('bnorm_1', nn.BatchNorm1d(115))
self.net.add_module('linear_2', nn.Linear(115, 460))
self.net.add_module('activation_2', nn.ReLU())
self.net.add_module('bnorm_2', nn.BatchNorm1d(460))
self.net.add_module('linear_3', nn.Linear(460, 1152))
self.net.add_module('activation_3', nn.ReLU())
self.net.add_module('bnorm_3', nn.BatchNorm1d(1152))
def forward(self, x):
return self.net(x)
Second net:
class Classifier(nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.net = nn.Sequential()
self.net.add_module('linear_0', nn.Linear(1152, 2048))
self.net.add_module('activation_0', nn.ReLU())
self.net.add_module('bnorm_0', nn.BatchNorm1d(2048))
self.net.add_module('linear_1', nn.Linear(2048, 2048))
self.net.add_module('activation_1', nn.ReLU())
self.net.add_module('bnorm_1', nn.BatchNorm1d(2048))
self.net.add_module('linear_2', nn.Linear(2048, 2048))
self.net.add_module('activation_2', nn.ReLU())
self.net.add_module('bnorm_2', nn.BatchNorm1d(2048))
self.net.add_module('linear_3', nn.Linear(2048, 2048))
self.net.add_module('activation_3', nn.ReLU())
self.net.add_module('bnorm_3', nn.BatchNorm1d(2048))
self.net.add_module('linear_4', nn.Linear(2048, 2048))
self.net.add_module('activation_4', nn.ReLU())
self.net.add_module('bnorm_4', nn.BatchNorm1d(2048))
self.net.add_module('linear_5', nn.Linear(2048, 2048))
self.net.add_module('activation_5', nn.ReLU())
self.net.add_module('bnorm_5', nn.BatchNorm1d(2048))
self.net.add_module('linear_6', nn.Linear(2048, 2040))
self.net.add_module('softmax', nn.Softmax())
def forward(self, x):
return self.net(x)
Chaining both:
class Chain(nn.Module):
def __init__(self, transformer_model, classification_model):
super(Chain, self).__init__()
self.transformer = transformer_model
self.classifier = classification_model
def forward(self, data):
# compute transformer output
output = self.transformer.forward(data)
# compute classifier output
output = self.classifier.forward(output)
return output
def train(self, mode=True):
self.training = mode
self.transformer.train(mode=mode)
self.classifier.train(mode=False)
Now i create the models as follows, the second one is already trained and works as expected (standalone):
model = tr_fix.Transformer()
sc_model = sc_fix.Classifier()
sc_model.load_state_dict(torch.load(sc_model_state_path))
full_model = state_back_fix.Chain(model, sc_model)
Set requires_grad to false:
for param in full_model.classifier.parameters():
param.requires_grad = False
Optim and loss:
loss_func = torch.nn.BCELoss()
optimizer = optim.Adam(full_model.transformer.parameters(), lr=0.001)
Then i train that thing using a library i wrote for convenience:
trainer = candle.Trainer(full_model, optimizer,
targets=[candle.Target(loss_func, loss_func)],
num_epochs=10,
use_cuda=True)
train_log = trainer.train(train_loader, dev_loader)
eval_log = trainer.evaluate(test_loader)
train/dev/test_loader are custom pytorch dataloaders. The important part from the Trainer you can find here:
github.com
ynop/candle/blob/master/candle/trainer.py#L131 4
return iteration_log
def train_epoch(self, epoch_index, train_loader, dev_loader=None):
"""
Run one epoch of training. Returns a iteration log.
Arguments:
epoch_index (int): An index that identifies the epoch.
train_loader (torch.utils.data.DataLoader): PyTorch loader that provides training data.
dev_loader (torch.utils.data.DataLoader): PyTorch loader that provides validation data.
Returns:
IterationLog: The iteration log.
"""
iteration_log = log.IterationLog(targets=self._targets, metrics=self._metrics)
self._model.train()
self._callback_handler.notify_before_train_epoch(epoch_index, iteration_log)
for batch_index, batch in enumerate(train_loader):
Thanks already for confirming that I am not totally wrong.
Furthermore, the loss I get looks like this: |
st103360 | I want to ‘grow’ a matrix using a set of rules. Example of rules: 0->[[1,1,1],[0,0,0],[2,2,2]], 1->[[2,2,2],[2,2,2],[2,2,2]], 2->[[0,0,0],[0,0,0],[0,0,0]].
Example of growing a matrix: [[0]]->[[1,1,1],[0,0,0],[2,2,2]]->[[2,2,2,2,2,2,2,2,2],[2,2,2,2,2,2,2,2,2],[2,2,2,2,2,2,2,2,2],[1,1,1,1,1,1,1,1,1],[0,0,0,0,0,0,0,0,0],[2,2,2,2,2,2,2,2,2],[0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0]].
This is the code I’ve been trying to get to work in Pytorch
rules = np.random.randint(256,size=(10,256,3,3,3))
rules_tensor = torch.randint(256,size=(10,
256, 3, 3, 3),
dtype=torch.uint8, device = torch.device('cuda'))
rules = rules[0]
rules_tensor = rules_tensor[0]
seed = np.array([[128]])
seed_tensor = seed_tensor = torch.cuda.ByteTensor([[128]])
decode = np.empty((3**3, 3**3, 3))
decode_tensor = torch.empty((3**3,
3**3, 3), dtype=torch.uint8,
device = torch.device('cuda'))
for i in range(3):
grow = seed
grow_tensor = seed_tensor
for j in range(1,4):
grow = rules[grow,:,:,i].reshape(3**j,-1)
grow_tensor = rules_tensor[grow_tensor,:,:,i].reshape(3**j,-1)
decode[..., i] = grow
decode_tensor[..., i] = grow_tensor
I can’t seem to select indices the same way as in Numpy in this line:
grow = rules[grow,:,:,i].reshape(3**j,-1)
Is there a way to do the same thing in PyTorch? |
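One guess at the difference, with a minimal sketch: indexing with a uint8 (Byte) tensor is treated as a boolean mask in PyTorch, whereas NumPy treats the integer array as indices, so converting the index tensor to long should reproduce the NumPy behaviour:
grow_tensor = rules_tensor[grow_tensor.long(), :, :, i].reshape(3**j, -1)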
st103361 | I trained a model without any dropout, and now I want to add some dropout layers and resume the training using the last checkpoint I have. However, when I try to do so, I get these errors:
FLOPs: 387.73M, Params: 1.15M
=> loading checkpoint './snapshots/imagenet/simpnets/1mil/nodrp/chkpt_simpnet_imgnet_1m_nodrp_s1_2018-07-08_17-00-55.pth.tar'
Traceback (most recent call last):
File "imagenet_train.py", line 537, in <module>
main()
File "imagenet_train.py", line 122, in main
model.load_state_dict(checkpoint['state_dict'])
File "/home/shishosama/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.features.5.weight", "module.features.5.bias", "module.features.5.running_mean", "module.features.5.running_var", "module.features.8.weight", "module.features.8.bias", "module.features.9.running_mean", "module.features.9.running_var", "module.features.21.weight", "module.features.21.bias", "module.features.22.running_mean", "module.features.22.running_var", "module.features.30.weight", "module.features.30.bias", "module.features.30.running_mean", "module.features.30.running_var", "module.features.34.weight", "module.features.34.bias", "module.features.34.running_mean", "module.features.34.running_var", "module.features.37.weight", "module.features.37.bias", "module.features.38.running_mean", "module.features.38.running_var", "module.features.42.weight", "module.features.42.bias", "module.features.43.weight", "module.features.43.bias", "module.features.43.running_mean", "module.features.43.running_var", "module.features.46.weight", "module.features.46.bias", "module.features.47.weight", "module.features.47.bias", "module.features.47.running_mean", "module.features.47.running_var", "module.features.50.weight", "module.features.50.bias", "module.features.51.weight", "module.features.51.bias", "module.features.51.running_mean", "module.features.51.running_var".
Unexpected key(s) in state_dict: "module.features.3.weight", "module.features.3.bias", "module.features.4.running_mean", "module.features.4.running_var", "module.features.6.weight", "module.features.6.bias", "module.features.7.weight", "module.features.7.bias", "module.features.7.running_mean", "module.features.7.running_var", "module.features.10.weight", "module.features.10.bias", "module.features.10.running_mean", "module.features.10.running_var", "module.features.19.weight", "module.features.19.bias", "module.features.20.weight", "module.features.20.bias", "module.features.20.running_mean", "module.features.20.running_var", "module.features.23.weight", "module.features.23.bias", "module.features.23.running_mean", "module.features.23.running_var", "module.features.28.weight", "module.features.28.bias", "module.features.29.running_mean", "module.features.29.running_var", "module.features.32.weight", "module.features.32.bias", "module.features.33.running_mean", "module.features.33.running_var", "module.features.35.weight", "module.features.35.bias", "module.features.36.weight", "module.features.36.bias", "module.features.36.running_mean", "module.features.36.running_var", "module.features.39.weight", "module.features.39.bias", "module.features.39.running_mean", "module.features.39.running_var".
While copying the parameter named "module.features.4.weight", whose dimensions in the model are torch.Size([80, 60, 3, 3]) and whose dimensions in the checkpoint are torch.Size([80]).
While copying the parameter named "module.features.9.weight", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([80, 80, 3, 3]).
While copying the parameter named "module.features.12.weight", whose dimensions in the model are torch.Size([80, 80, 3, 3]) and whose dimensions in the checkpoint are torch.Size([85, 80, 3, 3]).
While copying the parameter named "module.features.12.bias", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([85]).
While copying the parameter named "module.features.13.weight", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([85]).
While copying the parameter named "module.features.13.bias", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([85]).
While copying the parameter named "module.features.13.running_mean", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([85]).
While copying the parameter named "module.features.13.running_var", whose dimensions in the model are torch.Size([80]) and whose dimensions in the checkpoint are torch.Size([85]).
While copying the parameter named "module.features.16.weight", whose dimensions in the model are torch.Size([85, 80, 3, 3]) and whose dimensions in the checkpoint are torch.Size([85, 85, 3, 3]).
While copying the parameter named "module.features.22.weight", whose dimensions in the model are torch.Size([85]) and whose dimensions in the checkpoint are torch.Size([90, 90, 3, 3]).
While copying the parameter named "module.features.22.bias", whose dimensions in the model are torch.Size([85]) and whose dimensions in the checkpoint are torch.Size([90]).
While copying the parameter named "module.features.25.weight", whose dimensions in the model are torch.Size([90, 85, 3, 3]) and whose dimensions in the checkpoint are torch.Size([90, 90, 3, 3]).
While copying the parameter named "module.features.29.weight", whose dimensions in the model are torch.Size([90, 90, 3, 3]) and whose dimensions in the checkpoint are torch.Size([110]).
While copying the parameter named "module.features.29.bias", whose dimensions in the model are torch.Size([90]) and whose dimensions in the checkpoint are torch.Size([110]).
While copying the parameter named "module.features.33.weight", whose dimensions in the model are torch.Size([90, 90, 3, 3]) and whose dimensions in the checkpoint are torch.Size([110]).
While copying the parameter named "module.features.33.bias", whose dimensions in the model are torch.Size([90]) and whose dimensions in the checkpoint are torch.Size([110]).
While copying the parameter named "module.features.38.weight", whose dimensions in the model are torch.Size([110]) and whose dimensions in the checkpoint are torch.Size([150, 127, 3, 3]).
While copying the parameter named "module.features.38.bias", whose dimensions in the model are torch.Size([110]) and whose dimensions in the checkpoint are torch.Size([150]).
What should I do?
Thanks in advance |
st103362 | Inserting layers into an nn.Sequential changes the indices of the layers (the number in the param names).
What you need to do is go through the state dict before loading it and adapt the layer indices.
For example, your first dropout layer seems to be number 3 (the fourth module). Thus the module.features.3.weight needs to be mapped to module.features.4.weight (don’t do it inplace if you use ascending order) and all subsequent layer parameters’ indices need to be increased by at least one, too.
Best regards
Thomas |
st103363 | Thank you very much. Are there any examples I can use to do this? I'm a newbie in PyTorch!
st103364 | You just do this at the Python level, as in:
import torch

# m1: the original model (without dropout) whose weights you want to reuse
m1 = torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.Linear(10, 10),
    torch.nn.Linear(10, 10),
)
# m2: the new model with dropout layers inserted, which shifts the layer indices
m2 = torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.Dropout(p=0.2),
    torch.nn.Linear(10, 10),
    torch.nn.Dropout(p=0.2),
    torch.nn.Linear(10, 10),
)
# build a mapping from old layer index to new layer index,
# skipping the dropout layers (they have no parameters)
mapping = {}
idxold = 0
for i, l in enumerate(m2):
    if not isinstance(l, torch.nn.Dropout):
        mapping[idxold] = i
        idxold += 1
# rewrite the keys of the old state dict to use the new indices
sd1 = m1.state_dict()
sd2 = {}
for k in sd1:
    ksplit = k.split('.')
    ksplit[0] = str(mapping[int(ksplit[0])])
    knew = '.'.join(ksplit)
    sd2[knew] = sd1[k]
m2.load_state_dict(sd2)
st103365 | Thank you very much sir.
I changed my resume section as follows:
...
#loading the model in gpu
model = mysimplenet(1000)
model = torch.nn.DataParallel(model).cuda()
if args.resume:
if os.path.isfile(args.resume):
print_log("=> loading checkpoint '{}'".format(args.resume), log)
checkpoint = torch.load(args.resume)
args.start_epoch = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
if 'best_prec5' in checkpoint:
best_prec5 = checkpoint['best_prec5']
else:
best_prec5 = 0.00
#model.load_state_dict(checkpoint['state_dict'])
mapping = {}
idxold = 0
for i,l in enumerate(model): # model contains dropout
if not isinstance(l, torch.nn.Dropout):
mapping[idxold] = i
idxold += 1
sd1 = checkpoint['state_dict'] # load from existing checkpoint state_dict which does not have dropout
sd1.keys()
sd2 = {}
for k in sd1:
ksplit = k.split('.')
ksplit[0] = str(mapping[int(ksplit[0])])
knew = '.'.join(ksplit)
sd2[knew] = sd1[k]
model.load_state_dict(sd2)
#loading scheduler state
if no_scheduler_stat:
scheduler.load_state_dict(tmp)
else:
scheduler.load_state_dict(checkpoint['scheduler'])
optimizer.load_state_dict(checkpoint['optimizer'])
model.eval()
print_log("=> loaded checkpoint '{}' (epoch {})".format(args.resume, checkpoint['epoch']), log)
else:
print_log("=> no checkpoint found at '{}'".format(args.resume), log)
However, I'm facing a TypeError: 'DataParallel' object is not iterable.
What am I doing wrong here? What should I do? |
st103366 | Sorry to sound stupid for not getting your point!
My model does use nn.Sequential. However, the script that I'm using (Link here 5) wraps the model in DataParallel so that we can use the GPU for training. I have no idea what to do at this point!
I tried removing that DataParallel part and rerunning the model; however, it introduced other errors complaining about floats and the like:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same |
st103367 | I think you almost solved the problem.
The above error says that either the model or the data is on the GPU while the other is on the CPU.
Use .cuda() on both the model and the data.
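For example, a minimal sketch using the names from your script (train_loader and criterion here stand in for whatever loader and loss your script actually defines):
model = mysimplenet(1000)
model = torch.nn.DataParallel(model).cuda()   # model parameters now live on the GPU
for input, target in train_loader:
    input = input.cuda()                      # move each batch to the GPU as well
    target = target.cuda()
    output = model(input)
    loss = criterion(output, target)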
st103368 | Hi,
I am trying to train a model that requires 'mini-batches of batches' (something like the columns of each row of a table). Each mini-batch is a row; the batch within a mini-batch consists of the column values for that row. However, the error should be back-propagated once per mini-batch (row), so I need to sum the losses over the columns within a row and then back-propagate.
For this design, I observe that PyTorch requires retain_graph to be True from mini-batch to mini-batch (row to row). This eventually leads to an out-of-memory error on the GPU. How can I track the Variables that are forcing retain_graph to be True? How can I circumvent this issue?
Thanks! |
st103369 | You are reusing a non-leaf variable. Consider b in the following:
import torch

want_to_fail = True
a = torch.randn(5, 5, requires_grad=True)   # a is a leaf
b = a**2                                    # b is not a leaf (it has a grad_fn)
c = b.sum()
c.backward()                                # frees the graph that produced b
if not want_to_fail:
    b = a**2                                # recreating b builds a fresh graph
d = b.sum()
d.backward()                                # fails if the old b is reused
There are more subtle ways to trick yourself into having a non-leaf variable, e.g. a = torch.randn(5, 5, requires_grad=True).cuda(). The .cuda() is computation as far as autograd is concerned, so the result a is non-leaf.
You can tell whether a given variable is a leaf or not by checking a.grad_fn. If it has a grad_fn that is not None, it requires grad and is not a leaf. These are precisely the variables that you cannot reuse after your first backprop.
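A quick way to check this in practice (just a sketch):
a = torch.randn(5, 5, requires_grad=True)
print(a.is_leaf, a.grad_fn)    # True, None  -> safe to keep around between iterations
b = a.cuda()
print(b.is_leaf, b.grad_fn)    # False, a copy grad_fn -> recreate it every iteration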
Best regards
Thomas |
st103370 | The following is my code. If I remove inplace=True in relu, I do not get an error. However, if I leave it in, I get RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
class BasicBlock(nn.Module):
def __init__(self, inplanes, planes, use_prelu, use_se_block, anchor=None):
super(BasicBlock, self).__init__()
self.anchor = anchor
self.conv1 = conv3x3(inplanes, planes, stride=1)
nn.init.xavier_normal_(self.conv1.weight)
self.conv2 = conv3x3(planes, planes)
nn.init.normal_(self.conv2.weight, mean=0, std=0.01)
self.use_prelu = use_prelu
if self.use_prelu:
self.prelu1 = nn.PReLU(planes)
self.prelu2 = nn.PReLU(planes)
else:
self.relu = nn.ReLU(inplace=True)
self.use_se_block = use_se_block
if self.use_se_block:
self.se = SEBlock(planes)
def forward(self, x):
if self.anchor is not None:
x = self.anchor(x)
residual = x
x = self.conv1(x)
if self.use_prelu:
x = self.prelu1(x)
else:
x = self.relu(x)
x = self.conv2(x)
if self.use_prelu:
x = self.prelu2(x)
else:
x = self.relu(x)
if self.use_se_block:
x = self.se(x)
x += residual
return x
I consulted the usage from https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py 23 and cannot understand why there is no such error when used in resnet but I encounter the error in my code.
The usage in resnet.py is also weird. It is initialized with inplace=True. However, it is used with x = self.relu(x) in forward. Doesn’t this defeat the purpose of inplace=True? Is it equivalent to just writing self.relu(x) without reassignment? |
st103371 | Solved by albanD in post #2
Hi,
The difference is that in the resnet there is a batchnorm between the conv and the relu.
The conv operation need it’s output to be able to compute the backward pass. The batchnorm operations does not need it’s output to compute the backward pass.
So the operation that comes just after batchno… |
st103372 | Hi,
The difference is that in the resnet there is a batchnorm between the conv and the relu.
The conv operation needs its output to be able to compute the backward pass. The batchnorm operation does not need its output to compute the backward pass.
So the operation that comes just after batchnorm is allowed to make changes inplace, while an operation coming just after a conv is not.
When you do x = self.relu(x) you basically assign to the python variable x the tensor returned by the self.relu operation. It happens that sometimes, this tensor is the same as the input one (if you have the inplace option enabled). |
st103373 | Correct me if I am wrong: is writing self.relu(x) when relu is inplace the same as writing x = self.relu(x), such that in the resnet code the assignment is unnecessary? That's my understanding of inplace.
st103374 | inplace means that it will not allocate new memory and will change the tensor in place. But from the autograd point of view, you have two different tensors (even though they actually share the same memory). One is the output of conv (or batchnorm for resnet) and one is the output of the relu.
st103375 | I am still confused: if I write x = self.relu(x), am I also not allocating new memory, since I replace the original x?
st103376 | No, you are allocating new memory, because the (non-inplace) relu operation creates a new tensor, fills it with the result of the operation and then assigns this new tensor to the python variable x.
If x is a tensor, y = x does not do any copy, it just sets both python variables to the same tensor. |
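You can see this yourself with a small check using data_ptr(), which tells you which memory a tensor uses (just a sketch):
import torch
import torch.nn.functional as F

x = torch.randn(4)
y = F.relu(x)                          # allocates a new tensor
print(y.data_ptr() == x.data_ptr())    # False
z = F.relu(x, inplace=True)            # writes the result into x's memory
print(z.data_ptr() == x.data_ptr())    # True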
st103377 | Hello everyone, I have a question about how to use a new dataset. I want to use PyTorch to run the Forest, Reuters, WebKB and 20NG datasets. If there is any relevant code, can you give me a reference? That would be the easiest way. Of course, pointers to related methods are also fine!
reference:
Minerva: enabling low-power, highly-accurate deep neural network accelerators |
st103378 | Have a look at the Dataloading tutorial 19. In this tutorial you’ll find an example of the Dataset class and how to create your own Dataset.
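If it helps, the skeleton of a custom Dataset is quite short. A rough sketch (the class name and the loading logic are placeholders — for text datasets like 20NG you would put your own tokenization/vectorization in __init__ or __getitem__):
import torch
from torch.utils.data import Dataset, DataLoader

class MyTextDataset(Dataset):
    def __init__(self, features, labels):
        self.features = features    # e.g. a FloatTensor of shape (N, num_features)
        self.labels = labels        # e.g. a LongTensor of shape (N,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

dataset = MyTextDataset(torch.randn(100, 20), torch.randint(0, 4, (100,)).long())
loader = DataLoader(dataset, batch_size=16, shuffle=True)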
Let me know if this helps or if you need more information.
st103379 | Thank you very much for your reply! I'm trying it this way, but I don't know if it will succeed. If I run into a problem, I would like to ask you again. Thanks!
st103380 | Let x and emb be 2D matrices of size (bsz, n) and (n, m) respectively.
x = torch.FloatTensor([[0,1,2], [3,4,5]])
emb = torch.FloatTensor([[0,1,2,3], [4,5,6,7], [8,9,10,11]])
# x
# tensor([[ 0., 1., 2.],
# [ 3., 4., 5.]])
# emb
# tensor([[ 0., 1., 2., 3.],
# [ 4., 5., 6., 7.],
# [ 8., 9., 10., 11.]])
I want the result to be a 3D tensor of size (bsz, n, m) where out[j, i, :] = x[j, i] * emb[i, :]. I am using a loop for now as below but I thought there might be a better way?
out = torch.zeros(bsz, n, m)
for i in range(bsz):
out[i] = x[i].view(-1, 1) * emb
# out
# tensor([[[ 0., 0., 0., 0.],
# [ 4., 5., 6., 7.],
# [ 16., 18., 20., 22.]],
#
# [[ 0., 3., 6., 9.],
# [ 16., 20., 24., 28.],
# [ 40., 45., 50., 55.]]]) |
st103381 | Solved by tom in post #2
I like to use indexing and broadcasting for this:
out = x[:, :, None] * emb[None, :, :]
Indexing with None inserts a new dimension (“unsqueeze” or “view” could achieve the same, but I like to use the above to help me see the alignment).
For more advanced uses, torch.einsum can be useful, too. Her… |
st103382 | I like to use indexing and broadcasting for this:
out = x[:, :, None] * emb[None, :, :]
Indexing with None inserts a new dimension (“unsqueeze” or “view” could achieve the same, but I like to use the above to help me see the alignment).
For more advanced uses, torch.einsum 7 can be useful, too. Here, it would be
torch.einsum('bn,nm->bnm', [x, emb]). Here the indices replicate your indexing (except that I use the dimension indices).
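If you want to convince yourself that both give the same result as your loop, a quick check could be:
out_bc = x[:, :, None] * emb[None, :, :]
out_es = torch.einsum('bn,nm->bnm', [x, emb])
print(torch.allclose(out_bc, out))     # compare against the result of your loop
print(torch.allclose(out_bc, out_es))  # broadcasting and einsum agree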
Best regards
Thomas |
st103383 | I have a 4x4 matrix (let's say it consists of the vectors v1, v2, v3, v4) and I want to learn 4 parameters (a1, a2, a3, a4) that sum to 1 and multiply them with the matrix in order to learn which of the vectors are more important (a normalized weight vector). What is the best way to do that in PyTorch?
st103384 | Solved by tom in post #2
So the v-matrix is fixed?
You could use a nn.Parameter for pre-normalized a, softmax and then broadcasting: v_weighted = v * a.softmax(0).unsqueeze(0)) if you want to weight the columns of v or .unsqueeze(1) if you want to weight the rows. v would be a variable (or a buffer if you want to have it s… |
st103385 | So the v-matrix is fixed?
You could use a nn.Parameter for pre-normalized a, softmax and then broadcasting: v_weighted = v * a.softmax(0).unsqueeze(0) if you want to weight the columns of v or .unsqueeze(1) if you want to weight the rows. v would be a variable (or a buffer if you want to have it saved in the state_dict).
Best regards
Thomas |
st103386 | Good idea!
Unfortunately the values of a don’t change. Here is the relevant part of my code (I’m using PyTorch 0.1.12):
self.weight_vector = Variable(torch.FloatTensor(1, 4), requires_grad=True)
def forward(self, v):  # v.size() = (4, 36, 256)
    norm_weight_vec = F.softmax(self.weight_vector)
    v = v * (norm_weight_vec.transpose(0, 1).unsqueeze(2)).expand_as(v)
    v = torch.sum(v, dim=0)  # result size: (1, 36, 256)
    ....
st103387 | I think you want to make the weight_vector a nn.Parameter of the module to make it show up in m.parameters().
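For illustration, a minimal module could look roughly like this (a sketch in current 0.4-style PyTorch — on 0.1.12 you would keep your transpose/expand_as approach, but the nn.Parameter part is the same; the class name is made up):
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedSum(nn.Module):
    def __init__(self, n_vectors=4):
        super(WeightedSum, self).__init__()
        # registering the weights as a Parameter makes them show up in
        # model.parameters(), so the optimizer will actually update them
        self.weight_vector = nn.Parameter(torch.zeros(n_vectors))

    def forward(self, v):  # v: (4, 36, 256)
        a = F.softmax(self.weight_vector, dim=0)   # normalized weights that sum to 1
        return (v * a.view(-1, 1, 1)).sum(0)       # weighted sum over the 4 vectors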
Best regards
Thomas |
st103388 | I'm working on a 10-GPU cluster and I'm having some trouble with CUDA_VISIBLE_DEVICES.
Reading the forum I’ve seen it’s the recommended way of choosing arbitrary GPUs.
I’m running the following shell script:
export CUDA_VISIBLE_DEVICES=1,2,3
export PATH=/usr/local/cuda-9.0-cudnn--v7.0/lib64/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-9.0-cudnn--v7.0/lib64"
export CUDA_HOME=/usr/local/cuda-9.0-cudnn--v7.0
python
Here you can see nvidia-smi display:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26 Driver Version: 396.26 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN X (Pascal) Off | 00000000:04:00.0 Off | N/A |
| 47% 79C P2 190W / 250W | 6629MiB / 12196MiB | 92% Default |
+-------------------------------+----------------------+----------------------+
| 1 TITAN X (Pascal) Off | 00000000:05:00.0 Off | N/A |
| 23% 26C P8 8W / 250W | 0MiB / 12196MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 TITAN X (Pascal) Off | 00000000:06:00.0 Off | N/A |
| 41% 62C P2 62W / 250W | 11599MiB / 12196MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 TITAN Xp Off | 00000000:07:00.0 Off | N/A |
| 41% 66C P2 185W / 250W | 11177MiB / 12196MiB | 99% Default |
+-------------------------------+----------------------+----------------------+
| 4 TITAN X (Pascal) Off | 00000000:08:00.0 Off | N/A |
| 58% 86C P2 199W / 250W | 11761MiB / 12196MiB | 78% Default |
+-------------------------------+----------------------+----------------------+
| 5 TITAN X (Pascal) Off | 00000000:0B:00.0 Off | N/A |
| 51% 83C P2 213W / 250W | 11761MiB / 12196MiB | 90% Default |
+-------------------------------+----------------------+----------------------+
| 6 GeForce GTX 108... Off | 00000000:0C:00.0 Off | N/A |
| 23% 29C P8 8W / 250W | 0MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 7 TITAN X (Pascal) Off | 00000000:0D:00.0 Off | N/A |
| 54% 83C P2 143W / 250W | 11759MiB / 12196MiB | 93% Default |
+-------------------------------+----------------------+----------------------+
| 8 GeForce GTX 108... Off | 00000000:0E:00.0 Off | N/A |
| 23% 33C P8 11W / 250W | 0MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 9 TITAN X (Pascal) Off | 00000000:0F:00.0 Off | N/A |
| 45% 74C P2 103W / 250W | 11761MiB / 12196MiB | 55% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 52925 C python 6619MiB |
| 2 53553 C python3 11589MiB |
| 3 52925 C python 11167MiB |
| 4 53160 C python 11749MiB |
| 5 53161 C python 11749MiB |
| 7 56769 C python 11747MiB |
| 9 20983 C python 11749MiB |
+-----------------------------------------------------------------------------+
Now I try to store a simple variable on the GPU:
import torch
import os
os.environ.get('CUDA_VISIBLE_DEVICES')
'1'
a=torch.rand(100)
a.cuda()
tensor([ 0.0554, 0.0375, 0.4708, 0.6522, 0.4640, 0.4087, 0.4738,
0.3571, 0.4100, 0.4238, 0.6673, 0.7015, 0.8013, 0.8452,
0.6704, 0.4123, 0.1702, 0.3805, 0.1789, 0.5453, 0.6197,
0.5231, 0.7428, 0.7978, 0.3173, 0.0653, 0.4624, 0.4298,
0.2032, 0.5640, 0.1568, 0.2366, 0.0436, 0.3464, 0.8633,
0.8253, 0.7330, 0.2782, 0.6662, 0.3576, 0.1209, 0.7470,
0.4402, 0.8037, 0.2154, 0.8686, 0.3976, 0.0305, 0.9457,
0.6998, 0.5220, 0.4419, 0.9357, 0.5723, 0.4109, 0.7055,
0.3444, 0.3484, 0.7930, 0.5491, 0.1293, 0.4718, 0.9671,
0.8292, 0.0422, 0.1354, 0.3751, 0.1575, 0.8005, 0.7624,
0.7628, 0.2370, 0.8926, 0.2794, 0.5764, 0.7508, 0.5215,
0.2245, 0.8482, 0.0440, 0.2812, 0.0715, 0.1664, 0.1170,
0.9271, 0.8802, 0.2525, 0.1377, 0.5035, 0.1035, 0.5497,
0.8906, 0.1272, 0.2019, 0.3545, 0.3818, 0.8902, 0.9140,
0.5344, 0.6614], device=‘cuda:0’)
It's stored in cuda:0 instead of cuda:1.
According to nvidia-smi it’s
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 52646 C python 509MiB |
| 0 52925 C python 6619MiB |
| 2 53553 C python3 11751MiB |
| 3 52925 C python 11167MiB |
| 4 53160 C python 11749MiB |
| 5 53161 C python 11749MiB |
| 7 56769 C python 11747MiB |
| 9 20983 C python 11749MiB |
+-----------------------------------------------------------------------------+
Trying with GPUs 1,2 from nvidia-smi:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 7467 C python 509MiB |
| 0 52925 C python 6619MiB |
| 1 7467 C python 509MiB |
| 2 7388 C python3 11751MiB |
| 3 52925 C python 11167MiB |
| 4 53160 C python 11749MiB |
| 5 53161 C python 11749MiB |
| 7 56769 C python 11747MiB |
| 9 20983 C python 11749MiB |
+-----------------------------------------------------------------------------+
It saves variables on GPU 0 and GPU 1.
Why?
Another simple question: device_ids in PyTorch are relative to the available GPUs, right? I mean, if you set CVD=4,5,8, GPU 4 from CVD would be cuda:0 for PyTorch and so on?
st103389 | Try to set export CUDA_DEVICE_ORDER=PCI_BUS_ID in your terminal before executing your script.
Yes, the device ids will be mapped to the 'cuda:id' starting at zero. |
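A quick sanity check after exporting the variables (a sketch; run it in the same shell, before any CUDA call, so the environment is picked up):
import os
print(os.environ.get('CUDA_DEVICE_ORDER'))      # should print PCI_BUS_ID
print(os.environ.get('CUDA_VISIBLE_DEVICES'))   # e.g. 1,2,3
import torch
print(torch.cuda.device_count())                # 3 -> only the listed GPUs are visible
print(torch.cuda.current_device())              # 0 -> physical GPU 1 is cuda:0 here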
st103390 | I want to write unit tests with the unittest package. How would I compare two tensors? I’m looking for something like self.assertEqual(tensor1, tensor2). Is there a package with unit test utils? |
st103391 | I have the same question. It seems there are some testing utils in the repo https://github.com/pytorch/pytorch/blob/master/test/common.py 279
but I don’t think they’re exposed in the distribution package, are they? |
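In the meantime, plain unittest together with torch.equal / torch.allclose gets you quite far. A sketch:
import unittest
import torch

class TestTensors(unittest.TestCase):
    def test_equal(self):
        t1 = torch.tensor([1.0, 2.0, 3.0])
        t2 = torch.tensor([1.0, 2.0, 3.0])
        self.assertTrue(torch.equal(t1, t2))             # exact match of shape and values
        self.assertTrue(torch.allclose(t1, t2 + 1e-8))   # match up to a tolerance

if __name__ == '__main__':
    unittest.main()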
st103392 | jupyter: My output format is just like picture here.
1.jpg797×138 13.4 KB
I wonder how to set it to the following format:
0.2355 0.8276 0.6279 -2.3826
0.3533 1.3359 0.1627 1.7314
[torch.FloatTensor of size 2x4]
thank you! |
st103393 | Solved by ptrblck in post #2
There was a discussion in this thread about this topic.
Currently you could just use print(a, a.type(), a.size()). |
st103394 | There was a discussion in this thread 78 about this topic.
Currently you could just use print(a, a.type(), a.size()). |
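For example (a sketch):
a = torch.rand(2, 4)
print(a, a.type(), a.size())
# tensor([[...]]) torch.FloatTensor torch.Size([2, 4])
torch.set_printoptions(precision=4)   # adjusts the number of printed digits, if that is the concern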
st103395 | Hello,
I was wondering if there is something already inside the PyTorch library to apply cross-validation on the training set. I have tried finding the information, and I can't really tell whether it's something I have to implement myself or whether there is a built-in tool somewhere.
Thank you in advance for the information, |
st103396 | Hi,
I have developed a function to perform cross-validation in medical images, but it is easily adaptable to other types of problems.
GitHub
alejandrodebus/Pytorch-Utils 464
Pytorch-Utils - Useful functions to work with PyTorch. At the moment, there is a function to work with cross validation.
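There is no built-in k-fold helper in core PyTorch, but it is easy to roll your own on top of SubsetRandomSampler, roughly like this (a simplified sketch, without per-epoch shuffling or stratification):
import torch
from torch.utils.data import DataLoader, SubsetRandomSampler

def kfold_loaders(dataset, k=5, batch_size=32):
    n = len(dataset)
    indices = torch.randperm(n).tolist()
    fold_size = n // k
    for fold in range(k):
        val_idx = indices[fold * fold_size:(fold + 1) * fold_size]
        train_idx = indices[:fold * fold_size] + indices[(fold + 1) * fold_size:]
        yield (DataLoader(dataset, batch_size=batch_size,
                          sampler=SubsetRandomSampler(train_idx)),
               DataLoader(dataset, batch_size=batch_size,
                          sampler=SubsetRandomSampler(val_idx)))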
Regards |
st103397 | The examples at https://github.com/csarofeen/examples/tree/dist_fp16/imagenet 91 appear to be for PyTorch 0.3. In particular, when I tried to update set_grad in fp16utils by removing .data, I got the following error. Any tips? Thank you!
RuntimeError Traceback (most recent call last)
<ipython-input-18-d670cc97fa5f> in <module>()
174 print("total num params:", np.sum([np.prod(x.shape) for x in conv_model.parameters()]))
175 # conv_model(data[0][0][None,:,None].cuda()).shape
--> 176 train(conv_model,data,100,lr=1e-3, half=half, cuda=cuda)
177 # 1.91E+02
<ipython-input-18-d670cc97fa5f> in train(model, data, nepochs, lr, half, cuda)
142 model.zero_grad()
143 loss.backward()
--> 144 set_grad(param_copy, list(model.parameters()))
145 else:
146 optimizer.zero_grad()
<ipython-input-15-8c52d7e662cf> in set_grad(params, params_with_grad)
4 if param.grad is None:
5 param.grad = torch.nn.Parameter(param.data.new().resize_(*param.data.size()))
----> 6 param.grad.copy_(param_w_grad.grad.data)
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation. |
st103398 | You can look here for a 0.4-based fp16 ImageNet example: https://github.com/NVIDIA/apex/tree/master/examples/imagenet 2.1k
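If you would rather keep the old script, the error usually comes from making the master .grad a Parameter (which itself requires grad) and then copying into it in place. Something along these lines should avoid it (a sketch, not the official fp16utils code):
import torch

def set_grad(params, params_with_grad):
    # copy the fp16 gradients into the fp32 master copies without tracking in autograd
    for param, param_w_grad in zip(params, params_with_grad):
        if param.grad is None:
            param.grad = torch.zeros_like(param.data)   # plain tensor, requires_grad=False
        with torch.no_grad():
            param.grad.copy_(param_w_grad.grad)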
st103399 | I just tried to build / compile pytorch from source, and got the following error message:
…
[ 63%] Building CXX object caffe2/CMakeFiles/caffe2.dir//aten/src/ATen/native/cpu/SoftMaxKernel.cpp.DEFAULT.cpp.o
[ 63%] Building CXX object caffe2/CMakeFiles/caffe2.dir//aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp.o
[ 64%] Linking CXX shared library …/lib/libcaffe2.so
/usr/bin/x86_64-linux-gnu-ld: cannot find -lonnxifi_loader
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/caffe2.dir/build.make:2170: recipe for target ‘lib/libcaffe2.so’ failed
make[2]: *** [lib/libcaffe2.so] Error 1
CMakeFiles/Makefile2:952: recipe for target ‘caffe2/CMakeFiles/caffe2.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2
Makefile:140: recipe for target ‘all’ failed
make: *** [all] Error 2
Failed to run ‘bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 nanopb libshm gloo THD c10d’
I cannot find any info about -lonnxifi_loader via Google. Two weeks ago (mid-June 2018)
I could compile PyTorch without problems. My system is AMD x86_64 Ubuntu 18.04,
with gcc 7.3.0, cuda 9.2, cudnn 7.1.4. |