st104100 | Hi, @apaszke, Is there any schedule for that function? I am also looking forward to it. |
st104101 | if you are still wondering, it’s been implemented, but the shapes have to be set up properly:
>>> a = torch.rand(64, 20)
>>> b = torch.rand(64, 1)
>>> a/b
tensor([[ 5.0057e-01, 3.5622e-01, 3.1053e-01, ..., 3.2856e-01,
1.0888e+00, 9.7678e-01],
[ 4.7883e+00, 5.7695e+00, 2.8125e+00, ..., 1.6500e+01,
3.5257e+00, 1.5637e+01],
[ 6.8112e-01, 1.8881e+00, 2.0702e+00, ..., 4.2512e-01,
1.4803e+00, 8.5795e-01],
...,
[ 4.1682e-01, 9.6458e-01, 1.1828e+00, ..., 9.9901e-01,
1.0716e+00, 1.4875e+00],
[ 1.2503e-01, 1.2347e+00, 6.0802e-01, ..., 5.0439e-01,
1.2536e+00, 1.3501e+00],
[ 9.1765e-01, 6.7741e-01, 1.0928e+00, ..., 8.1460e-01,
9.7924e-01, 3.8059e-01]])
>>> torch.div(a,b)
tensor([[ 5.0057e-01, 3.5622e-01, 3.1053e-01, ..., 3.2856e-01,
1.0888e+00, 9.7678e-01],
[ 4.7883e+00, 5.7695e+00, 2.8125e+00, ..., 1.6500e+01,
3.5257e+00, 1.5637e+01],
[ 6.8112e-01, 1.8881e+00, 2.0702e+00, ..., 4.2512e-01,
1.4803e+00, 8.5795e-01],
...,
[ 4.1682e-01, 9.6458e-01, 1.1828e+00, ..., 9.9901e-01,
1.0716e+00, 1.4875e+00],
[ 1.2503e-01, 1.2347e+00, 6.0802e-01, ..., 5.0439e-01,
1.2536e+00, 1.3501e+00],
[ 9.1765e-01, 6.7741e-01, 1.0928e+00, ..., 8.1460e-01,
9.7924e-01, 3.8059e-01]])
Note: b should be of shape (64, 1), not (64,); that’s what RuntimeError: The size of tensor a (20) must match the size of tensor b (64) at non-singleton dimension 1 means.
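If your b does come out with shape (64,), a minimal sketch of the fix is to add the singleton dimension explicitly before dividing:
>>> b = torch.rand(64)
>>> (a / b.unsqueeze(1)).shape
torch.Size([64, 20]) |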
st104102 | Hi,
If I use cuda for my network by
model.cuda()
Everything is ok. The model is big, so it consumes 91% of video memory.
If I use
model = nn.DataParallel(model).cuda()
Then it seems to progress at first, but soon it hangs. When I press CTRL-C, I always get messages as follows:
Traceback (most recent call last):
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/polphit/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 28, in _worker_loop
r = index_queue.get()
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/queues.py", line 343, in get
res = self._reader.recv_bytes()
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/polphit/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
I tried on two different machines and got the same issue.
Ubuntu 16.04,
conda 4.3.14,
pytorch installed from source,
python 3.6.0.final.0
requests 2.12.4
CUDA 8.0
cuDNN 5.1
When I run the same code on a machine without conda, and python3, it works well.
Can I get a clue to resolve this issue?
Thank you. |
st104103 | That’s a stack trace of a data loader process, can you paste a full error into a gist and link it here? |
st104104 | Oh, that stack trace is all I could get, since it just hangs without an error.
Well, I guess it’s some kind of synchronization issue.
I have four networks netA, netB, netC, netD, which were
netA = nn.DataParallel(netA).cuda()
netB = nn.DataParallel(netB).cuda()
netC = netC.cuda(0)
netD = netD.cuda(1)
(I have two GPU devices)
Flow is
i (input) -> netA ---> netB -> x (output #1)
+-> netC -> y (output #2)
+-> netD -> z (output #3)
If this is not helpful for guessing the cause, I would like to simplify my code to reproduce the issue with a minimal data upload. |
st104105 | Oh, when I add
torch.cuda.synchronize()
at the end of a batch, one machine works properly, although the other machine still has the same issue. |
st104106 | Oh yeah, this will happen. It’s because nn.DataParallel uses the NVIDIA NCCL library, and it just deadlocks if you happen to do two calls at the same time… I guess we’ll need to add some mutexes. |
st104107 | Unfortunately, even if we add these locks, doing that in two processes that use the same GPUs in DataParallel will deadlock too… |
st104108 | No, it’s a bug in NCCL (NVIDIA’s library). But you probably shouldn’t be using the same GPU in multiple data parallel jobs anyway. |
st104109 | I have a similar problem resulting in the process hanging if I use DataParallel on 2 K80 GPUs. Do you know what the issue might be, @apaszke? If I restrict to one GPU only, everything works fine. |
st104110 | hi everyone, NVIDIA’s @ngimel has investigated this problem, and the hangs might not be related to pytorch. She has written a detailed comment on figuring out the issue and working around it in the github.com/pytorch/pytorch issue “Multi-GPU K80s” (opened by davidmascharka on 2017-05-23, closed by apaszke on 2017-10-22).
Please have a look and see if it applies to you. |
st104111 | Hi! I am facing a similar problem with Titan X GPUs and PyTorch 0.4. I am running in a Docker container with 12GB of shared memory allocated. On using nn.DataParallel I get:
RuntimeError: NCCL error 1, unhandled cuda error.
I tried the IOMMU disable option, and I have the latest NCCL2 library installed.
I tried conda and pip installs as well, but they give the same NCCL error 1. Sometimes the code deadlocks and the GPUs show 100% utilization.
The p2p bandwidth latency test passes.
Any help would be appreciated.
Thanks |
st104112 | It was some hardware issue. Although the p2p bandwidth latency test passed, when I changed CUDA_VISIBLE_DEVICES and ran the code without DataParallel, it gave an illegal memory access error for the faulty GPU. I changed the GPU and now the code works fine. |
st104113 | Dear all, could someone help me with the following issue?
For a common multi-layer neural network, the output length of the final layer differs from batch to batch. How can I implement a multi-layer neural network like this? Is it possible with PyTorch? |
st104114 | I’m trying to deconvolute a 3d image. I want the z-axis to remain the same while increasing the x,y image dimensions by a multiple of 2, so effectively just doubling the size.
When I am normally convolving and compressing an 8x96x96 (batch of 64 images) like so:
nn.Conv3d(nc, 32, 3, (1,2,2), 1)
nn.Conv3d(32, 32, 3, (1,2,2), 1)
I get tensors of sizes (respectively):
torch.Size([64, 32, 8, 48, 48])
torch.Size([64, 32, 8, 24, 24])
When I deconvolute the compressed version of this, which has a size of 1x3x3, like so:
nn.ConvTranspose3d(256, 64, 3, 2, padding=1, output_padding=1)
nn.ConvTranspose3d(64, 64, 3, 2, padding=1, output_padding=1)
I get tensors of sizes (respectively):
torch.Size([64, 64, 2, 6, 6])
torch.Size([64, 64, 4, 12, 12])
However, this is where I’m stuck: I want to keep my z-axis constant while still striding over y and x by 2 in my deconvolution. I’m trying this like so:
nn.ConvTranspose3d(32, 32, 3, (1,2,2), padding=1, output_padding=1),
When I try this, I get the error in the title. Is there a way I can decompress my y and x dimensions while keeping the z dimension constant? (preferably using stride and not a pooling layer) |
st104115 | Solved by ptrblck in post #2 |
st104116 | This code might help:
x = torch.randn(1, 256, 1, 3, 3)
conv = nn.ConvTranspose3d(in_channels=256,
out_channels=64,
kernel_size=(1, 4, 4),
stride=(1, 2, 2),
padding=(0, 1, 1),
output_padding=0)
output = conv(x)
print(output.shape)
> torch.Size([1, 64, 1, 6, 6])
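For reference, the transposed-convolution output size per spatial dimension follows out = (in - 1) * stride - 2 * padding + kernel_size + output_padding, so here the depth stays at (1 - 1) * 1 - 0 + 1 + 0 = 1 while the height/width become (3 - 1) * 2 - 2 + 4 + 0 = 6. |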
st104117 | Thanks a lot for your help @ptrblck. I just posted a question in the main posting channel about loading in multiple .npz files efficiently into a data loader. Maybe you have some more intuition on that? |
st104118 | I’m wondering if it’s possible to use any of the existing Sampler classes or a custom class to do the following: I wish to have the DataLoader shuffle data but in contiguous units of size [batch_size]. For example if my batch size is 10, I’d have the DataLoader load
-> data[10-20] -> data[40-50] -> data[0-10] -> …
or
-> data[50-60], data[10-20], data[0-10] -> … etc
If I try to use the dataloader shuffle as it is, the data output is not contiguous. It will just be 10 random indices. Is there any way to enforce this? |
st104119 | Do you want overlapping windows or unique ones?
If overlapping is OK, you could just use the shuffled indices and slice your data.
Otherwise you could use something like this:
class MyDataset(Dataset):
    def __init__(self, window=10):
        self.data = torch.arange(100).view(-1, 1).expand(-1, 10)
        self.window = window

    def __getitem__(self, index):
        index = index * self.window
        x = self.data[index:index+self.window]
        return x

    def __len__(self):
        # use integer division so __len__ returns an int
        return len(self.data) // self.window

dataset = MyDataset(window=10)
print(dataset[0])
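With that dataset, shuffling at the DataLoader level shuffles whole windows while each window stays internally contiguous; a minimal usage sketch (the batch size is arbitrary):
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=4, shuffle=True)
for batch in loader:
    print(batch.shape)  # torch.Size([4, 10, 10]): 4 contiguous windows of 10 rows
    break |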
st104120 | Hi,
I’ve got this error:
Traceback (most recent call last):
File "run_Olga.py", line 156, in <module>
output = model(video,audio)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 468, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
raise output
RuntimeError: arguments are located on different GPUs at /home/olga/Downloads/pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:313
I tried to fix it following this thread, in which @ptrblck says it’s a 0.4 bug:
Arguments are located on different GPUs
I use the source version (0.4.0a0+e46043a) and multiple GPUs (with one GPU it works fine…).
After updating, I’m using torch version 0.5.0a0+6e28d4d, but the error still remains.
Original code is:
from __future__ import division
import time
import logging
import sys
import os
sys.path.insert(0, './drn_MOD')
sys.path.insert(0, './globalnet')
sys.path.insert(0, './Unet')
import torch
import data
import numpy as np
import SoP as SoP
import math
#from tensorboardX import SummaryWriter
import torch.nn as nn
filename = './test_run_bs6'
if not os.path.exists(filename):
os.makedirs(filename)
"""============================LOG CONFIG & SCORE CONFIG PART================================="""
FORMAT = "[%(filename)s: %(funcName)s] %(message)s"
FORMAT2 = "[%(asctime)-15s %(filename)s:%(lineno)d %(funcName)s] %(message)s"
log_format = logging.Formatter(FORMAT2)
disp_format = logging.Formatter(FORMAT)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler(filename+'/log_file.log')
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(log_format)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(disp_format)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self,hist = False):
self.track_hist = hist
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
if self.track_hist:
self.hist = []
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
if self.track_hist:
self.hist.append(val)
"""============================MODEL PARAMETERS CONFIG================================="""
traindir = '../dataset/dic_dataset'
valdir = '../dataset/video'
ade_dir = ['../dataset/ade/ade_binaries','../dataset/ade/ade_binaries2']
BATCH_SIZE = 6
MOMENTUM = 1e-4
WEIGHT_DECAY = 1e-4
EPOCHS = 50
S_EPOCH = 0
STEP_RATIO = 0.1
LR = 0.001
PRINT_FREQ = 10
N= 2
GT_MASK='Binary'
Pretrained = False
CUDA =True
filename = 'test_run_bs6'
"""============================DATABASE, OPTIMIZER AND LOSS================================="""
if torch.cuda.is_available() == True:
CUDA=CUDA
else:
CUDA=False
#Set database
database = data.BinaryData(traindir,data.DRN_transforms(),ade_dir)
iterations = int(math.floor(database.__len__()/(N*BATCH_SIZE)))
#Set dataloader
#Set model
model = SoP.SoP_model(True,cuda=CUDA,n_images=N,GT_MASK=GT_MASK)
optimizer = torch.optim.SGD([{'params': model.unet_model.parameters()},{'params': model.audio_s.parameters()}, {'params': model.drn_model.parameters(), 'lr': 1e-4}], LR,
momentum=MOMENTUM,
weight_decay=WEIGHT_DECAY)
def init_weights(m):
if type(m) == nn.Conv2d:
nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('conv2d'))
if Pretrained:
model.load_state_dict(torch.load('model_dic.pt'))
# model = torch.load('convergence.pt')
else:
model.apply(init_weights)
if CUDA:
model = torch.nn.DataParallel(model).cuda()
# define loss function (criterion) and optimizer
if CUDA:
if (GT_MASK == 'Ratio'):
criterion = torch.nn.L1Loss().cuda()
if (GT_MASK == 'Binary') :
criterion = torch.nn.BCELoss().cuda()
# criterion = torch.nn.BCEWithLogitsLoss()
else:
if (GT_MASK == 'Ratio') :
criterion = torch.nn.L1Loss()
if (GT_MASK == 'Binary') :
        criterion = torch.nn.BCEWithLogitsLoss()  # note: the loss must be instantiated
batch_time = AverageMeter()
data_time = AverageMeter()
batch_loss = AverageMeter(hist =True)
epoch_loss = AverageMeter(hist = True)
model.train()
end = time.time()
"""============================TRAINING PART================================="""
for t in range(EPOCHS):
loader = data.DataLoader(database,N,batch_size=BATCH_SIZE,Gt_mask=GT_MASK)
for j in range(iterations):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
audio,video,gt = loader()
data_time.update(time.time() - end)
if CUDA:
gt=torch.autograd.Variable(gt.cuda())
video=torch.autograd.Variable(video.cuda())
# audio = torch.autograd.Variable(audio)
audio = torch.autograd.Variable(audio.cuda())
else:
gt=torch.autograd.Variable(gt)
video=torch.autograd.Variable(video)
audio = torch.autograd.Variable(audio)
output = model(video,audio)
if (GT_MASK == 'Ratio') or (GT_MASK == 'Ratio_noL10'):
loss = criterion(output, gt.float())
if (GT_MASK == 'Binary') or (GT_MASK == 'BinaryComplex'):
loss = criterion(output, gt.float())
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_loss.update(loss.data.item())
batch_time.update(time.time() - end)
end = time.time()
logger.info('Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})'.format(
t, j, iterations, batch_time=batch_time,
data_time=data_time, loss=batch_loss))
epoch_loss.update(loss.data.item())
torch.save(model.state_dict(), filename+'/state_dic.pt')
Does anyone know why this happens? |
st104121 | Could you create a smaller runnable code example so that I could try it on my machine? |
st104122 | I’m attempting to load in about 20 .npz files, each about 4 GB large, then concatenate them into one large tensor for my training set (my test set is about half that size). I have about 10 GB of RAM on my machine.
My general procedure thus far has been:
class CustomTensorDataset(Dataset):
def __init__(self, data_tensor):
self.data_tensor = data_tensor
def __getitem__(self, index):
return self.data_tensor[index]
def __len__(self):
return self.data_tensor.size(0)
def return_data(args):
train_data = torch.tensor([])
for train_npz in os.listdir(train_path):
        data = np.load(Path(train_path) / train_npz)  # join with the directory
data = torch.from_numpy(data['arr_0']).unsqueeze(1).float()
data /= 255
train_data = torch.cat((train_data,data))
train_kwargs = {'data_tensor':train_data}
dset = CustomTensorDataset
tr = dset(**train_kwargs)
    train_loader = DataLoader(dataset=tr,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers,
pin_memory=True,
drop_last=True)
This isn’t working, for obvious memory reasons. I’ve looked into a few different options, like loading images lazily (as in the thread Loading huge data functionality), which I tried and which didn’t seem to work, and caching (as in Request help on data loader), though I’m not sure either of these works very well for my data format.
Any help on how to do this efficiently would be great. Thanks.
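One pattern that usually avoids the memory blow-up, sketched here rather than tested on this exact data: convert each .npz to a plain .npy once, then memory-map the .npy files so that only the samples a batch actually touches are read from disk. The single-'arr_0'-array-per-file layout is taken from the snippet above; everything else (paths, shapes) is an assumption:
import os
import numpy as np
import torch
from torch.utils.data import Dataset

# one-time conversion; .npz members cannot be memory-mapped directly:
# for name in os.listdir(train_path):
#     if name.endswith('.npz'):
#         with np.load(os.path.join(train_path, name)) as data:
#             np.save(os.path.join(train_path, name[:-4] + '.npy'), data['arr_0'])

class MmapNpyDataset(Dataset):
    """Reads single samples lazily from memory-mapped .npy files."""
    def __init__(self, npy_paths):
        # mmap_mode='r' keeps the arrays on disk; only touched pages are read
        self.arrays = [np.load(p, mmap_mode='r') for p in npy_paths]
        self.offsets = np.cumsum([0] + [len(a) for a in self.arrays])

    def __len__(self):
        return int(self.offsets[-1])

    def __getitem__(self, index):
        f = int(np.searchsorted(self.offsets, index, side='right') - 1)
        sample = np.array(self.arrays[f][index - self.offsets[f]])  # copies one sample
        return torch.from_numpy(sample).unsqueeze(0).float() / 255
With this, no worker ever holds more than one sample in memory at a time. |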
st104123 | I’m unable to find any summary of the update/memory-model semantics for torch multiprocessing tensors.
Let’s consider the Hogwild mini-example from the docs.
Suppose that the model resides in shared CPU memory. When the child processes invoke optimizer.step, they asynchronously update the shared parameter values according to within-process gradient values. If the optimizer is SGD, this boils down to a sub_ or something on the parameter data.
Is this subtraction atomic? I.e., could we lose writes as child processes contend for the shared parameter memory?
More pressingly for my use case, suppose the shared parameters are all CUDA tensors on the same device. Do we still get atomicity?
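For concreteness, a condensed sketch in the spirit of the docs’ Hogwild mini-example referenced above (the model size, data, and hyperparameters are placeholders):
import torch
import torch.multiprocessing as mp

def train(model):
    # every worker applies gradients directly to the shared parameters,
    # Hogwild-style, with no locking between processes
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        x, y = torch.randn(8, 10), torch.randn(8, 1)
        optimizer.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        optimizer.step()  # in-place sub_ on the shared weights

if __name__ == '__main__':
    model = torch.nn.Linear(10, 1)
    model.share_memory()  # move the parameters into shared CPU memory
    workers = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join() |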
st104124 | I am changing the vgg code from:
'''VGG11/13/16/19 in Pytorch.'''
import torch
import torch.nn as nn
from torch.autograd import Variable
import sys

cfg = {
    'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device is {}".format(device))

class VGG(nn.Module):
    def __init__(self, vgg_name):
        super(VGG, self).__init__()
self.features = self._make_layers(cfg[vgg_name])
self.classifier = nn.Linear(512, 10)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), -1)
out = self.classifier(out)
return out
def _make_layers(self, cfg):
layers = []
in_channels = 3
for x in cfg:
if x == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
nn.BatchNorm2d(x),
nn.ReLU(inplace=True)]
in_channels = x
layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
return nn.Sequential(*layers)
to:
'''VGG11/13/16/19 in Pytorch.'''
import torch
import torch.nn as nn
from torch.autograd import Variable
import sys

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device is {}".format(device))

class VGG(nn.Module):
    def __init__(self, vgg_name):
        super(VGG, self).__init__()
self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
self.BatchNorm_64 = nn.BatchNorm2d(64)
self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
self.BatchNorm_128 = nn.BatchNorm2d(128)
self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.conv3_4 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.BatchNorm_256 = nn.BatchNorm2d(256)
self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv4_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.BatchNorm_512 = nn.BatchNorm2d(512)
self.MaxPool = nn.MaxPool2d(kernel_size=2, stride=2)
self.ReLUU = nn.ReLU(inplace=True)
self.AvgPool = nn.AvgPool2d(kernel_size=1, stride=1)
self.classifier = nn.Linear(512, 10)
def forward(self, x):
Out = self.ReLUU(self.BatchNorm_64(self.conv1_1(x)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_64(self.conv1_2(Out))))
Out = self.ReLUU(self.BatchNorm_128(self.conv2_1(Out)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_128(self.conv2_2(Out))))
Out = self.ReLUU(self.BatchNorm_256(self.conv3_1(Out)))
Out = self.ReLUU(self.BatchNorm_256(self.conv3_2(Out)))
Out = self.ReLUU(self.BatchNorm_256(self.conv3_3(Out)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_256(self.conv3_4(Out))))
Out = self.ReLUU(self.BatchNorm_512(self.conv4_1(Out)))
Out = self.ReLUU(self.BatchNorm_512(self.conv4_2(Out)))
Out = self.ReLUU(self.BatchNorm_512(self.conv4_3(Out)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_512(self.conv4_4(Out))))
Out = self.ReLUU(self.BatchNorm_512(self.conv5_1(Out)))
Out = self.ReLUU(self.BatchNorm_512(self.conv5_2(Out)))
Out = self.ReLUU(self.BatchNorm_512(self.conv5_3(Out)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_512(self.conv5_4(Out))))
Out = self.AvgPool(Out)
# I am not sure if i should use the nn.Sequential one anymore or not...
# Out = nn.Sequential(Out)
out = Out
# return nn.Sequential(*layers)
out = out.view(out.size(0), -1)
# print "outoutoutout", out.size()
out = self.classifier(out)
return out
I think the two pieces of code above are identical; am I right?
If not, what am I missing?
But when I run them, the first one converges super fast and reaches 60% accuracy after 10 epochs; however, the second one is stuck at 10-15% even after 100 epochs.
I should also emphasize that the accuracy on the training data is almost the same; however, the testing accuracy is very different…
Can anyone tell me what I’m missing?
Thanks |
st104125 | Solved by ptrblck in post #2 |
st104126 | You are reusing the BatchNorm2d layers in your second approach, while you create new ones in the first approach:
Out = self.ReLUU(self.BatchNorm_64(self.conv1_1(x)))
Out = self.MaxPool(self.ReLUU(self.BatchNorm_64(self.conv1_2(Out)))) |
st104127 | Im so dumb :
I aint know BatchnNorm2d layer can be used for training. i thought like maxpool or relu it is just a funtion without having any training!
Well Thanks a lot, it solved the issue… |
st104128 | Hi guys,
I was wondering if there is any example, or at least a pull request in progress, regarding PyTorch video object detection based on deep CNNs?
Or maybe a similar example for activity detection/classification in videos?
That way I want to learn how to process/feed the videos to PyTorch and run the procedure on them.
Any help is greatly appreciated |
st104129 | Well, I don’t know if there is an example or not. As far as I know, video architectures manage audio and video in different tower streams.
Since video frame rates are very high, they usually pick a sequence of frames (depending on the architecture).
But you should be perfectly able to deal with this in the dataloader, since you can import OpenCV to read videos and output them as torch tensors. There are sequential loaders too (to read frames in order, if your dataset is already sampled).
st104130 | I know I can specify the input_size in the __init__() method. The problem is that we instantiate the model and this remains fixed. Is there a way to change the input_size during training?
Here is my situation: I am training with data from multiple CSV files. The problem is that the tables in these CSV files have varying widths (and I am passing this data through an LSTM). Can I change that input width as the loader passes a new batch with new tensor widths?
I am thinking of something along the lines of:
class Net(nn.Module):
def __init__(self, input_size):
super(Net, self).__init__()
self.input_size = input_size
self.encoding_layer = nn.Linear(self.input_size, 12)
    def forward(self, x):
        self.input_size = x.size()
        x = self.encoding_layer(x)  # was self.fc(x); no self.fc is defined above
        return x
or even calling the attribute from the training loop:
for epoch in range(no_epochs):
# load data
model.input_size = data.size()
output = model(data)
# loss and the rest
would anything like this work? |
st104131 | I use MultivariateNormal(mean, cov).sample()…
but I get this error:
RuntimeError: Lapack Error in potrf : the leading minor of order 82 is not positive definite at /tmp/pip-cw2dhwyj-build/aten/src/TH/generic/THTensorLapack.c:617
What should I do to solve this problem?
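The error means the covariance matrix is not (numerically) positive definite, so its Cholesky factorization (potrf) fails. A common workaround, sketched here with synthetic data rather than taken from this thread, is to symmetrize the matrix and add a small jitter to its diagonal:
import torch
from torch.distributions import MultivariateNormal

# a rank-deficient covariance like this makes the Cholesky (potrf) step fail
a = torch.randn(100, 10)
cov = a @ a.t()                      # positive semi-definite, rank 10

cov = 0.5 * (cov + cov.t())          # symmetrize against rounding error
cov = cov + 1e-4 * torch.eye(100)    # jitter the diagonal -> positive definite

sample = MultivariateNormal(torch.zeros(100), covariance_matrix=cov).sample() |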
st104132 | Hi guys, I am a PyTorch beginner trying to get my model to train on a specific GPU on my machine. I am running the following code:
torch.cuda.device_count()
cuda0 = torch.cuda.set_device(0)
torch.cuda.current_device()  # output: 0
torch.cuda.get_device_name(0)
The output of the last command is ‘Tesla K40c’, which is the GPU I want to use. The problem is that the training time for an epoch stays exactly the same as the training time on an outdated Quadro 2000 GPU (compute capability 2.1)…
The question is: how do I set the device and define the variables and the model correctly (e.g. .cuda(0)?) so that my training runs on the right GPU? Is it possible to do it without going through bash? |
st104133 | If you are using PyTorch 0.4 you could specify the device by doing
device = torch.device('cuda:0')
X = X.to(device)
cuda:0 is always the first visible GPU. So if you set CUDA_VISIBLE_DEVICES (which I would recommend, since PyTorch will create CUDA contexts on all other GPUs otherwise) to another index (e.g. 1), that GPU is referred to as cuda:0. Alternatively, you could specify the device as torch.device('cpu') to run your model/tensors on the CPU.
The global GPU index (which is necessary to set CUDA_VISIBLE_DEVICES in the right way) can be seen with the nvidia-smi command in the shell.
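A minimal sketch of the whole recipe (the script name and index are placeholders):
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # must be set before the first CUDA call
# (equivalently, launch with: CUDA_VISIBLE_DEVICES=1 python train.py)

import torch
device = torch.device('cuda:0')  # 'cuda:0' now refers to global GPU 1
x = torch.randn(2, 3).to(device) |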
st104134 | I’m working on classification of ImageNet2012 using VGG16.
I’m currently working with 8 GPUs, which are GTX 1080 Ti.
The problem is that the work is allocated only to GPU 0, so I cannot run inference with a large batch size.
Below is my code. (The VGG16 parameters were extracted from the pre-trained model.)
from __future__ import print_function
import torch
import torch.nn as nn
import torchvision.datasets as datasets
import torch.backends.cudnn as cudnn
import torchvision.transforms as transforms
from torch.autograd import Variable
import torchvision.models as models
from utils import progress_bar
import os
import VGG16_conv1
import VGG16_conv2
import VGG16_conv3
import VGG16_conv4
import VGG16_conv5
import VGG16_conv6
import VGG16_conv7
import VGG16_conv8
import VGG16_conv9
import VGG16_conv10
import VGG16_conv11
import VGG16_conv12
import VGG16_conv13
import VGG16_linear1
import VGG16_linear2
import VGG16_linear3
use_cuda = torch.cuda.is_available()
best_acc = 0 # best test accuracy
valdir = os.path.join("/home/mhha/", 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
val_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(valdir, transforms.Compose([
transforms.Scale(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])),
batch_size=100, shuffle=False,
num_workers=4, pin_memory=True)
net = models.vgg16(pretrained=True)
net_conv1 = VGG16_conv1.VGG16_CONV1()
net_conv2 = VGG16_conv2.VGG16_CONV2()
net_conv3 = VGG16_conv3.VGG16_CONV3()
net_conv4 = VGG16_conv4.VGG16_CONV4()
net_conv5 = VGG16_conv5.VGG16_CONV5()
net_conv6 = VGG16_conv6.VGG16_CONV6()
net_conv7 = VGG16_conv7.VGG16_CONV7()
net_conv8 = VGG16_conv8.VGG16_CONV8()
net_conv9 = VGG16_conv9.VGG16_CONV9()
net_conv10 = VGG16_conv10.VGG16_CONV10()
net_conv11 = VGG16_conv11.VGG16_CONV11()
net_conv12 = VGG16_conv12.VGG16_CONV12()
net_conv13 = VGG16_conv13.VGG16_CONV13()
net_linear1 = VGG16_linear1.VGG16_LINEAR1()
net_linear2 = VGG16_linear2.VGG16_LINEAR2()
net_linear3 = VGG16_linear3.VGG16_LINEAR3()
print("saving CONV1 weights")
net_conv1.conv1[0].weight = net.features[0].weight
net_conv1.conv1[0].bias = net.features[0].bias
print("saving CONV2 weights")
net_conv2.conv2[0].weight = net.features[2].weight
net_conv2.conv2[0].bias = net.features[2].bias
print("saving CONV3 weights")
net_conv3.conv3[0].weight = net.features[5].weight
net_conv3.conv3[0].bias = net.features[5].bias
print("saving CONV4 weights")
net_conv4.conv4[0].weight = net.features[7].weight
net_conv4.conv4[0].bias = net.features[7].bias
print("saving CONV5 weights")
net_conv5.conv5[0].weight = net.features[10].weight
net_conv5.conv5[0].bias = net.features[10].bias
print("saving CONV6 weights")
net_conv6.conv6[0].weight = net.features[12].weight
net_conv6.conv6[0].bias = net.features[12].bias
print("saving CONV7 weights")
net_conv7.conv7[0].weight = net.features[14].weight
net_conv7.conv7[0].bias = net.features[14].bias
print("saving CONV8 weights")
net_conv8.conv8[0].weight = net.features[17].weight
net_conv8.conv8[0].bias = net.features[17].bias
print("saving CONV9 weights")
net_conv9.conv9[0].weight = net.features[19].weight
net_conv9.conv9[0].bias = net.features[19].bias
print("saving CONV10 weights")
net_conv10.conv10[0].weight = net.features[21].weight
net_conv10.conv10[0].bias = net.features[21].bias
print("saving CONV11 weights")
net_conv11.conv11[0].weight = net.features[24].weight
net_conv11.conv11[0].bias = net.features[24].bias
print("saving CONV12 weights")
net_conv12.conv12[0].weight = net.features[26].weight
net_conv12.conv12[0].bias = net.features[26].bias
print("saving CONV13 weights")
net_conv13.conv13[0].weight = net.features[28].weight
net_conv13.conv13[0].bias = net.features[28].bias
print("saving FC1 weights")
net_linear1.linear1[0].weight = net.classifier[0].weight
net_linear1.linear1[0].bias = net.classifier[0].bias
print("saving FC2 weights")
net_linear2.linear2[0].weight = net.classifier[3].weight
net_linear2.linear2[0].bias = net.classifier[3].bias
print("saving FC3 weights")
net_linear3.linear3[0].weight = net.classifier[6].weight
net_linear3.linear3[0].bias = net.classifier[6].bias
if use_cuda:
net_conv1.cuda()
net_conv1 = torch.nn.DataParallel(net_conv1, device_ids=range(torch.cuda.device_count()))
net_conv2.cuda()
net_conv2 = torch.nn.DataParallel(net_conv2, device_ids=range(torch.cuda.device_count()))
net_conv3.cuda()
net_conv3 = torch.nn.DataParallel(net_conv3, device_ids=range(torch.cuda.device_count()))
net_conv4.cuda()
net_conv4 = torch.nn.DataParallel(net_conv4, device_ids=range(torch.cuda.device_count()))
net_conv5.cuda()
net_conv5 = torch.nn.DataParallel(net_conv5, device_ids=range(torch.cuda.device_count()))
net_conv6.cuda()
net_conv6 = torch.nn.DataParallel(net_conv6, device_ids=range(torch.cuda.device_count()))
net_conv7.cuda()
net_conv7 = torch.nn.DataParallel(net_conv7, device_ids=range(torch.cuda.device_count()))
net_conv8.cuda()
net_conv8 = torch.nn.DataParallel(net_conv8, device_ids=range(torch.cuda.device_count()))
net_conv9.cuda()
net_conv9 = torch.nn.DataParallel(net_conv9, device_ids=range(torch.cuda.device_count()))
net_conv10.cuda()
net_conv10 = torch.nn.DataParallel(net_conv10, device_ids=range(torch.cuda.device_count()))
net_conv11.cuda()
net_conv11 = torch.nn.DataParallel(net_conv11, device_ids=range(torch.cuda.device_count()))
net_conv12.cuda()
net_conv12 = torch.nn.DataParallel(net_conv12, device_ids=range(torch.cuda.device_count()))
net_conv13.cuda()
net_conv13 = torch.nn.DataParallel(net_conv13, device_ids=range(torch.cuda.device_count()))
net_linear1.cuda()
net_linear1 = torch.nn.DataParallel(net_linear1, device_ids=range(torch.cuda.device_count()))
net_linear2.cuda()
net_linear2 = torch.nn.DataParallel(net_linear2, device_ids=range(torch.cuda.device_count()))
net_linear3.cuda()
net_linear3 = torch.nn.DataParallel(net_linear3, device_ids=range(torch.cuda.device_count()))
cudnn.benchmark = True
criterion = nn.CrossEntropyLoss()
def test():
global best_acc
net_conv1.eval()
net_conv2.eval()
net_conv3.eval()
net_conv4.eval()
net_conv5.eval()
net_conv6.eval()
net_conv7.eval()
net_conv8.eval()
net_conv9.eval()
net_conv10.eval()
net_conv11.eval()
net_conv12.eval()
net_conv13.eval()
net_linear1.eval()
net_linear2.eval()
net_linear3.eval()
test_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(val_loader):
if use_cuda:
inputs, targets = inputs.cuda(), targets.cuda()
inputs, targets = Variable(inputs), Variable(targets)
outputs_conv1 = net_conv1(inputs)
outputs_conv2 = net_conv2(outputs_conv1)
outputs_conv3 = net_conv3(outputs_conv2)
outputs_conv4 = net_conv4(outputs_conv3)
outputs_conv5 = net_conv5(outputs_conv4)
outputs_conv6 = net_conv6(outputs_conv5)
outputs_conv7 = net_conv7(outputs_conv6)
outputs_conv8 = net_conv8(outputs_conv7)
outputs_conv9 = net_conv9(outputs_conv8)
outputs_conv10 = net_conv10(outputs_conv9)
outputs_conv11 = net_conv11(outputs_conv10)
outputs_conv12 = net_conv12(outputs_conv11)
outputs_conv13 = net_conv13(outputs_conv12)
outputs_linear1 = net_linear1(outputs_conv13)
outputs_linear2 = net_linear2(outputs_linear1)
outputs_linear3 = net_linear3(outputs_linear2)
loss = criterion(outputs_linear3, targets)
test_loss += loss.data
_, predicted = torch.max(outputs_linear3.data, 1)
total += targets.size(0)
correct += predicted.eq(targets.data).cpu().sum()
progress_bar(batch_idx, len(val_loader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (test_loss/(batch_idx+1), 100.*float(correct)/float(total), correct, total))
# Save checkpoint.
acc = 100.*correct/total
if acc > best_acc:
print('Saving..')
state = {
'net_conv1': net_conv1.module if use_cuda else net_conv1,
'net_conv2': net_conv2.module if use_cuda else net_conv2,
'net_conv3': net_conv3.module if use_cuda else net_conv3,
'net_conv4': net_conv4.module if use_cuda else net_conv4,
'net_conv5': net_conv5.module if use_cuda else net_conv5,
'net_conv6': net_conv6.module if use_cuda else net_conv6,
'net_conv7': net_conv7.module if use_cuda else net_conv7,
'net_conv8': net_conv8.module if use_cuda else net_conv8,
'net_conv9': net_conv9.module if use_cuda else net_conv9,
'net_conv10': net_conv10.module if use_cuda else net_conv10,
'net_conv11': net_conv11.module if use_cuda else net_conv11,
'net_conv12': net_conv12.module if use_cuda else net_conv12,
'net_conv13': net_conv13.module if use_cuda else net_conv13,
'net_linear1': net_linear1.module if use_cuda else net_linear1,
'net_linear2': net_linear2.module if use_cuda else net_linear2,
'net_linear3': net_linear3.module if use_cuda else net_linear3,
'acc': acc,
}
if not os.path.isdir('checkpoint'):
os.mkdir('checkpoint')
torch.save(state, './checkpoint/ckpt_vgg16_layer_by_layer_v1.t7')
best_acc = acc
# Train+inference vs. Inference
print("Layer by layer VGG16 Test \n")
test()
Where is the problem?? |
st104135 | What happens if you increase the num_workers of your dataloader to 4*torch.cuda.device_count()?
EDIT: Could you show the output of nvidia-smi? |
st104136 | And if you increase the batch_size to the maximum they are all equally used or is GPU 0 running out of memory? |
st104137 | Have you tried finding the maximal batch size that fits on a single GPU and multiplying that value by 8, or have you simply used an arbitrarily high batch size? |
st104138 | I just used a large size and then reduced it.
I think the problem is GPU 0, but I don’t know how to solve it. |
st104139 | Hello,
I have a problem with the GRU layers in my model: they return only negative values during testing, and hence the computed test loss is NaN.
Any suggestions?
Thanks |
st104140 | I have a lot of code that runs fine under 0.3, but when I use it directly in 0.4, GPU memory usage increases every time I call backward. I tried using item() instead of data[0] and plain tensors instead of Variable, and for training the no_grad feature should not be used. So are there any other possibilities?
Thanks!!! |
st104141 | Your changes sound good. Is the memory increasing until you get an OOM error?
If so, could you post a small runnable code snippet? |
st104142 | I saw there is no CDF function for the Student-t distribution in PyTorch. Has anybody implemented one with stable numerical performance for log_cdf? TensorFlow has log_cdf for the Student-t distribution; I hope there will be one for PyTorch. |
st104143 | Hi,
I try to compile pytorch from source code by python setup.py install, but encounter error:
+ popd
~/liujiaxiang/pytorch/third_party ~/liujiaxiang/pytorch
+ local lib_prefix=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libgloo
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ popd
~/liujiaxiang/pytorch
+ for arg in '"$@"'
+ [[ THD == \n\c\c\l ]]
+ [[ THD == \g\l\o\o ]]
+ [[ THD == \c\a\f\f\e\2 ]]
+ [[ THD == \T\H\D ]]
+ pushd /home/liyukun01/liujiaxiang/pytorch/torch/lib
~/liujiaxiang/pytorch/torch/lib ~/liujiaxiang/pytorch
+ build THD
+ mkdir -p build/THD
+ pushd build/THD
~/liujiaxiang/pytorch/torch/lib/build/THD ~/liujiaxiang/pytorch/torch/lib ~/liujiaxiang/pytorch
+ BUILD_C_FLAGS=
+ case $1 in
+ BUILD_C_FLAGS=' -DTH_INDEX_BASE=0 -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/TH" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THC" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions'
+ cmake ../../THD -DCMAKE_MODULE_PATH=/home/liyukun01/liujiaxiang/pytorch/cmake/Modules_CUDA_fix -DTorch_FOUND=1 -DCMAKE_INSTALL_PREFIX=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install '-DCMAKE_C_FLAGS= -DTH_INDEX_BASE=0 -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/TH" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THC" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions ' '-DCMAKE_CXX_FLAGS= -DTH_INDEX_BASE=0 -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/TH" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THC" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -fexceptions -std=c++11 ' '-DCMAKE_EXE_LINKER_FLAGS=-L"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS=-L"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN ' -DCMAKE_INSTALL_LIBDIR=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib '-DCUDA_NVCC_FLAGS= -DTH_INDEX_BASE=0 -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/TH" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THC" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1' -DCUDA_DEVICE_DEBUG=0 -DCMAKE_PREFIX_PATH=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install '-Dcwrap_files=/home/liyukun01/liujiaxiang/pytorch/torch/lib/ATen/Declarations.cwrap;/home/liyukun01/liujiaxiang/pytorch/torch/lib/THNN/generic/THNN.h;/home/liyukun01/liujiaxiang/pytorch/torch/lib/THCUNN/generic/THCUNN.h;/home/liyukun01/liujiaxiang/pytorch/torch/lib/ATen/nn.yaml' -DTH_INCLUDE_PATH=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/include -DTH_LIB_PATH=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib -DTH_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTH.so -DCAFFE2_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libcaffe2.so -DTHNN_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTHNN.so -DTHCUNN_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTHCUNN.so -DTHS_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTHS.so -DTHC_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTHC.so -DTHCS_LIBRARIES=/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/lib/libTHCS.so -DTH_SO_VERSION=1 -DTHC_SO_VERSION=1 -DTHNN_SO_VERSION=1 -DTHCUNN_SO_VERSION=1 -DTHD_SO_VERSION=1 
-DUSE_CUDA=1 -DNO_NNPACK=0 -DNCCL_EXTERNAL=1 -Dnanopb_BUILD_GENERATOR=0 -DCMAKE_DEBUG_POSTFIX= -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=1
CMake Warning (dev) at /home/liyukun01/liujiaxiang/pytorch/cmake-3.12.0-rc1-Linux-x86_64/share/cmake-3.12/Modules/CMakeGenericSystem.cmake:4 (include):
File
/home/liyukun01/liujiaxiang/pytorch/cmake-3.12.0-rc1-Linux-x86_64/share/cmake-3.12/Modules/CMakeGenericSystem.cmake
includes
/home/liyukun01/liujiaxiang/pytorch/cmake/Modules_CUDA_fix/CMakeInitializeConfigs.cmake
(found via CMAKE_MODULE_PATH) which shadows
/home/liyukun01/liujiaxiang/pytorch/cmake-3.12.0-rc1-Linux-x86_64/share/cmake-3.12/Modules/CMakeInitializeConfigs.cmake.
This may cause errors later on .
Policy CMP0017 is not set: Prefer files from the CMake module directory
when including from there. Run "cmake --help-policy CMP0017" for policy
details. Use the cmake_policy command to set the policy and suppress this
warning.
Call Stack (most recent call first):
/home/liyukun01/liujiaxiang/pytorch/cmake-3.12.0-rc1-Linux-x86_64/share/cmake-3.12/Modules/CMakeSystemSpecificInformation.cmake:21 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find Gloo (missing: Gloo_LIBRARY)
-- Caffe2: Found protobuf with new-style protobuf targets.
-- Caffe2: Protobuf version 3.5.0
-- Could NOT find CUDA (missing: CUDA_NVCC_EXECUTABLE) (Required is at least version "7.0")
CMake Warning at /home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/share/cmake/Caffe2/public/cuda.cmake:11 (message):
Caffe2: CUDA cannot be found. Depending on whether you are building Caffe2
or a Caffe2 dependent library, the next warning / error will give you more
info.
Call Stack (most recent call first):
/home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/share/cmake/Caffe2/Caffe2Config.cmake:79 (include)
CMakeLists.txt:48 (FIND_PACKAGE)
CMake Error at /home/liyukun01/liujiaxiang/pytorch/torch/lib/tmp_install/share/cmake/Caffe2/Caffe2Config.cmake:81 (message):
Your installed Caffe2 version uses CUDA but I cannot find the CUDA
libraries. Please set the proper CUDA prefixes and / or install CUDA.
Call Stack (most recent call first):
CMakeLists.txt:48 (FIND_PACKAGE)
-- Configuring incomplete, errors occurred!
See also "/home/liyukun01/liujiaxiang/pytorch/torch/lib/build/THD/CMakeFiles/CMakeOutput.log".
See also "/home/liyukun01/liujiaxiang/pytorch/torch/lib/build/THD/CMakeFiles/CMakeError.log".
Failed to run 'bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 nanopb libshm gloo THD c10d'
This is the env:
system = CentOS release 6.3 (Final) Kernel \r on an \m
gcc = gcc (GCC) 4.8.2
What can I do to fix this compile problem?
Thanks in advance. |
st104144 | What’s the commit number you used? We merged a couple patches recently on build systems. Could you pulling from master and build again? If you still see some error, it’d be great if you can give us the full log too. Thanks! |
st104145 | The commit number is fed44cb1b37fc112357b35e1e178a6ecc824cfdb
I will try to pull the master and compile. |
st104146 | I have logged in to a server which has 4 NVIDIA 1080 GPUs. I ran nvidia-smi and found out that the global memory of GPU 0 is almost full but other GPUs have lot of free global memory. The status is as follows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:02:00.0 Off |                  N/A |
| 68%   87C    P2   181W / 250W |  10752MiB / 11178MiB |      98%     Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 29%   63C    P8    22W / 250W |     25MiB / 11178MiB |       0%     Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:82:00.0 Off |                  N/A |
| 20%   52C    P8    17W / 250W |     25MiB / 11178MiB |       0%     Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  Off  | 00000000:83:00.0 Off |                  N/A |
| 34%   67C    P2    63W / 250W |    536MiB / 11178MiB |       0%     Default |
+-------------------------------+----------------------+----------------------+
I am trying to run the code snippet where the CNN is shallow with 3 convolution layers:
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
cnn = CNN()
cnn = cnn.to(device)
It’s clear that I want to run this on CUDA device 1, which has ample free global memory. But when I run it, I get an error in the line cnn = cnn.to(device):
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/THCTensorRandom.cu:25
Why is this so? Can somebody help me? Thanks in advance.
Some details:
OS: Ubuntu server 16.04
pytorch version: 0.4.0
python version: 3.5
package manager: pip
CUDA version: 8.0.61
CUDNN version: 7.1.02 |
st104147 | Solved by royboy in post #2 |
st104148 | Pytorch will create context on gpu 0 regardless of which gpus you end up using. You should set CUDA_VISIBLE_DEVICES.
For more detailed discussion, this link is helpful: https://github.com/pytorch/pytorch/issues/3477 |
st104149 | I am super confused here:
In all the example code on the PyTorch website, I see that we pass nn.Module as the base class of our network:
class VGG(nn.Module):
def __init__(self, vgg_name):
super(VGG, self).__init__()
# self.features = self._make_layers(cfg[vgg_name])
self.features = self._make_layers_Alireza(cfg[vgg_name])
self.classifier = nn.Linear(512, 10)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), -1)
out = self.classifier(out)
return out
def _make_layers(self, cfg):
layers = []
in_channels = 3
for x in cfg:
if x == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
nn.BatchNorm2d(x),
nn.ReLU(inplace=True)]
in_channels = x
layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
return nn.Sequential(*layers)
..
....
Then, when we want to give the input to the class, we simply do VGG(input)…
I don’t understand how this works.
What if I want to design a network whose forward part has two inputs?
Meaning:
def forward(self, x1, x2):
    ......
    return out
How does it understand that?
How do the convolution functions know what input they should use??
When we give conv2d a real input and output it makes sense, but how does it work when we just give it the size??
nn.Conv2d is documented as:
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
but we don’t give the input to it in the class; in fact we give the size of the input to it…!!
like: nn.Conv2d(in_channels, x, kernel_size=3, padding=1) (HOW is that possible???) |
st104150 | You need to first construct a VGG network instance:
vgg = VGG("name")
This will call the __init__ function. Then
vgg(x)
will call vgg.forward(x) automatically. If you need two parameters, simply call
vgg(x1, x2)
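A minimal sketch of a module whose forward takes two inputs (the layer sizes here are arbitrary):
import torch
import torch.nn as nn

class TwoInputNet(nn.Module):
    def __init__(self):
        super(TwoInputNet, self).__init__()
        self.branch1 = nn.Linear(10, 4)
        self.branch2 = nn.Linear(20, 4)

    def forward(self, x1, x2):
        # each input goes through its own branch; the results are fused
        return self.branch1(x1) + self.branch2(x2)

net = TwoInputNet()
out = net(torch.randn(8, 10), torch.randn(8, 20))  # calls forward(x1, x2)
print(out.shape)  # torch.Size([8, 4]) |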
st104151 | conv = nn.Conv2d(in_channels, x, kernel_size=3, padding=1)
are the parameters you use to instantiate a Conv2d instance. When you want to create the computation graph, call the forward method.
output = conv.forward(input)
which is equivalent to
output = conv(input) |
st104152 | based on the pytorch website (LINK 2), the vgg is defined as following:
class VGG(nn.Module):
def __init__(self, features, num_classes=1000, init_weights=True):
super(VGG, self).__init__()
self.features = features
self.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes),
)
if init_weights:
self._initialize_weights()
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.constant_(m.bias, 0)
def make_layers(cfg, batch_norm=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
In the function make_layers, how can I see the input?
For example, if I want to see len(x.size()), where x is the input of make_layers, how should I do that?
I cannot do the following, because it gives me an error:
def make_layers(cfg, batch_norm=False):
len(x.size()) # x is the input
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
error:
len(x.size()) # x is the input
UnboundLocalError: local variable ‘x’ referenced before assignment
But for some weird reason it can do the convolution; I don’t understand how…
Help please |
st104153 | Hi,
In define-by-run libraries, we don’t need to specify the input shape/size at initialization.
You can check the input size in the forward method of an nn.Module; however, nn.Sequential automatically defines the forward method and doesn’t require us to write the forward computation ourselves.
These VGGs are defined using nn.Sequential.
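If you just want to inspect shapes inside an nn.Sequential, a small pass-through module works; a sketch:
import torch
import torch.nn as nn

class PrintShape(nn.Module):
    """Identity module that prints the size of whatever flows through it."""
    def forward(self, x):
        print(len(x.size()), x.size())
        return x

model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), PrintShape(), nn.ReLU())
out = model(torch.randn(1, 3, 32, 32))  # prints: 4 torch.Size([1, 8, 32, 32]) |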
st104154 | I’m trying to implement a “BatchScale” Module that is akin to BatchNorm, except that there is no shift/mean/bias, just a scaling/weighting and a running variance. Initially I thought I could just copy and modify the _BatchNorm class, but it calls the C version of batch_norm and as far as I can tell, that function will always compute and apply a running mean.
Any suggestions? |
st104155 | I tried writing a class and it seems straightforward enough, but the final line in forward(), which computes y = x / sqrt(var + eps) * gamma, gives the following error, and I’m struggling to understand the problem.
*** RuntimeError: div() received an invalid combination of arguments - got (torch.FloatTensor), but expected one of:
(float other)
didn’t match because some of the arguments have invalid types: (torch.FloatTensor)
(Variable other)
didn’t match because some of the arguments have invalid types: (torch.FloatTensor)
Here’s is my code. Any advice would be much appreciated!
class Scale(Module):
def __init__(self, num_features, eps=1e-5, momentum=0.1, linear=True):
super(Scale, self).__init__()
self.num_features = num_features
self.linear = linear
self.eps = eps
self.momentum = momentum
if linear:
self.weight = Parameter(torch.Tensor(num_features))
else:
self.register_parameter('weight', None)
self.register_buffer('running_var', torch.ones(num_features))
self.reset_parameters()
def reset_parameters(self):
self.running_var.fill_(1)
if self.linear:
self.weight.data.uniform_()
def forward(self, input):
if self.training:
# Update variance estimate if in training mode
batch_var = input.var(dim=0).data
self.running_var = (1-self.momentum)*self.running_var + self.momentum*batch_var
return input / (self.running_var + self.eps).sqrt() * self.weight
def __repr__(self):
return ('{name}({num_features}, eps={eps}, momentum={momentum},'
' linear={linear})'
.format(name=self.__class__.__name__, **self.__dict__)) |
st104156 | Oops. I posted too soon. Wrapping the variance value in a Variable fixed the problem. Just had to change the last line of forward() to this:
return input / Variable((self.running_var + self.eps).sqrt()) * self.weight |
st104157 | Sorry for throwing in multiple questions:
Does PyTorch support setting the same seed across all devices (CPUs/GPUs)? Do these devices all use the same PRNG? If not, are there any other ways to make layers like dropout deterministic? |
st104158 | Solved by smth in post #2 |
st104159 | torch.manual_seed(seed) will set the same seed on all devices.
However, setting the same seed on CPU and on GPU doesn’t mean they spew the same random number sequences – they both have fairly different PRNGs (for efficiency reasons).
As guidance, treat the PyTorch PRNGs as determinism helpers, but not so much as having PRNG consistency across device types.
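A quick illustration of both points (assumes a CUDA device is available):
import torch

torch.manual_seed(0)
a = torch.rand(3, device='cuda')
torch.manual_seed(0)
b = torch.rand(3, device='cuda')
print(torch.equal(a, b))  # True: same device + same seed -> same sequence

torch.manual_seed(0)
c = torch.rand(3)  # CPU draw; it will NOT match the CUDA draws above |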
st104160 | Can I get deterministic results for layers like dropout just on GPUs with Pytorch? |
st104161 | yes, if you fix the seed, dropout will be deterministic in pytorch, even on GPUs |
st104162 | How do I go about using the same PRNG on different PyTorch devices? Can I switch the backend for different layers? Also, what sort of backends are available in PyTorch for nn layers? CUDNN, MKL, torch (CPU) and torch (GPU)? |
st104163 | I’m training a sequence model where it’s much more efficient to pass in multiple items, à la backprop through time. I’ve got a special padding character that I’m using to split the sequences up, which triggers resets within the network. But when I looked at how the embedding layer handles padding characters, I think it only prevents the embeddings from changing, not the entire network. I’d like to have the network untouched for padding, as padding doesn’t occur in my live data.
I’m wondering if I can create a simple layer where the forward is a pass-through that sets a flag when it encounters a padding character, and the backward is either a pass-through or a zeroing depending on the flag. The chain rule should work in this case, preventing the entire network from updating.
Has this been done before? I don’t see why it wouldn’t work, but I’ve never seen this type of layer used before.
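Something along those lines can be sketched with a custom autograd.Function; this is an untested sketch, and it assumes a per-element padding mask is available:
import torch

class GateGrad(torch.autograd.Function):
    """Identity in the forward pass; zeroes gradients where pad_mask is set."""
    @staticmethod
    def forward(ctx, x, pad_mask):
        ctx.save_for_backward(pad_mask)
        return x

    @staticmethod
    def backward(ctx, grad_output):
        pad_mask, = ctx.saved_tensors
        grad = grad_output.clone()
        grad[pad_mask] = 0   # block gradient flow coming from padding steps
        return grad, None    # no gradient w.r.t. the mask itself

x = torch.randn(4, 5, requires_grad=True)
pad_mask = torch.zeros(4, 5, dtype=torch.uint8)  # 1 where the input was padding
y = GateGrad.apply(x, pad_mask) |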
st104164 | Say I have my img data in the format:
x_train, y_train, x_test, y_test
How can I turn that into something pytorch can use? I tried
train = TensorDataset(x_train, y_train) and using a dataloader, but that gives me a TypeError |
st104165 | How did you define x_train and y_train?
Using torch.tensors should work:
x_train = torch.randn(10, 3, 24, 24)
y_train = torch.empty(10, dtype=torch.long).random_(0, 10)
dataset = TensorDataset(x_train, y_train)
x, y = dataset[0] |
st104166 | If I have data of shape [50000, 32, 32, 3], how do I convert it to [50000, 3, 32, 32] for torch tensors?
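One way (a sketch): permute the dimension order from NHWC to NCHW:
import torch

x = torch.randn(50000, 32, 32, 3)        # NHWC
x = x.permute(0, 3, 1, 2).contiguous()   # NCHW: torch.Size([50000, 3, 32, 32]) |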
st104167 | For a multi-class classification problem, what should the structure of the labels be, and what loss function should I use? Currently the labels are [50000, 1] with 10 classes (I’m thinking maybe it should be [50000, 10]?), and with nn.CrossEntropyLoss() I get the error “multi-target not supported”.
Thanks |
st104168 | Your targets should be of the size [batch_size], so just squeeze your tensor and it should work.
target = target.squeeze(1)
CrossEntropyLoss is fine. Note that you have to provide the logits, i.e. your model shouldn’t have a non-linearity as the last layer.
You can find more information on the criterion in the docs. |
st104169 | Using PyTorch, I am able to create an autoencoder like the one given below. How do I save the output matrix to a .csv file after every layer?
class Autoencoder(nn.Module):
def __init__(self, ):
super(Autoencoder, self).__init__()
self.fc1 = nn.Linear(10000, 5000)
self.fc2 = nn.Linear(5000, 2000)
self.fc3 = nn.Linear(2000, 500)
self.fc4 = nn.Linear(500, 100)
self.fc5 = nn.Linear(100, 500)
self.fc6 = nn.Linear(500, 2000)
self.fc7 = nn.Linear(2000, 5000)
self.fc8 = nn.Linear(5000, 10000)
self.relu = nn.ReLU()
def forward(self, x):
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.relu(self.fc3(x))
x = self.relu(self.fc4(x))
x = self.relu(self.fc5(x))
x = self.relu(self.fc6(x))
x = self.relu(self.fc7(x))
x = self.relu(self.fc8(x))
return x |
st104170 | Solved by ptrblck in post #4 |
st104171 | def forward(self, x):
x1 = self.relu(self.fc1(x))
x2 = self.relu(self.fc2(x1))
x3 = self.relu(self.fc3(x2))
x4 = self.relu(self.fc4(x3))
x5 = self.relu(self.fc5(x4))
x6 = self.relu(self.fc6(x5))
x7 = self.relu(self.fc7(x6))
x8 = self.relu(self.fc8(x7))
return x1 x2 x3 x4 x5 x6 x7 x8
Then when you call the model, you can save all the outputs from x1 to x8 in whatever format you want. |
st104172 | Can you please write a full code? I am not sure how to save the outputs after calling the forward() function.
@ptrblck sorry for disturbing… Can you please help? |
st104173 | You can use @Supreet’s code to return the outputs (just add commas between the returned tensors).
Once you grab the outputs, you could save them to a .csv.
Here is a small example:
import torch
import torch.nn as nn
import numpy as np

class Autoencoder(nn.Module):
def __init__(self, ):
super(Autoencoder, self).__init__()
self.fc1 = nn.Linear(100, 50)
self.fc2 = nn.Linear(50, 20)
self.fc3 = nn.Linear(20, 5)
self.fc4 = nn.Linear(5, 1)
self.fc5 = nn.Linear(1, 5)
self.fc6 = nn.Linear(5, 20)
self.fc7 = nn.Linear(20, 50)
self.fc8 = nn.Linear(50, 100)
self.relu = nn.ReLU()
def forward(self, x):
x1 = self.relu(self.fc1(x))
x2 = self.relu(self.fc2(x1))
x3 = self.relu(self.fc3(x2))
x4 = self.relu(self.fc4(x3))
x5 = self.relu(self.fc5(x4))
x6 = self.relu(self.fc6(x5))
x7 = self.relu(self.fc7(x6))
x8 = self.relu(self.fc8(x7))
return x1, x2, x3, x4, x5, x6, x7, x8
model = Autoencoder()
x = torch.randn(1, 100)
outputs = model(x)
for i, output in enumerate(outputs):
np.savetxt(
'output{}.csv'.format(i),
output.detach().numpy(),
delimiter=',') |
st104174 | Thanks a lot. It works… I just made one change. I converted my model to double format instead of Float. |
st104175 | Hey, one more question. The code above works when I want to get the representation of the original data in the middle layers. What if I want to know the weights of the middle layers?
For example, as per @ptrblck’s code, can I get the 50x20 and 20x5 matrices containing the weights?
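Is reading each layer’s weight attribute the right way (just a guess on my part)? The shapes would be transposed, since nn.Linear stores its weight as [out_features, in_features]:
w2 = model.fc2.weight.detach().numpy()   # shape [20, 50]
w3 = model.fc3.weight.detach().numpy()   # shape [5, 20]
np.savetxt('fc2_weight.csv', w2, delimiter=',') |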
st104176 | I want to use a 100K-dimensional tensor as input, but it is sparse, so storing it densely uses a lot of memory. Will PyTorch support sparse tensors? |
st104177 | What do you want to give it as an input to?
If you elaborate your use-case, we can help better.
We have some sparse tensor support in torch.sparse |
st104178 | The dataset is a 100K*100K adjacency matrix which represents the relationships between network nodes. The input is some rows (a mini-batch) of the adjacency matrix, e.g. [0 0 0 0 0 0 1 0…… 0 0 0 1 0 0 0 0]. I want to feed the batch into fully connected layers. If the tensor were dense, it would use too much memory.
I have read the docs for torch.sparse, but I’m confused about how to use torch.sparse.FloatTensor as input. Do I use it just like a conventional tensor? And it doesn’t support CUDA yet, does it?
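From the docs, I understand construction works roughly like this (if I read them right), with indices and values for the non-zero entries:
i = torch.LongTensor([[0, 1, 1],
                      [2, 0, 2]])    # 2 x nnz indices (row, col)
v = torch.FloatTensor([3., 4., 5.])  # nnz values
sp = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
dense = sp.to_dense()                # back to a regular tensor
But can I feed sp straight into a fully connected layer? |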
st104179 | Hello, I have a similar problem: I would like to reuse the code from an introductory tutorial (this one: https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html) but with some real data.
My dataset is the IMDB movie reviews, and I transform it into a bag-of-words representation with tf-idf entries, using CountVectorizer followed by TfidfTransformer, both from scikit-learn.
But I cannot directly use the rows of the sparse matrix I obtain, because they don’t have a “dim” attribute. More precisely, a row of the output of a TfidfTransformer object is of type
<1x45467 sparse matrix of type ‘<class ‘numpy.float64’>’
and you cannot pass it as an input to torch.nn.functional.linear.
Any suggestions? Could I transform my input into something like torch.sparse.FloatTensor?
By the way, I’m not trying to build the most efficient model; it’s just to get comfortable with the PyTorch API.
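For now I can work around it by densifying each row (tfidf here stands for my transformer output), though I’m not sure this is the intended approach:
import torch
row = tfidf[0]                                # 1 x 45467 scipy sparse row
x = torch.from_numpy(row.toarray()).float()   # dense tensor of shape (1, 45467) |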
st104180 | Assume I have two folders, a 0 folder and a 1 folder, which together hold 1,000,000,000 images. That is too many to fit in a Python list or np.array.
How should I write my Dataset class? I have no idea how to iterate over the two folders to get those images. I thought of using the folder name to determine the label, 0 or 1.
Does anyone have advice? Please help!
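Here is what I have so far; it builds a list of (path, label) pairs from the folder names (root is a placeholder, and I doubt even the path list fits in memory at this scale):
import os
from PIL import Image
from torch.utils.data import Dataset

class TwoFolderDataset(Dataset):
    def __init__(self, root, transform=None):
        # derive the label from the folder name, 0 or 1
        self.samples = []
        for label in (0, 1):
            folder = os.path.join(root, str(label))
            for name in os.listdir(folder):
                self.samples.append((os.path.join(folder, name), label))
        self.transform = transform
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        # load images lazily so only the paths are kept in memory
        path, label = self.samples[idx]
        img = Image.open(path).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, label |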
st104181 | Hi all,
I’m using PyTorch 0.4 on my Windows 10 laptop - but deploying on Ubuntu Linux 16.04 (also PyTorch 0.4). Both OSs use CUDA 9.0 (V9.0.176). Yet I’m getting very (very) slightly different results. That actually produces some different classifications in my multi-class classification problem but I’m mainly bothered by the inference not being bit-exact.
Is this behavior something I should expect? Can it be fixed?
Thanks,
Ran |
st104182 | Hi,
I guess this is expected. Default cudnn algorithms are not deterministic.
PyTorch CPU random numbers should be the same across platforms, but I’m not sure CUDA random numbers will be. The same goes for python/numpy random.
I’m afraid it’s going to be very time-consuming to make it bit-exact, and you might end up with a non-negligible slowdown from not being able to use cudnn kernels and other fast algorithms that are not bit-perfect.
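If you want to get as close as possible anyway, the usual knobs are these (with no guarantee of cross-platform bit-exactness):
import torch
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True   # restrict cudnn to deterministic algorithms
torch.backends.cudnn.benchmark = False      # disable the autotuner |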
st104183 | Hi, I can’t resume properly from a checkpoint. Each time I try to resume, the training statistics seem to be either invalid or missing, since the accuracy gets very bad!
For instance, I save a checkpoint at epoch 80 and get 62.5% accuracy. When I resume from this very checkpoint, the accuracy drops to 34%!
What am I doing wrong here? Here is the snippet for saving and resuming:
# optionally resume from a checkpoint
if args.resume:
if os.path.isfile(args.resume):
print_log("=> loading checkpoint '{}'".format(args.resume), log)
checkpoint = torch.load(args.resume)
args.start_epoch = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print_log("=> loaded checkpoint '{}' (epoch {})".format(args.resume, checkpoint['epoch']), log)
else:
print_log("=> no checkpoint found at '{}'".format(args.resume), log)
save_checkpoint({
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
'optimizer' : optimizer.state_dict(),
}, is_best, filename, bestname)
# measure elapsed time
epoch_time.update(time.time() - start_time)
start_time = time.time()
def save_checkpoint(state, is_best, filename, bestname):
torch.save(state, filename)
if is_best:
shutil.copyfile(filename, bestname)
Any help is greatly appreciated |
st104184 | Solved by Shisho_Sama in post #5 |
st104185 | The code snippet looks fine to me.
Could you provide some information regarding the training? I assume you are observing the training loss and accuracy, saving the model, and after resuming you see a higher training loss.
I created a small example some time ago and could not reproduce the issue (from another thread).
Probably we would have to inspect other parts of your code to see if something goes wrong. |
st104186 | Hi, thank you very much; here is the link to the full source code: main.py
In the meantime, after posting the question, I had an idea: it occurred to me that it might have something to do with the BatchNormalization statistics not being properly loaded. I’m not sure if this is the case, so how can I check for that? |
st104187 | I couldn’t find any obvious mistakes, but could it be related to the AverageMeter, which is restarted?
Could you train for a few epochs, reset the AverageMeter and have a look at your loss? |
st104188 | It’s not the AverageMeter, since the accuracy drops hugely as well.
I found the issue: the scheduler needed to be saved and restored upon resuming as well.
It is explained here.
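In other words, something along these lines (a sketch; if your PyTorch version lacks scheduler.state_dict(), saving and restoring scheduler.last_epoch manually works too):
# when saving, add the scheduler state to the checkpoint dict
state['scheduler'] = scheduler.state_dict()
# when resuming
scheduler.load_state_dict(checkpoint['scheduler']) |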
st104189 | When I compile from source, I hit the following error (libgfortran.so.4 not found).
I am just running the following command at commit fc22bf3e82723178015708ae1265ec14710c2dec:
python setup.py build develop
Is there a good way to solve this issue?
[ 92%] Built target caffe2
[ 92%] Linking CXX executable ../bin/verify_api_visibility
[ 92%] Linking CXX executable ../bin/tbb_init_test
[ 92%] Linking CXX executable ../bin/undefined_tensor_test
[ 93%] Linking CXX executable ../bin/scalar_tensor_test
/usr/bin/ld: warning: libgfortran.so.4, needed by //opt/conda/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link)
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/verify_api_visibility.dir/build.make:95: recipe for target 'bin/verify_api_visibility' failed
make[2]: *** [bin/verify_api_visibility] Error 1
CMakeFiles/Makefile2:896: recipe for target 'caffe2/CMakeFiles/verify_api_visibility.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/verify_api_visibility.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: warning: libgfortran.so.4, needed by //opt/conda/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link)
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/tbb_init_test.dir/build.make:95: recipe for target 'bin/tbb_init_test' failed
make[2]: *** [bin/tbb_init_test] Error 1
CMakeFiles/Makefile2:858: recipe for target 'caffe2/CMakeFiles/tbb_init_test.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/tbb_init_test.dir/all] Error 2
/usr/bin/ld: warning: libgfortran.so.4, needed by //opt/conda/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link)
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/undefined_tensor_test.dir/build.make:95: recipe for target 'bin/undefined_tensor_test' failed
make[2]: *** [bin/undefined_tensor_test] Error 1
CMakeFiles/Makefile2:934: recipe for target 'caffe2/CMakeFiles/undefined_tensor_test.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/undefined_tensor_test.dir/all] Error 2
/usr/bin/ld: warning: libgfortran.so.4, needed by //opt/conda/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link)
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7'
//opt/conda/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/scalar_tensor_test.dir/build.make:95: recipe for target 'bin/scalar_tensor_test' failed
make[2]: *** [bin/scalar_tensor_test] Error 1
CMakeFiles/Makefile2:972: recipe for target 'caffe2/CMakeFiles/scalar_tensor_test.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/scalar_tensor_test.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 nanopb libshm gloo THD c10d' |
st104190 | Your installed OpenBLAS needs gfortran but cannot find it.
Is there a reason you installed OpenBLAS instead of the recommended MKL?
Did you follow instructions from: https://github.com/pytorch/pytorch#from-source |
st104191 | How should EncoderDecoder with mixin of Encoder and Decoder be implemented?
In the following snippet self.enc is not registered as a module.
class Encoder(nn.Module):
def __init__(self):
super().__init__()
self.enc = nn.Linear(2,2)
class Decoder(nn.Module):
def __init__(self):
super().__init__()
self.dec = nn.Linear(2,2)
class EncoderDecoder(Encoder, Decoder):
def __init__(self):
super(EncoderDecoder, self).__init__()
super(Encoder, self).__init__()
print(self.modules)
EncoderDecoder()
output:
EncoderDecoder(
(dec): Linear(in_features=2, out_features=2, bias=True)
) |
st104192 | Solved by ptrblck in post #2 |
st104193 | You could remove the super(Encoder, self).__init__().
Generally you could inherit from multiple classes, but I think it would be easier to just register the Modules in EncoderDecoder:
class Encoder(nn.Module):
def __init__(self):
super(Encoder, self).__init__()
self.enc = nn.Linear(2,2)
class Decoder(nn.Module):
def __init__(self):
super(Decoder, self).__init__()
self.dec = nn.Linear(2,2)
class EncoderDecoder(nn.Module):
def __init__(self):
super(EncoderDecoder, self).__init__()
self.enc = Encoder()
self.dec = Decoder() |
st104194 | Hey, I made some minor modifications to the CNN tutorial, and it trains fine, but when testing, the images and labels aren’t being sent to the GPU? Thanks for any help:
def forward(self, x):
x = self.pool1(F.leaky_relu(self.conv1(x)))
x = self.pool2(F.leaky_relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.leaky_relu(self.fc1(x))
x = F.leaky_relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=.01, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 500 == 499: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 500))
running_loss = 0.0
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
print(images.type())
images.to(device), labels.to(device)
print(images.type())
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
torch.FloatTensor
torch.FloatTensor
Traceback (most recent call last):
File “trainNet.py”, line 111, in
outputs = net(images)
File “/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 491, in call
result = self.forward(*input, **kwargs)
File “trainNet.py”, line 61, in forward
x = self.pool1(F.leaky_relu(self.conv1(x)))
File “/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 491, in call
result = self.forward(*input, **kwargs)
File “/asugar/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py”, line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 ‘weight’ |
st104195 | You are missing the assignment:
images, labels = images.to(device), labels.to(device) |
st104196 | Suppose I have a 3D tensor A of size num_class * batch_size * dim, and a mask B of size num_class * batch_size. I want to mask A using the B tensor, but I really don’t know how to do it. Can anyone help me?
Thanks. |
st104197 | What do you mean by masking exactly?
Do you want to expand B to the shape of A?
For that, A = B.unsqueeze(-1).expand_as(A) would do!
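If by masking you mean zeroing out the entries of A where B is 0, broadcasting handles it (sizes are placeholders):
A = torch.randn(num_class, batch_size, dim)
B = (torch.rand(num_class, batch_size) > 0.5).float()  # 1.0 = keep, 0.0 = mask
masked = A * B.unsqueeze(-1)  # broadcasts over the last dimension |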
st104198 | I’m looking for something like the following TensorFlow equivalent:
tf.TensorArray.scatter(indices, value, name=None)
Scatter the values of a Tensor in specific indices of a TensorArray.
Args:
indices: A 1-D Tensor taking values in [0, max_value). If the TensorArray is not dynamic, max_value=size().
value: (N+1)-D. Tensor of type dtype. The Tensor to unpack.
name: A name for the operation (optional).
Returns:
A new TensorArray object with flow that ensures the scatter occurs. Use this object all for subsequent operations.
Raises:
ValueError: if the shape inference fails. |
st104199 | I am using torchvision.transforms.RandomRotation to try to create random rotations for both my input and ground truth. How do I make sure that RandomRotation applies the same randomness to both sets? Currently, when I show my images, I can see that X and Y in each pair get different rotations.
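One idea I have is to draw the angle myself and apply it to both images with the functional API (assuming my torchvision version has it); is that the intended approach?
import random
import torchvision.transforms.functional as TF

angle = random.uniform(-30, 30)  # one draw per (x, y) pair
x_img = TF.rotate(x_img, angle)
y_img = TF.rotate(y_img, angle) |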