id | text |
---|---|
st104500 | Hi,
I'm new to deep learning and PyTorch. I have a classification problem and I want to try different variations of RNN cells to solve it. To describe my dataset: it has the shape [examples, dates, features], i.e. a 3D tensor where the first dimension is the examples, the second is the dates at which the observations (features) are taken, and the last is the number of features; for now it is [9230, 23, 20]. I want to train my first RNN with a simple RNN cell (tanh cell) to classify my dataset (the number of output classes is 10), but I can't figure out how to feed my dataset into the classifier properly. Any help is appreciated, thank you. |
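A minimal sketch of one possible way to feed such a [examples, dates, features] tensor into nn.RNN with batch_first=True plus a linear classification head (the hidden size and batch size are made-up values, not from the post):
import torch
import torch.nn as nn

examples, dates, features, num_classes = 9230, 23, 20, 10
rnn = nn.RNN(input_size=features, hidden_size=64, nonlinearity='tanh', batch_first=True)
classifier = nn.Linear(64, num_classes)

x = torch.randn(32, dates, features)                      # one mini-batch of 32 examples
out, h_n = rnn(x)                                         # out: (batch, dates, hidden)
logits = classifier(out[:, -1, :])                        # classify from the last time step
targets = torch.randint(0, num_classes, (32,), dtype=torch.long)
loss = nn.CrossEntropyLoss()(logits, targets)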
st104501 | Assume you have a tensor y of size M x 10 and another tensor x of size M. The values of both x and y are integers between 0 and 9.
I want to get a new tensor z of size M which is of the following form:
z[k]= y[k][x[k]]
Is it possible to do this without using a for loop? |
st104502 | Solved by ptrblck in post #2
You could use torch.gather for this:
M = 10
y = torch.empty(M, 10, dtype=torch.long).random_(10)
x = torch.empty(M, dtype=torch.long).random_(10)
w = torch.gather(y, 1, x.unsqueeze(1))
w = w.view(-1)
z = torch.zeros(M, dtype=torch.long)
for k in range(M):
    z[k] = y[k, x[k]]
print((z == w)) |
st104503 | You could use torch.gather for this:
M = 10
y = torch.empty(M, 10, dtype=torch.long).random_(10)
x = torch.empty(M, dtype=torch.long).random_(10)
w = torch.gather(y, 1, x.unsqueeze(1))
w = w.view(-1)
z = torch.zeros(M, dtype=torch.long)
for k in range(M):
    z[k] = y[k, x[k]]
print((z == w)) |
st104504 | Hi all,
Is there any function to pre-allocate the necessary GPU memory and keep it fixed for the entire training loop?
Thanks |
st104505 | This does not exists.
But pytorch is using a custom memory allocator on the GPU that does not release memory right away and will reuse it. Hence having similar behaviour. |
st104506 | Hi @albanD,
Suppose my code took 5 GB of GPU memory. I freed some of the variables using del x (say around 3gb). So now I still have the 5 GB allocated to my process (3 GB in use and 2 GB is free) and the other processes cannot use this 5 GB. Which implies I have 5 GB of memory for my process. Am I correct?
Thanks!! |
st104507 | Yes exactly. That memory will still be available for your process (but not others). |
st104508 | Hi
I tried training an LSTM network with multiple processes (but a single GPU) using torch.multiprocessing, but I can't get rid of this warning:
UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
Below is a simple piece of code that can reproduce it. I tried calling flatten_parameters after model creation (before sharing it across processes, (1) in the code). This makes no difference at all. The other place where I tried calling it was (2), where each process gets its own copy, but then the memory is not shared anymore.
Without flattening the parameters, the networks use a whole lot more memory on the GPU (around two times more). Is there a way to solve this issue and still use flatten_parameters?
I’m using Manjaro OS.
import time
import torch
from torch import nn
import torch.multiprocessing as mp
num_proc = 3
units = 1024
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.lstm = nn.LSTM(units, units, num_layers=8)

    def forward(self, x):
        return self.lstm(x)

def train(pid, model):
    time.sleep(pid + 1)
    x = torch.randn(1, 1, units).cuda()
    # (2) calling flatten_parameters here makes warning
    # go away but it also makes memory not shared anymore
    # model.lstm.flatten_parameters()
    while True:
        model(x)
        with torch.no_grad():
            for w in model.parameters():
                w += 1.
        print(pid, list(model.parameters())[0][0, 0].item(), flush=True)
        time.sleep(num_proc)

def main():
    mp.set_start_method('spawn')
    pool = mp.Pool(num_proc + 1)
    model = Net().cuda()
    # (1) this line doesn't make any difference
    # no matter if after sharing memory or before
    # model.lstm.flatten_parameters()
    model.share_memory()
    processes = []
    for pid in range(num_proc):
        processes.append(pool.apply_async(train, args=(pid, model)))
    try:
        for p in processes:
            p.get(timeout=1000000.)
    except KeyboardInterrupt:
        print('Terminating pool...')
        pool.terminate()

if __name__ == '__main__':
    main() |
st104509 | I’m also seeing this with shared memory on Ubuntu 16.04: I printed out the parameters and there is indeed no memory sharing going on if you call flatten_parameters() in the subprocesses. |
st104510 | Hi all,
Couldn’t find a better title for this, so I’ll explain it here:
I have a tensor with model outputs, e.g. in the shape [5, 3].
I’m trying to retain only a few of these, e.g. indices 2 and 4.
I’ve tried using torch.masked_select() to retain these entries, but it requires a 1D vector to operate over.
What alternatives do I have for extracting only the entries at these indices?
I have an implementation that uses torch.cat to copy the relevant entries over into a dummy tensor iteratively, but this feels hacky and I feel that there is a more efficient solution.
What are my options to implement this?
EDIT: Simplified the question |
st104511 | Solved by ptrblck in post #2
I don't really understand what kind of indices you would like to slice.
Are the indices in dim0?
If so, you could just use:
outputs = torch.randn(5, 3)
outputs[[2, 4], :] |
st104512 | I don't really understand what kind of indices you would like to slice.
Are the indices in dim0?
If so, you could just use:
outputs = torch.randn(5, 3)
outputs[[2, 4], :] |
st104513 | Thanks for your reply, this was exactly what I was looking for. Looks like I was greatly overthinking the problem! |
st104514 | Hey there, I am trying to implement Euclidean loss (from the VGG paper). It is not described well there, but what I am assuming is that it just measures the Euclidean distance between two coordinates and returns it as a sort of loss:
So, my deep CNN predicts two coordinates (an output tensor of size (batch_size, 1, 2)), let's say out = [[0.3, 0.6]] where batch_size = 1, and my ground_truth = [[0.5, 0.7]]; the Euclidean loss should then be loss = math.sqrt((0.3 - 0.5)**2) + math.sqrt((0.6 - 0.7)**2)
1st question: Do I understand the concept of Euclidean loss correctly?
2nd question: Does the implementation below look right to you? (It's the first time I've created my own loss function.)
class EuclidianError(nn.Module):
    """Implements euclidian distance as an error"""
    def forward(self, x_pred, x_ground_truth):
        err = torch.zeros(1)[0]
        for i in range(len(x_ground_truth)):  # i is index of batch
            for j in range(len(x_ground_truth[i])):  # j is index of the coordinates to predict within one item of a batch
                for k in range(len(x_ground_truth[i][j])):  # k is index of x, y position in coordinates
                    err += math.sqrt((x_ground_truth[i][j][k] - x_pred[i][j][k])**2)
        return err
Thank you for your help! |
st104515 | I’m not sure why there are 3 nested loops, since your data seems to have only 2 dimensions.
If you want to use it as a criterion without implementing the backward method, you should stick to PyTorch functions. So you should change math.sqrt to torch.sqrt.
Alternatively, you could get rid of the for loops (which can be pretty slow) and use a faster approach:
x = torch.randn(10, 2)
y = torch.randn(10, 2)
loss = torch.sqrt((x - y)**2).sum() |
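As an aside (my own note, not part of the reply above): torch.sqrt((x - y)**2) is just the element-wise absolute difference. If the per-sample Euclidean (L2) distance is what is wanted, a sketch could look like this, assuming x and y are (batch, 2) coordinate tensors:
import torch
x = torch.randn(10, 2)
y = torch.randn(10, 2)
loss = torch.sqrt(((x - y) ** 2).sum(dim=1)).sum()   # per-sample L2 distance, summed over the batch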
st104516 | I’ve been trying to define a neural net using some for-loops so that I can more easily change the structure of the neural network without having to type a bunch of extra statements. Basically, in the __init__() method of my net I have stuff like for i in range(n_layers): self.dense_layers.append(nn.Linear(10,10)) and then in the forward method I have stuff like for layer in self.dense_layers: x=layer(x). This way if I want to add more layers, I just have to increase n_layers without having to type anything else.
The net actually runs if I put some arbitrary (random) input into it; however, when I go to print the parameters in the network or print the network itself, it seems almost all the layers are missing. I see only a few layers' parameters and names in the net. Is initializing a network this way not allowed? |
st104517 | Basically I’d like to know if there’s any way (or best way) to make creating a custom neural net not require me to type out all the layers. So I can create an API like net = ConvNet(network_parameters) and have it build the neural network that I want. |
st104518 | Try to use an nn.ModuleList for self.dense_layers, which will properly register all your parameters. |
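A minimal sketch of what that could look like (the layer sizes are made up):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, n_layers):
        super(Net, self).__init__()
        self.dense_layers = nn.ModuleList(
            [nn.Linear(10, 10) for _ in range(n_layers)])  # registered as sub-modules

    def forward(self, x):
        for layer in self.dense_layers:
            x = layer(x)
        return x

net = Net(n_layers=4)
print(net)                            # all four Linear layers show up
print(len(list(net.parameters())))    # 8 tensors: weight + bias per layer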
st104519 | In your current approach you are manipulating the array in-place.
This could mess up your indices, since new_IDX could point to an already manipulated index.
Also, are the loops supposed to iterate to 64? |
st104520 | Hi,
I want to implement a customized Conv2d in which some multiplications during the convolution operation are dropped with some probability. This would happen during testing, and preferably different multiplications would be dropped for different kernels in a layer. Is there an easy way to do this? Do I need to implement the Conv2d function from scratch using PyTorch functions?
Thanks! |
st104521 | Have a look at https://github.com/szagoruyko/diracnets/blob/master/diracconv.py , it might be a way of implementing what you want |
st104522 | Is it possible to implement the conv2d layer with the unfold function? I think it should be, but I'm not sure about it. Thanks in advance for any hint or suggestion. |
st104523 | I guess this should work:
x = torch.rand(4, 3, 6, 6)
k = torch.rand(3, 10, 2, 2)
(x.unfold(2, 2, 1).unfold(3, 2, 1).unsqueeze(2) * k.unsqueeze(0).unsqueeze(3).unsqueeze(4)).sum(-1).sum(-1).sum(1)
where the batch size is 4, number of input channel is 3, number of output channel is 10, and kernel size is 2, stride is 1, in this example. |
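One way to sanity-check that this matches the built-in convolution (my own addition; note that k here is laid out as (in_channels, out_channels, kH, kW), so it needs a permute for F.conv2d):
import torch
import torch.nn.functional as F

x = torch.rand(4, 3, 6, 6)
k = torch.rand(3, 10, 2, 2)
manual = (x.unfold(2, 2, 1).unfold(3, 2, 1).unsqueeze(2)
          * k.unsqueeze(0).unsqueeze(3).unsqueeze(4)).sum(-1).sum(-1).sum(1)
builtin = F.conv2d(x, k.permute(1, 0, 2, 3).contiguous())
print(torch.allclose(manual, builtin, atol=1e-5))   # True, up to floating point error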
st104524 | Hi, what I am trying to do is the following:
I have a data array A (n, m), an index array I of the same size (n, m), and a result array R (x, n).
I am trying to scatter elements of A into R while also summing up all values which scatter to the same index.
This can be done in numpy for example in 1D arrays using np.histogram with the weights option.
This is one example in numba.cuda if it helps better explain what I want to do:
@cuda.jit
def rewireValues(R, A, I, totalthreads):
    threadidx = (cuda.threadIdx.x + (cuda.blockDim.x * cuda.blockIdx.x))
    if threadidx >= totalthreads:
        return
    nj = I.shape[1]
    nk = I.shape[2]
    idx = threadidx % nk
    source = int(threadidx / nk) % nj
    frame = int(threadidx / (nj * nk))
    target = I[frame, source, idx]
    if target == -1:
        return
    cuda.atomic.add(R, (frame, target, 0), A[frame, source, idx, 0]) |
st104525 | Solved by ptrblck in post #4
You could achieve this with tensor.scatter_add_:
A = torch.tensor([[0.7, 1.3],
                  [56.1, 7. ]])
I = torch.tensor([[1, 2],
                  [0, 0]])
torch.zeros(2, 3).scatter_add_(1, I, A) |
st104526 | In this example
A = [[0.7, 1.3],
     [56.1, 7. ]]
I = [[1, 2],
     [0, 0]]
Then if R is an array of zeros with shape (2, 4), it would end up being
R = [[0, 0.7, 1.3, 0],
     [63.1, 0, 0, 0]]
But this I guess is a peculiar example, since I assume that each row in I corresponds to a row in A. I imagine there are different ways of storing the index in I.
I see now that A, I and R must have a first dimension of the same length, since it's the number of samples, but they can have a different number of columns. |
st104527 | You could achieve this with tensor.scatter_add_:
A = torch.tensor([[0.7, 1.3],
                  [56.1, 7. ]])
I = torch.tensor([[1, 2],
                  [0, 0]])
torch.zeros(2, 3).scatter_add_(1, I, A) |
st104528 | Oh indeed! I don’t know how but in my tests in the past it didn’t work. Thank you a lot!!! |
st104529 | OK, I managed to make it work with 3D arrays now, but I have another question.
How do I define a non-existing index in the index array, i.e. if some value of A should not be summed anywhere? (In my CUDA code above I set the index to -1 and skipped it.) I guess I could make a garbage column in the result array and dump all the values I don't want there, but maybe there is a more elegant solution? |
st104530 | I'm not sure and would suggest using exactly that, i.e. a garbage column. |
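A small sketch of that garbage-column idea (my own illustration; the -1 convention is taken from the CUDA snippet above):
import torch
A = torch.tensor([[0.7, 1.3],
                  [56.1, 7.]])
I = torch.tensor([[1, -1],      # -1 means "do not scatter this value anywhere"
                  [0, 0]])
num_cols = 3
I_safe = I.clone()
I_safe[I_safe == -1] = num_cols                              # route unwanted values to an extra column
R = torch.zeros(2, num_cols + 1).scatter_add_(1, I_safe, A)
R = R[:, :num_cols]                                          # drop the garbage column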
st104531 | Hi,
I built a network with the following structure, to which I would like to input a test image and a list of maps:
class SimpleNet(nn.Module):
    def __init__(self, in_shape, num_classes):
        super(SimpleNet, self).__init__()
        ...

    def forward(self, x, maps):
        out = self.block0(x)
        print(out.size())
        assert out.size() == maps[4]
        out = torch.mul(out, maps[4])
        out = self.block1(out)
        print(out.size())
        out = torch.mul(out, maps[3])
        # print(out.size())
        out = self.block2(out)
        out = torch.mul(out, maps[2])
        # print(out.size())
        out = self.block3(out)
        out = torch.mul(out, maps[1])
        # print(out.size())
        out = self.block8(out)
        out = torch.mul(out, maps[0])
        # print(out.size())
        out = self.prob(out)
        out = torch.squeeze(out)
        # print(out.size())
        return out
When I execute the model like this,
model2 = SimpleNet(img_shape, cls_num)
model2_output = model2(img_var, select_maps)
It comes the following error:
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 57, in forward
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 68, in scatter
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 30, in scatter_kwargs
inputs = scatter(inputs, target_gpus, dim)
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 25, in scatter
return scatter_map(inputs)
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 18, in scatter_map
return tuple(zip(*map(scatter_map, obj)))
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 20, in scatter_map
return tuple(map(list, zip(*map(scatter_map, obj))))
File "/home/kang/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 16, in scatter_map
assert not torch.is_tensor(obj), "Tensors not supported in scatter."
AssertionError: Tensors not supported in scatter.
It seems that PyTorch does not support this.
How can I solve this error?
Thank you. |
st104532 | I'm running into a similar error as you when trying to run training on a network I modified. I'm an absolute beginner at PyTorch. Would you mind elaborating on your solution further? |
st104533 | If you are using a PyTorch version before 0.4.0, you would have to use Variable from torch.autograd.
For example in his case, maps must have been a tensor. So to transform to a Variable,
from torch.autograd import Variable
maps = Variable(maps)
But I am not sure why this is done though. |
st104534 | Hi, I want to load saved weight data into my model.
weight=torch.load(weights_file)
print(weight)
self.model.load_state_dict(weight)
print(self.model.state_dict())
The tensors in 'weight' have device='cuda:0'.
But the tensors in self.model.state_dict() don't.
Why? |
st104535 | It seems to me that the weights you are trying to load were saved on the GPU while your model is on the CPU; try to load the weights onto the CPU.
Here is how you load it correctly onto the CPU:
model.load_state_dict(torch.load('/path/to/weights.pt', map_location='cpu')) |
st104536 | It displayed the following:
load_state_dict() got an unexpected keyword argument 'map_location'
But your comment provides me with a clue.
I have solved this problem by loading the model onto the GPU.
self.model = self.model.to('cuda')
Thank you:) |
st104537 | map_location should go inside the torch.load() call's parentheses, as in my example above, but glad I could help you anyway. |
st104538 | I want to do something like this: https://github.com/mrharicot/monodepth/blob/master/bilinear_sampler.py
The inputs are input_image of size 1x1x256x512 and offset of size 1x1x256x512. I need to sample input_image and move each pixel in the x direction according to offset.
Can I use torch.nn.functional.grid_sample? If yes, how? Any suggestion is highly appreciated. |
st104539 | You could adjust the following code that performs backward-warping for optical flow.
https://github.com/sniklaus/pytorch-spynet/blob/master/run.py#L127
torchHorizontal = torch.linspace(-1.0, 1.0, variableInput.size(3)).view(1, 1, 1, variableInput.size(3)).expand(variableInput.size(0), 1, variableInput.size(2), variableInput.size(3))
torchVertical = torch.linspace(-1.0, 1.0, variableInput.size(2)).view(1, 1, variableInput.size(2), 1).expand(variableInput.size(0), 1, variableInput.size(2), variableInput.size(3))
self.tensorGrid = torch.cat([ torchHorizontal, torchVertical ], 1).cuda()
# end
variableGrid = torch.autograd.Variable(data=self.tensorGrid, volatile=not self.training)
variableFlow = torch.cat([ variableFlow[:, 0:1, :, :] / ((variableInput.size(3) - 1.0) / 2.0), variableFlow[:, 1:2, :, :] / ((variableInput.size(2) - 1.0) / 2.0) ], 1)
return torch.nn.functional.grid_sample(input=variableInput, grid=(variableGrid + variableFlow).permute(0, 2, 3, 1), mode='bilinear', padding_mode='border')
# end
# end
self.modulePreprocess = Preprocess()
self.moduleBasic = torch.nn.ModuleList([ Basic(intLevel) for intLevel in range(6) ])
self.moduleBackward = Backward()
# end
Disparity is essentially just optical flow without vertical motion. |
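A rough sketch of how this adapts to the horizontal-only (disparity) case (my own adaptation, not the linked code; it assumes offset holds per-pixel displacements in pixels):
import torch
import torch.nn.functional as F

N, C, H, W = 1, 1, 256, 512
input_image = torch.rand(N, C, H, W)
offset = torch.zeros(N, 1, H, W)          # horizontal displacement per pixel, in pixels

xs = torch.linspace(-1.0, 1.0, W).view(1, 1, 1, W).expand(N, 1, H, W)
ys = torch.linspace(-1.0, 1.0, H).view(1, 1, H, 1).expand(N, 1, H, W)
flow_x = offset / ((W - 1.0) / 2.0)       # convert pixel offsets to normalized [-1, 1] units
grid = torch.cat([xs + flow_x, ys], 1).permute(0, 2, 3, 1)   # N x H x W x 2
warped = F.grid_sample(input_image, grid, mode='bilinear', padding_mode='border')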
st104540 | Hi, I got the following error. Is there any clue as to what would cause 0 elements in grad_variables? Many thanks.
Traceback (most recent call last):
  File "train-ucf24.py", line 459, in <module>
    main()
  File "train-ucf24.py", line 178, in main
    train(args, net, optimizer, criterion, scheduler)
  File "train-ucf24.py", line 280, in train
    loss.backward()
  File "/home/rusu5516/.local/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/rusu5516/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: invalid argument 2: size '[1]' is invalid for input with 0 elements at /pytorch/torch/lib/TH/THStorage.c:41 |
st104541 | Hello everyone, a fellow PyTorch noob here
I want to develop a time-series future-prediction LSTM model that would take a sequence of n_in historical samples and predict the n_out future samples of a given time series (where n_in and n_out are fixed), so it should be a many-to-many LSTM network.
I have checked out the time_sequence_prediction example, and I know how to develop such a model with for loops (by segmenting the sequence over timesteps and looping through the historical data n_in times, and then through the future n_out times).
However, as n_in and n_out are fixed in my case, this approach seems a bit inefficient to me, and I was wondering whether it would be possible to vectorize the model and pass an input tensor (seq_len x batch_size x in_features) directly to the LSTM layer and get the required output.
If this is possible, how do I need to format the input tensor (should I set seq_len=n_in, or should I pad it with zeros so that seq_len=n_in+n_out) to get the required output sequence as lstm_output[-n_out:]?
Also, in that case, when loading minibatches, do I need to account for the possibility of the last minibatch of the dataset being shorter than the rest (e.g. with a dataset of 235 samples and a batch size of 50, the last batch would be 35 samples)?
Thanks in advance.
Cheers!!! |
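A minimal sketch of one vectorized setup (my own illustration with made-up sizes, not from the thread):
import torch
import torch.nn as nn

n_in, n_out, in_features, hidden = 23, 5, 1, 64
lstm = nn.LSTM(in_features, hidden)            # expects seq_len x batch x features by default
head = nn.Linear(hidden, n_out)                # map the last hidden state to the n_out future values

x = torch.randn(n_in, 35, in_features)         # a shorter final mini-batch (e.g. 35) is fine; the batch dim is not fixed
out, (h_n, c_n) = lstm(x)                      # out: seq_len x batch x hidden
pred = head(out[-1])                           # batch x n_out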
st104542 | I have 2 models denoted m1 and m2. They have a group of shared parameters denoted p. Besides p, m1 and m2 respectively have other parameters.
Now if I use optim.SGD([{'params': m1.parameters(), 'lr': 0.1}, {'params': m2.parameters(), 'lr': 0.01}]), which lr is used for the shared parameters p? |
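I don't know which lr wins in that exact call, but a sketch of one way to make it unambiguous is to give the shared parameters their own group (shared_params here is a hypothetical iterable holding the shared parameters p):
shared_ids = set(id(p) for p in shared_params)
m1_only = [p for p in m1.parameters() if id(p) not in shared_ids]
m2_only = [p for p in m2.parameters() if id(p) not in shared_ids]
optimizer = torch.optim.SGD([
    {'params': list(shared_params), 'lr': 0.05},
    {'params': m1_only, 'lr': 0.1},
    {'params': m2_only, 'lr': 0.01},
])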
st104543 | Hello, I found that distribution.log_prob() sometimes outputs values larger than 0 while trying to use TransformedDistribution.
>>> torch.__version__
'0.4.0a0+3749c58'
>>> from torch.distributions import Normal, SigmoidTransform, TransformedDistribution
>>> m=TransformedDistribution(Normal(torch.tensor([0.]), torch.tensor([1.])), [SigmoidTransform()])
>>> m.sample()
tensor([ 0.5783])
>>> m.sample()
tensor([ 0.8666])
>>> m.log_prob(m.sample())
tensor([ 0.2810])
>>> m.log_prob(m.sample())
tensor([-0.6253])
>>> m.log_prob(m.sample())
tensor([ 0.3911])
>>> m.log_prob(m.sample())
tensor([-1.2592])
>>> m.log_prob(m.sample()).exp()
tensor([ 1.4093])
>>> m.log_prob(m.sample()).exp()
tensor([ 0.5151])
>>> m.log_prob(m.sample()).exp()
tensor([ 1.4236])
>>> m.log_prob(m.sample()).exp()
tensor([ 1.4300])
>>> m.log_prob(m.sample()).exp()
tensor([ 1.5884])
Is it normal behavior to output values where log_prob(val).exp() is larger than 1?
If it is, what does the log_prob mean?
Thank you. |
st104544 | I am sorry, but I found this issue: https://github.com/pytorch/pytorch/issues/7637
I thought that it was an issue with Transform; sorry for the duplicate. |
st104545 | Hi,
I want to build PyTorch from source on Windows. I followed the steps on the official GitHub site, and all required dependencies are installed.
However, when I run python setup.py build, I get the following error:
LINK : fatal error LNK1104: cannot open file 'tbb.lib' [C:\Users\yizhiw\pytorch\build\caffe2\apply_utils_test.vcxproj]
(The complete error report was attached as a screenshot.)
I'm using Visual Studio 2017 and Anaconda 3 with Python 3.6.
Please help me! |
st104546 | It seems the error is caused by Caffe2.
Actually, I do not need Caffe2; is it possible to build without it? |
st104547 | @magicwyzh I could reproduce this. Could you please send this to the github repo? |
st104548 | And the temporary workaround is to install TBB through conda or pip, using conda install tbb or pip install tbb. |
st104549 | I’m facing the same issue.
I’m rather new to all this, so I’m assuming I’m just doing something wrong with/in my setup script
(I've tried the default in the readme and the various solutions from the issue tracker; this is the latest combination I've tried:)
conda create -n pyto python=3.6 -y
activate pyto
conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing pandas seaborn plotly scipy statsmodels jupyter notebook cython tbb -y
pip --no-cache-dir install cufflinks
pip --no-cache-dir install sklearn
pip --no-cache-dir install tbb
cd %USERPROFILE%
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py clean
set ANACONDA_ROOT=%USERPROFILE%\Anaconda3
set MKLProductDIR=C:\Program Files (x86)\IntelSWTools\compilers_and_libraries\windows
set USER_LDFLAGS=/LIBPATH:%ANACONDA_ROOT%\envs\pyto\Library\lib
set "VS150COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build"
set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
set DISTUTILS_USE_SDK=1
call "%VS150COMNTOOLS%\vcvarsall.bat" x64 -vcvars_ver=14.11
set CMAKE_INCLUDE_PATH=%ANACONDA_ROOT%\envs\pyto\Library\include
set LIB=%MKLProductDir%\mkl\lib\intel64;%LIB%
set LIB=%MKLProductDIR%\tbb\lib\intel64\vc14;%LIB%
set LIB=%ANACONDA_ROOT%\Library\lib;%LIB%
set LIB=%ANACONDA_ROOT%\envs\pyto\Library\lib;%LIB%
set LD_LIBRARY_PATH=%MKLProductDIR%\tbb\lib\intel64\vc14;%LD_LIBRARY_PATH%
set LIBRARY_PATH=%MKLProductDIR%\tbb\lib\intel64\vc14;%LIBRARY_PATH%
set CMAKE_INCLUDE_PATH=%MKLProductDIR%\mkl\include;%CMAKE_INCLUDE_PATH%
python setup.py install
I installed both TBB and MKL from Intel, and I think I've installed the right packages from Visual Studio (screenshot of the installed components omitted). I'm using Anaconda3 5.2 with Python 3.6.
Any insight into my issue will be much appreciated! |
st104550 | Hi, could you please extend USER_LDFLAGS a little bit, like the following?
set USER_LDFLAGS=/LIBPATH:%ANACONDA_ROOT%\envs\pyto\Library\lib /LIBPATH:%MKLProductDir%\mkl\lib\intel64 /LIBPATH:%MKLProductDir%\mkl\lib\intel64\vc14 /LIBPATH:%ANACONDA_ROOT%\Library\lib
If the issue persists, would you please clean the build first?
python setup.py clean |
st104551 | I fail to find the words to express my appreciation @peterjc123!
I’ll note a few extra caveats I encountered in case someone else has the same problems I did.
My paths for the Intel components contained invalid characters (namely spaces), which led to an error like:
The C compiler "*\cl.exe" is not able to compile a simple test program.
Reinstalling to a valid path like C:\IntelSWTools fixed that problem.
Then I encountered an LNK1257: code generation failed error, which stemmed from having VC++ 2015.3 v14.00 (v140) installed (shown in the screenshot mentioned earlier). Since this package is a requirement during the installation of the NVIDIA components for Windows, there should probably be an explicit step/reminder in the readme for its removal.
My final build script is a combination of the information provided here and these issues:
https://github.com/pytorch/pytorch/issues/8150 and 8394
https://github.com/peterjc123/pytorch-scripts/issues/23 – highlighted the problem with VC++ 2015
conda create -n pyto python=3.6 -y
activate pyto
conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing pandas seaborn plotly scipy statsmodels jupyter notebook cython tbb -y
pip --no-cache-dir install cufflinks
pip --no-cache-dir install sklearn
pip --no-cache-dir install tbb
cd %USERPROFILE%
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py clean
set "VS150COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build"
set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
set DISTUTILS_USE_SDK=1
call "%VS150COMNTOOLS%\vcvarsall.bat" x64 -vcvars_ver=14.11
set ANACONDA_ROOT=%USERPROFILE%\Anaconda3
export CMAKE_PREFIX_PATH=%ANACONDA_ROOT%\envs\pyto
set MKLProductDIR=C:\IntelSWTools\compilers_and_libraries_2018.3.210\windows
set USER_LDFLAGS=/LIBPATH:%ANACONDA_ROOT%\envs\pyto\Library\lib /LIBPATH:%MKLProductDir%\mkl\lib\intel64_win /LIBPATH:%MKLProductDir%\tbb\lib\intel64\vc14 /LIBPATH:%ANACONDA_ROOT%\Library\lib
set CMAKE_INCLUDE_PATH=%ANACONDA_ROOT%\envs\pyto\Library\include
set LIB=%MKLProductDir%\mkl\lib\intel64_win;%LIB%
set LIB=%MKLProductDIR%\tbb\lib\intel64_win\vc14;%LIB%
set LIB=%ANACONDA_ROOT%\Library\lib;%LIB%
set LIB=%ANACONDA_ROOT%\envs\pyto\Library\lib;%LIB%
set LD_LIBRARY_PATH=%MKLProductDIR%\tbb\lib\intel64_win\vc14;%LD_LIBRARY_PATH%
set LIBRARY_PATH=%MKLProductDIR%\tbb\lib\intel64_win\vc14;%LIBRARY_PATH%
set CMAKE_INCLUDE_PATH=%MKLProductDIR%\mkl\include;%CMAKE_INCLUDE_PATH%
python setup.py install
Cheers,
A.Noob |
st104552 | Hi, PyTorch 0.4 introduces a new scalar tensor, torch.tensor() with dim 0.
I feel confused, since all the functionality of a scalar tensor could be covered by a dim=1 Tensor of size 1.
Why add another new kind of tensor that makes the API more complex? |
st104553 | When you index into a vector, you get a scalar. So scalar support is natural. In fact, not having it is a more complex API because it makes it a special case of indexing. |
st104554 | Is there a way to create a scalar other than indexing? Currently I am doing this:
scalar = torch.Tensor([0])[0]
A bit weird, I would say. |
st104555 | Nice. I found a problem though:
x = torch.tensor(0)
x
Out[26]: tensor(0)
x += 3.2
x
Out[28]: tensor(3)
x = torch.tensor(0, dtype=torch.float32)
x += 3.2
x
Out[31]: tensor(3.2000)
Isn’t the default dtype supposed to be torch.float32? |
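For reference (my own note, not part of the original thread): torch.tensor infers the dtype from the Python value, so an integer literal produces an integer tensor:
import torch
torch.tensor(0).dtype                          # torch.int64
torch.tensor(0.).dtype                         # torch.float32
torch.tensor(0, dtype=torch.float32).dtype     # torch.float32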
st104556 | Suppose you have a Tensor a = [n, 3, h, w] and another tensor b = [n, h, w]
And you want to do this:
torch.stack((torch.bmm(a[:,0,:,:],b), torch.bmm(a[:,1,:,:],b), torch.bmm(a[:,2,:,:],b)), dim=1)
Is there any better way of doing this, i.e. applying torch.bmm() to tensors where the batches have channels and each channel needs to be multiplied (matrix multiplication) with the same matrix? |
st104557 | Solved by ptrblck in post #2
You could add an additional dimension at dim1 in b and let broadcasting do the rest:
a = torch.randn(10, 3, 24, 24)
b = torch.randn(10, 24, 24)
c = torch.stack((torch.bmm(a[:,0,:,:],b), torch.bmm(a[:,1,:,:],b), torch.bmm(a[:,2,:,:],b)), dim=1)
d = torch.matmul(a, b.unsqueeze(1))
(c == d).all… |
st104558 | You could add an additional dimension at dim1 in b and let broadcasting do the rest:
a = torch.randn(10, 3, 24, 24)
b = torch.randn(10, 24, 24)
c = torch.stack((torch.bmm(a[:,0,:,:],b), torch.bmm(a[:,1,:,:],b), torch.bmm(a[:,2,:,:],b)), dim=1)
d = torch.matmul(a, b.unsqueeze(1))
(c == d).all() |
st104559 | Thanks, I was never really good at understanding, much less properly using, broadcasting. |
st104560 | I’ve seen that in many models there are layer parameters named like 'features.0.weight'/'features.0.bias' and 'features.3.weight'/'features.3.bias', skipping numbers 1 and 2. Why are they skipped? |
st104561 | Solved by ptrblck in post #2
Maybe because the layers 1 and 2 are nn.ReLU() and nn.MaxPool2d.
Not easy to tell without a model definition, but this might be the case. |
st104562 | Maybe because the layers 1 and 2 are nn.ReLU() and nn.MaxPool2d.
Not easy to tell without a model definition, but this might be the case. |
st104563 | It makes sense, but it complicates things when someone wants to iterate through the state dictionary. |
st104564 | I need to retrieve 2D-convolutional weights (and biases) from a state_dict.
The easiest thing that comes to my mind is iterating over the dictionary keys, looking for 4D-shaped tensors whose name ends with .weight, and then also retrieving their .bias counterpart.
This approach seems error-prone to me, since I have no guarantee that the .weight/.bias I retrieve come from a convolutional layer; the only guarantee I have is that the tensor is 4D-shaped.
Wouldn’t it be more practical to store keys like features.conv2d.0.weight, features.conv2d.1.weight, and so on? |
st104565 | Here is a dummy example to get the conv parameters:
model = nn.Sequential(
    nn.Conv2d(3, 6, 3, 1, 1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Linear(10, 10)
)
for child in model.children():
    if isinstance(child, nn.Conv2d):
        print(child.weight)
        print(child.bias)
You could also call .named_children() to get the name of the current layer, if you need it. |
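As an additional note (not from the original reply), for nested models named_modules() walks the whole module tree:
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        print(name, module.weight.shape, module.bias.shape)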
st104566 | I tried to write a custom data loader for MNIST where I want only items with specific labels, and when I try to run my model, CUDA gives me out-of-memory errors after a couple of epochs. When I run my model with the standard MNIST data loader, the program works fine. Any idea why this is happening?
class MNISTCustomDataset(Dataset):
    def __init__(self, numbers, transform=None, data_dir='./data/'):
        # Training Data
        f = open('./data/train-images-idx3-ubyte')
        loaded = np.fromfile(file=f, dtype=np.uint8)
        all_images = loaded[16:].reshape((60000, 28, 28, 1)).astype(np.float32) / 255
        f = open('./data/train-labels-idx1-ubyte')
        loaded = np.fromfile(file=f, dtype=np.uint8)
        all_labels = loaded[8:].reshape((60000)).astype(np.int32)
        self.images = []
        self.labels = []
        for idx in range(0, len(all_images)):
            if all_labels[idx] in numbers:
                self.images.append(all_images[idx])
                self.labels.append(all_labels[idx])
        self.transform = transform

    def __getitem__(self, index):
        img = self.images[index]
        label = self.labels[index]
        if self.transform is not None:
            img = self.transform(img)
        return img, label

    def __len__(self):
        return len(self.images)

def load_custom_mnist(bsize, numbers):
    dataset = MNISTCustomDataset(numbers, transform=transforms.Compose([
        transforms.ToTensor(),
    ]))
    loader = DataLoader(dataset, batch_size=bsize, shuffle=True)
    return loader |
st104567 | Check the size of the model and the batch size you use; maybe your GPU has too little memory to support some models with a large batch size. |
st104568 | Could be unrelated, but try to close your files after loading the numpy arrays or use a context manager, which closes the files automatically. |
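A minimal sketch of the context-manager suggestion, applied to the dataset's __init__ above (binary mode added, which is safer for these files; numpy is assumed imported as np):
with open('./data/train-images-idx3-ubyte', 'rb') as f:
    loaded = np.fromfile(file=f, dtype=np.uint8)   # file is closed automatically when the block exits
all_images = loaded[16:].reshape((60000, 28, 28, 1)).astype(np.float32) / 255
with open('./data/train-labels-idx1-ubyte', 'rb') as f:
    loaded = np.fromfile(file=f, dtype=np.uint8)
all_labels = loaded[8:].reshape((60000)).astype(np.int32)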
st104569 | Hi everyone!
Just learning and have a few questions.
First, what is meant by using a 'pre-trained' model? If I have my own image set that I train with, how is this model pre-trained, and how does this affect my model training? For example, I'm using
model = models.vgg16(pretrained=True)
The docs say - Parameters: pretrained (bool) – If True, returns a model pre-trained on ImageNet
Looking at code examples, someone may use pretrained=True in their code but still have a training function for their own image set. What is the point of downloading a trained model when they are training it anyway? |
st104570 | It means that all layers are initialised with the weights of a network trained on another dataset; instead of random weights you start from well-initialised weights. With pretrained=True you activate this. |
st104571 | OK, so it would be used to help speed up training then? When would one want to use a non-pretrained model - is there a situation where this would be required? |
st104572 | It helps the model generalise better and works well with very deep nets. It turns out to be good practice to pre-initialise the weights when you can. |
st104573 | If you use a pre-trained model, you are more or less bound to the architecture, which is fine in most use cases.
Sometimes you would like to define a specific model architecture and cannot use pre-trained weights, since they are not matching.
It really depends on the use case. Often you can replace a part of your overall model with a pre-trained model, which might help the training as @Nicolo_Savioli explained.
On the other hand, there are not many pre-trained models working with e.g. accelerometer data for activity recognition. |
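A hedged sketch of the usual fine-tuning pattern with the pre-trained vgg16 mentioned above (the 10-class head and the freezing are my own assumptions for illustration):
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)
for param in model.features.parameters():
    param.requires_grad = False                # optionally freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 10)      # replace the last layer with your own 10-class head
# then train as usual, optimizing only the parameters that require gradients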
st104574 | It seems like I am getting some extra memory overhead with dropout. Here is a toy example to illustrate the problem.
import torch.nn as nn
import torch.nn.functional as F
import torch
import gc
from py3nvml.py3nvml import *
nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
def print_free_memory(point):
    info = nvmlDeviceGetMemoryInfo(handle)
    print("Used memory: {:10.4f}GB at point {}".format(info.used/(1024**3), point))

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        pass

    def forward(self, x):
        print_free_memory("before dropout")
        output = F.dropout(x, training=True)
        print(output.shape)
        print_free_memory("after dropout")
        return output

model = Test().cuda()

def run():
    device = torch.device('cuda')
    for i in range(1,2):
        x = torch.rand(30, 175, 4096).to(device)
        out = model(x)

run()
For this run, output is:
Used memory: 0.7822GB at point before dropout
torch.Size([30, 175, 4096])
Used memory: 1.2705GB at point after dropout
AFAIK x will occupy (30*175*4096*32) / (8*1024*1024) = 82 MB of memory, and there is an x.clone() in dropout, so in total it should occupy 82*2 = 164 MB. But as we can see, the difference here is roughly 490 MB. Although the difference is not very large here, in my case, where I stack multiple layers each with dropout enabled, it makes the model go out of memory.
UPDATE:
If I use inplace=True then there is a slight reduction of used memory after dropout (from 1.2705GB to 1.1885GB) which is exactly equal to the memory occupied by output variable. |
st104575 | I think you are storing it in global memory with training=True; try without this flag and let's see. |
st104576 | If I set training=False then dropout doesn't work. This line returns the original tensor if the flag is not enabled. |
st104577 | Hey, sorry, I see you use the functional version; try this:
import torch.nn as nn
drop = nn.Dropout2d()
Then you use drop as a function, like drop(…).
By the way, which cuDNN version did you use? |
st104578 | I’ve tried your code with print(torch.cuda.memory_allocated()) instead of your nvml functions, since I’m not familiar with them.
It seems the code uses approx. 82MB:
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()

    def forward(self, x):
        print(torch.cuda.memory_allocated() / 1024**2)
        output = F.dropout(x, training=True)
        print(torch.cuda.memory_allocated() / 1024**2)
        return output

def run():
    for i in range(1,2):
        x = torch.rand(30, 175, 4096).to(device)
        out = model(x)

device = torch.device('cuda')
model = Test().to(device)
run()
> 165
> 247 |
st104579 | Currently I have a big 3D data matrix. Could I ask how I can save it in a sparse format and feed it to a CNN for model training? Thanks. |
st104580 | Maybe I am a bit late here, but I just got to know that Tensor and Variable behave the same now. Whereas we had to wrap a tensor with Variable before, now we don't have to.
So my question is:
Are there any extra advantages of still using the Variable wrapper over a tensor? If not, can we stop using Variable and perform all our autograd operations on a tensor (provided requires_grad=True)? |
st104581 | Using Variable wrapper will just return an object of the new Tensor class. And yes, autograd works with Tensor now. |
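A quick check of this (my own snippet, assuming PyTorch 0.4+):
import torch
from torch.autograd import Variable

x = torch.ones(3, requires_grad=True)          # no Variable wrapper needed
y = (x * 2).sum()
y.backward()
print(x.grad)                                  # tensor([2., 2., 2.])
print(isinstance(Variable(x), torch.Tensor))   # True: wrapping just gives back a Tensor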
st104582 | I can't understand torch.nn.utils.clip_grad correctly. I saw the following code:
http://pytorch.org/docs/master/_modules/torch/nn/utils/clip_grad.html#clip_grad_norm
In this function, I thought max_norm was the maximum norm of each parameter, but it calculates the sum of all norms.
Assume there are two identical gradients, (3, 4) and (3, 4), whose L2 norms are both 5, and the given max_norm is 5.
I thought the gradients would not be changed by this function, but they were.
Now total_norm is 50 ** 0.5, almost equal to 7.07, so the updated value is (3*5/7.07, 4*5/7.07) = (2.12, 2.83).
So it depends on the number of parameters because of total_norm.
How is this function usually used, and how should I set max_norm?
I found only one example of using this function:
https://github.com/pytorch/examples/blob/master/word_language_model/main.py#L162
data, targets = get_batch(train_data, i)
# Starting each batch, we detach the hidden state from how it was previously produced.
# If we didn't, the model would try backpropagating all the way to start of the dataset.
hidden = repackage_hidden(hidden)
model.zero_grad()
output, hidden = model(data, hidden)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)
for p in model.parameters():
    p.data.add_(-lr, p.grad.data)
total_loss += loss.data
if batch % args.log_interval == 0 and batch > 0:
    cur_loss = total_loss[0] / args.log_interval
    elapsed = time.time() - start_time
    print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} | ms/batch {:5.2f} | '
          'loss {:5.2f} | ppl {:8.2f}'.format( |
st104583 | I found the explanation in the docs:
“The norm is computed over all gradients together, as if they were concatenated into a single vector.” |
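A small sketch of what that means (my own illustration; model is any nn.Module whose gradients have already been computed, and max_norm=5 is arbitrary):
total_norm = 0.
for p in model.parameters():
    if p.grad is not None:
        total_norm += p.grad.data.norm(2) ** 2
total_norm = total_norm ** 0.5
# if total_norm > max_norm, every gradient is scaled by max_norm / total_norm
torch.nn.utils.clip_grad_norm(model.parameters(), max_norm=5)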
st104584 | This question stems from comparing the Caffe way of doing batch normalization with the PyTorch way. To give a specific example, let us consider the ResNet50 architecture in Caffe (prototxt link). We see a "BatchNorm" layer followed by a "Scale" layer, while in the PyTorch model of ResNet50 we see only "BatchNorm2d" layers (without any "Scale" layer). If, in particular, I compare the first batchnorm layer in the PyTorch model and the first batchnorm+scale layer in the Caffe model, we get the following.
Pytorch:-
Param Name size
========== ====
bn1.weight torch.Size([64])
bn1.bias torch.Size([64])
bn1.running_mean torch.Size([64])
bn1.running_var torch.Size([64])
Caffe:
Param Name size
========== ====
bn_conv1[0] (64,)
bn_conv1[1] (64,)
bn_conv1[2] (1,)
scale_conv1[0] (64,)
scale_conv1[1] (64,)
My question is what is the correspondence between these parameters (basically which one in caffe is what in pytorch)?
I also have a second question. And it is regarding the ‘affine’ argument/parameter of the BatchNorm2d module in pytorch. Does setting it False mean \gamma=1, \beta=0? |
st104585 | I think that
bn_conv1[0] -> bn1.running_mean torch.Size([64])
bn_conv1[1] -> bn1.running_var torch.Size([64])
scale_conv1[0] (64,) -> bn1.weight torch.Size([64])
scale_conv1[1] (64,) -> bn1.bias torch.Size([64])
I also got confused by the extra parameter bn_conv1[2]. In my Caffe model, I do not have this parameter. |
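Regarding the second question above (affine), a quick check (my own snippet):
import torch
bn = torch.nn.BatchNorm2d(64, affine=False)
print(bn.weight, bn.bias)   # both None, i.e. no learnable gamma/beta (effectively gamma=1, beta=0)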
st104586 | It is said in https://pytorch.org/docs/master/notes/cuda.html that GPU operations are asynchronous: operations are enqueued and executed in parallel. But there is also a caveat that this process happens under the hood; users can treat it as if it were synchronous.
If I understand correctly, when the user demands the result of an operation, it can't wait any longer and must perform the operation, stripping away a chance for better optimization.
Which operations, then, force this execution? What are some guidelines to get the most out of this asynchronous execution? |
st104587 | Sorry, I’m a bit confused about what you’re asking. What do you mean by “user demands the result of an operation”? |
st104588 | For example print(tensor): if the user demands this, it must block whatever expressions come after it, must it not?
Are there some other expressions of that kind?
I think my question is rather: what is a tensor "represented" by from the Python perspective? If I create one tensor, do I just get a placeholder rather than a real array of values? Whatever I do to that placeholder just gives me another placeholder, and all the operations are scheduled and optimized under the hood. Only if I demand the result in a non-PyTorch form does it block until the placeholder is resolved. |
st104589 | Operations that require a synchronize will block (see cudaStreamSynchronize and cudaDeviceSynchronize). In particular, device-to-host transfers require a synchronize, which is why print will block.
Tensors are backed by a python storage that holds a pointer to data that can be on GPU or CPU. |
st104590 | That makes sense. Does it mean that appending a tensor to a Python list could be done promptly, without the need for synchronization? |
st104591 | Yeah, that should be correct. (Unless there’s some device-to-host transfer going on there, but I don’t think that’s the case.) |
st104592 | Hi, I am porting Style Swap (based on Torch) to PyTorch. I wonder if this could be improved.
The input is 4-dimensional, 1*C*H*W.
_, argmax = torch.max(input,1)
for i=1,self.output:size(3) do
    for j=1,self.output:size(4) do
        ind = argmax[{1,1,i,j}]
        self.output[{1,ind,i,j}] = 1
    end
end
return self.output
end
Can this be improved in the Lua version? Then I could open a pull request to the original repo.
How can it be improved in the PyTorch version?
BTW, if I want to record ind, how can I do it efficiently?
-- additional
local spSize = self.output:size(3)*self.output:size(4)
self.ind_vec = torch.Tensor(spSize):zero()
_, argmax = torch.max(input,1)
for i=1,self.output:size(3) do
    for j=1,self.output:size(4) do
        ind = argmax[{1,1,i,j}]
        self.output[{1,ind,i,j}] = 1
        -- additional
        tmp = (i-1)*self.output:size(3)+j
        self.ind_vec[tmp] = ind
    end
end
return self.output
end |
st104593 | Solved by LambdaWill in post #2
Have solved by myself… |
st104594 | Sorry, I’ve just posted a solution to another problem and it has been withdrawn.
To accelerate it, I use advanced indexing. (I haven't finished a complete performance comparison, but it does run much faster in my case.) I only use batch size 1 in my training, so I haven't worked out a plan to handle multiple batch sizes.
import torch
import datetime
import time
import numpy
bs = 1 # has to be 1 for now.
c = 256
h = 32
w = 32
input = torch.randn(1,c,h,w).mul_(10).floor()
sp_y = torch.arange(0,w).long()
sp_y = torch.cat([sp_y]*h)
lst = []
for i in range(h):
    lst.extend([i]*w)
sp_x = lst
sp_x = torch.from_numpy(numpy.array(sp_x))
print('Method 1')
start1 = time.time()
_,c_max = torch.max(input, 1)
c_max_flatten = c_max.view(-1)
input_zero = torch.zeros_like(input)
input_zero[:,c_max_flatten,sp_x,sp_y]=1
indlst1 = c_max_flatten
end1 = time.time()
print(type(sp_x))
print(type(sp_y))
input_zero=input_zero.cuda()
print('Method 2')
input_zero2 = torch.zeros_like(input).cuda()
indlst2 = torch.zeros(h*w).long()
start2 = time.time()
_, arg_max = torch.max(input,1,keepdim=True)
for i in range(h):
    for j in range(w):
        ind = arg_max[:,0,i,j]
        input_zero2[:,ind[0],i,j] = 1
        tmp = i*w+j
        indlst2[tmp] = ind[0]
end2 = time.time()
print('Speedup ratio:')
print((end2-start2)/(end1-start1)) # about 1-15 times faster
print('Before:')
print(end2-start2)
print('After:')
print(end1-start1)
print('error:')
print(torch.sum(indlst1-indlst2))
print(torch.sum(input_zero-input_zero2)) |
st104595 | Hi, I do have a question regarding the style swap. Why do they use one hot of the max index, and then deconvolve with un-normalized style patch to represent the reconstructed image (feature map)?
Should they use the original K value (of course 0s except for the largest one) to deconvolve the style patch?
Thanks |
st104596 | I need to pre-train my model; I have the following setup:
The pre-trained model is saved as a ".pt" file.
The new model (to be initialised from the pre-trained one) is defined in code.
I tried these two solutions, but neither works:
Solution 1) from Adam Paszke (How to load part of pre trained model?)
model = model_in_code_from_autograd().cuda()
pretrain_model = torch.load("path/../model.pt").cuda()
model_dict = model.state_dict()
pretrained_dict = pretrain_model.state_dict()
# 1. filter out unnecessary keys
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# 2. overwrite entries in the existing state dict
model_dict.update(pretrained_dict)
# 3. load the new state dict
model.load_state_dict(model_dict)
Solution 2) I used copy_ to copy the weights into the state_dict of the new model
params_p_model = pretrain_model.named_parameters()
for name_p, param_p in params_p_model:
    model.state_dict()[name_p].data.copy_(param_p)
The code does not give errors, but the network behaves as if it had random weights.
Thank you. |
st104597 | You can print some of the weights and see if they are the same (before saving / after loading).
Did you call model.eval() after loading? |
st104598 | Hey, thanks. Have a look at the re-post: After transfer learning model restart from zero |
st104599 | I am struggling to fit my model into a 16 GB GPU due to a CUDA out-of-memory error. What is even more intriguing is that the model runs fine for roughly the first 2000 steps, then the memory allocated as per nvidia-smi gradually increases from 14 GB to 16 GB, and it finally crashes. I have a lot of tensors declared in the forward function using new_tensor or new_zeros, which I suspect are not being dereferenced or freed from memory, and that's why the accumulation from 14 GB to 16 GB is happening. Here is some dummy code:
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.weights = nn.Parameter(torch.zeros(5,5))

    def forward(self, x):
        dummy_constant = x.new_ones(self.weights.shape[0], x.shape[1])
        output = self.weights @ x
        output += dummy_constant
        return output

model = Test()
for i in range(1,100):
    x = torch.rand(5,i)
    out = model(x)
    #loss.backward() and other stuff |
So all in all, will every instance of dummy_constant stay in memory even when it goes out of scope? |