st84768
|
Solved by LeviViana in post #2
Maybe these two questions can make things clearer:
In your application, what is the most important metric?
In production, what distribution do you expect to encounter (balanced or imbalanced)?
My guess is that you want to apply early stopping on the metric that matters most under the distri…
|
st84769
|
Maybe these two questions can make things clearer:
In your application, what is the most important metric?
In production, what distribution do you expect to encounter (balanced or imbalanced)?
My guess is that you want to apply early stopping on the metric that matters most under the distribution you expect to find in production.
|
st84770
|
Was wondering if there is a way to check tensor_a.is_view_of(tensor_b).
parent = torch.arange(4 * 5).view(4, 5)
child1 = parent[[0, 1]]
child2 = parent[:2]
child3 = parent[2:]
child2 and child3 are views of parent because of the slice-style indexing, while child1 is a copy. I tried child2.data_ptr() == parent.data_ptr(), and this correctly determines that child2 shares memory with parent, but it will not work for child3 because of the memory offset (maybe it could with some pointer arithmetic).
|
st84771
|
Maybe this thread can help you.
BTW, this snippet works for your case:

import torch

def same_storage(x, y):
    x_ptrs = set(e.data_ptr() for e in x.view(-1))
    y_ptrs = set(e.data_ptr() for e in y.view(-1))
    return (x_ptrs <= y_ptrs) or (y_ptrs <= x_ptrs)

same_storage(parent, child1)  # False
same_storage(parent, child2)  # True
same_storage(parent, child3)  # True
|
st84772
|
Problem Statement
Given is a corpus of large files with multiple non-overlapping chunks (i.e. samples) per file.
The corpus is too large to be pre-loaded completely.
Since DataLoader relies on lazy loading, each time a sample is requested the entire file is loaded, the corresponding chunk is extracted, and the file is then discarded. To remedy this inefficiency, the file's signal should be cached.
But, how to fit caching (sharing state) seamlessly with DataLoader or multiprocessing in general?
Attempts
Chunking offline: not desired
Pre-Loading corpus partially, split epoch into sub-epochs
not nice
reduces randomness
Sharing memory
need to write to shared memory for loading to cache (slow)
Splitting corpus, using DistributedSampler and DistributedDataParallel with 0 workers
avoiding shared memory
sampling to cure loss of randomness
Is there another more convenient way of handling this?
|
st84773
|
The new ChunkDataset API might help you!
It works through hierarchical sampling: the dataset is split into chunks (sets of examples), which are shuffled, and each chunk also has its samples shuffled (a second layer of shuffling). The C++ or Python DataLoader will retrieve batches from an internal buffer that holds just a few chunks, not the whole corpus.
In order to use it, all you need to do is implement your own C++ ChunkDataReader, which parses a single chunk of data. This chunk reader is then passed to ChunkDataset, which handles all the shuffling and chunk-by-chunk loading for you.
Look at a test example at: DataLoaderTest::ChunkDataSetGetBatch
Currently there is only C++ support, but Python bindings are on the way (https://github.com/pytorch/pytorch/pull/21232) and any feedback is welcome.
|
st84774
|
Very Interesting!
Let's say we have a dataset of N chunks, each of which has M samples, so N * M samples in total.
When buffering L chunks, I assume the ChunkDataset has access to all L * M samples of the current buffer? After drawing batches from the buffer, the memory of the corresponding samples is freed, no matter which chunk they originate from? And as soon as there's enough free memory, the next chunk is loaded into the buffer?
|
st84775
|
ChunkDataset will cache a few random chunks (n < N) in memory and create minibatches out of them. The internal cache is continuously replenished in the background, so that there is always data available for the user without loading the whole corpus into memory.
This is useful in a distributed computing scenario, where each worker will load only part of the dataset.
|
st84776
|
Hello, I’m very new to machine learning and PyTorch. I’m looking at the Learning PyTorch with Examples page.
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules
I’m confused about the use of activation functions in this example code below. It doesn’t seem like there’s any kind of activation function being used here, like a ReLU, as in previous examples. Am I completely missing something here? If I want to apply this code to my project, do I need to introduce a ReLU at some point? I see that a h_relu variable is created in the forward function, but this doesn’t seem like the same thing.
Any explanations would be greatly appreciated.
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred
|
st84777
|
Hi!
The method clamp(min=0) is functionally equivalent to ReLU. All ReLU does is to set all negative values to zero and keep all positive values unchanged, which is what is being done in that example with the use of clamp set to min=0.
Here’s the documentation for torch.clamp.
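As a quick sanity check (a small sketch, not from the original post), you can verify the equivalence numerically:

import torch
import torch.nn.functional as F

x = torch.randn(5)
print(torch.allclose(x.clamp(min=0), F.relu(x)))  # True: clamp(min=0) behaves like ReLU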
|
st84778
|
So in this case the model’s “activation function” is ReLU?
Thank you for the response!
|
st84779
|
I was fine-tuning inception_v3 from torchvision, but when I use DataParallel I get a TypeError: __new__() missing 1 required positional argument: 'aux_logits'. Here is my code:
model = inception_v3(pretrained=True)
class_num = 8
channel_in = model.fc.in_features
model.AuxLogits.fc = nn.Linear(768, class_num)
model.fc = nn.Sequential(nn.Dropout(p=0.2),
nn.Linear(channel_in, class_num))
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
loss_fn = nn.CrossEntropyLoss()
model = nn.DataParallel(model)
model.aux_logits = True
model = model.cuda()
loss_fn = loss_fn.cuda()
outputs, aux_outputs = model(batch_x)
loss1 = loss_fn(outputs, batch_y)
loss2 = loss_fn(aux_outputs, batch_y)
loss = loss1 + 0.4*loss2
Could anyone tell me how to solve this problem? Thank you!
|
st84780
|
Hello, I am having the same issue. Did you figure out a workaround for this other than disabling the aux_logits output in the model?
|
st84781
|
Hi all,
I am totally new to pytorch and I want to know its capabilities in developing new optimization ideas.
More precisely, I previously wrote a paper entitled “a new framework to train autoencoders using nonsmooth regularization”, where the code was implemented in Matlab from scratch. Now I want to use the capabilities of PyTorch to develop the idea from that paper.
In particular, I want to know the following:
1- Can I define a dynamic cost function in PyTorch? In other words, suppose I have a term in the cost function like ||w-v||_F^2, where w is the weight matrix of a specific layer and v is a fixed matrix that is changed in each epoch based on a specific rule.
2- Can we evaluate the gradient of such a dynamic cost function with respect to the network parameters?
3- Can we change the point where the gradient is evaluated? In other words, suppose I want to evaluate the gradient at a point where the value of the weight matrix is w + a*v, where a is a constant scalar and v is a constant matrix.
Thanks.
|
st84782
|
1- Yes.
2- Yes.
3- Not without re-evaluating the model. Most if not all automatic differentiation frameworks (as opposed to symbolic differentiation) only offer to evaluate the derivative at the point where you evaluated the function, because they need the intermediate outputs. You can, of course, evaluate the model twice.
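To illustrate point 1 (a sketch, not from the original answer): the term ||w - v||_F^2 can simply be added to the loss each iteration, with v updated between epochs by whatever rule you like. The layer, data, and update rule below are placeholders.

import torch
import torch.nn as nn

layer = nn.Linear(8, 8)
v = torch.zeros_like(layer.weight)           # fixed matrix, updated once per epoch
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

for epoch in range(3):
    x, target = torch.randn(16, 8), torch.randn(16, 8)
    out = layer(x)
    task_loss = (out - target).pow(2).mean()
    reg = (layer.weight - v).pow(2).sum()    # ||w - v||_F^2, differentiable w.r.t. w
    loss = task_loss + 0.01 * reg
    opt.zero_grad()
    loss.backward()
    opt.step()
    v = layer.weight.detach().clone() * 0.9  # placeholder update rule for v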
|
st84783
|
Hi,
Wondering if there is native support for federated learning in PyTorch?
Any input is appreciated.
Thanks!
Kartik
|
st84784
|
Solved by RicCu in post #2
Try out PySyft.
|
st84785
|
Hi,
Is there an easy way to check if my entire sequence of operations is connected? Apart from manually debugging through the entire flow and checking if each intermediate variable has a grad_fn? I haven’t used Tensorflow much either, but what I am looking for is probably a way to print out the entire “graph” in PyTorch (though I understand in PyTorch it’s dynamically created)?
Sorry if that didn’t make much sense. I mean if I had something like:
x = torch.randn(3, requires_grad=True)
y = x ** 2 + 3
z = y.mean()
Now I want something which tells me the entire sequence of grad_functions. The purpose is to troubleshoot a project where I have quite a long such chain of functions and hopefully I am not messing up somewhere.
|
st84786
|
Sorry this isn’t much (it’s kinda late here), but this post has a snippet that lets you examine the gradient flow, and @apaszke has a GitHub Gist that should let you see the network’s graph. Hope that helps!
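In addition (a small sketch, not from the linked posts), you can walk the chain of grad_fn objects directly to see the sequence of backward functions autograd recorded:

import torch

x = torch.randn(3, requires_grad=True)
y = x ** 2 + 3
z = y.mean()

fn = z.grad_fn
while fn is not None:
    print(type(fn).__name__)  # e.g. MeanBackward0, AddBackward0, PowBackward0, ...
    # follow the first parent only; a full graph walker would traverse all next_functions
    fn = fn.next_functions[0][0] if fn.next_functions else None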
|
st84787
|
Hello.
Three different inputs: images1, images1_sub (a subset of images1), and images2.
One network: model.
... ...
optimizer.zero_grad()
output1,intermediate_feature_map_1 = model(images1)
# get output and intermediate feature map
output1_sub, intermediate_feature_map_sub1 = model(images1_sub)
# get output and intermediate feature map
output2, intermediate_feature_map2 = model(images2)
# calculate the cross_entropy ce_loss1 for output1
ce_loss1 = cross_entropy(output1, label)
# calculate the cross_entropy ce_loss2 for output2
ce_loss2 = cross_entropy(output2, label) # the same label
# the distance of intermediate feature map for images_sub1 and images2
#(the shape of images_sub1 is the same as the shape of images2 )
loss_d = distance(intermediate_feature_map_sub1, intermediate_feature_map2 )
loss = ce_loss1 + ce_loss2 + loss_d
loss.backward()
optimizer.step()
I’d like to sum the losses, call backward, and then update the network’s parameters.
Is there anything wrong with my code?
I’d appreciate your help!
|
st84788
|
Solved by ptrblck in post #9
Thanks for the information. I thought you wanted to split the spatial size.
If you split the batch, you would have to take care of e.g. batch norm layers, as the running stats will differ.
Besides that it should yield the same output.
What results did your test yield?
|
st84789
|
Hi, thanks for your reply!
I have not seen anything strange happening so far. But I have a question:
images1_sub in the code above is a subset of images1. If images1_sub = images1[:half]
Compared to the code:
output1_sub, intermediate_feature_map_sub1 = model(images1_sub)
will the following code achieve the same goal and save some computation resources?
output1_sub, intermediate_feature_map_sub1 = output1[:half], intermediate_feature_map1[:half]
Thank you.
|
st84790
|
This won’t necessarily give the same output.
E.g. if you are using conv layers with some padding, the outputs might differ:
conv = nn.Conv2d(1, 1, 3, 1, 1)
x = torch.randn(1, 1, 6, 6)
output = conv(x)
output_half = conv(x[:, :, 3:, 3:])
print(output[:, :, 3:, 3:] == output_half)
> tensor([[[[0, 0, 0],
[0, 1, 1],
[0, 1, 1]]]], dtype=torch.uint8)
|
st84791
|
Yeah, but I think my original question is about a half batch of images:
images1_sub = images1[:half]
output1_sub, intermediate_feature_map_sub1 = model(images1_sub)
and
output, inter = model(images)
output_half, inter_half = output[:half], inter[:half]
Is output1_sub different from output_half? I have tested it, and they seem to be the same?
|
st84792
|
Thanks for the information. I thought you wanted to split the spatial size.
If you split the batch, you would have to take care of e.g. batch norm layers, as the running stats will differ.
Besides that it should yield the same output.
What results did your test yield?
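A small sketch (not from the original post) of why batch norm makes the two approaches differ in training mode: the normalization statistics are computed from whatever batch is actually passed in.

import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4).train()
x = torch.randn(8, 4)

out_full = bn(x)[:4]     # normalized with statistics of all 8 samples
out_half = bn(x[:4])     # normalized with statistics of only 4 samples
print(torch.allclose(out_full, out_half))  # False in train mode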
|
st84793
|
ptrblck:
If you split the batch, you would have to take care of e.g. batch norm layers, as the running stats will differ.
Yes, with batch norm layers I tested dummy data and the outputs are different. I appreciate your reply!
|
st84794
|
I want to invoke a pre-trained model like this:
for module in model.children():
x = module(x)
but sadly model.children() does not return the modules in the order they are invoked in the forward function.
Examples:
when I define my network like this:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5, padding=(2, 2))
        self.conv2 = nn.Conv2d(6, 16, 5, padding=(2, 2))
        self.bn2 = nn.BatchNorm2d(16)
        self.bn1 = nn.BatchNorm2d(6)
        self.pool = nn.MaxPool2d(2, 2)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 7 * 7, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.pool(F.relu(x))
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.pool(F.relu(x))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        x = torch.softmax(x, dim=1)
        return x
function list(net.children()) output:
[Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)), Conv2d(6, 16, kernel_size=(5, 5),
stride=(1, 1), padding=(2, 2)), BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True,
track_running_stats=True), BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True,
track_running_stats=True), MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1,
ceil_mode=False), Linear(in_features=784, out_features=120, bias=True), Linear(in_features=120,
out_features=84, bias=True), Linear(in_features=84, out_features=10, bias=True), ReLU()]
This is not the order defined in the forward function but the order from __init__.
After searching Google, the torchsummary package seemed like a good option for this, but if I invoke F.relu in the forward function:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 6, 28, 28] 156
BatchNorm2d-2 [-1, 6, 28, 28] 12
MaxPool2d-3 [-1, 6, 14, 14] 0
Conv2d-4 [-1, 16, 14, 14] 2,416
BatchNorm2d-5 [-1, 16, 14, 14] 32
MaxPool2d-6 [-1, 16, 7, 7] 0
Linear-7 [-1, 120] 94,200
Linear-8 [-1, 84] 10,164
Linear-9 [-1, 10] 850
================================================================
There is no ReLU in results.
Is there any API for this?
|
st84795
|
Hello, I am getting the error below; it runs for 2 batches and then throws this error.
[screenshots of the error traceback]
Thanks
|
st84796
|
Could you add an assert statement in the else condition in your __getitem__?
Maybe the condition is not met and you return nothing, which might yield this error.
|
st84797
|
I would just add a dummy assert statement or raise an exception, so that the code crashes:
assert False, "empty batch"
raise RuntimeError("empty batch")
|
st84798
|
ptrblck:
assert False, “empty batch” raise RuntimeError(“empty batch”)
I got the error below. Does that mean the next batch of 64 images isn't present in the folder?
AssertionError                        Traceback (most recent call last)
in <module>
      2 res.train()
      3 running_loss=0
----> 4 for i,(data,target) in enumerate(train_load):
      5
      6     print(i,data.shape,target.shape)
/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
    558         if self.num_workers == 0:  # same-process loading
    559             indices = next(self.sample_iter)  # may raise StopIteration
--> 560         batch = self.collate_fn([self.dataset[i] for i in indices])
    561         if self.pin_memory:
    562             batch = _utils.pin_memory.pin_memory_batch(batch)
/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in <listcomp>(.0)
    558         if self.num_workers == 0:  # same-process loading
    559             indices = next(self.sample_iter)  # may raise StopIteration
--> 560         batch = self.collate_fn([self.dataset[i] for i in indices])
    561         if self.pin_memory:
    562             batch = _utils.pin_memory.pin_memory_batch(batch)
in __getitem__(self, idx)
     19         self.cj.append(img)
     20         #assert(path.exists(img)), "wronggg"
---> 21         assert False, "empty batch"
     22         raise RuntimeError("empty batch")
     23
AssertionError: empty batch
|
st84799
|
Yes, apparently neither the file is “open/closed” (not sure what this entry means) nor the path can be found.
Add the img to the exception/assert so that you’ll see which path is wrong and can fix it.
|
st84800
|
Thanks. I tried adding the paths to a list to see whether those 64 images aren't in the proper format, but now I get the following. Can you please explain what that means? Thanks.
[screenshots of the new error output]
|
st84801
|
It looks like more than a single path cannot be found, so that the length of the list grows.
Can you print this list after the code crashes using print(dataset.cj), where dataset refers to the instance of your current Dataset?
PS: It's better to post the code directly by wrapping it in three backticks ```, as this will e.g. allow searching the code.
|
st84802
|
Hi,
Looking at the example of how LBFGS needs a closure() function (https://github.com/pytorch/examples/blob/master/time_sequence_prediction/train.py), is there a way I could plot the loss as well? Simply appending the loss calculated inside closure() to a list initialized outside the closure doesn't seem to work.
|
st84803
|
Solved by ptrblck in post #2
Do you get any error or why is it not working?
Simply appending loss.item() to a list seems to work:
losses = []
def closure():
optimizer.zero_grad()
out = seq(input)
loss = criterion(out, target)
print('loss:', loss.item())
losses.append(loss.item())
loss.backward()
r…
|
st84804
|
kekday:
Simply appending the loss calculated inside closure() to a list initialized outside closure doesn’t seem to work.
Do you get any error or why is it not working?
Simply appending loss.item() to a list seems to work:
losses = []

def closure():
    optimizer.zero_grad()
    out = seq(input)
    loss = criterion(out, target)
    print('loss:', loss.item())
    losses.append(loss.item())
    loss.backward()
    return loss

optimizer.step(closure)
|
st84805
|
My bad, it does seem to work indeed. Not quite sure what I missed last time since I tried the exact same thing and the list just stayed empty. Thanks a lot!
|
st84806
|
I am converting a tf model to pytorch:
import tensorflow.contrib.slim as slim

def net(posenet_inputs):
    with slim.arg_scope([slim.conv2d],
                        normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params,
                        weights_regularizer=slim.l2_regularizer(scale=0.0001),
                        activation_fn=tf.nn.relu):
        conv1 = slim.conv2d(posenet_inputs, 16, 7, 2)
        conv2 = slim.conv2d(conv1, 32, 5, 2)
        ...
I have written the basic network in pytorch.
I wanted to implement the exact same weight regularization in PyTorch. I found that setting weight_decay in the Adam optimizer adds regularization, but I don't think that exactly recreates the TF implementation.
Is there another way that would similarly use the scale parameter?
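One option (a sketch, not from the thread; the model, input, and task loss below are placeholders) is to add the L2 penalty to the loss explicitly, which mirrors slim's weights_regularizer more closely than optimizer weight_decay:

import torch
import torch.nn as nn

# Stand-in network and input
model = nn.Sequential(nn.Conv2d(3, 16, 7, stride=2, padding=3),
                      nn.ReLU(),
                      nn.Conv2d(16, 32, 5, stride=2, padding=2))
x = torch.randn(2, 3, 64, 64)

task_loss = model(x).mean()  # placeholder task loss

scale = 0.0001
# tf.nn.l2_loss includes a factor of 1/2, so include it here to match slim's term
reg_loss = 0.5 * sum(p.pow(2).sum() for p in model.parameters() if p.dim() > 1)
loss = task_loss + scale * reg_loss
loss.backward()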
|
st84807
|
Is it recommended to use torch.cuda.empty_cache() before training every batch? Is it going to affect performance?
My model leads to an OOM error for some batches because of its dynamic nature, so I am trying to empty_cache and garbage collect before every batch of data.
|
st84808
|
You won’t be able to use more memory, as you are just deleting the cache.
I haven’t profiled it, but I would assume the performance might be affected in a negative way, since the memory has to be reallocated in order to be used again.
If you are seeing an OOM error, you could try to lower e.g. the batch size.
If that’s not an option, have a look at checkpointing to trade compute for memory.
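A minimal sketch of checkpointing (not from the original post, assuming a simple nn.Sequential model):

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])
x = torch.randn(4, 1024, requires_grad=True)

# Only activations at the segment boundaries are kept; the rest are
# recomputed during backward, trading compute for memory.
out = checkpoint_sequential(model, 2, x)
out.sum().backward()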
|
st84809
|
I have a multi-output model in PyTorch when I train them using the same loss and then to backpropagate I combine the loss of both the output but when one output loss decreases others increase and so on. How can I fix the problem?
def forward(self, x):
    # neural network arch. forward pass
    x = dense1(x)
    x1 = dense2(x)
    x2 = dense2(x)
    x1 = F.log_softmax(x1)
    x2 = F.log_softmax(x2)
    return x1, x2

out1, out2 = model(data)
loss1 = NLLL(out1, target1)
loss2 = NLLL(out2, target2)
loss = loss1 + loss2
loss.backward()
When loss1 decreases, loss2 increases, and when loss2 decreases, loss1 increases. How can I fix this issue?
Can an operator other than ‘+’ be used to combine the losses, or should I apply weights to the different losses?
|
st84810
|
I’m not sure that what you have here is a multi-output model, per se. Instead, you have a single-output model where you would like the output to be as close as possible to two targets.
You have one input x, and you pass it through the same network layers (with identical weights) to produce x1 and x2, which are identical. You then compare this single value (albeit with two different names) to the targets target1 and target2 to calculate loss1 and loss2. Thus, your network is searching for the single set of weights for dense1 and dense2 that will produce a single output value with the lowest loss value on average. It makes sense that when one loss value decreases, the other increases, and vice versa, because if target1 and target2 are different, the network will only be able to predict one at a time (again, because the values x1 and x2 are identical).
Assuming that what you actually want is a network that predicts both targets from a given input:
def forward(self, x):
    # neural network arch. forward pass
    x = dense1(x)
    x1 = dense2a(x)
    x2 = dense2b(x)
    x1 = F.log_softmax(x1)
    x2 = F.log_softmax(x2)
    return x1, x2

for some_iterator_over_data:
    out1, out2 = model(data)
    loss1 = NLLL(out1, target1)
    loss2 = NLLL(out2, target2)
    loss = loss1 + loss2
    loss.backward()
    optimizer.step()  # assuming the target for your optimizer is loss
    optimizer.zero_grad()
In this way, different weights will be learned for layers dense2a and dense2b, which makes them more suitable for predicting target1 and target2, respectively.
Hope this is what you were looking for!
|
st84811
|
Thanks for the suggestion, it was helpful. I would also like to know whether there is a different way to combine the loss such that both increase at the same time, like (x1^2 + x2^2)^(1/2).
|
st84812
|
Ok, it's important to understand what the loss actually is here so that you can achieve the results you want. The loss is just a number, calculated as some function of the outputs and targets of your network. Different functions give different representations of the error your network is making, but the lowest possible error you can achieve is 0. Thus, there is no need to sum the squares of these terms (that is generally done when you expect some negative values).
Now, to get to your main question. You would like to be able to constrain the network so that it cannot exploit just one of the loss functions. If you find that one portion of the loss is getting optimized much more than another, you can weight them with coefficients to achieve the desired training properties.
total_loss = loss1 + 10 * loss2
Alternatively, you could multiply these two components together or perform any number of other operations on them. For example, in the following formula the total_loss is increased in proportion to the difference in magnitude of loss1 and loss2, helping to constrain them to be roughly equal.
total_loss = (loss1 + loss2) * (1 + |loss1 - loss2|)
Ultimately, the single loss value is all that your network sees and uses for determining the magnitude of updates to its weights, so craft your loss value such that it has the properties you want.
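A tiny illustration of the two combinations above (a sketch with stand-in loss values, not from the original post):

import torch

# Stand-ins for the two loss terms coming out of the model
loss1 = torch.tensor(0.8, requires_grad=True)
loss2 = torch.tensor(0.3, requires_grad=True)

weighted = loss1 + 10 * loss2                              # fixed re-weighting
balanced = (loss1 + loss2) * (1 + (loss1 - loss2).abs())   # penalizes imbalance between the two
balanced.backward()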
|
st84813
|
I want to define an nn.ModuleDict() and iterate through it preserving the order of keys as defined. That is, I would like to define the dict as follows:
D = nn.ModuleDict(
    {
        'b': nn.Linear(2, 4),
        'a': nn.Linear(4, 8)
    }
)
so that
for k, v in D.items():
    print(k, v)
prints
b Linear(in_features=2, out_features=4, bias=True)
a Linear(in_features=4, out_features=8, bias=True)
However, by default, PyTorch will iterate through the dict after sorting alphanumerically its keys:
import torch
import torch.nn as nn

D = nn.ModuleDict(
    {
        'b': nn.Linear(2, 4),
        'a': nn.Linear(4, 8)
    }
)
D
ModuleDict(
  (a): Linear(in_features=4, out_features=8, bias=True)
  (b): Linear(in_features=2, out_features=4, bias=True)
)
for k, v in D.items():
    print(k, v)
a Linear(in_features=4, out_features=8, bias=True)
b Linear(in_features=2, out_features=4, bias=True)
On the contrary, if we use update(), it works as intended. That is
D = nn.ModuleDict()
D.update({'b': nn.Linear(2, 4)})
D.update({'a': nn.Linear(4, 8)})
D
ModuleDict(
  (b): Linear(in_features=2, out_features=4, bias=True)
  (a): Linear(in_features=4, out_features=8, bias=True)
)
for k, v in D.items():
    print(k, v)
b Linear(in_features=2, out_features=4, bias=True)
a Linear(in_features=4, out_features=8, bias=True)
Is there any way of iterating through the dict using the order of keys given during dict’s definition? One solution would be using keys that are ordered in the first place, but I would like to know if this is possible in the general case.
|
st84814
|
Solved by tom in post #2
If you pass in an ordered dict, the ordering will be preserved:
nn.ModuleDict(OrderedDict({
'b': nn.Linear(2,4),
'a': nn.Linear(4,8)
} ))
The crux is that Python 2 does not preserve order in dict (and for early Python 3.x it’s an implementation detail), so in order to have a deterministic…
|
st84815
|
If you pass in an ordered dict, the ordering will be preserved:
nn.ModuleDict(OrderedDict({
    'b': nn.Linear(2, 4),
    'a': nn.Linear(4, 8)
}))
The crux is that Python 2 does not preserve order in dict (and for early Python 3.x it's an implementation detail), so in order to have a deterministic dict->OrderedDict conversion, the keys are sorted.
Alternatively, you can also pass in an iterator {...}.items() - looks funny, probably deserves an explanatory comment in the code, but works.
Best regards
Thomas
|
st84816
|
Hi @tom, many thanks for your quick response!
I see. That's not a bad solution of course, but especially in the case of a "Module Dictionary" I cannot see why this isn't the default behavior. I mean, when someone defines a module dictionary, like I did above, they would also expect to iterate over it in the order it was defined.
Many thanks again!
|
st84817
|
Also if you use python 3.7, it should preserve the order of the keys for a regular dict.
|
st84818
|
You can write a test to determine how they are evaluated.
Mathematically, one would assume that the derivative of ceil, round and floor is not defined at integers, but everywhere else the derivative is zero, since these functions turn any function into a kind of step function. The torch derivative of these functions is just zero everywhere.

import torch

def test(x, f):
    p = torch.tensor(x).float().requires_grad_()
    f(p).backward()
    return float(p.grad)

Some samples:
test(-1.2, torch.round) = 0.0
test(-1.2, torch.ceil) = 0
|
st84819
|
How do I implement an exponentially decaying cosine-annealing lr_scheduler using the individually existing ones?
|
st84820
|
Hi,
I have been having difficulties getting the basic cmake example working with pytorch, as in https://pytorch.org/tutorials/advanced/cpp_export.html. I have spent about 5 hours adding different flags for CUDA/cuDNN (I am not using GPUs anyway, but it seems like these packages are required and I do have them installed) and messing around with the CMakeLists.txt file. I haven't been successful, so I am asking for help. I am seeing the following log when I run a script make_cmake.sh (which runs cmake with flags) and then make:
-- The C compiler identification is GNU 8.2.0
-- The CXX compiler identification is GNU 8.2.0
-- Check for working C compiler: /cm/shared/sw/pkg/devel/gcc/8.2.0/bin/cc
-- Check for working C compiler: /cm/shared/sw/pkg/devel/gcc/8.2.0/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /cm/shared/sw/pkg/devel/gcc/8.2.0/bin/c++
-- Check for working CXX compiler: /cm/shared/sw/pkg/devel/gcc/8.2.0/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /cm/shared/sw/pkg/devel/cuda/9.0.176 (found suitable version "9.0", minimum required is "7.0")
-- Caffe2: CUDA detected: 9.0
-- Caffe2: CUDA nvcc is: /cm/shared/sw/pkg/devel/cuda/9.0.176/bin/nvcc
-- Caffe2: CUDA toolkit directory: /cm/shared/sw/pkg/devel/cuda/9.0.176
-- Caffe2: Header version is: 9.0
-- Found CUDNN: /cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/include
-- Found cuDNN: v7.0.5 (include: /cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/include, library: /cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/lib)
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;7.0;7.0+PTX
-- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70
-- Found torch: /mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libtorch.so
-- Configuring done
-- Generating done
-- Build files have been written to: /mnt/ceph/users/mcranmer/.../build
make (note: see updated error below!):
Scanning dependencies of target run_pytorch
make[2]: *** No rule to make target `/cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/lib', needed by `run_pytorch'. Stop.
make[1]: *** [CMakeFiles/run_pytorch.dir/all] Error 2
make: *** [all] Error 2
Here, the make_cmake.sh file (in the build directory) is as follows (the … is a long directory):
#!/bin/bash
rm CMakeCache.txt
module load cuda/9.0.176 cudnn/v7.0-cuda-9.0 gcc/8.2.0 lib/openblas/0.2.19-haswell slurm openmpi
FLAGS="-DCUDA_TOOLKIT_ROOT_DIR=/cm/shared/sw/pkg/devel/cuda/9.0.176 -DTORCH_LIBRARIES=/mnt/ceph/users/mcranmer/Downloads/libtorch -DCMAKE_INSTALL_PREFIX=/mnt/ceph/users/mcranmer/.../build -DCMAKE_PREFIX_PATH=/mnt/ceph/users/mcranmer/Downloads/libtorch -DCUDA_HOST_COMPILER=/usr/bin/gcc44 -DCUDNN_INCLUDE_DIR=/cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/include -DCUDNN_LIBRARY=/cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/lib"
CMAKE=/mnt/ceph/users/mcranmer/Downloads/cmake-3.13.0-rc2-Linux-x86_64/bin/cmake
$CMAKE $FLAGS ..
My CMakeLists.txt file is the standard:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)
find_package(Torch REQUIRED)
add_executable(run_pytorch run_pytorch_1d.cpp)
target_link_libraries(run_pytorch "${TORCH_LIBRARIES}")
set_property(TARGET run_pytorch PROPERTY CXX_STANDARD 11)
The code I am attempting to compile (run_pytorch_1d.cpp) is (it should just load a pytorch model and not do anything with it):
#include <torch/script.h> // One-stop header.
#include <cstdlib>
#include <iostream>
#include <memory>
#include "run_pytorch_1d.h"
#define N_FEATURES 13
float run_pytorch_1d_cpp(float *x) {
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("/mnt/ceph/users/mcranmer/.../model_to_load_from_cpp.pt");
return x[0] * x[0];
}
int main(int argc, const char* argv[]) {
float x[N_FEATURES] = {1};
printf("%f\n", x[0]);
return 0;
}
Any idea what’s going on? Earlier I had the issue of it trying to build some cuda library (libcu…a) instead of using the ones in my installation, but it was looking in the wrong directory. I guess the flags fixed it.
|
st84821
|
So the actual issue is:
make[2]: *** No rule to make target `/usr/local/cuda/lib64/libculibos.a', needed by `run_pytorch'. Stop.
not the earlier one. The earlier one was because I wrote …/cudnn…/lib instead of …/cudnn…/lib64.
I note that even if I append the absolute location of libculibos.a (which isn’t in /usr/local/cuda/lib64) to CMakeLists.txt in target_link_libraries, I see the same error. I have no idea why it thinks any of my cuda libraries are in /usr/local/cuda/lib64 as this directory does not exist.
|
st84822
|
Okay so I have a band-aid fix that works for my current set up. It’s ugly but it works. I would not consider this a solution. I still want to know what went wrong with cmake.
After running cmake, I edit CMakeFiles/run_pytorch.dir/build.make and comment out the following lines:
run_pytorch: /usr/lib64/libcuda.so
and
run_pytorch: /usr/local/cuda/lib64/libculibos.a
I have no idea why these lines are included. After this, I edit CMakeFiles/run_pytorch.dir/build.make (which I found by grep-ing for “libculibos”) and remove both /cm/shared/sw/pkg/devel/cudnn/v7.0-cuda-9.0/lib64
and /usr/local/cuda/lib64/libculibos.a from the line (it is trying to build a directory?), to leave:
/cm/shared/sw/pkg/devel/gcc/8.2.0/bin/c++ -rdynamic CMakeFiles/run_pytorch.dir/run_pytorch_1d.cpp.o -o run_pytorch -Wl,-rpath,/mnt/ceph/users/mcranmer/Downloads/libtorch/lib -Wl,-Bstatic -lculibos -Wl,-Bdynamic /mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libtorch.so -lcuda -lnvrtc -lnvToolsExt -Wl,-Bstatic -lcudart_static -Wl,-Bdynamic -lpthread -ldl -lrt -Wl,--no-as-needed,/mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libcaffe2.so -Wl,--as-needed -Wl,--no-as-needed,/mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libcaffe2_gpu.so -Wl,--as-needed -Wl,-Bstatic -lcudart_static -Wl,-Bdynamic -ldl -lrt /mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libcaffe2.so /mnt/ceph/users/mcranmer/Downloads/libtorch/lib/libc10.so -lpthread -lcufft /cm/shared/sw/pkg/devel/cuda/9.0.176/lib64/libcurand.so -lcublas -Wl,-Bstatic -lcublas_device -Wl,-Bdynamic
Running “make” works, and I can execute the pytorch executable without problems. Note that I am not using CUDA so linking problems with CUDA libraries won’t show up for me.
Does anybody know what I am doing wrong, and why I have to manually edit the cmake output?
|
st84823
|
I am also facing this issue. I want to use libtorch on a cluster, where CUDA is not installed in /usr/.
@Miles_Cranmer , I followed your workaround, and I got to compile my small example. It also works with CUDA.
There is a problem with your last post though, you say you edit the same file twice. I think the second file you are referring to is link.txt. I found it using the command:
grep -nr /usr/local/cuda/
In link.txt, I removed the arguments (or parts of arguments) that referred to this path, and it worked.
|
st84824
|
I ran
grep -nr /usr/local/cuda
inside the libtorch source directory, and I got these results:
share/cmake/Caffe2/Caffe2Targets.cmake:82: INTERFACE_LINK_LIBRARIES "caffe2::cudart;c10_cuda;caffe2;caffe2::cufft;caffe2::curand;caffe2::cudnn;/usr/local/cuda/lib64/libculibos.a;dl;/usr/local/cuda/lib64/libculibos.a;caffe2::cublas"
share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA.cmake:34:# ``CUDA_BIN_PATH=/usr/local/cuda1.0`` instead of the default
share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA.cmake:35:# ``/usr/local/cuda``) or set ``CUDA_TOOLKIT_ROOT_DIR`` after configuring. If
share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA.cmake:939: list(APPEND CUDA_LIBRARIES -Wl,-rpath,/usr/local/cuda/lib)
share/cmake/Gloo/GlooTargets.cmake:58: INTERFACE_LINK_LIBRARIES "/usr/local/cuda/lib64/libcudart.so;\$<LINK_ONLY:pthread>"
share/cmake/Gloo/GlooTargets.cmake:74: INTERFACE_LINK_LIBRARIES "/usr/local/cuda/lib64/libcudart.so;gloo;/pytorch/build/nccl/lib/libnccl_static.a;dl;rt"
Binary file lib/libcaffe2_gpu.so matches
I think this might be a problem, since not everybody has CUDA installed under /usr/.
|
st84825
|
I removed the references to /usr/local/cuda inside the libtorch source, and now no need to edit build.make and link.txt before running make. I will try to find the time to make a pull request.
|
st84826
|
My Dockerfile is from https://github.com/pytorch/pytorch/commit/4dbeb87e52982c2e1aecf901e6742fb9eb64c9c7#diff-305c1a8c354e4056b2827374b606efd0 and https://github.com/pytorch/pytorch/blob/master/docker/pytorch/Dockerfile
I changed the CUDA version from 10.0 to 9.0.
here is my Dockerfile:
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
ARG PYTHON_VERSION=3.6
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
curl \
ca-certificates \
libjpeg-dev \
libpng-dev && \
rm -rf /var/lib/apt/lists/*
RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=$PYTHON_VERSION numpy pyyaml scipy ipython mkl mkl-include ninja cython typing && \
/opt/conda/bin/conda install -y -c pytorch magma-cuda100 && \
/opt/conda/bin/conda clean -ya
ENV PATH /opt/conda/bin:$PATH
# This must be done before pip so that requirements.txt is available
WORKDIR /opt/pytorch
COPY . .
RUN git submodule update --init --recursive
RUN TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1 7.0+PTX" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" \
pip install -v .
RUN git clone https://github.com/pytorch/vision.git && cd vision && pip install -v .
WORKDIR /workspace
RUN chmod -R a+w .
I executed docker build, and I got this:
Sending build context to Docker daemon 3.072kB
Step 1/13 : FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
---> afc5ab1e9a0d
Step 2/13 : ARG PYTHON_VERSION=3.6
---> Using cache
---> 956aaeb0f1b7
Step 3/13 : RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake git curl ca-certificates libjpeg-dev libpng-dev && rm -rf /var/lib/apt/lists/*
---> Using cache
---> e18e7fec7691
Step 4/13 : RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && chmod +x ~/miniconda.sh && ~/miniconda.sh -b -p /opt/conda && rm ~/miniconda.sh && /opt/conda/bin/conda install -y python=$PYTHON_VERSION numpy pyyaml scipy ipython mkl mkl-include ninja cython typing && /opt/conda/bin/conda install -y -c pytorch magma-cuda100 && /opt/conda/bin/conda clean -ya
---> Using cache
---> 121b8d486e0b
Step 5/13 : ENV PATH /opt/conda/bin:$PATH
---> Using cache
---> 4f8332eed85b
Step 6/13 : WORKDIR /opt/pytorch
---> Using cache
---> 249592e692ad
Step 7/13 : COPY . .
---> a248c8bc4865
Step 8/13 : RUN ls
---> Running in 9dc1ca1e2c9f
Dockerfile
Removing intermediate container 9dc1ca1e2c9f
---> 0600255e2c98
Step 9/13 : RUN git submodule update --init --recursive
---> Running in 8c683be2721a
fatal: Not a git repository (or any of the parent directories): .git
|
st84827
|
Make sure your current working directory is inside the pytorch folder.
Also, you might want to change the magma-cuda100 version to magma-cuda90.
|
st84828
|
I trained a classifier and saved the model using:
torch.save(model, "/home/zaianir/Documents/code/tuto/classif/MNIST_model.pth")
I’m trying to train a new classifier on top of the pretrained saved model without its last layer.
I want to train only the parameters of the newly added layers (I don't want to update the saved parameters).
Here is some of my code:
class VAE2(nn.Module):
    def __init__(self):
        super(VAE2, self).__init__()
        self.fc3 = nn.Linear(50, 20)
        self.fc4 = nn.Linear(20, 10)

    def forward(self, x):
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.log_softmax(x)
        return x

class VAE(nn.Module):
    def __init__(self, VAE1, VAE2):
        super(VAE, self).__init__()
        self.VAE1 = VAE1
        self.VAE2 = VAE2

    def forward(self):
        x = self.VAE1
        x = self.VAE2(x)
        return x

def loss_function(inp, target):
    l = F.nll_loss(inp, target)
    return l

def train(train_dl, model, epoch_nb, lr1):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr1)
    train_loss1 = []
    for epoch in range(1, epoch_nb):
        model.train()
        train_loss = 0.0
        for idx, (data, label) in enumerate(train_dl):
            data, label = data.to(device), label.to(device)
            out = model(data)
            loss = loss_function(out, data)
            train_loss += loss.item()
            model.zero_grad()
            loss.backward()
            optimizer.step()
        av_loss = train_loss / len(train_dl.dataset)
        print('Epoch: {} Average loss: {:.4f}'.format(epoch, av_loss))

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=128, shuffle=True, **kwargs)

VAE1 = torch.load("/home/zaianir/Documents/code/tuto/classif/MNIST_model.pth")
VAE2_model = VAE2().to(device)
model = VAE(VAE1, VAE2_model)
epoch = 30
learning_rate = 0.001
train(train_loader, model, epoch, learning_rate)
Here are the different layers of my loaded model (VAE1): [screenshot of the printed model]
I’m new to pytorch and don’t know how to proceed. Is my approach correct?
Thank you for your help.
|
st84829
|
There are some issues in your code:
in VAE.forward you are not passing x to self.VAE1, so x will be the submodule, not its output
I would suggest saving and loading the state_dict instead of the complete model, as described in the Serialization docs
if you would like to remove the last layer from VAE1, you could e.g. replace it with an nn.Identity layer
to freeze the base model's parameters, set their requires_grad flag to False as described in the Finetuning tutorial (a sketch of these steps follows below)
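A minimal sketch of the suggestions above (the class, layer names, and state_dict file below are hypothetical stand-ins, not the actual saved model):

import torch
import torch.nn as nn

class Pretrained(nn.Module):  # stand-in for the saved classifier
    def __init__(self):
        super(Pretrained, self).__init__()
        self.fc1 = nn.Linear(784, 50)
        self.fc2 = nn.Linear(50, 10)  # last layer, to be removed

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

base = Pretrained()
# base.load_state_dict(torch.load("MNIST_model_state_dict.pth"))  # hypothetical file

base.fc2 = nn.Identity()           # replace the last layer
for param in base.parameters():    # freeze the pretrained parameters
    param.requires_grad = False

new_head = nn.Sequential(nn.Linear(50, 20), nn.ReLU(),
                         nn.Linear(20, 10), nn.LogSoftmax(dim=1))
model = nn.Sequential(base, new_head)
optimizer = torch.optim.Adam(new_head.parameters(), lr=1e-3)  # only the new layers are trained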
|
st84830
|
Afternoon,
I have a Autoencoder working now, but at the start i get the following warning:
UserWarning: Using a target size (torch.Size([16, 2048])) that is different to the input size (torch.Size([16, 1, 2048])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
16 is my batch size and the vector length is 2048, so I would expect to see 16,1,2048. I cannot understand where 16,2048 comes from…
How do I reshape the target to 16,1,2048 to solve the problem so that the warning goes away?
It looks like the input vector size is 16,2048, and using data=data.unsqueeze(1) changes this to 16,1,2048, but I still get the warning…
thanks,
Chaslie
|
st84831
|
If you are already changing the shape of the target tensor then you shouldn't be getting the warning. Can you post your code?
|
st84832
|
hi Tahir,
I get the warning after the loss function below:
for epoch in range(num_epochs):
    for data, target in train_loader:
        print("data=", data.shape)  # this gives the size as [16, 2048]
        data = data.cuda()
        z_loc, z_scale = model.Encoder(data)
        z = model.reparam(z_loc, z_scale)
        out = model.Decoder(z)
        loss = loss_fn(out, data, z_loc, z_scale)
        optimizer.zero_grad()
Which is after I have used the unsqueeze here:
def forward(self, data):
    data = data.unsqueeze(1)
    print("data_mod=", data.shape)  # this reshapes to [16, 1, 2048]
|
st84833
|
Hi, I am trying to run a script from GitHub: https://github.com/vsitzmann/deepvoxels. I have an RTX 2060 GPU on my laptop and an RTX 2080 Ti GPU on a server. I use the exact same conda environment on both machines, and both have CUDA version 10.1. I am able to run the code without a problem on my laptop, however I can't make it work on the server.
Initially, the code didn't work on either machine. I simply updated the version of pytorch on my laptop and that solved the problem. I ran this command on the server: conda install pytorch torchvision cudatoolkit=10.0 -c pytorch; it updated pytorch and torchvision as expected, however the same problem remains. Here is the output:
Begin training...
Traceback (most recent call last):
File "run_deepvoxels.py", line 395, in <module>
main()
File "run_deepvoxels.py", line 386, in main
train()
File "run_deepvoxels.py", line 183, in train
grid2world=grid_origin)
File "/store/usagers/tamez/deepvoxels/projection.py", line 90, in comp_lifting_idcs
voxel_bounds_min, voxel_bounds_max, _ = self.compute_frustum_bounds(camera_to_world, world2grid)
File "/store/usagers/tamez/deepvoxels/projection.py", line 73, in compute_frustum_bounds
p = torch.bmm(camera_to_world.repeat(8, 1, 1), corner_points)
RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:450
This is my first post here so if I did not provide some information please let me know.
Thanks in advance.
|
st84834
|
Solved by Tal_Mezheritsky in post #2
After reinstalling pytorch multiple times through conda I could not make the script work.
I identified the problem by running torch.version.cuda in the python console on the server and realising python was using CUDA 9 instead of CUDA 10. The command torch.__file__ then showed that it was not even …
|
st84835
|
After reinstalling pytorch multiple times through conda I could not make the script work.
I identified the problem by running torch.version.cuda in the python console on the server and realising python was using CUDA 9 instead of CUDA 10. The command torch.__file__ then showed that it was not even using torch from my conda environment, it was using another torch install on the server. Once I got rid of the other pytorch install torch.version.cuda showed CUDA 10 and everything works now.
|
st84836
|
Afternoon,
I am hoping someone can help me; I am getting the following error message:
output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [2 x 10], m2: [2 x 10] at C:/w/1/s/tmp_conda_3.7_044431/conda/conda-bld/pytorch_1556686009173/work/aten/src\THC/generic/THCTensorMathBlas.cu:268
class Decoder(nn.Module):
    def __init__(self, lat_dim):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(lat_dim, 10)
        self.decode = nn.Sequential(OrderedDict([
            # ('fc1', nn.Linear(lat_dim, 30)),
            ('HT1', nn.Hardtanh()),
            ('fc2', nn.Linear(30, 64)),
            ('co1', nn.ConvTranspose1d(1, 1, 2, stride=2)),             # 128
            ('co2', nn.ConvTranspose1d(1, 1, 2, stride=2)),             # 256
            ('up1', nn.Upsample(scale_factor=2)),                       # 512
            ('co3', nn.Conv1d(1, 1, 8, stride=2, padding=4)),           # 256
            ('co4', nn.ConvTranspose1d(1, 1, 8, stride=4, padding=2)),  # 1024
            ('co5', nn.ConvTranspose1d(1, 1, 4, stride=2, padding=1)),  # 2048
            ('up2', nn.Upsample(scale_factor=2)),                       # 4096
            ('co6', nn.Conv1d(1, 1, 8, stride=4, padding=4)),           # 1024
            ('co7', nn.ConvTranspose1d(1, 1, 2, stride=1)),             # 1025
            ('co8', nn.ConvTranspose1d(1, 1, 2, stride=2)),             # 2050
            ('up3', nn.Upsample(scale_factor=2)),                       # 4100
            ('co9', nn.Conv1d(1, 1, 16, stride=8, padding=6)),          # 512
            ('co10', nn.ConvTranspose1d(1, 1, 8, stride=4, padding=8)), # 2048
        ]))

    def forward(self, z):
        # x.view returns a new tensor with the same data as the self tensor but a different shape
        print(z.shape)
        z = F.relu(self.fc1(z))
        print(z)
        print(z.shape)
        z = z.view(-1, 32, 21)
        z = self.decode(z)
        out = torch.sigmoid(z)
        return out
Can anyone help me please?
Cheers
Chaslie
|
st84837
|
Solved by ptrblck in post #11
I’m not familiar with your use case, but you could reshape the output of your linear layer before feeding it to the nn.ConvTranpose1d layer or just add a dummy channel dimension using:
output = output.unsqueeze(1)
Based on the number of input channels in co1, it seems the dummy channel dimension i…
|
st84838
|
Some additional information - it may help, or it may mean nothing.
changing
summary(model.Decoder,(1,10))
to
summary(model.Decoder,(1,2))
changed the error to
RuntimeError: shape ‘[-1, 32, 21]’ is invalid for input of size 60
so i changed z = z.view(-1, 32, 21) to z = z.view(-1, 5, 6)
RuntimeError: size mismatch, m1: [10 x 6], m2: [30 x 64] at C:/w/1/s/tmp_conda_3.7_044431/conda/conda-bld/pytorch_1556686009173/work/aten/src\THC/generic/THCTensorMathBlas.cu:268
Which leads me to believe the problem is somewhere within the decoder input tensor size, but I don't know how to fix it…
help???
Chaslie
|
st84839
|
I guess the error is thrown while passing the output of fc2 to co1?
nn.ConvTranspose1d expects a 3-dimensional input, so you might need to reshape the output of your linear layer (if you are using a 2-dimensional input for it).
|
st84840
|
ptrblck,
would that be best done by moving the z.view term from def forward, or by adding a z.view term between fc2 & co1?
|
st84841
|
Personally, I would apply the linear layer separately in forward and reshape the output using view after it.
Once you have the right shape, just pass it to the nn.Sequential block, which would then only contain the conv/transposed conv/upsampling layers (3-dim input).
|
st84842
|
Hi ptrblck,
Brilliant, you are a genius (I have been pulling my hair out all day over this one)…
Now I have:
AttributeError: 'list' object has no attribute 'cuda'
for epoch in range(num_epochs):
    for data in train_dataset:
        data = data.cuda()
This is really frustrating because this worked as a normal autoencoder, but the moment I converted it to a variational autoencoder it stopped working…
|
st84843
|
Could you post the code for your train_dataset?
It seems you are returning a list instead of a tensor or tuple of tensors.
|
st84844
|
hi Ptrblck,
I am loading the data from a local npy file:
train_array = np.load('load_directory')
x_train = train_array.transpose([2, 0, 1]).reshape(42000, 2048)
X_train = torch.Tensor(x_train)
y_train = np.arange(1, 42001, 1)
y_train = y_train.reshape(42000, 1)
y_train = torch.from_numpy(y_train)
train_dataset = TensorDataset(X_train, y_train)
train_dataset2 = DataLoader(train_dataset, batch_size=BATCHSIZE, shuffle=True)
I was having trouble with this and I don't understand why (though when I look at the data it says it's a tensor).
|
st84845
|
Thanks for the code.
Your DataLoader returns a data and target batch, so either unwrap it in the loop:
for batch in train_dataset2:
    data = batch[0]
    target = batch[1]
or assign the two variables in the loop statement:
for data, target in train_dataset2:
    ...
Also, I would recommend naming the DataLoader something like train_loader to avoid confusion.
|
st84846
|
Thanks for all your help,
final silly question,
I now get:
RuntimeError: Expected 3-dimensional input for 3-dimensional weight 1 1, but got 2-dimensional input of size [16, 2048] instead
I believe this is showing the 16 (batch size) and 2048 (length of the vector), but do I need to resize this?
|
st84847
|
I’m not familiar with your use case, but you could reshape the output of your linear layer before feeding it to the nn.ConvTranpose1d layer or just add a dummy channel dimension using:
output = output.unsqueeze(1)
Based on the number of input channels in co1, it seems the dummy channel dimension is probably what you want.
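A minimal sketch of that pattern (layer sizes and shapes below are placeholders, not the thread's exact model): apply the linear layer in forward, add the channel dimension, and only then call the conv-only nn.Sequential block.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, lat_dim):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(lat_dim, 64)
        self.decode = nn.Sequential(               # conv / transposed-conv layers only
            nn.ConvTranspose1d(1, 1, 2, stride=2),
            nn.Conv1d(1, 1, 3, padding=1),
        )

    def forward(self, z):
        z = F.relu(self.fc1(z))   # [batch, 64]
        z = z.unsqueeze(1)        # [batch, 1, 64]: 3-dim input for the 1d conv layers
        return torch.sigmoid(self.decode(z))

out = Decoder(10)(torch.randn(16, 10))
print(out.shape)  # torch.Size([16, 1, 128])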
|
st84848
|
Is it possible to load a onnx model as a pytorch model directly? Or can you only load it into caffe2?
|
st84849
|
Currently, PyTorch does not support loading ONNX models. Yes, you can load an ONNX model into Caffe2.
|
st84850
|
Is there any plan to implement the ability to load an ONNX model into pytorch? It seems like an exchange format should have that capability.
|
st84851
|
Is there anybody working on this? If yes, the GitHub issue link is much appreciated!
|
st84852
|
When using JIT to trace a Python model, it throws an error:
ValueError: TracedModules don't support parameter sharing between modules
But I do not understand the difference between parameter sharing and reuse; aren't they the same thing?
We define a model in a class, define some modules in __init__, and then call them one by one in forward. Isn't this reuse? Then what is the parameter-sharing case?
In my model, how do I find the layers that share parameters, when everywhere I thought it was module reuse rather than parameter sharing?
From this architecture, can one figure out whether it shares parameters or not?
unet8: TUM(
(layers): Sequential(
(0): BasicConv(
(conv): Conv2d(384, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(toplayer): Sequential(
(0): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(latlayer): Sequential(
(0): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): BasicConv(
(conv): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
(smooth): Sequential(
(0): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(1): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(3): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(4): BasicConv(
(conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(256, eps=1e-05, momentum=0.01, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
)
)
|
st84853
|
Could you post the code to initialize this model so that we could check for parameter sharing?
|
st84854
|
Thanks for your reply, ptrblck!
I made an experiment on this:
from torch import nn
from alfred.utils.log import logger as logging
from alfred.dl.torch.common import device
import torch

class BasicConv(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1,
                 groups=1, relu=True, bn=True, bias=False):
        super(BasicConv, self).__init__()
        self.out_channels = out_planes
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias)
        self.bn = nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01, affine=True) if bn else None
        self.relu = nn.ReLU(inplace=True) if relu else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x

class FuckNet(nn.Module):
    def __init__(self):
        super(FuckNet, self).__init__()
        self.welcome_layer = BasicConv(3, 256, 3, 1, 1)
        self.fuck_layers = nn.Sequential()
        for i in range(5):
            self.fuck_layers.add_module('{}'.format(i), BasicConv(256, 256, 3, 1, 1))

    def forward(self, x):
        x = self.welcome_layer(x)
        return self.fuck_layers(x)

class FuckNet2(nn.Module):
    def __init__(self):
        super(FuckNet2, self).__init__()
        self.welcome_layer = BasicConv(3, 256, 3, 1, 1)
        self.block_1 = BasicConv(256, 256, 3, 1, 1)
        self.fuck_layers = nn.Sequential()
        for i in range(5):
            self.fuck_layers.add_module('{}'.format(i), self.block_1)

    def forward(self, x):
        x = self.welcome_layer(x)
        return self.fuck_layers(x)

if __name__ == '__main__':
    model1 = FuckNet()
    model2 = FuckNet2()
    model2.eval().to(device)

    # start to trace model
    example = torch.rand(1, 3, 512, 512).to(device)
    traced_script_module = torch.jit.trace(model2, example)
    traced_script_module.save('test.pt')
Net2 cannot be traced; the error is:
self._modules[name] = TracedModule(submodule, id_set, optimize=optimize)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 1046, in init_then_register
original_init(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 1469, in __init__
check_unique(param)
File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 1461, in check_unique
raise ValueError("TracedModules don't support parameter sharing between modules")
ValueError: TracedModules don't support parameter sharing between modules
My questions are:
Why can't Net2 be traced? Reusing a submodule like this is a common pattern when building models.
How do I work around this once my whole model is written in the Net2 style?
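One workaround I am considering (just a sketch; not sure it is the intended fix) is to register the shared block only once and reuse it inside forward, so no two registered submodules share parameters:
class FuckNet3(nn.Module):
    def __init__(self):
        super(FuckNet3, self).__init__()
        self.welcome_layer = BasicConv(3, 256, 3, 1, 1)
        # register the shared block a single time
        self.block_1 = BasicConv(256, 256, 3, 1, 1)

    def forward(self, x):
        x = self.welcome_layer(x)
        # reuse the same block five times instead of adding it to a
        # Sequential five times; the weights are still shared across calls
        for _ in range(5):
            x = self.block_1(x)
        return x
This keeps the same computation as Net2 (one set of weights applied five times) while registering the module only once.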
|
st84855
|
So I was experimenting with my NN and, while changing F.leaky_relu(x, 0.2) to F.relu(x), I accidentally changed it to F.relu(x, 0.2). The training progressed fine, but is there any significance to the 0.2?
|
st84856
|
Solved by ptrblck in post #2
The 0.2 in F.relu will be treated as the inplace argument, which will be interpreted as inplace=True.
|
st84857
|
The 0.2 in F.relu will be treated as the inplace argument, which will be interpreted as inplace=True.
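A quick way to see this (a minimal check; the output values are identical, only the inplace behaviour differs):
import torch
import torch.nn.functional as F

x = torch.randn(4)

# F.relu's signature is relu(input, inplace=False), so a positional 0.2 is
# bound to `inplace` and treated as truthy, i.e. the same as inplace=True
out = F.relu(x.clone(), 0.2)
ref = F.relu(x.clone())
print(torch.equal(out, ref))  # True - the 0.2 does not act as a slope
In particular, the 0.2 is not a negative slope; for that you would need F.leaky_relu(x, 0.2).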
|
st84858
|
I have a simple, shallow neural network. When I train this model, GPU utilisation is only ~35%. Could anyone suggest what might cause this under-utilisation of resources? Training examples are fed to the model from 32 processes using the torch.utils.data.DataLoader API with pinned memory. I use torch.nn.Embedding with sparse gradients as the input layer. The model is trained on a Tesla P100 in GCP. The host machine has 16 vCPUs. According to cProfile, the run_backward method takes the most time, and the CPU -> GPU transfer (the cuda method) is second. According to the autograd profiler, pin_memory takes the most time and matrix multiplication (bmm) is second.
I would appreciate any suggestions for improving GPU utilisation.
|
st84859
|
Hello,
I have 10 input color images ([10, 3, 224, 224]) and I want to convert them to grayscale images in PyTorch:
torch.Size([10, 3, 224, 224]) --> torch.Size([10, 224, 224])
[image: rgb2gray.jpg]
How can I do that? Should I choose one channel from RGB, or is there a function to convert RGB to grayscale?
|
st84860
|
The size after converting to grayscale isn't [10, 224, 224] but [10, 1, 224, 224].
You can do this easily with
torchvision.transforms.Grayscale(num_output_channels=1)
Here is a little tutorial for this:
https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
If you need the form [10, 224, 224] you can use
pictures.view(10, 224, 224)
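For example (a minimal sketch; Grayscale works on PIL images inside a transform pipeline, while the reshape handles the batched tensor case):
import torch
from torchvision import transforms

# per-image pipeline, e.g. inside a Dataset:
to_gray = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # RGB PIL image -> 1-channel PIL image
    transforms.ToTensor(),                        # -> tensor of shape [1, H, W]
])

# for a batch tensor of shape [10, 1, 224, 224], drop the channel dimension:
batch = torch.rand(10, 1, 224, 224)
print(batch.view(10, 224, 224).shape)  # torch.Size([10, 224, 224])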
|
st84861
|
So the lazy way to do this is gray = img.mean(1) (but be careful when you have an alpha channel).
However, that isn't a good way, as the RGB channels are not equally bright. People have thought about this and came up with various weights. A great way to apply these weights is to carry out a pointwise convolution (the kind you also see in ResNets and friends to change the number of channels; here 3 channels in, one out, 1x1 kernel, using torch.nn.functional.conv2d with a weight of shape [1, 3, 1, 1]). But a weighted average is not the end of the story either; there is gamma correction etc.
There are many more links on Stack Overflow; particularly noteworthy is this study.
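A small sketch of the pointwise-convolution version, using the common ITU-R BT.601 weights as one possible choice:
import torch
import torch.nn.functional as F

imgs = torch.rand(10, 3, 224, 224)  # batch of RGB images

# one common set of luma weights, applied as a 1x1 (pointwise) convolution
weight = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
gray = F.conv2d(imgs, weight)   # shape [10, 1, 224, 224]
gray = gray.squeeze(1)          # shape [10, 224, 224]

# the lazy unweighted version from above, for comparison
gray_mean = imgs.mean(1)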
Best regards
Thomas
P.S.: @Filos92's way works on PIL images, not tensors, but uses whatever PIL provides for us (hopefully a good choice). The more PyTorch-y way to get rid of the singleton dimension would be img_gray.squeeze(1) rather than .view.
|
st84862
|
Hey Everyone,
I train a generative network to produce pictures from a few input variables.
Right now it works surprisingly well: all generated pictures have shapes that are indistinguishable from the originals.
My problem is that the position of the object in the picture matters more to me than its shape, but every generated object ends up in or close to the center.
Do you have any idea how to train a network that is more sensitive to the position of the object?
|
st84863
|
I have a CUDA-capable GPU (Nvidia GeForce GTX 1070) and I have installed both CUDA (version 10) and the CUDA-enabled build of PyTorch.
Although my GPU is detected and I have moved all the tensors to the GPU, my CPU is used instead of the GPU, as I see almost no GPU usage when I monitor it.
Here is the code:
num_epochs = 10
batch_size = 20
learning_rate = 0.0001
log_interval = 50
class AndroModel(torch.nn.Module):
def __init__(self, input_size):
super(AndroModel, self).__init__()
self.kernel_size = 3
self.padding = 0
self.stride = 1
self.input_size = input_size
self.conv1 = nn.Sequential(
nn.Conv1d(in_channels=1, out_channels=16, kernel_size=self.kernel_size, padding=self.padding,
stride=self.stride, bias=False),
nn.ReLU(inplace=True)
)
self.conv2 = nn.Sequential(
nn.Conv1d(in_channels=16, out_channels=32, kernel_size=self.kernel_size, padding=self.padding,
stride=self.stride, bias=False),
nn.ReLU(inplace=True)
)
self.conv3 = nn.Sequential(
nn.Conv1d(in_channels=32, out_channels=64, kernel_size=self.kernel_size, padding=self.padding,
stride=self.stride, bias=False),
nn.ReLU(inplace=True)
)
self.conv4 = nn.Sequential(
nn.Conv1d(in_channels=64, out_channels=128, kernel_size=self.kernel_size, padding=self.padding,
stride=self.stride, bias=False),
nn.ReLU(inplace=True)
)
self.conv5 = nn.Sequential(
nn.Conv1d(in_channels=128, out_channels=256, kernel_size=self.kernel_size, padding=self.padding,
stride=self.stride, bias=False),
nn.ReLU(inplace=True)
)
self.num_conv_layers = 5
last_conv_layer = self.conv5
new_input_size = self.calculate_new_width(self.input_size, self.kernel_size, self.padding, self.stride, self.num_conv_layers, max_pooling=None)
out_channels = last_conv_layer._modules['0'].out_channels
dimension = out_channels * new_input_size
self.fc1 = nn.Sequential(
nn.Linear(in_features=dimension, out_features=3584),
nn.Dropout(0.5))
self.fc2 = nn.Sequential(
nn.Linear(in_features=3584, out_features=1792),
nn.Dropout(0.5))
self.fc3 = nn.Sequential(
nn.Linear(in_features=1792, out_features=448),
nn.Dropout(0.5))
self.fc4 = nn.Sequential(
nn.Linear(in_features=448, out_features=112),
nn.Dropout(0.5))
self.fc5 = nn.Sequential(
nn.Linear(in_features=112, out_features=28),
nn.Dropout(0.5))
self.fc6 = nn.Sequential(
nn.Linear(in_features=28, out_features=6),
nn.Dropout(0.5))
self.fc7 = nn.Sequential(
nn.Linear(in_features=6, out_features=2))
def forward(self, x):
x = x.reshape((-1, 1, self.input_size))
output = self.conv1(x)
output = self.conv2(output)
output = self.conv3(output)
output = self.conv4(output)
output = self.conv5(output)
output = output.view(output.size(0), -1)
output = self.fc1(output)
output = self.fc2(output)
output = self.fc3(output)
output = self.fc4(output)
output = self.fc5(output)
output = self.fc6(output)
output = self.fc7(output)
return output
@staticmethod
def calculate_new_width(input_size, kernel_size, padding, stride, num_conv_layers, max_pooling=2):
new_input_size = input_size
for i in range(num_conv_layers):
new_input_size = ((new_input_size - kernel_size + 2 * padding) // stride) + 1
if max_pooling is not None:
new_input_size //= max_pooling
return new_input_size
class AndroDataset(Dataset):
def __init__(self, features_as_ndarray, classes_as_ndarray):
self.features = torch.from_numpy(features_as_ndarray).float().to(device)
self.classes = torch.from_numpy(classes_as_ndarray).float().to(device)
def __getitem__(self, index):
return self.features[index], self.classes[index]
def __len__(self):
return len(self.features)
def main():
start = time()
global device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Device: {}'.format(device))
if torch.cuda.is_available():
print('GPU Model: {}'.format(torch.cuda.get_device_name(0)))
csv_data = pd.read_csv('android_binary.csv')
num_of_features = csv_data.shape[1] - 1
x = csv_data.iloc[:, :-1].values
y = csv_data.iloc[:, -1].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
training_data = AndroDataset(x_train, y_train)
test_data = AndroDataset(x_test, y_test)
print('\n~~~~~~~~ TRAINING HAS STARTED ~~~~~~~~')
print('# of training instances: {}'.format(len(training_data)))
train_loader = DataLoader(dataset=training_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=True)
model = AndroModel(num_of_features)
model = model.to(device)
print('Model Overview:')
print(model, '\n')
criterion = nn.CrossEntropyLoss()
criterion = criterion.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
losses_in_epochs = []
# training
    total_step = len(train_loader)  # also equal to dataset size / batch size
for epoch in range(num_epochs):
losses_in_current_epoch = []
for i, (features, classes) in enumerate(train_loader):
features, classes = features.to(device), classes.to(device, dtype=torch.int64)
optimizer.zero_grad()
output = model(features)
loss = criterion(output, classes)
loss.backward()
optimizer.step()
if (i + 1) % log_interval == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, i + 1, total_step,
loss.item()))
losses_in_current_epoch.append(loss.item())
avg_loss_current_epoch = 0
for tmp_loss in losses_in_current_epoch:
avg_loss_current_epoch += tmp_loss
avg_loss_current_epoch /= len(losses_in_current_epoch)
print('End of the epoch #{}, avg. loss: {:.4f}'.format(epoch + 1, avg_loss_current_epoch))
losses_in_epochs.append(avg_loss_current_epoch)
print('Average loss: {:.4f}'.format(losses_in_epochs[-1]))
print(f'Training Duration (in minutes): {(time() - start) / 60}')
print('\n~~~~~~~~ TEST HAS STARTED ~~~~~~~~')
print('# of test instances: {}'.format(len(test_data)))
# test
accuracy = 0
with torch.no_grad():
correct = 0
for features, classes in test_loader:
features, classes = features.to(device), classes.to(device, dtype=torch.int64)
output = model(features)
output = output.to(device)
_, predicted = torch.max(output.data, 1)
correct += (predicted == classes).sum().item()
accuracy = 100 * correct / len(test_loader.dataset)
print('Accuracy of the model on the {} test instances: {:.4f} %'.format(len(test_loader.dataset), accuracy))
|
st84864
|
Solved by ptrblck in post #5
That’s strange, since the data loading time seem to be completely hidden behind the computation.
I just tried your code on my machine and simplified it a bit:
removed the test loop
used random inputs (torch.randn as data and torch.randint as target)
used an input batch of [20, 1, 100]
Using thi…
|
st84865
|
Do you see any peaks in the GPU usage?
Also, could you just pass random data to your GPU and see if the utilization increases?
You might also want to time your data loading as shown in the ImageNet example to see if you have a bottleneck there.
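For instance, something along these lines (a rough sketch reusing the names from your code and assuming binary targets):
features = torch.randn(batch_size, num_of_features, device=device)
classes = torch.randint(0, 2, (batch_size,), device=device)

model = AndroModel(num_of_features).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for _ in range(1000):
    optimizer.zero_grad()
    output = model(features)  # no data loading or host-to-device copy involved
    loss = criterion(output, classes)
    loss.backward()
    optimizer.step()
If the utilization goes up with this, the bottleneck is most likely the data loading or the transfer; if not, the model might simply be too small to saturate the GPU.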
|
st84866
|
Thanks for your interest. Regarding your question, no, unfortunately I have not seen any peaks in the GPU usage. Here are the values of the two average meters, used in the same way as in the ImageNet example you pointed me to:
Time 0.025 ( 0.027)
Data 0.000 ( 0.000)
|
st84867
|
Although the PyCharm process's power usage is very high and the GPU copy engine is in use, the GPU utilization is as low as 4%:
[screenshot: a.jpg, showing GPU utilization around 4%]
|