st83668 | Hi, I’ve been using sklearn for a while in a personal project, and it’s generally been very good. I’ve built a model that works pretty well using their built-in LogisticRegression. However I know there are some features that are not linear responses. And I also know generally how to use PyTorch, so I wanted to try out incorporating that into my project. Ideally, I’m looking to run what I believe are the non-linear features through some shallow NNs, and then concatenate those with the linear features into a basic LogisticRegression model. One key thing I need is for the probabilities to be well-calibrated. NNs typically don’t provide this. But sklearn’s LogisticRegression does this extremely well.
So! I want to replicate sklearn’s logistic regression in PyTorch, and then build the model described above in an end-to-end fashion. However, I can’t seem to get it to replicate, and I’m hoping I can get some advice. If you look at the sklearn docs, they show the loss function, and from the source code, they appear to use LibLinear, and are essentially optimizing an NLLLoss with an L2 penalty. Seemed easy enough with PyTorch, but no luck!
Here’s what I’ve tried…
class DeepLogisticRegression(nn.Module):
    def __init__(self, num_in):
        super().__init__()
        output_units = 2
        self.linear = nn.Linear(num_in, output_units)
        self.sigmoid = nn.Sigmoid()
        self.sequential = nn.Sequential(self.linear, self.sigmoid)

    def forward(self, X):
        return self.sequential(X.float())

# Optimizer = Adagrad(weight_decay=2, lr=0.001, batch_size=4096)
Note I’m fitting the above using Adagrad, and the weight decay is meant to replicate sklearn’s L2 penalty. I run this for about 30 epochs. I’ve tried various configs of the params, including SGD, different weight decay, different lr, etc. But my PyTorch version always ends up just getting to a local minimum, where it essentially always picks the class that is slightly more common (62.26% of total samples). I can’t seem to get it to actually “train” and find a legit model. But sklearn’s implementation has no problem with this.
Any ideas or thoughts would be so much appreciated! Thank you, thank you!
PS: I guess theoretically if I could create transformations of my non-linear features and just hand those directly into an sklearn LogisticRegression, and train the whole thing end to end, that would be sweet too. I don’t think that’s possible though? |
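One subtle mismatch worth checking (an editorial sketch, not from the thread): sklearn adds ||w||²/(2C) to the loss summed over all N samples and does not penalize the bias, while PyTorch's weight_decay acts on every parameter at every optimizer step of the mean loss, so weight_decay=2 is an extremely strong penalty. A minimal sketch, assuming X is an [N, num_in] float tensor and y an [N] long tensor of class indices:

import torch
import torch.nn as nn

# Assumptions: X, y, num_in exist as described above; C is sklearn's
# inverse regularization strength (default 1.0).
N, C = X.shape[0], 1.0
model = nn.Linear(num_in, 2)       # raw logits; softmax lives inside the loss
criterion = nn.CrossEntropyLoss()  # == NLLLoss on log-softmax

# sklearn minimizes sum_i loss_i + ||w||^2 / (2*C); on the *mean* loss this
# corresponds to a penalty of ||w||^2 / (2*C*N) on the weights only.
optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0)

def closure():
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss = loss + model.weight.pow(2).sum() / (2 * C * N)  # skip the bias
    loss.backward()
    return loss

for _ in range(100):
    optimizer.step(closure)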
st83669 | What libraries or frameworks do you wish existed for Pytorch?
An example of this might be a set of operations you wish were batched, or a set of functions that do not exist yet.
My team and I were thinking of making a sparse training library for Pytorch, but would like to survey users to see if there’s anything more pressing. |
st83670 | Hi! I’d say a data visualisation library that can hook into Visdom (& tensorboardX I guess…). Visdom is great and all in that it gives you the freedom to plot anything, but I believe that features on top of it would be great for convenience. It could help with gradient visualisation, model explainability, model architecture, a deeper look into losses which consist of different summations, etc.
Another one that I feel is missing is some sort of metric library. Precision, recall, AP & mAP for bounding boxes, segmentation metrics and god knows what.
Edit: I’m not entirely sure about what content exists in other frameworks such as ignite & fastai. Do they already cover this? |
st83671 | Yes, I agree that easily integrated model explainability would be interesting.
@Oli training viz is something that can be more or less simply inserted into ignite; there are already tensorboard, visdom etc. loggers provided, and we can plot out-of-the-box losses, scalar/vector metrics, model weight norms, grads etc: https://pytorch.org/ignite/contrib/handlers.html#ignite.contrib.handlers.tensorboard_logger.TensorboardLogger
A metric library for all cases (as you cite detection tasks) would be awesome. Today in ignite, we have a lot of basic metrics for classification and segmentation tasks, and a bunch of regression metrics. Detection mAP and AP will be one of the next steps… |
Edit: I’m not entirely sure about what content exists in other frameworks such as ignite & fastai. Do they already cover this?
Hope to answer the question from ignite side. |
st83672 | I am trying to rewrite the inference part so as to also return hidden-layer activations like embeddings.
Is this a correct implementation? Any potential issues with it?
class FeatureExtration(torch.nn.Module):
    def __init__(self, pretrained_model):
        super().__init__()
        self.__dict__ = pretrained_model.__dict__.copy()

    def forward(self, x):
        # ... copy original forward ...
        return proba, embedings, some_other_layer

model2 = FeatureExtration(pretrained_model) |
st83673 | Solved by ptrblck in post #2 |
st83674 | If you would like to return additional values in the forward method, but keep the model at it was otherwise, you could directly derive your class from the base model:
class FeatureExtractedModel(BaseModel)
This would make the implementation cleaner in my opinion. |
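A minimal sketch of this suggestion, using torchvision's ResNet18 as a concrete stand-in for the base model (the layer names below are torchvision's; this is an illustration, not the thread's exact model):

import torchvision.models as models

# Derive from the base class; parameters, buffers and state_dict handling
# are inherited unchanged -- only forward() is overridden.
class FeatureExtractedModel(models.ResNet):
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        embedding = self.layer4(x)                 # keep an intermediate activation
        pooled = self.avgpool(embedding).flatten(1)
        logits = self.fc(pooled)
        return logits, embedding                   # extra return value

# Load pretrained weights into the derived class (resnet18 = BasicBlock, [2,2,2,2]):
model = FeatureExtractedModel(models.resnet.BasicBlock, [2, 2, 2, 2])
model.load_state_dict(models.resnet18(pretrained=True).state_dict())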
st83675 | I am moving from CUDA C to PyTorch to achieve high-performance parallel computing.
If I need to add a constant variable to a tensor, the following way certainly works.
t = torch.ones(10, 10000, 1000)  # note: torch.ones, not torch.tensor.ones
k_a = range(10)
for i in range(10):
    t[i] = t[i] + k_a[i]
But to achieve better performance:
Do I need to copy k_a to GPU memory first?
Can I copy k_a to GPU constant memory?
Or is there anything else I can do to improve it?
Thanks, |
st83676 | I would suggest to create k_a using torch.arange(10).float() instead of the Python range.
Loops are generally slower than vectorized code, so you could unsqueeze k_a in dim1 and dim2 and just add it in a single call:
device = 'cuda'
t = torch.ones(10, 10000, 1000, device=device)
k_a = torch.arange(10, device=device).float()
ret = t + k_a.view(-1, 1, 1)
If you set device='cuda', this operation will automatically be executed on the GPU. |
st83677 | Hi All. I have a trained Xception model that I want to use as a backbone in another network. In particular I want to drop all the layers starting with the global average pooling. In the code below this means dropping the last block (GAPClassifier).
The problem is that after doing so I get a NotImplementedError.
I’m assuming that the issue is how I’m dropping the last block, namely
model=torch.nn.Sequential(*(list(model.children())[:-1]))
Are there situations where dropping a block is more complicated? What’s the best way to handle this?
Here is a run through of what is happening (unfortunately I can’t include the entire model in this post). Thanks!
test_batch=torch.rand((3,14,SIZE,SIZE)).cuda()
model(test_batch) #<== executes successfully
def block_names(model, end=None, start=None):
    layer_list = list(model.children())
    for i, layer in enumerate(layer_list[start:end]):
        print(i, layer._get_name())
block_names(model)
"""OUTPUT
0 EntryBlock
1 ModuleList
2 SeparableStack
3 XBlock
4 SeparableStack
5 GAPClassifier
"""
model=torch.nn.Sequential(*(list(model.children())[:-1]))
block_names(model)
"""OUTPUT
0 EntryBlock
1 ModuleList
2 SeparableStack
3 XBlock
4 SeparableStack
"""
model(test_batch) # throws NotImplementedError |
st83678 | By wrapping submodules in an nn.Sequential block you are assuming all children are applied sequentially in the original model.
While this might be often the case, some models are a bit more complicated so that this simple container won’t restore the original work flow anymore or will throw an error.
E.g. this .view() operation will be missing, if you are extracting all child modules from this model.
I’m not sure, which module throws the NotImplementedError and it’s a bit strange, as I would expect some shape mismatch etc.
If you just want to remove the last layer, you could replace it with an nn.Identity module (docs). |
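A sketch of the nn.Identity route, assuming the classifier block is exposed as an attribute (the attribute name below is a guess based on the printed block list):

import torch.nn as nn

# Overwrite the classifier attribute in place instead of rebuilding the
# model as an nn.Sequential; the original control flow stays intact.
model.GAPClassifier = nn.Identity()   # forward() now passes features through
out = model(test_batch)               # same model, no classifier head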
st83679 | class Conv2dQuant(nn.Module):
    def __init__(self):
        super(Conv2dQuant, self).__init__()
        self.conv_weight = 5
        self.conv_bias = 5
        self.scale = 3

    def forward(self, x):
        x = torch.round(x / self.scale) * self.scale
        x = x * self.conv_weight + self.conv_bias
        return x
If I define the module above, I know how it forwards. However, when it comes to backward, I do not know how the round function behaves, and I wonder how to autograd this function: x = torch.round(x / self.scale) * self.scale |
st83680 | I tried to train this function with requires_grad=True on the weight and bias, and then I got:
7 def forward(self, x):
----> 8 x = torch.round(x / self.scale) * self.scale
9 x = x * self.conv_weight + self.conv_bias
10 return x
RuntimeError: round_vml_cpu not implemented for 'Long' |
st83681 | I’m not sure, if I understood the use case correctly, but if you would like to train self.conv_weight and self.conv_bias, you should define them as nn.Parameters (containing float values):
class Conv2dQuant(nn.Module):
def __init__(self):
super(Conv2dQuant, self).__init__()
self.conv_weight = nn.Parameter(torch.tensor([5.]))
self.conv_bias = nn.Parameter(torch.tensor([5.]))
self.scale = 3
def forward(self, x):
x = torch.round(x / self.scale) * self.scale
x = x * self.conv_weight + self.conv_bias
return x
model = Conv2dQuant()
x = torch.randn(1, 1)
output = model(x)
output.backward()
print(model.conv_weight.grad)
tensor([3.]) |
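If gradients should additionally flow through the rounding itself (torch.round's derivative is zero almost everywhere, so gradients w.r.t. the pre-rounding input vanish), a common trick, not shown in this thread, is a straight-through estimator via a custom autograd.Function. A minimal sketch:

import torch

class RoundSTE(torch.autograd.Function):
    """Round in the forward pass, pass the gradient straight through backward."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pretend round() was the identity.
        return grad_output

x = torch.randn(4, requires_grad=True)
scale = 3.0
y = RoundSTE.apply(x / scale) * scale
y.sum().backward()
print(x.grad)   # all ones: the gradient passed through the rounding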
st83682 | Hello! I am trying to implement something similar to this (the CartPole example), using PyTorch. Basically I want to build a NN that predicts an action given the state of the system, computes the next state given the action, then backpropagates through everything. They have code implemented in Julia here, and here is the beginning of my code for PyTorch:
import gym
import argparse
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
env = gym.make('CartPole-v0')
class NN_cart(nn.Module):
    def __init__(self):
        super().__init__()
        # state_len was not defined in the original snippet; for CartPole it
        # is presumably the state dimension (4):
        state_len = env.observation_space.shape[0]
        self.net = nn.Sequential(
            nn.Linear(state_len, 24),
            nn.ReLU(True),
            nn.Linear(24, 48),
            nn.ReLU(True),
            nn.Linear(48, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        x = self.net(x)
        x = (torch.sign(x) + 1) / 2
        return x
model = NN_cart().cuda()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
done = False
env.reset()
state = torch.from_numpy(env.state).float().cuda()
while not done:
    model.train()
    optimizer.zero_grad()
    action = int(model(state)[0].cpu().data.numpy())
    state, reward, done, info = env.step(action)
    state = torch.from_numpy(state).float().cuda()
    state.requires_grad = True
    loss = state[2]**2
    loss.backward()
    optimizer.step()
I get no error, but the NN doesn’t learn. So one thing is the gradient of the sign function. In the Julia implementation they define their own gradient but I am not sure how to do it. Also, is my code the way it is right now able to propagate through everything (including the .step() function)? Thank you! |
st83683 | If I’m not mistaken, the gradient of the sign method should be zero everywhere.
Since the authors implemented a custom backward method, you could have a look at this tutorial to see how to do the same in PyTorch.
I’m not exactly sure, what env.step() is doing, but generally if you leave PyTorch and use e.g. numpy, you will detach the operations from the computation graph, which seems to be the case here. |
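For reference, a sketch of such a custom backward for the sign function (the surrogate gradient chosen here is an assumption for illustration; the Julia code defines its own, which you would mirror):

import torch

class SignWithSurrogateGrad(torch.autograd.Function):
    """sign() in the forward pass; a smooth surrogate gradient backward."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # derivative of tanh as an illustrative surrogate for d(sign)/dx
        return grad_output * (1 - torch.tanh(x) ** 2)

x = torch.randn(5, requires_grad=True)
y = (SignWithSurrogateGrad.apply(x) + 1) / 2   # same 0/1 action as the model
y.sum().backward()
print(x.grad)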
st83684 | Hi,
I am wondering where exactly dropout is applied. For fc layers of y=Wx+b, does dropout randomly drop the parameters in the W matrix rather than the features x, and for conv layers, does it randomly drop the parameters in the convolution kernels rather than the feature maps? Is this what exactly happens in the dropout layers? |
st83685 | nn.Dropout will randomly zero out some elements of the input, not the weights.
The same applies for 4-dimensional tensors (e.g. conv outputs) using nn.Dropout.
If you want to zero out complete channels, you should use nn.Dropout2d (docs 7). |
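A quick check of this behavior:

import torch
import torch.nn as nn

# nn.Dropout zeroes input elements, not weights:
drop = nn.Dropout(p=0.5)
x = torch.ones(2, 8)
print(drop(x))   # roughly half the entries are 0, the rest scaled to 1/(1-p)=2

# nn.Dropout2d instead zeroes whole channels of a [N, C, H, W] tensor:
drop2d = nn.Dropout2d(p=0.5)
feat = torch.ones(1, 4, 3, 3)
print(drop2d(feat))  # each channel is either all-zero or all-2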
st83686 | I have a few .npy files of shape (25, 512, 512), and I need to feed them to my network via a DataLoader. What should I do? Please! |
st83687 | You could write a custom Dataset to lazily load each numpy file:
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, np_file_paths):
        self.files = np_file_paths

    def __getitem__(self, index):
        x = np.load(self.files[index])   # lazily load one file per sample
        x = torch.from_numpy(x).float()
        return x

    def __len__(self):
        return len(self.files)
After creating the Dataset instance, you could wrap it in a DataLoader, which will create batches of [batch_size, 25, 512, 512]. |
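A hypothetical usage sketch (the file paths and batch size are placeholders):

import glob

file_paths = glob.glob('./data/*.npy')   # placeholder location
dataset = MyDataset(file_paths)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

for batch in loader:
    print(batch.shape)   # torch.Size([4, 25, 512, 512])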
st83688 | class MultiOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        multiNum = 2
        # save_for_backward only accepts tensors; store plain numbers on ctx
        ctx.multiNum = multiNum
        result = multiNum * input
        return result

    @staticmethod
    def backward(ctx, grad_output):
        multiNum = ctx.multiNum
        # d(multiNum * input)/d(input) = multiNum
        result = grad_output * multiNum
        return result

multiOp = MultiOp.apply

class Net(torch.jit.ScriptModule):
    def __init__(self):
        super(Net, self).__init__()

    @torch.jit.script_method
    def forward(self, x):
        x = multiOp(x)
        return x

scriptNet = Net()
torch.jit.save(scriptNet, "sample.pt")
RuntimeError Traceback (most recent call last)
<ipython-input-55-40f7e60ca520> in <module>
----> 1 torch.jit.save(scriptNet, "sample.pt")
~/miniconda3/envs/dingyongchao/lib/python3.7/site-packages/torch/jit/__init__.py in save(m, f, _extra_files)
196 (sys.version_info[0] == 2 and isinstance(f, unicode)) or \
197 (sys.version_info[0] == 3 and isinstance(f, pathlib.Path)):
--> 198 m.save(f, _extra_files=_extra_files)
199 else:
200 ret = m.save_to_buffer(_extra_files=_extra_files)
~/miniconda3/envs/dingyongchao/lib/python3.7/site-packages/torch/jit/__init__.py in save(self, *args, **kwargs)
1203
1204 def save(self, *args, **kwargs):
-> 1205 return self._c.save(*args, **kwargs)
1206
1207 def save_to_buffer(self, *args, **kwargs):
RuntimeError:
could not export python function call MultiOp. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__.:
@torch.jit.script_method
def forward(self, x):
x = multiOp(x)
~~~~~~~ <--- HERE
return x
So I want to know how to define a Function that works here? |
st83689 | I am working on a binary classification problem. My batch_y and y_pred do not have the same shape, yet I get no warnings/errors when I compute the loss with nn.BCELoss(y_pred, batch_y):
batch_y.shape: torch.Size([10, 1, 1])
tensor([[[0.]],
[[1.]],
[[0.]],
[[0.]],
[[0.]],
[[1.]],
[[1.]],
[[1.]],
[[1.]],
[[0.]]], device='cuda:0')
y_pred.shape: torch.Size([10, 1])
tensor([[0.5170],
[0.5114],
[0.5103],
[0.4971],
[0.4974],
[0.5024],
[0.5008],
[0.4954],
[0.5035],
[0.4987]], device='cuda:0', grad_fn=<SigmoidBackward>)
A more extreme version of this occurs when I do classification with three classes instead of two. Here again, I get no warnings/errors when I compute the loss with nn.CrossEntropyLoss(y_pred, batch_y):
batch_y.shape: torch.Size([3])
tensor([1, 2, 0], device='cuda:0')
y_pred.shape: torch.Size([3, 3])
tensor([[-0.0718, -0.1237, -0.1143],
[-0.0757, -0.1294, -0.1150],
[-0.0792, -0.1128, -0.1106]],
device='cuda:0', grad_fn=<ThAddmmBackward>)
Any ideas why this might be the case? Could this potentially (negatively) impact training? |
st83690 | Which PyTorch version are you using?
This code:
criterion = nn.BCELoss()
output = torch.sigmoid(torch.randn(10, 1, 1))
target = torch.randint(0, 1, (10, 1)).float()
loss = criterion(output, target)
raises a warning:
UserWarning: Using a target size (torch.Size([10, 1])) that is different to the input size (torch.Size([10, 1, 1])) is deprecated. Please ensure they have the same size.
in a slightly old nightly build 1.2.0.dev20190718.
Regarding the nn.CrossEntropyLoss, the shapes are correct.
This loss function expects a model output containing the class logits in the shape [batch_size, nb_classes, *] and a target containing the class indices in the shape [batch_size, *], where the asterisk denotes additional dimensions. |
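For the first (BCELoss) case, a sketch of making the shapes consistent, reusing the names from the question:

# Make target and prediction shapes match before the loss, e.g.:
batch_y = batch_y.squeeze(2)               # [10, 1, 1] -> [10, 1]
loss = nn.BCELoss()(y_pred, batch_y)       # input and target both [10, 1]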
st83691 | Hi,
I want to know how to change the tensor’s dimension in net.
self.net.add_module("out1", nn.Softmax(dim=1))
and I got a tensor of shape [1681, 1, 1] from the output. But I want to change it into shape [41, 41, 1, 1].
Can someone please give me some advice.
Thanks. |
st83692 | Solved by Oli in post #2 |
st83693 | Hi, you can change its shape with the view command. new_tensor = tensor.view(41, 41, 1, 1). If you want to do this within the network, do that in the forward function |
st83694 | Hello to everyone,
I’m a newcomer to the Pytorch realm. I am faced with a question like this:
Is it necessary to define the sub-layers that we will use within the main Net module as a class outside the module, or can it be defined as a function?
Herein: https://github.com/FrancescoSaverioZuppichini/Pytorch-how-and-when-to-use-Module-Sequential-ModuleList-and-ModuleDict/blob/master/notebook.ipynb, it’s defined as a function, but in https://github.com/pytorch/vision/blob/master/torchvision/models/googlenet.py, Inception is created by using a class…
Ex: conv - BN - ReLU
def myConv(x):
    x = conv2d(x)
    x = bnorm(x)
    return ReLU(x)
or
class myConv(nn.Module) …
and also what is the difference?
Thanks… |
st83695 | Both approaches will work the same, if you are using a pure functional approach.
A custom nn.Module class is useful in case your custom module needs to store parameters, buffers etc.
E.g. nn.Conv2d itself is defined as an nn.Module, since it stores the weight, bias parameters as well as some internal attributes, e.g. padding, stride, etc.
In your current example you are just applying some modules, so you can easily define it as a function. |
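A small sketch contrasting the two styles (illustrative only):

import torch.nn as nn
import torch.nn.functional as F

# The module version owns its parameters (conv weights, BN stats) and
# registers them for the optimizer:
class MyConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

# The functional version holds no state of its own; the layers must be
# created and tracked elsewhere:
def my_conv(x, conv, bn):
    return F.relu(bn(conv(x)))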
st83696 | Background
From my understanding manual_seed sets seeds for RNG, and torch.backends.cudnn.deterministic = True makes cuDNN’s output deterministic – fixed when given the same inputs and outputs.
Since there is no RNG factor in convolution (cuDNN documentation on convolution forward doesn’t mention anything about taking a seed as input or RNG of convolution), setting manual_seed should make no difference. However, when we test 2 settings, the results are not as expected:
setting 1:
With only torch.backends.cudnn.deterministic = True, the output is NOT fixed; it deviates on every iteration.
setting 2:
With torch.backends.cudnn.deterministic = True and manual_seed set, the output is fixed on every iteration.
Experiment
Given the same input & weight (yes, we manually set the weight), and with torch.backends.cudnn.deterministic = True turned on, the output of
weight = # some code that reads weight file
conv = nn.Conv1d(...)
conv.weight.data = weight
changes every time it is called:
example outputs:
iteration 1
[-3.0552e+00, -5.3343e+00, -6.5944e-01, 1.1911e+00, 4.3999e+00,
1.4698e+00, -3.8650e+00, 1.4742e+00, 1.2590e+00, 5.3744e+00,
-1.1283e+01, 1.1128e+01, -1.3646e+01, 1.2124e+00, -9.6420e-01,
7.5311e+00, -5.4766e+00, 2.8123e+00, -9.1796e+00, 6.2736e+00, ...
iteration 2
-2.9771e+00, -5.2562e+00, -5.8136e-01, 1.2692e+00, 4.4779e+00,
1.5479e+00, -3.7869e+00, 1.5522e+00, 1.3371e+00, 5.4525e+00,
-1.1205e+01, 1.1206e+01, -1.3568e+01, 1.2905e+00, -8.8612e-01,
7.6092e+00, -5.3985e+00, 2.8903e+00, -9.1016e+00, 6.3517e+00, ...
You can see that the outputs are different.
However, when we set the random seed with: torch.manual_seed(0), then the output becomes identical on every iteration.
Why does the output differ given the same inputs and weights, and with torch.backends.cudnn.deterministic = True?
Why does torch.manual_seed(0) make the outputs identical? |
st83697 | When you use Modules from nn, you use
the functions,
the parameter-holding of the modules,
the default initialization of the parameters.
The latter uses random numbers.
Best regards
Thomas |
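A small illustration of this point:

import torch
import torch.nn as nn

# Module *initialization* consumes random numbers, so seeding before
# construction fixes the initial weights:
torch.manual_seed(0)
conv_a = nn.Conv1d(1, 1, kernel_size=3)
torch.manual_seed(0)
conv_b = nn.Conv1d(1, 1, kernel_size=3)
print(torch.equal(conv_a.weight, conv_b.weight))   # True

# Without re-seeding, a new module draws different initial weights:
conv_c = nn.Conv1d(1, 1, kernel_size=3)
print(torch.equal(conv_a.weight, conv_c.weight))   # False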
st83698 | I have trained a model that uses nn.LSTMCell. For reasons of throughput, I want to directly call cuDNN’s cudnnRNNForwardInference. So I have to export weights of nn.LSTMCell. For other layers such as linear or convolution, this wasn’t hard. However, this is very difficult for nn.LSTMCell because nn.LSTMCell takes 2 sets of weights and 2 sets of biases, while cudnnRNNForwardInference only takes a single set of weight and a single set of bias:
nn.LSTMCell
~LSTMCell.weight_ih – the learnable input-hidden weights, of shape (4*hidden_size, input_size)
~LSTMCell.weight_hh – the learnable hidden-hidden weights, of shape (4*hidden_size, hidden_size)
~LSTMCell.bias_ih – the learnable input-hidden bias, of shape (4*hidden_size)
~LSTMCell.bias_hh – the learnable hidden-hidden bias, of shape (4*hidden_size)
cudnnRNNForwardInference
cudnnStatus_t cudnnRNNForwardInference(
    cudnnHandle_t handle,
    const cudnnRNNDescriptor_t rnnDesc,
    const int seqLength,
    const cudnnTensorDescriptor_t *xDesc,
    const void *x,
    const cudnnTensorDescriptor_t hxDesc,
    const void *hx,
    const cudnnTensorDescriptor_t cxDesc,
    const void *cx,
    const cudnnFilterDescriptor_t wDesc,
    const void *w,
    const cudnnTensorDescriptor_t *yDesc,
    void *y,
    const cudnnTensorDescriptor_t hyDesc,
    void *hy,
    const cudnnTensorDescriptor_t cyDesc,
    void *cy,
    void *workspace,
    size_t workSpaceSizeInBytes)
How would I pack each of the weights: weight_ih, weight_hh, bias_ih, bias_hh s.t. it can be passed to cuDNN’s cudnnRNNForwardInference . as wDesc parameter? |
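Not an answer from the thread, but for reference: on the PyTorch side the 4*hidden_size dimension is chunked in (input, forget, cell, output) gate order, so the per-gate pieces can be split out as below; for the exact offsets inside cuDNN's packed w buffer it is safer to query cudnnGetRNNLinLayerMatrixParams / cudnnGetRNNLinLayerBiasParams than to hard-code a layout (the gate correspondence is something to verify against your cuDNN version):

# PyTorch chunks the 4*hidden_size dim in (i, f, g, o) order; "cell" is
# a stand-in name for your trained nn.LSTMCell:
w_ii, w_if, w_ig, w_io = cell.weight_ih.chunk(4, dim=0)
w_hi, w_hf, w_hg, w_ho = cell.weight_hh.chunk(4, dim=0)
b_ii, b_if, b_ig, b_io = cell.bias_ih.chunk(4, dim=0)
b_hi, b_hf, b_hg, b_ho = cell.bias_hh.chunk(4, dim=0)
# Copy each piece into the sub-region that cudnnGetRNNLinLayerMatrixParams /
# ...BiasParams reports for the matching linLayerID (0-3 input, 4-7 recurrent).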
st83699 | I want to do something like this:
class myDataset():
    def __init__(self):
        self.iter = some_iterator_on_my_dataset

    def __len__(self):
        return len_of_my_dataset

    def __getitem__(self, index):
        return next(self.iter)

loader = DataLoader(myDataset(), num_workers=16)
which works well when num_workers=1, but fails when num_workers>1.
I guess that self.iter, the iterator, fails in the multi-process loader.
How should I do that? |
st83700 | It usually works well with more workers; have you tried 2 or 4 workers?
Also, how many cores does your computer have? |
st83701 | I have more than 40 cores on my machine. I tried some num_workers values like 20 and 25; it failed anyway. |
st83702 | Hi,
You might want to try out the new IterableDataset on PyTorch master or nightly release. However note that to correctly use num_workers>0 you will have to configure your dataset based on the worker info to avoid generating duplicate data. |
st83703 | It doesn’t seem to work for me. I was trying to benchmark the old dataset with the new IterableDataset.
Can you see what’s wrong here?
My Cifar100 images are in folders.
def read_files(files):
    images = [torch.tensor(np.array(Image.open(i))) for i in files]
    yield images

class Cifar100Dataset(IterableDataset):
    def __init__(self, path, folder, start, end):
        self.files = [file for directory in (path / folder).ls() for file in directory.ls()]
        self.start = start
        self.end = end

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            return read_files(self.files[self.start:self.end])
        else:
            per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))
            worker_id = worker_info.id
            iter_start = self.start + worker_id * per_worker
            iter_end = min(iter_start + per_worker, self.end)
            return iter(read_files(self.files[iter_start:iter_end]))
It works well with num_workers=0, but does not work with num_workers>1
ds = Cifar100Dataset(path, train_folder, start=0, end=1024)
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x000001F99F5CB9D8>
Traceback (most recent call last):
File "C:\Users\Divyansh J\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 883, in __del__
self._shutdown_workers()
File "C:\Users\Divyansh J\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 860, in _shutdown_workers
if self.workers_status[worker_id]:
IndexError: list index out of range
---------------------------------------------------------------------------
BrokenPipeError Traceback (most recent call last)
<timed exec> in <module>
~\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
240 return _SingleProcessDataLoaderIter(self)
241 else:
--> 242 return _MultiProcessingDataLoaderIter(self)
243
244 @property
~\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
639 # before it starts, and __del__ tries to join but will get:
640 # AssertionError: can only join a started process.
--> 641 w.start()
642 self.index_queues.append(index_queue)
643 self.workers.append(w)
~\Anaconda3\envs\pytorch\lib\multiprocessing\process.py in start(self)
110 'daemonic processes are not allowed to have children'
111 _cleanup()
--> 112 self._popen = self._Popen(self)
113 self._sentinel = self._popen.sentinel
114 # Avoid a refcycle if the target function holds an indirect
~\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
224
225 class DefaultContext(BaseContext):
~\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
320 def _Popen(process_obj):
321 from .popen_spawn_win32 import Popen
--> 322 return Popen(process_obj)
323
324 class SpawnContext(BaseContext):
~\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
87 try:
88 reduction.dump(prep_data, to_child)
---> 89 reduction.dump(process_obj, to_child)
90 finally:
91 set_spawning_popen(None)
~\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)
61
62 #
BrokenPipeError: [Errno 32] Broken pipe
Also, I want to know the usage of start and end, and what batch_size I should pass to the DataLoader, because I am already specifying the start and end in the dataset. |
st83704 | I got this error "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn" when I run this simple code. I don’t know what the problem is?
def cost(params, Y):
    res = [myNetwork(params)]
    return square_loss(res, Y)

def square_loss(res, Y):
    loss = 0
    for l, p in zip(res, Y):
        loss += (l - p) ** 2
    loss = loss / len(Y)
    loss = torch.from_numpy(loss)
    return torch.mean(loss)

var1 = Variable(torch.tensor([0.2]), requires_grad=True)
opt = torch.optim.Adam([var1], lr=0.1)
print(cost(var1, Y))
for i in range(100):
    opt.zero_grad()
    loss = cost(var1, Y)
    loss.backward()
    opt.step()
    print("Cost:", loss) |
st83705 | You are detaching the loss from the computation graph by creating a new tensor:
loss = torch.from_numpy(loss)
It looks like you are passing PyTorch tensors to the function, so I think you don’t need this operation.
PS: You can add code snippets by wrapping them in three backticks ```
Also, Variables are deprecated, so you can now directly use tensors. |
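A sketch of the fix described above, assuming res and Y already contain tensors:

# Keep everything as PyTorch tensors, no numpy round-trip:
def square_loss(res, Y):
    loss = 0
    for l, p in zip(res, Y):
        loss += (l - p) ** 2
    return loss / len(Y)   # stays on the autograd graph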
st83706 | Ok, I changed the code but got the same error?
def cost(var1, Y):
    res = myNetwork(var1)
    loss = torch.sqrt(torch.mean((res - Y) ** 2))
    return loss
X = torch.FloatTensor([[0, 0, 0, 0]])
Y = torch.FloatTensor([[0, 1, 1]])
var1 = torch.tensor(3.14159, requires_grad=True)
opt = torch.optim.Adam([var1], lr=0.5)
loss = cost(var1, Y)
print(loss)
opt.zero_grad()
loss.backward(retain_graph=True) |
st83707 | Could you post an executable code snippet to reproduce this issue?
PS: you can add code snippets by wrapping them in three backticks ``` |
st83708 | model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam([
    {'params': model.parameters()},  # Includes GaussianLikelihood parameters
], lr=0.1)

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

n_iter = 50
for i in range(n_iter):
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    print('Iter %d/%d - Loss: %.3f' % (i + 1, n_iter, loss.item()))
    optimizer.step()
RuntimeError Traceback (most recent call last)
in ()
     16 optimizer.zero_grad()
     17 output = model(train_x)
---> 18 loss = -mll(output, train_y)
     19 loss.backward()
     20 print('Iter %d/%d - Loss: %.3f' % (i + 1, n_iter, loss.item()))
3 frames
/usr/local/lib/python3.6/dist-packages/gpytorch/distributions/multivariate_normal.py in log_prob(self, value)
    112
    113     mean, covar = self.loc, self.lazy_covariance_matrix
--> 114     diff = value - mean
    115
    116     # Repeat the covar to match the batch shape of diff
RuntimeError: expected backend CPU and dtype Double but got backend CPU and dtype Float
I am getting a runtime error. I have two tensors (one x and two y values) and I am trying the multitask example. |
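The error itself says the target is Double (float64, numpy's default dtype) while the model output is Float. A sketch of the usual fix, assuming train_x/train_y came from numpy:

# Cast both tensors to the same dtype before training, e.g.:
train_x = train_x.float()
train_y = train_y.float()
# (alternatively, keep doubles and call model.double() / likelihood.double())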
st83709 | I would like to calculate JSD across N probability distributions. Is this a correct way to implement JSD?
def jsd_loss(logits1, logits2, logits3, logits4):
    softmax1 = torch.softmax(logits1 + 1e-10, 1)
    softmax2 = torch.softmax(logits2 + 1e-10, 1)
    softmax3 = torch.softmax(logits3 + 1e-10, 1)
    softmax4 = torch.softmax(logits4 + 1e-10, 1)
    M = 0.25 * (softmax1 + softmax2 + softmax3 + softmax4)
    return 0.25 * (F.kl_div(M.log(), softmax1) +
                   F.kl_div(M.log(), softmax2) +
                   F.kl_div(M.log(), softmax3) +
                   F.kl_div(M.log(), softmax4))
I have added epsilon to logits to make sure that it does not go to inf |
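Two details worth noting (my reading, not a reply from the thread): F.kl_div's default reduction='mean' averages over all elements instead of summing over classes, so reduction='batchmean' matches the usual definition; and adding a constant epsilon to the logits is a no-op, since softmax is shift-invariant (the epsilon belongs inside the log, if anywhere). A sketch:

import torch
import torch.nn.functional as F

def jsd_loss(*logits_list):
    # one [batch, classes] distribution per head
    probs = [torch.softmax(l, dim=1) for l in logits_list]
    M = sum(probs) / len(probs)
    # JSD = mean_i KL(P_i || M); F.kl_div(input, target) = KL(target || exp(input))
    return sum(F.kl_div((M + 1e-10).log(), p, reduction='batchmean')
               for p in probs) / len(probs)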
st83710 | Hello I have a few questions regarding the workings of ONNX.
Let’s say I have a DNN that uses an activation function not implemented in PyTorch (a.k.a maxout). If this operation is implemented by using operators that ONNX supports (max, view, etc), will this cause any sort of problem?
Is there a way to implement a function inside the DNN that has a flow-control statement (if, else) and export it to ONNX?
Is JIT connected to ONNX in any way? Adding flow-control statements and custom operators in JIT seems very easy, is there a way to connect the 2 so that I can use operators created in JIT exporting them to ONNX?
Thanks in advance!! |
st83711 | I know a simple if inside a script works in the latest ONNX
@torch.jit.script
def yolo_resize_helper(net_h, net_w, im_h, im_w):
    if net_w / im_w < net_h / im_h:
        new_w = net_w
        new_h = ((im_h * net_w) / im_w).floor()
    else:
        new_h = net_h
        new_w = ((im_w * net_h) / im_h).floor()
    return new_h, new_w
But something more complicated, e.g. recursive scripts, I am trying to figure out myself. |
st83712 | I have a 2D tensor and want to select elements from it according to another Tensor with a list of cell ids.
I am using Tensor_A[Tensor_B], but it is not behaving as I expected.
Why slicing a tensor using another tensor behaves differently than using a numpy array? See below for example:
In [1]: import torch
import numpy as np
In [3]: A = torch.randint(0,10, size=(3,4))
tensor([[6, 6, 5, 3],
[1, 7, 7, 3],
[5, 8, 5, 4]])
In [5]: A[np.array([[0,0],[1,1]])]
Out[5]: tensor([6, 6])
In [6]: A[torch.tensor([[0,0],[1,1]])]
Out[6]:
tensor([[[6, 6, 5, 3],
[6, 6, 5, 3]],
[[1, 7, 7, 3],
[1, 7, 7, 3]]])
What is the difference between slicing using numpy and a tensor? |
st83713 | If you are using the numpy array as the index, you will get the same result as using a nested list:
A[[[0,0],[1,1]]]
> tensor([6, 6])
To reproduce this behavior, you would need to split the index tensor:
A[torch.tensor([[0,0],[1,1]]).split(1)]
> tensor([6, 6]) |
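In other words (my summary of the difference): a nested list or numpy array used to be interpreted as one index sequence per dimension, while a single index tensor always performs advanced indexing along dim 0:

# Legacy list/array behavior: rows [0, 0] paired with columns [1, 1]:
A[torch.tensor([0, 0]), torch.tensor([1, 1])]
# > tensor([6, 6])   (i.e. A[0, 1] selected twice)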
st83714 | I use this code for the Dataset and DataLoader.
My env is Windows 10, Anaconda3, Python 3.7, Spyder.
class Dataset(object):
    def __init__(self, fname, img_transform=None, mask_transform=None, edge_weight=False):
        # nothing special here, just internalizing the constructor parameters
        self.fname = fname
        self.edge_weight = edge_weight
        self.img_transform = img_transform
        self.mask_transform = mask_transform
        self.tables = tables.open_file(self.fname)
        self.numpixels = self.tables.root.numpixels[:]
        self.nitems = self.tables.root.img.shape[0]
        self.tables.close()
        self.img = None
        self.mask = None

    def __getitem__(self, index):
        # opening should be done in __init__ but seems to be
        # an issue with multithreading so doing here
        with tables.open_file(self.fname, 'r') as db:
            self.img = db.root.img
            self.mask = db.root.mask
            # get the requested image and mask from the pytable
            img = self.img[index, :, :, :]
            mask = self.mask[index, :, :]
        # the original Unet paper assigns increased weights to the edges of the annotated objects
        # their method is more sophisticated, but this one is faster: we simply dilate the mask and
        # highlight all the pixels which were "added"
        if self.edge_weight:
            weight = scipy.ndimage.morphology.binary_dilation(mask == 1, iterations=2) & ~mask
        else:  # otherwise the edge weight is all ones and thus has no effect
            weight = np.ones(mask.shape, dtype=mask.dtype)
        mask = mask[:, :, None].repeat(3, axis=2)  # in order to use the transformations given by torchvision
        weight = weight[:, :, None].repeat(3, axis=2)  # inputs need to be 3D, so here we convert from 1D to 3D by repetition
        img_new = img
        mask_new = mask
        weight_new = weight
        seed = random.randrange(sys.maxsize)  # get a random seed so that we can reproducibly do the transformations
        if self.img_transform is not None:
            random.seed(seed)  # apply this seed to img transforms
            img_new = self.img_transform(img)
        if self.mask_transform is not None:
            random.seed(seed)
            mask_new = self.mask_transform(mask)
            mask_new = np.asarray(mask_new)[:, :, 0].squeeze()
            random.seed(seed)
            weight_new = self.mask_transform(weight)
            weight_new = np.asarray(weight_new)[:, :, 0].squeeze()
        return img_new, mask_new, weight_new

    def __len__(self):
        return self.nitems
========================================
and I use this code:
for ii, (X, y, y_weight) in enumerate(dataLoader[phase]):  # for each of the batches
    X = X.to(device)  # [Nbatch, 3, H, W]
    y_weight = y_weight.type('torch.FloatTensor').to(device)
    y = y.type('torch.LongTensor').to(device)  # [Nbatch, H, W] with class indices (0, 1)
The error occurs:
File "", line 1, in <module>
  debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
  debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
  exec(cmd, globals, locals)
File "", line 1, in <module>
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
  execfile(filename, namespace)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/mbmhm/desktop/unet/train_unet.py", line 267, in <module>
  for ii, (X, y, y_weight) in enumerate(dataLoader[phase]):  # for each of the batches
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
  return _DataLoaderIter(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
  w.start()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\process.py", line 112, in start
  self._popen = self._Popen(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 223, in _Popen
  return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 322, in _Popen
  return Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
  reduction.dump(process_obj, to_child)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\reduction.py", line 60, in dump
  ForkingPickler(file, protocol).dump(obj)
File "stringsource", line 2, in tables.hdf5extension.Array.__reduce_cython__
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
=========================
Then I tried this code:
for x, y, w in dataLoader['train']:
    print(x.shape, y.shape, w.shape)
File "", line 1, in <module>
  debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
  debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
  exec(cmd, globals, locals)
File "", line 1, in <module>
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
  execfile(filename, namespace)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/mbmhm/desktop/unet/train_unet.py", line 200, in <module>
  for w, y, z in dataLoader['train']:
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 576, in __next__
  idx, batch = self._get_batch()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 543, in _get_batch
  success, data = self._try_get_batch()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 519, in _try_get_batch
  raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 7744, 4584) exited unexpectedly
=====================
What can I do to solve this problem?
Please give me an answer. |
st83715 | It looks like you are running into some issues using multiprocessing and a HDF5 file.
Have a look at this topic 53. |
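For reference, a sketch of the workaround typically suggested for this case: never keep the pytables handle (even a closed one) as an attribute, so the Dataset stays picklable for the spawned workers. This is my reading of the error above, not a guaranteed fix:

import tables
from torch.utils.data import Dataset

class MyHDF5Dataset(Dataset):
    def __init__(self, fname):
        self.fname = fname
        # read what we need, then let the handle go out of scope:
        with tables.open_file(fname) as db:
            self.nitems = db.root.img.shape[0]

    def __getitem__(self, index):
        # open per access; each worker gets its own handle
        with tables.open_file(self.fname, 'r') as db:
            img = db.root.img[index]
            mask = db.root.mask[index]
        return img, mask

    def __len__(self):
        return self.nitems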
st83716 | I am training a multilabel classifier in Keras and I want to convert it to PyTorch. I am mostly confused about how to handle the loss. This is what the code looks like:
model = keras.applications.densenet.DenseNet121(include_top=False, input_shape=(224, 224, 3))
x = model.output
x = Flatten()(x)
x = Dense(512)(x)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
output1 = Dense(1, activation = 'sigmoid')(x)
output2 = Dense(1, activation = 'sigmoid')(x)
output3 = Dense(1, activation = 'sigmoid')(x)
output4 = Dense(1, activation = 'sigmoid')(x)
output5 = Dense(1, activation = 'sigmoid')(x)
output6 = Dense(1, activation = 'sigmoid')(x)
output7 = Dense(1, activation = 'sigmoid')(x)
output8 = Dense(1, activation = 'sigmoid')(x)
model = Model(model.inputs,[output1,output2,output3,output4,output5, output6, output7, output8])
# print(model.summary())
model.compile(optimizers.rmsprop(lr = 0.0001, decay = 1e-6),
loss = ["binary_crossentropy","binary_crossentropy","binary_crossentropy","binary_crossentropy", "binary_crossentropy","binary_crossentropy","binary_crossentropy","binary_crossentropy"],metrics = ["accuracy"])
How can I do this in PyTorch? This is what I have till now:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(2304, 256)
        self.fc2 = nn.Linear(256, 8)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(x.size(0), -1)  # Flatten layer
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.sigmoid(x)
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # data, target = data.cuda(async=True), target.cuda(async=True) # On GPU
        data, target = Variable(data).cuda(), Variable(target).float().cuda()
        optimizer.zero_grad()
        output = model(data)
        loss = F.binary_cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

for epoch in range(1, 10):
    train(epoch)
Thanks in advance, any suggestions will be very helpful |
st83717 | The output and criterion look alright for a multi-label classification.
Make sure that the target has the same shape as output.
Are you seeing any issues, e.g. lower accuracy etc.?
PS: Variables are deprecated and you can just use tensors |
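For this model that means, e.g. (illustrative shapes only):

# The target should be a float tensor matching the sigmoid output,
# e.g. for a batch of 4 samples and 8 labels:
target = torch.randint(0, 2, (4, 8)).float()   # multi-hot labels
output = model(data)                           # [4, 8] after sigmoid
loss = F.binary_cross_entropy(output, target)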
st83718 | @ptrblck The model that I currently have trains, but the GPU volatility keeps going to 0, which is making the training really slow. Is there a way to solve this? Thanks in advance.
class OdirDataset(Dataset):
    """Dataset wrapping images and target labels for Kaggle - Planet Amazon from Space competition
    Arguments:
        A CSV file path
        Path to image folder
        Extension of images
        PIL transforms
    """
    def __init__(self, csv_path, img_path, img_ext, transform=None):
        tmp_df = pd.read_csv(csv_path)
        self.img_path = img_path
        self.img_ext = img_ext
        self.transform = transform
        # import pdb;pdb.set_trace()
        self.X_train = tmp_df['Left-Fundus']
        self.y_train = tmp_df.iloc[:, 7:16].values

    def __getitem__(self, index):
        # import pdb;pdb.set_trace()
        img_left = Image.open(self.img_path + self.X_train[index] + self.img_ext)
        # note: the original used self.X_train[0] here, which always loads
        # sample 0's right fundus; it should presumably use the current index:
        img_right = Image.open(self.img_path + self.X_train[index].split('_')[0] + '_right.jpg')
        img_left = img_left.convert('RGB')
        img_right = img_right.convert('RGB')
        if self.transform is not None:
            img_left = self.transform(img_left)
            img_right = self.transform(img_right)
        img = np.concatenate([img_left, img_right], axis=0)
        label = torch.from_numpy(self.y_train[index])
        return img, label

    def __len__(self):
        return len(self.X_train.index)

transformations = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dset_train = OdirDataset(TRAIN_DATA, IMG_PATH, IMG_EXT, transformations)
dset_val = OdirDataset(TEST_DATA, IMG_PATH, IMG_EXT, transformations)
train_loader = DataLoader(dset_train,
                          batch_size=84,
                          shuffle=True,
                          num_workers=12, pin_memory=True)
val_loader = DataLoader(dset_val,
                        batch_size=1,
                        shuffle=False,
                        num_workers=12, pin_memory=True) |
st83719 | Since you are using PIL to load and process the images, you could try to install PIL-SIMD, which is a replacement for the standard PIL lib and might give you a faster preprocessing pipeline.
The np.concatenate operation doesn’t look alright, as the transforms should return a tensor already.
Could you swap it for torch.cat?
Also, the number of workers is currently quite high for train_loader. I would recommend to play around with this number (lower it a bit) and check, if you can get rid of the bottleneck.
That being said, did you make sure that the data loading is actually the bottleneck?
Have a look at the ImageNet example to see how to time the data loading. |
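The concrete swap suggested above for __getitem__ (the transforms already return tensors, so no numpy round-trip is needed):

# Concatenate the two 3-channel image tensors on the channel dim:
img = torch.cat([img_left, img_right], dim=0)   # [6, 224, 224]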
st83720 | Thanks a lot . let me look into each one of these one by one.
I am guessing data loading is the bottleneck here,not sure what else could cause this. |
st83721 | Hi Vainaijr!
vainaijr:
What is the reason behind this?
The short answer is that we don’t need a mean of 0 and a standard
deviation of 1.
I assume that you are talking about “targets” – the output values you
are using to train with and then will be using the trained network to
predict.
There can be some benefits to “normalizing” your targets – rescaling
your targets so that their mean and standard deviation are roughly
0 and 1, respectively, but this is in no way necessary.
(My comments apply equally well to the question of normalizing
inputs, but let me speak in terms of targets for simplicity.)
If your targets are very large, you could run into overflow problems
(NaNs) when training.
If you start out with a sensible random initialization of your network
weights (and biases), your network won’t start out knowing anything
about the mean and standard deviation of your targets, and will
(sort of) start out tending to predict them to be 0 and 1, respectively.
So it will have to learn the mean and standard deviation, but this
isn’t a big deal.
If you change the scale of your targets you will often be changing the
scale of your loss function as a result. For example, if you double
your targets, you will, in effect, multiply the mean-squared-error
loss function by four. This will then, in effect, multiply your learning
rate by four.
But (with the exception of overflow for extreme values) these are
minor issues and won’t significantly affect the training or predictive
performance of your network.
You can see this for yourself. Generate some multivariate regression
data (with a little noise in it). Your targets don’t have to be a Gaussian
distribution, but don’t make them too wacky. Rescale the targets
twice – once to a mean and standard deviation of 0 and 1, and
again to something else, say a mean of -5 and a standard deviation
of 10. Train a simple network, say a single hidden layer and a
mean-squared-error loss function. Train on both the normalized
and unnormalized datasets. You should get very similar results,
with (perhaps) slightly slower initial training for the unnormalized
data as the network learns the overall scale of the data.
The biggest difference will be that by scaling up the standard
deviation by a factor of ten you will have, in effect, multiplied
your learning rate by one hundred. So reducing your learning
rate for the unnormalized data will make the two training runs
even more similar.
Have fun!
K. Frank |
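A sketch of the experiment K. Frank describes (all sizes and values here are illustrative assumptions):

import torch

# Synthetic multivariate regression data with a little noise:
torch.manual_seed(0)
X = torch.randn(1000, 10)
y_raw = X @ torch.randn(10, 1) + 0.1 * torch.randn(1000, 1)

y_norm = (y_raw - y_raw.mean()) / y_raw.std()   # mean 0, std 1
y_wide = 10 * y_norm - 5                        # mean -5, std 10

def train(y, lr):
    model = torch.nn.Sequential(torch.nn.Linear(10, 32),
                                torch.nn.ReLU(),
                                torch.nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(200):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print(train(y_norm, lr=1e-2))
print(train(y_wide, lr=1e-4))   # ~100x smaller lr offsets the std of 10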
st83722 | There is such an error during the run. Why? Is it because there is not enough memory? It still ran fine when I was running it a few days ago. |
st83723 | Could you check your memory usage (RAM and swap)? This error is usually thrown if you run out of memory. |
st83724 | I know how to avoid this mistake. If the host configuration is not high and the amount of data is large, it is recommended to turn off multi-process reading, that is, set num_workers to 0. |
st83725 | Hello, using torch.utils.tensorboard and tb-nightly I am having some issues adding a graph to the summary. The model I am using is similar to the rgbd ENet implementation (ENetDepth at https://github.com/montrealrobotics/ENetDepth/blob/master/models/enet.py).
I am getting the error:
UserWarning: ONNX export failed on ATen operator max_unpool2d because torch.onnx.symbolic.max_unpool2d does not exist
.format(op_name, op_name))
The model uses maxunpool layers in the UpsamplingBottlenecks:
self.main_unpool1 = nn.MaxUnpool2d(kernel_size=2)
and forwarded:
# Main branch shortcut
main = self.main_conv1(x)
main = self.main_unpool1(main, max_indices)
# Extension branch
ext = self.ext_conv1(x)
ext = self.ext_conv2(ext)
ext = self.ext_conv3(ext)
ext = self.ext_regul(ext)
# Add main and extension branches
out = main + ext
return self.out_prelu(out)
I was wondering if anyone had any tips on a solution to my problem, or a quick workaround?
Thanks |
st83726 | Hello, I need to run these tutorials. I installed PyTorch on Ubuntu 16 with the instructions in this book and it works well with Python, but when I test it in C++ (Caffe2 C++ API), it prints:
root@baki:~/caffe2_cpp_tutorial/build# cmake ..
../caffe2: warning: directory does not exist.
CMake Error at CMakeLists.txt:36 (message):
Caffe2 lib not found
-- Configuring incomplete, errors occurred!
What can I do? |
st83727 | I use Windows 10 Pro, a GeForce 1050 Ti graphics card, and Anaconda Spyder with Python 3.7.
I am learning about U-Net and practicing with code,
but I have a problem. Maybe enumerate(dataLoader[phase]) causes the error.
==============
I use this code for the dataset and dataloader:
class Dataset(object):
    def __init__(self, fname, img_transform=None, mask_transform=None, edge_weight=False):
        # nothing special here, just internalizing the constructor parameters
        self.fname = fname
        self.edge_weight = edge_weight
        self.img_transform = img_transform
        self.mask_transform = mask_transform
        self.tables = tables.open_file(self.fname)
        self.numpixels = self.tables.root.numpixels[:]
        self.nitems = self.tables.root.img.shape[0]
        self.tables.close()
        self.img = None
        self.mask = None

    def __getitem__(self, index):
        # opening should be done in __init__ but seems to be
        # an issue with multithreading so doing here
        with tables.open_file(self.fname, 'r') as db:
            self.img = db.root.img
            self.mask = db.root.mask
            # get the requested image and mask from the pytable
            img = self.img[index, :, :, :]
            mask = self.mask[index, :, :]
        # the original Unet paper assigns increased weights to the edges of the annotated objects
        # their method is more sophisticated, but this one is faster: we simply dilate the mask and
        # highlight all the pixels which were "added"
        if self.edge_weight:
            weight = scipy.ndimage.morphology.binary_dilation(mask == 1, iterations=2) & ~mask
        else:  # otherwise the edge weight is all ones and thus has no effect
            weight = np.ones(mask.shape, dtype=mask.dtype)
        mask = mask[:, :, None].repeat(3, axis=2)  # in order to use the transformations given by torchvision
        weight = weight[:, :, None].repeat(3, axis=2)  # inputs need to be 3D, so here we convert from 1D to 3D by repetition
        img_new = img
        mask_new = mask
        weight_new = weight
        seed = random.randrange(sys.maxsize)  # get a random seed so that we can reproducibly do the transformations
        if self.img_transform is not None:
            random.seed(seed)  # apply this seed to img transforms
            img_new = self.img_transform(img)
        if self.mask_transform is not None:
            random.seed(seed)
            mask_new = self.mask_transform(mask)
            mask_new = np.asarray(mask_new)[:, :, 0].squeeze()
            random.seed(seed)
            weight_new = self.mask_transform(weight)
            weight_new = np.asarray(weight_new)[:, :, 0].squeeze()
        return img_new, mask_new, weight_new

    def __len__(self):
        return self.nitems
# note that since we need the transformations to be reproducible for both masks and images
# we do the spatial transformations first, and afterwards do any color augmentations
img_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomVerticalFlip(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(size=(patch_size, patch_size), pad_if_needed=True),  # these need to be in a reproducible order, first affine transforms and then color
    transforms.RandomResizedCrop(size=patch_size),
    transforms.RandomRotation(180),
    transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=.5),
    transforms.RandomGrayscale(),
    transforms.ToTensor()
])

mask_transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomVerticalFlip(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(size=(patch_size, patch_size), pad_if_needed=True),
    transforms.RandomResizedCrop(size=patch_size, interpolation=PIL.Image.NEAREST),
    transforms.RandomRotation(180),
])
dataset = {}
dataLoader = {}
for phase in phases:  # now for each of the phases, we're creating the dataloader
    # interestingly, given the batch size, i've not seen any improvements from using a num_workers>0
    dataset[phase] = Dataset(f"./{dataname}_{phase}.pytable", img_transform=img_transform, mask_transform=mask_transform, edge_weight=edge_weight)
    dataLoader[phase] = DataLoader(dataset[phase], batch_size=batch_size, shuffle=True, num_workers=2, pin_memory=True)
=======================================
Then, using this code, some error occurs.
writer = SummaryWriter()  # open the tensorboard visualiser
best_loss_on_test = np.Infinity
edge_weight = torch.tensor(edge_weight).to(device)
start_time = time.time()
for epoch in range(num_epochs):
    # zero out epoch based performance variables
    all_acc = {key: 0 for key in phases}
    all_loss = {key: torch.zeros(0).to(device) for key in phases}
    cmatrix = {key: np.zeros((2, 2)) for key in phases}

    for phase in phases:  # iterate through both training and validation states
        if phase == 'train':
            model.train()  # Set model to training mode
        else:  # when in eval mode, we don't want parameters to be updated
            model.eval()  # Set model to evaluate mode

        for ii, [X, y, y_weight] in enumerate(dataLoader[phase]):  # for each of the batches
            X = X.to(device)  # [Nbatch, 3, H, W]
            y_weight = y_weight.type('torch.FloatTensor').to(device)
            y = y.type('torch.LongTensor').to(device)  # [Nbatch, H, W] with class indices (0, 1)

            with torch.set_grad_enabled(phase == 'train'):
                # dynamically set gradient computation, in case of validation, this isn't needed
                # disabling is good practice and improves inference time
                prediction = model(X)  # [N, Nclass, H, W]
                loss_matrix = criterion(prediction, y)
                loss = (loss_matrix * (edge_weight**y_weight)).mean()  # can skip if edge weight==1

                if phase == "train":  # in case we're in train mode, need to do back propagation
                    optim.zero_grad()
                    loss.backward()
                    optim.step()
                    train_loss = loss

                all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(1, -1)))

                if phase in validation_phases:  # if this phase is part of validation, compute confusion matrix
                    p = prediction[:, :, :, :].detach().cpu().numpy()
                    cpredflat = np.argmax(p, axis=1).flatten()
                    yflat = y.cpu().numpy().flatten()
                    cmatrix[phase] = cmatrix[phase] + confusion_matrix(yflat, cpredflat, labels=range(n_classes))

        all_acc[phase] = (cmatrix[phase] / cmatrix[phase].sum()).trace()
        all_loss[phase] = all_loss[phase].cpu().numpy().mean()

        # save metrics to tensorboard
        writer.add_scalar(f'{phase}/loss', all_loss[phase], epoch)
        if phase in validation_phases:
            writer.add_scalar(f'{phase}/acc', all_acc[phase], epoch)
            writer.add_scalar(f'{phase}/TN', cmatrix[phase][0, 0], epoch)
            writer.add_scalar(f'{phase}/TP', cmatrix[phase][1, 1], epoch)
            writer.add_scalar(f'{phase}/FP', cmatrix[phase][0, 1], epoch)
            writer.add_scalar(f'{phase}/FN', cmatrix[phase][1, 0], epoch)
            writer.add_scalar(f'{phase}/TNR', cmatrix[phase][0, 0] / (cmatrix[phase][0, 0] + cmatrix[phase][0, 1]), epoch)
            writer.add_scalar(f'{phase}/TPR', cmatrix[phase][1, 1] / (cmatrix[phase][1, 1] + cmatrix[phase][1, 0]), epoch)

    print('%s ([%d/%d] %d%%), train loss: %.4f test loss: %.4f' % (timeSince(start_time, (epoch+1) / num_epochs), epoch+1, num_epochs, (epoch+1) / num_epochs * 100, all_loss["train"], all_loss["val"]), end="")

    # if current loss is the best we've seen, save model state with all variables
    # necessary for recreation
    if all_loss["val"] < best_loss_on_test:
        best_loss_on_test = all_loss["val"]
        print(" **")
        state = {'epoch': epoch + 1,
                 'model_dict': model.state_dict(),
                 'optim_dict': optim.state_dict(),
                 'best_loss_on_test': all_loss,
                 'n_classes': n_classes,
                 'in_channels': in_channels,
                 'padding': padding,
                 'depth': depth,
                 'wf': wf,
                 'up_mode': up_mode, 'batch_norm': batch_norm}
        torch.save(state, f"{dataname}_unet_best_model.pth")
    else:
        print("")
error is…
ipdb> Traceback (most recent call last):
File "", line 1, in <module>
  debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
  debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
  exec(cmd, globals, locals)
File "", line 1, in <module>
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
  execfile(filename, namespace)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/mbmhm/desktop/unet/train_unet.py", line 265, in <module>
  for ii, [X, y, y_weight] in enumerate(dataLoader[phase]):  # for each of the batches
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
  return _DataLoaderIter(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
  w.start()
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\process.py", line 112, in start
  self._popen = self._Popen(self)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 223, in _Popen
  return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py", line 322, in _Popen
  return Popen(process_obj)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
  reduction.dump(process_obj, to_child)
File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\reduction.py", line 60, in dump
  ForkingPickler(file, protocol).dump(obj)
File "stringsource", line 2, in tables.hdf5extension.Array.__reduce_cython__
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
===================================
I used different code, but the error still occurs:
for x, y, w in dataLoader['train']:
    print(x.shape, y.shape, w.shape)
ipdb> _CudaDeviceProperties(name=‘GeForce GTX 1050 Ti’, major=6, minor=1, total_memory=4096MB, multi_processor_count=6)
total params: 122466
ipdb> Traceback (most recent call last):
  File "<string>", line 1, in <module>
    debugfile('C:/Users/mbmhm/Desktop/unet/train_unet.py', wdir='C:/Users/mbmhm/Desktop/unet')
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 856, in debugfile
    debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py", line 585, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "c:/users/mbmhm/desktop/unet/train_unet.py", line 200, in <module>
    for x,y,w in dataLoader['train']:
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 576, in __next__
    idx, batch = self._get_batch()
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 543, in _get_batch
    success, data = self._try_get_batch()
  File "C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py", line 519, in _try_get_batch
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 500, 6872) exited unexpectedly
What can I do to solve this problem? |
st83728 | Could you run your code with num_workers=0 and check if you get a proper error message?
Maybe something raises an error in your __getitem__, which might be lost due to multiprocessing.
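If the error persists, the traceback suggests an open pytables file handle is being pickled when worker processes are spawned on Windows. A minimal sketch of a workaround (the dataset layout and the node name imgs are assumptions): open the file lazily so the Dataset holds no open handle at spawn time.
import tables
import torch
from torch.utils.data import Dataset, DataLoader

class H5Dataset(Dataset):
    def __init__(self, path):
        self.path = path
        self.file = None  # opened lazily, so no open handle is pickled

    def __len__(self):
        with tables.open_file(self.path, mode='r') as f:
            return f.root.imgs.shape[0]

    def __getitem__(self, idx):
        if self.file is None:
            self.file = tables.open_file(self.path, mode='r')
        return torch.from_numpy(self.file.root.imgs[idx])

# for debugging, disable multiprocessing so the real error surfaces:
loader = DataLoader(H5Dataset('data.h5'), batch_size=4, num_workers=0) |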
st83729 | When running my Rainbow agent in Python 2, e.g. to quickly fail via python main.py --learn-start 1000 --evaluation-interval 1100 --disable-cuda, I get the error RuntimeError: invalid argument 2: out of range at /opt/conda/conda-bld/pytorch_1556653194318/work/aten/src/TH/generic/THTensor.cpp:639 on this line. In Python 2, the last two arguments get incredibly large negative numbers (e.g. -9223372036854775808) and -inf, but the same set of arguments under Python 3 gets the expected inputs: 0, 1, …, 1630, and 0.0049, 0.0053… Not sure why changing the Python version should cause this problem.
Using PyTorch 1.1.0 with CUDA 10 for both Python 2 and 3. |
st83730 | Thanks for the reproducible code snippet.
In Python 2.x the / operator performs integer division when both inputs are integers (the Python 3 equivalent is //).
Therefore, self.delta_z will be zero:
#Python2.7
(10 - -10) / (51 - 1)
> 0
#Python3.6
(10 - -10) / (51 - 1)
> 0.4
Since delta_z is zero, b will be Inf.
The small number (-922...) is apparently the "int64 representation" of an Inf, but I'm not sure.
Add this line as the first import in agent.py and your script should work:
from __future__ import division
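With the future import in place, the same expression behaves identically on both versions:
from __future__ import division  # must come before any other code in the file
delta_z = (10 - -10) / (51 - 1)
print(delta_z)  # 0.4 on both Python 2 and Python 3 |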
st83731 | Ah, thanks for spotting that! Not used to writing completely cross-compatible Python code, but someone requested it and I thought I'd try. It is probably the only place where this is an issue, but I'm sprinkling that import into any file with a division just in case. With this I've hopefully tracked down any other cross-compatibility issues, as I've now got it running fine on Python 2. |
st83732 | I’m getting a size mismatch error that I can’t understand.
(Pdb) self.W_di
Linear(in_features=68, out_features=1024, bias=True)
(Pdb) indices.size()
torch.Size([32, 6, 68])
(Pdb) self.W_di(indices)
*** RuntimeError: size mismatch, m1: [192 x 68], m2: [1024 x 68] at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/generic/THCTensorMathBlas.cu:268
Why is there a mismatch?
Maybe because of the way I defined the weight in forward (instead of __init__)?
This is how I defined self.W_di:
def forward(self):
if self.W_di is None:
self.W_di_weight = nn.Parameter(torch.randn(mL_n * 2,1024).to(device))
self.W_di_bias = nn.Parameter(torch.ones(1024).to(device))
self.W_di = nn.Linear(mL_n * 2, 1024)
self.W_di.weight = self.W_di_weight
self.W_di.bias = self.W_di_bias
result = self.W_di(indices)
Any pointer would be highly appreciated! |
st83733 | Solved by Nikronic in post #2 |
st83734 | Hi,
I have tested your code, and the error is in your weight initialization. Actually, you have to transpose the weight you have defined, or reverse its sizes.
But the conventional way of initializing weights and biases is to use the torch.nn.init module.
Here is your code using this convention, and it works:
import torch
import torch.nn as nn
w_di = nn.Linear(68, 1024, bias= True).cuda()
indices = torch.randn(32, 6, 68).cuda()
torch.nn.init.normal_(w_di.weight)
torch.nn.init.constant_(w_di.bias, 1.0)
w_di(indices)
And this is your updated code:
import torch
import torch.nn as nn
w_di = nn.Linear(68, 1024, bias= True).cuda()
indices = torch.randn(32, 6, 68).cuda()
w_di.weight = nn.Parameter(torch.randn(34 * 2,1024).t().cuda())
w_di.bias = nn.Parameter(torch.ones(1024).cuda())
w_di(indices)
Bests
Nik |
st83735 | Trying to solve a multi-label image classification problem, an 8-class multi-label problem. My labels look like this:
[0. 1. 0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 1. 0. 0. 0. 0.]
The code looks like this
model = models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 8)
model = model.cuda()
# Decay LR by a factor of 0.1 every 7 epochs
# exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
loss_fn = torch.nn.BCELoss()
# def train(epoch):
for epoch in range(15):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# import pdb;pdb.set_trace()
data, target = data.cuda(), target.float().cuda()
optimizer.zero_grad()
output = model(data)
output = torch.sigmoid(output)
loss = loss_fn(output, target)
loss.backward()
optimizer.step()
For now the results are not good; am I doing something wrong?
Any suggestions would be really helpful. Thanks in advance. |
st83736 | Tensor.backward, as per the documentation, frees the computation graph after the call to backward unless retain_graph is set to True. This is exactly the behaviour I observed for the following example.
x = torch.ones(1., requires_grad=True)
y = torch.ones(1., requires_grad = True)
w = x+y
u=2*w
u.backward()
u.backward()
The 2nd backward call results in an error, which matches what the docs say.
But for the following example, I don’t find any error.
x = torch.ones(1., requires_grad=True)
y = torch.ones(1., requires_grad = True)
w = x+y
w.backward()
w.backward()
Kindly clarify. This post is similar to https://discuss.pytorch.org/t/inconsistent-behavior-for-running-backward-twice-without-retain-graph-true/39917 |
st83737 | Solved by ptrblck in post #2 |
st83738 | Similar to this question:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time
This is an edge case: since the only op you do is an add, and add does not need to save any buffers for its backward, there are no buffers missing when you do the second backward.
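A quick way to see the difference (a minimal sketch; only ops that save tensors for their backward pass hit this error):
import torch
x = torch.ones(1., requires_grad=True)
w = x + x    # AddBackward saves no tensors
w.backward()
w.backward() # works: nothing that backward needs was freed
u = x * x    # MulBackward saves its inputs for the backward pass
u.backward()
u.backward() # RuntimeError: the saved buffers have already been freed |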
st83739 | Thanks for the response. I get it now. But I found another issue with the create_graph option of the backward method. The docs say that setting it to True will build the graph of the derivative, but there is no clarity as to how it is built. I found some inconsistency when using this option, as explained in the following example:
x = torch.tensor(3., requires_grad = True)
y = x**3
print(f'{y.grad_fn}')
y.backward(create_graph = True) # to facilitate computation of second order derivative of y
print(f"y': {x.grad}")
x.grad.data = torch.zeros_like(x)
print(f'{x.grad.grad_fn}')
x.grad.backward(create_graph = True) # to facilitate computation of third order derivative of y
print(f'y": {x.grad}')
x.grad.data = torch.zeros_like(x)
print(f'{x.grad.grad_fn}')
x.grad.backward()
print(f'y^3: {x.grad}')
I get the results as follows:
<PowBackward0 object at 0x7fb0d3503e10>
y’: 27.0
<CloneBackward object at 0x7fb0d3503dd8>
y": 18.0
<AddBackward0 object at 0x7fb0d3503908>
y^3: 24.0
As can be seen, the first and second derivatives (I have zeroed out the grad before computing the second derivative) are fine, but the third derivative is wrong. I thought that, with CloneBackward being the grad_fn, the graph of the derivative is simply a copy of the original graph with the node associated with the expression y=x**3 replaced by the node associated with the expression x.grad. But I don't understand how AddBackward0 is the grad_fn after the second call to backward with create_graph = True.
Seeking clarity on this. |
st83740 | Hello,
I am trying to install PyTorch using the Quick Start Locally menu on the website. I chose the Stable 1.1 version for Windows, selected Pip as my package manager, chose my Python version (3.7), and no CUDA. I have already installed numpy and updated my pip, but when I run the commands in my command line, I get "ERROR: torch-1.1.0-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform" and "ERROR: torchvision-0.3.0-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform". I have already checked to make sure that I am using the correct command for my Python version and tried reading other solutions online. Could anyone give me some advice and guidance for fixing this? Thanks!
FYI: I am using windows 7 64 bit with python 3.7.4 installed and pip 19.2.1 without cuda or the Anaconda thing |
st83741 | Just figured it out: I had 64-bit Python installed, but I was using a 32-bit version. Thanks anyway!
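For anyone hitting the same wheel error, a quick way to check the bitness of the interpreter you are actually running (a small sketch):
import platform
import struct
import sys
print(sys.version)                  # full interpreter version string
print(platform.architecture()[0])   # '32bit' or '64bit'
print(struct.calcsize("P") * 8)     # pointer size in bits: 32 or 64 |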
st83742 | Hey there,
I’m coding my first LSTM and am having issues getting the network to train. The network is meant to perform a binary classification task.
It seems there’s some issue with backpropagating the loss as the gradients I see for the network’s parameters are usually extremely small (<e-03). In particular, I’ve noticed that even after backpropagation, output.grad and hidden.grad are both None, which doesn’t seem right.
Here’s the code for defining the network:
class LSTM(nn.Module):
def __init__(self, input_size, embedding_dim,hidden_size, output_size, batch_size):
super(LSTM, self).__init__()
self.input_size= input_size
self.embedding_size = embedding_dim
self.hidden_size = hidden_size
self.output_size = 1
self.batch_size = batch_size
self.linear_f = nn.Linear(embedding_dim + hidden_size, hidden_size)
self.linear_i = nn.Linear(embedding_dim + hidden_size, hidden_size)
self.linear_ctilde = nn.Linear(embedding_dim + hidden_size, hidden_size)
self.linear_o = nn.Linear(embedding_dim + hidden_size, hidden_size)
self.decoder = nn.Linear(hidden_size, output_size)
self.init_weights()
self.length=None
def forward(self, x, hidden,c):
x_emb = x
length=x_emb.shape[0]
embs = torch.chunk(x_emb, self.length, 1)
outputs=[]
def step(emb, hid, c_t):
combined = torch.cat((hid, emb), 1)
f = torch.sigmoid(self.linear_f(combined))
i = torch.sigmoid(self.linear_i(combined))
c_tilde = torch.tanh(self.linear_ctilde(combined))
c_t = f * c_t + i * c_tilde
o = torch.sigmoid(self.linear_o(combined))
hid = o * torch.tanh(c_t)
return hid, c_t
for i in range(len(embs)):
hidden, c = step(embs[i], hidden, c)
decoded=self.decoder(hidden)
output=torch.softmax(decoded,1)
return output, hidden
def init_hidden(self):
h0 = torch.zeros(self.batch_size, self.hidden_size,requires_grad=True)
c0 = torch.zeros(self.batch_size, self.hidden_size,requires_grad=True)
return h0, c0
def init_weights(self):
initrange = .1
lin_layers = [self.linear_f, self.linear_i, self.linear_ctilde, self.linear_o, self.decoder]
for layer in lin_layers:
layer.weight.data.uniform_(-initrange, initrange)
if layer in lin_layers:
layer.bias.data.fill_(0)
And here’s the function I’m using to train the network:
def training_loop(batch_size, num_epochs, model, loss_, optim, training_iter, dev_iter, train_eval_iter,verbose,end_early):
step = 0
epoch = 0
total_batches = int(len(training_set) / batch_size)
epoch_loss=[]
start_time=time.time()
outputs=[]
ground_truths=[]
last_fi=model.linear_f.weight
while epoch <= num_epochs:
model.train()
x=next(training_iter)
vectors, conversions = get_batch(x)
vectors = torch.stack(vectors).view([len(vectors),len(vectors[0])]).float() # batch_size, seq_len
conversions = torch.stack(conversions).long().view([batch_size])
model.length=len(vectors[0])
hidden, cell_state = model.init_hidden()
output, hidden = model(vectors, hidden, cell_state)
lossy= loss_(output, conversions)
lossy.backward()
print(output.grad)
optim.step()
model.zero_grad()
if step % total_batches == 0:
if not epoch%1:
model.eval()
print("Epoch %i; Step %i; Loss %f; Train acc: %f; Dev acc %f"
%(epoch, step, lossy.item(),\
evaluate(model, train_eval_iter, lstm),\
evaluate(model, dev_iter, lstm)))
print('')
step += 1
Note: Since the sequences are of varying lengths and I didn't want to do zero-padding (it seemed like it would alter the meaning of the data), I use a batch size of 1 and pass the length of each sequence by setting the model's 'length' attribute. |
st83743 | Hi All,
I am performing semantic segmentation. I can print the loss during each iteration using the code below:
for iter in range(num_epochs):
print(iter)
for (i,l) in trainloader:
i= i.to(device)
l = l.to(device=device, dtype=torch.int64)
outt = model(i)
loss = criterion(outt['out'], l.squeeze(1))
print(loss.item())
optimizer.zero_grad()  # clear gradients accumulated from the previous iteration
loss.backward()
optimizer.step()
Is there a way to print out accuracy as well? Any help is appreciated
Thanks
Nishanth |
st83744 | Something like this should work:
N = 5
nb_classes = 10
h, w = 24, 24
output = torch.randn(N, nb_classes, h, w)
target = torch.randint(0, nb_classes, (N, h, w))
pred = torch.argmax(output, 1)
acc = (pred == target).float().mean()
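Plugged into the training loop from the question, this would be per-pixel accuracy; something like this (a sketch reusing the names from the question):
pred = torch.argmax(outt['out'], 1)           # (N, H, W) class predictions
acc = (pred == l.squeeze(1)).float().mean()   # fraction of correctly labeled pixels
print('loss: %.4f, acc: %.4f' % (loss.item(), acc.item())) |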
st83745 | I am interested in computing f'(x) where f is an ordinary PyTorch computation. The thing is that I don't really care about f(x); I only need the gradient. I could write the computation by hand, but that's pretty tedious. Is using PyTorch's autograd framework efficient for this sort of thing? f'(x) is used inside a torch.no_grad(), so I am not interested in the gradient of the gradient computation.
I have a few use-cases, but one is that I need the gradient of |1-Var(x)| with respect to x, where the variance is computed over the batch dim.
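For reference, the computation in question could be written with torch.autograd.grad, which returns the gradient directly instead of accumulating it via backward (a minimal sketch under the stated setup):
import torch
x = torch.randn(64, 10, requires_grad=True)
loss = (1 - x.var(dim=0)).abs().sum()
grad, = torch.autograd.grad(loss, x)  # gradient with the same shape as x
with torch.no_grad():
    x -= 0.1 * grad  # use the gradient without tracking further history |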
st83746 | I’ve been trying to get PyTorch to work on my AMD GPU using the ROCm drivers. And I’ve had to compile PyTorch from source in order to accomplish this (the prebuilt PyTorch binaries with ROCm enabled only support Vega GPUs and I have an RX 470). There were no errors during the build process and the installation, but it seems something must’ve gone wrong because when I try to just run “import torch” in the python shell, it appears to start loading for a few seconds before returning this and exiting:
python3: Relink `/lib/x86_64-linux-gnu/libsystemd.so.0' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python3: Relink `/lib/x86_64-linux-gnu/libudev.so.1' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
Segmentation fault (core dumped)
I can’t find any information on this error online, so any help in getting this to work would be greatly appreciated. |
st83747 | Hello,
I am a newbie in PyTorch and AI and am doing this as a private project.
My code is supposed to take X numbers (floats) from a list and give me back the (X+1)-th number (float), but all I get back is:
for the output tensor:
tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
device='cuda:0', grad_fn=<ThAddBackward>)
and for loss:
tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
I don't know what this means.
Here is my code, thank you for your help:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import os
DatasetNumber = 1
DataAmount = 2
matrix1 = torch.Tensor(DataAmount * 5)
matrix2 = torch.Tensor(DataAmount * 5)
class TestNetz(nn.Module):
# create the network
def __init__(self):
super(TestNetz, self).__init__()
self.lin1 = nn.Linear(DataAmount * 5, DataAmount * 5) # layers (hidden layers) (functions that are learned to map the input to the output)
self.lin2 = nn.Linear(DataAmount * 5, DataAmount * 5)
def forward(self, x):
x = F.log_softmax(self.lin1(x), 0) # activation function relu
x = self.lin2(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num = 1
for i in size:
num *= i
return num
# prepare the data
k = open('Datenset.txt', 'r')
lines = k.readlines()
i = 0
while i < DataAmount:
j = 0
while j < 5:
matrix1[j + (5 * i)] = float(lines[j + (5 * i) + (DatasetNumber * 5)])
matrix2[j + (5 * i)] = float(lines[(DatasetNumber * DataAmount) + (DataAmount * 5)])
j = j + 1
i = i + 1
print(matrix1)
print(matrix2)
netz = TestNetz()
netz = netz.cuda()
print(netz)
if os.path.isfile('TestNetz.pt'):
netz = torch.load('TestNetz.pt')
for i in range(100):
# Input
input = Variable(matrix1)
input = input.cuda()
out = netz(input)
print(out)
# target
target = Variable(matrix2)
target = target.cuda()
criterion = nn.MSELoss() # loss computation
loss = criterion(out, target)
#print(loss)
netz.zero_grad()
loss.backward()
optimizer = optim.SGD(netz.parameters(), lr=0.01) # optimizer (SGD) with learning rate
optimizer.step()
torch.save(netz, 'TestNetz.pt') |
st83748 | Could you check your input for NaN values?
Just use
print((matrix1==matrix1).all())
There are a few minor issues in your code:
torch.Tensor creates an uninitialized tensor. I would recommend to use e.g. torch.zeros instead, so that the values are zero in case you are not initializing them.
It's uncommon to add F.log_softmax between layers. Usually you would want to use it at your last linear layer in case you have a classification use case. F.relu would be a common non-linearity between layers.
Variables are deprecated since 0.4.0. You can directly use tensors instead now.
Probably not an issue, but you are not closing the file k. A good way is to use with open('Dataset.txt', 'r') as k: so that the file will be automatically closed.
I would suggest creating the criterion and optimizer outside the for loop. It's not that important for the criterion, but in case you use an optimizer with running estimates (e.g. Adam) you would re-initialize it in each iteration. A condensed sketch applying these points is shown below.
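A condensed version of the training part with these changes applied (a minimal sketch, reusing netz, matrix1 and matrix2 from your code):
import torch.nn as nn
import torch.optim as optim
criterion = nn.MSELoss()
optimizer = optim.SGD(netz.parameters(), lr=0.01)  # created once, outside the loop
with open('Datenset.txt', 'r') as k:  # the file is closed automatically
    lines = k.readlines()
input = matrix1.cuda()
target = matrix2.cuda()
for i in range(100):
    out = netz(input)
    loss = criterion(out, target)
    netz.zero_grad()
    loss.backward()
    optimizer.step() |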
st83749 | Thank you for your answer.
The output of this line:
print((matrix1==matrix1).all())
was:
tensor(1, dtype=torch.uint8)
All my data are floats; here are the first 100 numbers from my list "Datenset.txt":
1.19532
1.194
1.19525
1.194
1620
1.19447
1.19387
1.19401
1.19417
1382
1.19461
1.19358
1.19415
1.19378
1508
1.19408
1.19377
1.19377
1.19391
1340
1.19466
1.19386
1.19392
1.19438
1318
1.19488
1.19362
1.19437
1.19417
2254
1.19478
1.19371
1.19417
1.19474
1748
1.19474
1.19414
1.19474
1.19422
1140
1.19454
1.19409
1.19421
1.1944
953
1.19491
1.19435
1.1944
1.19486
963
1.19549
1.19482
1.19486
1.19545
1164
1.19583
1.19458
1.19544
1.19521
2341
1.19687
1.19518
1.1952
1.19669
3691
1.19874
1.1967
1.19671
1.1981
3277
1.19848
1.19732
1.19813
1.19802
2924
1.19891
1.19788
1.19801
1.19869
1970
1.19949
1.19843
1.1987
1.19947
2850
1.19983
1.19822
1.19946
1.19866
2410
1.19947
1.19853
1.19865
1.19926
2262
1.20111
1.19875
1.19924
1.20095
4166
I've tried to apply all your proposals to my code, but the problem is the same:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import os
DatasetNumber = 1
DataAmount = 2
matrix1 = torch.zeros(DataAmount * 5)
matrix2 = torch.zeros(DataAmount * 5)
class TestNetz(nn.Module):
# create the network
def __init__(self):
super(TestNetz, self).__init__()
self.lin1 = nn.Linear(DataAmount * 5, DataAmount * 5) # layers (hidden layers) (functions that are learned to map the input to the output)
F.log_softmax
self.lin2 = nn.Linear(DataAmount * 5, DataAmount * 5)
F.log_softmax
def forward(self, x):
x = F.relu(self.lin1(x), 0) # activation function relu
F.log_softmax
x = self.lin2(x)
F.log_softmax
return x
def num_flat_features(self, x):
size = x.size()[1:]
num = 1
for i in size:
num *= i
return num
# prepare the data
k = open('Datenset.txt', 'r')
lines = k.readlines()
i = 0
while i < DataAmount:
j = 0
while j < 5:
matrix1[j + (5 * i)] = float(lines[j + (5 * i) + (DatasetNumber * 5)])
matrix2[j + (5 * i)] = float(lines[(DatasetNumber * DataAmount) + (DataAmount * 5)])
j = j + 1
i = i + 1
k.close()
#print(matrix1)
#print(matrix2)
netz = TestNetz()
netz = netz.cuda()
#print(netz)
if os.path.isfile('TestNetz.pt'):
netz = torch.load('TestNetz.pt')
for i in range(100):
# Input
input = matrix1
input = input.cuda()
out = netz(input)
#print(out)
# target
target = matrix2
target = target.cuda()
torch.save(netz, 'TestNetz.pt')
criterion = nn.MSELoss() # loss computation
loss = criterion(out, target)
# print(loss)
netz.zero_grad()
loss.backward()
optimizer = optim.SGD(netz.parameters(), lr=0.01) # optimizer (SGD) with learning rate
optimizer.step()
print((matrix1==matrix1).all())
print(matrix1)
print(matrix2)
edit: I've checked all entries with print(type(entrie)) and all entries are 'class float'. |
st83750 | Hello, I have run into the same problem, and when I normalized my inputs to be in the interval [-1,1], the hidden and output values no longer contained NaN values. I hope this gives you another option to try.
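For example, a simple min-max rescaling into [-1, 1] could look like this (a sketch; normalizing each feature separately may work even better here, since the prices and the volume-like column live on very different scales):
def rescale(t):
    # map values linearly into [-1, 1]
    return 2 * (t - t.min()) / (t.max() - t.min()) - 1
matrix1 = rescale(matrix1)
matrix2 = rescale(matrix2) |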
st83751 | Manually managing which modules go where can be tedious and suboptimal, and data parallelization is memory inefficient. With nvlink providing high-bandwidth inter-GPU interconnect, is it possible to abstract away the individual gpu devices and provide a single unified compute pool? |
st83752 | As the title says, the latest nightly build for linux-64 is missing. See https://anaconda.org/pytorch/pytorch-nightly-cpu.
This only affects the builds for CPU (i.e., pytorch-nightly-cpu, not pytorch-nightly). Does anyone know what could cause that? I was eagerly waiting for a feature that was pushed yesterday. |
st83753 | I was looking at IndRNN:
Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN
and was looking for an official PyTorch implementation. Is there one? |
st83754 | import torch
# Assuming that I have a tensor a.
a = torch.Tensor([-1,4,6,3,-1,-1])
# Then I need to select a random element(not -1) from a.
# How to do that efficiently? |
st83755 | This code should work:
a = torch.Tensor([-1,4,6,3,-1,-1])
valid_idx = (a!=-1).nonzero().view(-1)
choice = torch.multinomial(valid_idx.float(), 1)
a[valid_idx[choice]]
I'm not sure if it's more efficient than just calling torch.randint(0, a.size(0), (1,)) in a while loop to skip the -1 entries. It might depend on the ratio between valid and invalid entries.
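For completeness, the rejection-sampling alternative could look like this (a sketch; it is cheap when most entries are valid):
idx = torch.randint(0, a.size(0), (1,))
while a[idx] == -1:
    idx = torch.randint(0, a.size(0), (1,))
sample = a[idx] |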
st83756 | Hi, ptrblck
import torch
# Assuming that I have a tensor a.
a = torch.Tensor([[3,4, -1, -1,5,6],
[-1,5,-1,1,-1,-1],
[-1,4,4,2,3,0]])
# Then I need to select a random element(not -1) from a for each line
# How to do that efficiently?
# Eg.
# a_sample = [[4],[1],[3]] |
st83757 | In that case, this should work:
a = torch.Tensor([[3,4, -1, -1,5,6],
[-1,5,-1,1,-1,-1],
[-1,4,4,2,3,0]])
valid_idx = (a!=-1).nonzero()
choice = torch.multinomial(torch.arange(valid_idx.size(0)).float(), 1)
a[valid_idx[choice].squeeze().chunk(2)]
EDIT: Sorry, I didn't realize you would like to sample from each line.
Will update the code in a minute.
EDIT2: I’m not sure, if we can avoid using a loop in this case:
valid_idx = (a!=-1).nonzero()
unique_rows = valid_idx[:, 0].unique()
valid_row_idx = [valid_idx[valid_idx[:, 0] == u] for u in unique_rows]
ret = []
for v in valid_row_idx:
choice = torch.multinomial(torch.arange(v.size(0)).float(), 1)
ret.append(a[v[choice].squeeze().chunk(2)])
ret = torch.stack(ret)
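Applied to the example tensor above, ret holds one randomly chosen valid entry per row (values vary per run):
print(ret.shape)  # torch.Size([3, 1])
print(ret)        # e.g. tensor([[4.], [1.], [0.]]) |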
st83758 | Hi, thank you for a great code snippet.
I found an error case where your code raises a RuntimeError.
When there is only one element in valid_idx and that element is at the 0th position in a, it raises the following error:
RuntimeError: invalid argument 2: invalid multinomial distribution (sum of probabilities <= 0) at /Users/distiller/project/conda/conda-bld/pytorch_1556653464916/work/aten/src/TH/generic/THTensorRandom.cpp:343
My reproducible code is as follows:
a = torch.Tensor([-1,4,6,3])
indices = (a == -1).nonzero().view(-1)
choice = torch.multinomial(indices.float(), 1)
Your second suggestion works fine with this case. |
st83759 | Hello, I have some questions about torch.unique().
For a CPU tensor:
input = torch.tensor([[2,5,7,6],[9,7,4,8],[1,3,2,3],[2,5,7,6]])
input
tensor([[2, 5, 7, 6],
[9, 7, 4, 8],
[1, 3, 2, 3],
[2, 5, 7, 6]])
torch.unique(input)
tensor([3, 1, 8, 4, 9, 6, 7, 5, 2])
torch.unique(input, sorted=True)
tensor([1, 2, 3, 4, 5, 6, 7, 8, 9])
torch.unique(input, dim=0)
tensor([[1, 3, 2, 3],
[2, 5, 7, 6],
[9, 7, 4, 8]])
torch.unique(input, sorted=True, dim=0)
tensor([[1, 3, 2, 3],
[2, 5, 7, 6],
[9, 7, 4, 8]])
torch.unique(input, sorted=False, dim=0)
tensor([[1, 3, 2, 3],
[2, 5, 7, 6],
[9, 7, 4, 8]])
I note that torch.unique defaults to sorted=True in the official documentation (https://pytorch.org/docs/stable/_modules/torch/functional.html#unique). However, when using torch.unique(input), the output is unsorted. When I use torch.unique(input, sorted=False, dim=0), the output is still sorted.
Furthermore, I also tested a GPU tensor and found that all the outputs are sorted regardless of whether sorted=True or False is set.
Why? Is there something wrong here? |
st83760 | Solved by ptrblck in post #2 |
st83761 | Depending on the underlying implementation the tensor is sorted e.g. for performance reasons.
All CUDA tensors are sorted by default, due to limitations in thrust. |
st83762 | So the only special case is that when the input is a CPU tensor and the dim arg is None, the output is unsorted by default. Right? |
st83763 | Hi! I have a model that is too large to fit inside a single TITAN X (even with a batch size of 1). I want to split it over several GPUs such that the memory cost is shared between GPUs. That is, place different parts of the same model on different GPUs and train it end-to-end.
Questions:
Is this possible in PyTorch? If not, is this possible in Torch?
Would inter-GPU communication (say, for transferring activations to later layers) involve GPU->host->GPU type transfers? |
st83764 | Yes, it is possible. Just put some of the layers on GPU0 (.cuda(0)) and others on GPU1 (.cuda(1)). Then, in the forward function, once the base on the first GPU finishes processing, call .cuda(1) on the output. Of course this can be extended to as many GPUs as you want. See an example below.
No. Calling .cuda(i) on a CUDA tensor that’s on GPU j (j != i) is purely a peer to peer copy. Host doesn’t have to do anything.
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        super(MyModel, self).__init__()
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
            self.large_submodule2.cuda(1)

    def forward(self, x):
        x = self.large_submodule1(x)
        if self.split_gpus:
            x = x.cuda(1) # P2P GPU transfer
        return self.large_submodule2(x)
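Usage would then look something like this (a sketch; the input starts on GPU 0, the output lives on GPU 1, and autograd handles the backward pass across the device boundary automatically):
model = MyModel(split_gpus=True)
x = torch.randn(8, 3, 224, 224).cuda(0)  # input on the first GPU
out = model(x)                           # output ends up on the second GPU
loss = out.sum()
loss.backward()  # gradients flow back through the P2P copy |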
st83765 | @apaszke Hi, many thanks for your examples.
I notice that when I split the whole model across 4 GPUs and do forward/backward, the GPU memory on the first GPU costs much more than it should. For example, if the whole model costs 12GB on a single GPU, when split across four GPUs the first GPU costs 11GB and the sum of the others is about 11GB.
Is there an explanation for how GPU memory is allocated when using multiple GPUs for model parallelism?
Another question: when running a forward pass with model parallelism, only one GPU shows 100% Volatile GPU-Util; the others are at 0%.
Is there any method to utilize all four GPUs at once? |
st83766 | I am doing late fusion of features extracted by two large resnets.
After that, can the classifier on the concatenated features be run on the second GPU?
Will that be slow?
How do I combine data parallelism and model parallelism?
(say I have 4 GPUs)
def forward(self, x):
x1= self.resnet_1(x[:,0:3,:,:]).cuda(0)
x2= self.resnet_2(x[:,3:6,:,:]).cuda(1)
flat = torch.cat([x1, x2],1).cuda(???)
logit = self.fc(flat).cuda(???)
return logit |
st83767 | I am also interested in late fusion and running particular submodels on different GPUs. Did anyone find out how to do it? I am currently doing something similar to what @Hengck suggested, but it is not working.
Thanks |