id | text
---|---
st83868 | For weighted sampling you would have to create a weight for each sample.
If you don’t have the target tensors already computed, you could iterate your dataset and store the target tensors.
Here is a small example, which should match your use case:
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

# Create dummy data with class imbalance 99 to 1
numDataPoints = 1000
data_dim = 5
bs = 100
data = torch.randn(numDataPoints, data_dim)
target = torch.cat((torch.zeros(int(numDataPoints * 0.99), dtype=torch.long),
                    torch.ones(int(numDataPoints * 0.01), dtype=torch.long)))
print('target train 0/1: {}/{}'.format(
    (target == 0).sum(), (target == 1).sum()))
# Create ConcatDataset
dataset = torch.utils.data.TensorDataset(data, target)
train_dataset = ConcatDataset((dataset, dataset))
# Get all targets
targets = []
for _, target in train_dataset:
    targets.append(target)
targets = torch.stack(targets)
# Compute samples weight (each sample should get its own weight)
class_sample_count = torch.tensor(
    [(targets == t).sum() for t in torch.unique(targets, sorted=True)])
weight = 1. / class_sample_count.float()
samples_weight = torch.tensor([weight[t] for t in targets])
# Create sampler, dataset, loader
sampler = WeightedRandomSampler(samples_weight, len(samples_weight))
train_loader = DataLoader(
    train_dataset, batch_size=bs, num_workers=1, sampler=sampler)
# Iterate DataLoader and check class balance for each batch
for i, (x, y) in enumerate(train_loader):
    print("batch index {}, 0/1: {}/{}".format(
        i, (y == 0).sum(), (y == 1).sum()))
In the first part I’m creating a dummy imbalanced dataset.
You should of course just skip this step and use your original ConcatDataset.
After storing all targets, the class_sample_count and the corresponding samples_weight tensors are created and used to build the WeightedRandomSampler.
As you can see in the last loop, each batch should be balanced thanks to the sampler.
Let me know if that would work for you. |
st83869 | First up thank you for the quick and detailed response!
I have tried executing the code at #Get all targets but have run into the following error:
TypeError: expected Tensor as element 0 in argument 0, but got int
which appears at the following line:
targets = torch.stack(targets)
When looking at targets, I noticed in my case it is full of ints, but torch.stack rather expects Tensors, if I understand it correctly?
Is this problematic or should I just use different functions to compute the samples weight? |
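For context, if the stored targets turn out to be plain Python ints rather than tensors, a common workaround (my assumption, not something stated in this thread) is to build the tensor directly instead of stacking:
targets = torch.tensor(targets)  # works for a list of ints; torch.stack expects a list of tensors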
st83870 | I’m experiencing memory leak when using a linear layer and a Relu layer.
def forward(self, noise, elem_class):
    process = psutil.Process(os.getpid())
    print("Memory 1.1: ", process.memory_info().rss, "bytes", flush=True)
    in_vector = torch.cat((noise, elem_class), dim=-1)
    process = psutil.Process(os.getpid())
    print("Memory 1.2: ", process.memory_info().rss, "bytes", flush=True)
    in_vector = self.shared_gen_linear(in_vector)
    process = psutil.Process(os.getpid())
    print("Memory 1.3: ", process.memory_info().rss, "bytes", flush=True)
    in_vector = F.relu(in_vector)  # batch_size x (2*standard_dim)
    process = psutil.Process(os.getpid())
    print("Memory 1.4: ", process.memory_info().rss, "bytes", flush=True)
    feats = in_vector[:, :self.standard_dim]
    feats2 = in_vector[:, self.standard_dim:]
    for layer in range(0, len(self.feat_gen_linear)):
        feats = self.feat_gen_linear[layer](feats)
    feats = feats.view(-1, self.feature_size, self.num_classes)
    feats = torch.softmax(feats, dim=-1)
    feats = feats.view(-1, self.num_classes*self.feature_size)
    feats2 = self.feats2_gen_linear(feats2)
    feats2 = feats2.view(-1, self.num_classes, 3)
    feats2 = torch.softmax(feats2, dim=-1)
    feats2 = feats2.view(-1, self.num_classes*3)
    return feats, feats2
And this layer is defined in the init function of the class.
self.shared_gen_linear = nn.Linear(self.noise_dim + self.num_classes, 2*self.standard_dim)
From the output of the process memory, it appears that the memory leak occurs in
in_vector = self.shared_gen_linear(in_vector)
in_vector = F.relu(in_vector) # batch_size x (2*standard_dim)
As seen here,
Memory 1.1: 1217851392 bytes
Memory 1.2: 1217851392 bytes
Memory 1.3: 1218121728 bytes
Memory 1.4: 1218392064 bytes
Memory 1.1: 1219203072 bytes
Memory 1.2: 1219203072 bytes
Memory 1.3: 1219473408 bytes
Memory 1.4: 1220014080 bytes
Any insights on why this memory leak would occur? I am using PyTorch 1.1.0. |
st83871 | Solved by ptrblck in post #2
Could you post an executable code snippet to reproduce this issue?
Using the forward method and removing undefined parts of the code does not result in a memory leak:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.shared_gen_linear = nn.Linea… |
st83872 | Could you post an executable code snippet to reproduce this issue?
Using the forward method and removing undefined parts of the code does not result in a memory leak:
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.shared_gen_linear = nn.Linear(10, 10)

    def forward(self, noise, elem_class):
        process = psutil.Process(os.getpid())
        print("Memory 1.1: ", process.memory_info().rss, "bytes", flush=True)
        in_vector = torch.cat((noise, elem_class), dim=-1)
        process = psutil.Process(os.getpid())
        print("Memory 1.2: ", process.memory_info().rss, "bytes", flush=True)
        in_vector = self.shared_gen_linear(in_vector)
        process = psutil.Process(os.getpid())
        print("Memory 1.3: ", process.memory_info().rss, "bytes", flush=True)
        in_vector = F.relu(in_vector)  # batch_size x (2*standard_dim)
        process = psutil.Process(os.getpid())
        print("Memory 1.4: ", process.memory_info().rss, "bytes", flush=True)
        feats = in_vector[:, :5]
        feats2 = in_vector[:, 5:]
        process = psutil.Process(os.getpid())
        print("Memory 1.4: ", process.memory_info().rss, "bytes", flush=True)
        return feats, feats2

model = MyModel()
model(torch.randn(1, 5), torch.randn(1, 5))

> Memory 1.1: 247803904 bytes
Memory 1.2: 247803904 bytes
Memory 1.3: 247803904 bytes
Memory 1.4: 247803904 bytes
Memory 1.4: 247803904 bytes |
st83873 | I’ve found the problem while creating an example executable code.
The problem actually is not related to those two operations at all, even though the memory seems to accumulate after those operations.
In fact, within my original training code, I accumulated the loss tensors themselves instead of adding their values, thus increasing the memory required. Specifically, I had
total_gen_loss += gen_loss
instead of
total_gen_loss += gen_loss.item()
where gen_loss is an outputted torch.tensor (e.g., gen_loss == tensor(2.9921, grad_fn=))
For some reason, the memory accumulation occurs during the Linear and Relu layer operations. However, fixing the loss addition stopped the memory accumulation.
Thank you for your help! |
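For illustration, a minimal sketch of the fixed accumulation pattern (the surrounding loop, loader, criterion, netG, target, and optimizerG are placeholders, not taken from the thread):
total_gen_loss = 0.0
for batch in loader:
    gen_loss = criterion(netG(batch), target)
    gen_loss.backward()
    optimizerG.step()
    optimizerG.zero_grad()
    total_gen_loss += gen_loss.item()  # .item() returns a plain Python float, so the graph attached to gen_loss can be freed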
st83874 | I want to use GN/IN in my model, but APEX only keeps BN in FP32. So what should I do to use GN/IN with FP32 in APEX? Thanks very much. |
st83875 | F.instance_norm uses batch_norm internally (line of code).
nn.GroupNorm is defined as an FP32 method here. |
st83876 | Hello,
Sorry if this is a duplicate question yet I couldn’t find a similar question.
Is there a reason why PyTorch uses [N, X] dimension format? That is the prebuilt layers take inputs with dimensionality of [BatchSize, NumberOfInputFeatures] rather than [NumberOfInputFeatures, BatchSize]. The Broadcasting semantics are also arranged accordingly.
Was this an arbitrary design choice or was it intentional because of some efficiency purpose? I am asking this as this is against the dimension conventions generally embraced in papers etc…
Thanks |
st83877 | Solved by Nikronic in post #2
Hi,
I have enrolled different courses and worked with some of the popular frameworks and it seems it is all about conventions. And the most used one is the method pytorch uses.
By the way, Andrew Ng uses same convetion has been used in PyTorch in computer vision and deep learning courses he instru… |
st83878 | Hi,
I have enrolled in different courses and worked with some of the popular frameworks, and it seems it is all about conventions. The most widely used one is the convention PyTorch uses.
By the way, Andrew Ng uses the same convention as PyTorch in the computer vision and deep learning courses he teaches at Stanford University.
Good luck |
st83879 | I am a novice.
I haven’t learned Caffe2, so I use Flask to deploy the model with Python.
But I don’t know which is better. Is Caffe2 faster? Or, if I optimize the inference code and the model, can I get similar speed? |
st83880 | I think it depends on your use case.
As far as I understand your current PyTorch model runs in a Flask app and works.
Do you see any bottlenecks regarding the performance, i.e. do you expect a huge workload which your current approach cannot handle?
I would look for the current bottlenecks and optimize according to these. |
st83881 | Here’s a tutorial for deploying a pytorch model: https://pytorch.org/tutorials/beginner/deploy_seq2seq_hybrid_frontend_tutorial.html. Hope it helps. |
st83882 | What was your model? I’m struggling to figure out how to deploy a linear regression model. Do we need to save the weights? |
st83883 | This might be off topic, but I did not find where to report a bug on the NVIDIA website. I tried training a GAN based on the pix2pixHD architecture with the Amp ‘O1’ opt_level, using CUDA 10.1, cuDNN 7.6, a PyTorch nightly build and Ubuntu 18.04.
Code breaks here:
File “./samplegenerator.py”, line 263, in
scaled_loss.backward()
File “/usr/local/lib/python3.6/dist-packages/torch/tensor.py”, line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python3.6/dist-packages/torch/autograd/init.py”, line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: _cublasOpFromChar input should be ‘t’, ‘n’ or ‘c’ but got `
This kind of code should reproduce the behaviour:
class Modelis(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3,
                              stride=2, padding=1)
        self.deconv = nn.ConvTranspose2d(in_channels=256, out_channels=128,
                                         kernel_size=3, stride=2, padding=1, output_padding=1)

    def forward(self, x):
        x = self.conv(x)
        x = self.deconv(x)
        return x

Criterion = nn.BCEWithLogitsLoss()
netG = Modelis()
netG = netG.cuda()
optimizerG = optim.Adam(netG.parameters(), lr=0.001, betas=(0.5, 0.999))
netG, optimizerG = amp.initialize(netG, optimizerG, opt_level='O1')

for i in range(100):
    batch = (torch.randn(8, 128, 16, 16).cuda() - 0.5) * 2
    output = netG(batch)
    loss = Criterion(output, torch.ones_like(output))
    with amp.scale_loss(loss, optimizerG) as scaled_loss:
        scaled_loss.backward()
It works in fp32 mode. After some experiments I found that the backward pass breaks on nn.ConvTranspose2d when output_padding is 1 (not 0). Using the PyTorch docker container (CUDA 10.0, cuDNN 7, PyTorch 1.0), both fp32 and fp16 work fine. |
st83884 | We could reproduce this issue and are tracking it here.
Thanks @ngimel for the support tracking down this issue. |
st83885 | Consider the following loop:
for batch in dataloader:
    batch = batch.cuda()
    features = model(batch)
If forwarding a batch takes up almost all the memory on my GPU (let’s say 7 GB out of 8), then this loop will fail on the second iteration due to an OOM error.
This version won’t:
for batch in dataloader:
    batch = batch.cuda()
    features = model(batch)
    del features
Even though features is only a very small tensor (a dozen values). What exactly does del do here? Why do I need to manually free the memory to achieve what I want? Is this good practice? If not, what is an alternative that doesn’t require halving the batch size? |
st83886 | The whole computation graph is connected to features, which will also be freed, if you didn’t wrap the block in a torch.no_grad() guard.
However, the second iteration shouldn’t cause an OOM issue, since the graph will be freed after optimizer.step() is called.
If you run out of memory after the training and in the first evaluation iteration, you might keep unnecessary variables alive due to Python’s function scoping as explained in this post. |
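For illustration, a minimal sketch of the inference version of the loop with gradient tracking disabled (model and dataloader as in the question):
with torch.no_grad():
    for batch in dataloader:
        batch = batch.cuda()
        features = model(batch)  # no graph is attached, so nothing extra stays alive between iterations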
st83887 | Indeed, I understand now. The previous iteration of features remains in memory while model(batch) is evaluated the second time, and so there are points in the execution where two different graphs exist in memory.
You’re right, in inference mode I should be wrapping the call with no_grad.
Would this not happen in a training loop, as the additional backward pass would free the graph? |
st83888 | In a training loop you would usually reassign the output to the same variable, thus deleting the old one and storing the current output.
If you are using e.g. different variables for the output, losses etc. in the training and validation loop, you would waste a bit of memory, which could be critical, if you are using almost the whole GPU memory. |
st83889 | I have already installed pytorch by conda. Can I use mkldnn without building from source? |
st83890 | mkldnn should be used in the conda binaries.
You can check the MKL-DNN version using print(torch.__config__.show()). |
st83891 | I have a neural network model that I have mapped to a GPU device (hence the CUDA network description in the topic title) and during training I wish to put the entire training set through the network. (For exploratory purposes, not for training)
If I was in evaluation mode for a test set I would map the saved (trained) model to a CPU device and pass the test set in as one giant batch.
However, if I try to do this during training with the CUDA network I overload the gpu memory. Is there a good way to temporarily stop the model relying on GPU memory within training just to put the training set through (without recording gradients)?
My current ideas are:
find a way to temporarily map the network to the CPU without saving and reloading the model
put the training data through one example at a time (this is very slow and goes against the point of the original problem)
Any help on this would be really appreciated! Apologies for not providing any code, I feel like it’s a problem that maybe has a generalized solution anyway. |
st83892 | Solved by ptrblck in post #2
I assume “overloading” means your GPU is running our of memory?
If so, you could try to wrap this special forward pass in a with torch.no_grad() block so avoid storing the intermediate activations (which are not needed, if you don’t plan on calling backward).
If that doesn’t help and you are still… |
st83893 | I assume “overloading” means your GPU is running out of memory?
If so, you could try to wrap this special forward pass in a with torch.no_grad() block to avoid storing the intermediate activations (which are not needed, if you don’t plan on calling backward).
If that doesn’t help and you are still running out of memory, you could of course chunk the data into smaller batches, which should still be faster than passing each sample one by one.
spacemeerkat:
to temporarily map the network to the CPU without saving and reloading the model
You don’t have to save and reload the model in order to push it to the CPU.
Just call model.to('cpu') to move all parameters to the CPU. |
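Putting both suggestions together, a minimal sketch (train_data, the chunk size of 256 and the variable names are placeholders) of pushing the training set through without gradients and in smaller chunks:
model.eval()
outputs = []
with torch.no_grad():
    for chunk in torch.split(train_data, 256):   # smaller chunks instead of one giant batch
        outputs.append(model(chunk.cuda()).cpu())
features = torch.cat(outputs)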
st83894 | In the end I used the model.to('cpu') option and that worked for me, many thanks again! |
st83895 | There are libraries to convert PyTorch to ONNX. But is there some library to convert ONNX to Pytorch? |
st83896 | There is a feature request for this functionality: link
Please feel free to add on to why this may be useful. Also feel free to contribute by creating this feature yourself and submitting a pull request. |
st83897 | Is there any way to shuffle between batch but don’t interrupt the order within each batch?
Many Thanks! |
st83898 | I don’t actually understand what you mean (the sentence is very confusing). Can you give an example? |
st83899 | You can do it at a data preprocessing stage:
List of batches -> shuffle it -> feed each batch to the model in the training loop |
st83900 | You can form the batched data in your Dataset so that it holds a list of already-ordered batches, then use a DataLoader to shuffle this list. |
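As an illustration of that idea, a minimal sketch (the data and batch size are made up) where each Dataset item is already an ordered batch, so the DataLoader only shuffles whole batches:
import torch
from torch.utils.data import Dataset, DataLoader

class PreBatchedDataset(Dataset):
    def __init__(self, data, batch_size):
        # split the ordered data into consecutive batches; order inside each batch is preserved
        self.batches = list(torch.split(data, batch_size))

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, idx):
        return self.batches[idx]

data = torch.arange(100).float()
loader = DataLoader(PreBatchedDataset(data, batch_size=10), batch_size=1, shuffle=True)
for batch in loader:
    batch = batch.squeeze(0)  # drop the extra leading dim added by the DataLoader
    print(batch)              # batches come in random order, but each keeps its internal order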
st83901 | Hi there,
I have an tensor, and I want to rotate it at a given degree, i.e. 0.05.
How can I do it efficiently? I saw the torchvision.transforms.RandomRotation but this is used for data augmentation.
Any advice?
Thank you |
st83902 | You can do it in multiple ways.
You can use the PIL library directly for the rotation, as PyTorch uses PIL for this kind of image operation.
Another simple way I can think of is to use the RandomRotation class itself, but give the min and max the same value.
If you want to write a proper class, copy the RandomRotation source code and change it the way you need, e.g. taking a single fixed rotation angle. |
st83903 | Thank you for the suggestion.
The problem is that this is a tensor at an intermediate layer of a network. Using the PIL library would require converting it back to NumPy, which is significantly slow.
Secondly, I haven’t yet figured out how to use the RandomRotation class, because it is only meant to be used in the augmentation pipeline.
Thirdly, making a custom class as a modification of RandomRotation sounds reasonable, but I haven’t tried it yet. I just wonder why PyTorch does not support this function directly.
Thank you for your suggestions |
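For completeness, one differentiable way to rotate an intermediate [N, C, H, W] tensor by a fixed angle is an affine grid plus grid sampling. This is my own sketch, not from the thread, and the align_corners argument only exists in more recent PyTorch versions:
import math
import torch
import torch.nn.functional as F

def rotate(x, angle_rad):
    # x: [N, C, H, W]; build one 2x3 rotation matrix per sample
    cos, sin = math.cos(angle_rad), math.sin(angle_rad)
    theta = x.new_tensor([[cos, -sin, 0.0],
                          [sin,  cos, 0.0]]).repeat(x.size(0), 1, 1)
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

out = rotate(torch.randn(4, 3, 32, 32), 0.05)  # rotate by 0.05 radians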
st83904 | I’ve been getting a Floating point exception at different iterations in my code. I am not using shuffling, but even then this error arises randomly. There is no traceback for this error either, so I’m unable to find where it occurs. I’ve tried pdb and try/except, but they don’t work in this case. I haven’t tried fpectl, but I’m not sure if it would be useful here. Is there a better way to find the origin (in the form of a traceback) of Floating point exception (core dumped)? |
st83905 | Could you try to get a stack trace using gdb:
$ gdb --args python my_script.py
...
Reading symbols from python...done.
(gdb) run
...
(gdb) backtrace
... |
st83906 | Thanks for replying. I got this:
During startup program terminated with signal SIGFPE, Arithmetic exception.
(gdb) backtrace
No stack.
I’ve done everything to remove bad samples, but still it persists. |
st83907 | Hi,
In the example of word_language_model, we have
def repackage_hidden(h):
    """Wraps hidden states in new Variables, to detach them from their history."""
    if type(h) == Variable:
        return Variable(h.data)
    else:
        return tuple(repackage_hidden(v) for v in h)
I don’t think I fully understand what the “history” includes; can somebody help clarify this?
Thanks! |
st83908 | Every variable has a .creator attribute that is an entry point to a graph, that encodes the operation history. This allows autograd to replay it and differentiate each op. So each hidden state will have a reference to some graph node that has created it, but in that example you’re doing BPTT, so you never want to backprop to it after you finish the sequence. To get rid of the reference, you have to take out the tensor containing the hidden state h.data and wrap it in a fresh Variable, that has no history (is a graph leaf). This allows the previous graph to go out of scope and free up the memory for next iteration. |
st83909 | I was going to add that .detach() does the same thing, but I checked the code and realized that I’m not at all sure about the semantics of var2 = var1.detach() vs var2 = Variable(var1.data)… |
st83910 | Right now the difference is that .detach() still retains the reference, but it should be fixed.
It will change once more when we add lazy execution. In eager mode, it will stay as is (always discard the .creator and mark as not requiring grad). In lazy mode var1.detach() won’t trigger the compute and will save the reference, while Variable(var1.data) will trigger it, because you’re accessing the data. |
st83911 | So we do not need to repackage hidden state when making predictions ,since we don’t do a BPTT ? |
st83912 | For any latecomers, the Variable object does not have a creator attribute any more; it has been renamed to grad_fn. You can see here for more information. |
st83913 | Shouldn’t the code set requires_grad=True on the hidden state, as shown below?
As per my understanding, each BPTT segment should be able to have gradients computed for h.
def repackage_hidden(h):
    """Wraps hidden states in new Variables, to detach them from their history."""
    if type(h) == Variable:
        return Variable(h.data, requires_grad=True)
    else:
        return tuple(repackage_hidden(v) for v in h)
Thanks. |
st83914 | it has already been updated to be compatible with the latest PyTorch version:
def repackage_hidden(h):
    """Wraps hidden states in new Tensors, to detach them from their history."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    else:
        return tuple(repackage_hidden(v) for v in h) |
st83915 | Hi,
I have downloaded the compiled version of LibTorch 1.0 and saw that all DLLs are compiled for 64bit application.
I have a 32bit application which I want to link to LibTorch so I need 32bit version DLLs.
Is there a way to download or compile it?
Thanks! |
st83916 | zarlior:
downloaded
Do you know how to get the 32-bit version DLLs, or whether PyTorch will support 32-bit in the future? |
st83917 | I kind of understand that when no argument is passed to .backward(), the tensor we compute gradients for has to be a scalar, otherwise there would be an error. However, today I read code that calls .backward() with no argument on a vector tensor. Does anyone know why this works?
I paste a snippet here
loss = F.binary_cross_entropy_with_logits(xbhat, xb, size_average=False) / B
loss.backward()
where xbhat is predicted values and xb are original mnist data |
st83918 | Hi Yifan_Xu,
Backward for a vector tensor certainly does not work without the input gradient.
One reason the above code may be working is that the size_average argument has been deprecated in the current version of PyTorch in favour of the reduction argument, which has a default value of ‘mean’. Hence, the above code may have unintentionally calculated the gradient of a scalar tensor.
Hope this helps. |
st83919 | Hey Mazhar_Shaikh,
Thanks for your reply. I finally understand what was going on under the hood. I didn’t make it clear that this code was from a neural net. That means although xb is passed in as a vector, it is not computed on as one inseparable vector. Say we have xb = [x1, x2, x3] and a simple MLP that outputs xbhat = g(w0*b + w1*x1 + w2*x2 + w3*x3). As you can easily tell, x1, x2 and x3 are handled separately, not as a whole. That’s why using .backward() without an argument is legal here. |
st83920 | Is there a simple way to use dropout during evaluation mode?
I have a plan to use a pre-trained network where I reapply the training dropout, and find outliers by measuring the variance in predictions -all done in evaluation mode so that no gradients are backpropogated.
Many thanks in advance! |
st83921 | Solved by SimonW in post #2
Assuming that you are using the dropout modules.
model.eval()
for m in model.modules():
    if m.__class__.__name__.startswith('Dropout'):
        m.train() |
st83922 | Assuming that you are using the dropout modules.
model.eval()
for m in model.modules():
    if m.__class__.__name__.startswith('Dropout'):
        m.train() |
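To connect this snippet with the variance-based outlier idea from the original question, a minimal sketch (model, x and the number of forward passes are placeholders; it assumes the model returns a single tensor):
model.eval()
for m in model.modules():
    if m.__class__.__name__.startswith('Dropout'):
        m.train()  # dropout stays stochastic, batchnorm etc. stay in eval mode

with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(20)])  # 20 stochastic forward passes
    variance = preds.var(dim=0)  # high variance can flag inputs the model is uncertain about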
st83923 | Thanks for the fast reply! So I assume this makes an exception for ‘Dropout’ and sets it active using train() but leaves batchnorm etc. switched off in eval mode? |
st83924 | One quick question: will this method zero different nodes every time you present a new image to the network like it would in training? |
st83925 | That’s great, I was just checking because I was unsure if it would use a particular dropout selection and then stick to that selection until the model was re-instantiated with a line like “m.eval()” before presenting a new image |
st83926 | Hi Simon, Thanks for suggesting this approach! I tried it, and it gives predictions with certain randomness for in-class inputs. But the randomness is almost non-existent for out-of-class inputs. However, if I put the model in the training mode without back propagation (so that the weights are not updated) using model.train(), I observe significantly more randomness in the predictions for both in-class and out-of-class inputs, but the accuracy for the in-class inputs becomes very low (dropped from 95% to 63%). To me, the two approaches should give the same results. Any thoughts on the different behavior? My model also has batch norm. Thanks. |
st83927 | I have trained a network for detection using bounding boxes. During evaluation I remove boxes using IoU technique and thresholding. However, I am getting multiple boxes for a single object (as seen in picture below). The bounding boxes are
tensor([[448.00, 219.25, 476.33, 287.26], [453.84, 223.89, 473.85, 279.77], [450.89, 218.27, 467.31, 276.45], [448.76, 219.92, 484.23, 283.18]])
Is there any way I can combine or remove the extra boxes?
[image: test.png, 720×720, 214 KB] |
st83928 | I am currently working on a module for a research paper where I need to perform a few calculations, like solving quadratic equations. Because the module is used a lot throughout the network and my memory is under pressure, I want to perform the calculations inplace. But the code is turning out to be quite ugly and borderline incomprehensible. Is there any way to improve the readability so that it’s not a massive chain of .mul_, .add_ and parentheses everywhere?
Also the autograd-errors are quite hard to debug if something is wrong in the inplace-calcuation.
Can something like the JIT-compiler help in this case? If yes, I can’t jit only part of the code, can I? I imagine the analysis for the compiler to be quite easy and well-understood. |
st83929 | Hi,
I am porting some of my old code (PyTorch 0.4.0) to the latest PyTorch version, although there is hardly any syntactic difference. But I have observed that some architectures, like an autoencoder, train much slower on the latest PyTorch, while a classification network such as an off-the-shelf ResNet from torchvision models takes the same amount of time to train.
I have tried using the torch.backends.cudnn.benchmark=True solution, but that doesn’t help.
I have also tried reinstalling conda and the environments, but it makes no difference.
My machine’s configuration is Ubuntu 18.04, Nvidia GTX 1080 Ti
For a sample run, I took this simple convolutional autoencoder (https://github.com/L1aoXingyu/pytorch-beginner/tree/master/08-AutoEncoder) and trained it on MNIST.
With pytorch 0.4.0, torchvision 0.2.1, cudatoolkit 9.0, cudnn 7.6.0, each epoch takes ~4 seconds.
With pytorch 1.1.0, torchvision 0.3.0 and cudatoolkit 10.0, cudnn 7.5.1, each epoch takes ~7 seconds.
Let me know if someone has observed a similar issue and any help in rectifying is appreciated. |
st83930 | I don’t know if that’s the issue, but your 0.4.0 version comes with cudnn 7.6.0, which should be faster than the cudnn version of your pytorch 1.1.0 according to this link. If you use GPUs, the operations are often dispatched to cudnn ops. |
st83931 | That’s probably the issue, I guess.
But that’s what is installed with the respective versions of PyTorch and CUDA. It seems I will have to install PyTorch from source. |
st83932 | when use:
optimizer = torch.optim.SGD( model.parameters() )
What does the model.parameters() include?
e.g. what I know are as follows:
network weight, bias
BN beta, gamma
nn.Parameter
Is there anything else?
Thank you! |
st83933 | Hi all!
Is the following pipeline suitable to train at the same time two models in two different GPUs, but using the same input data to optimize the RAM usage?
Or are there any “hidden” issues or data sharing/concurrency I am unaware of?
elements = [list of input data loaded in RAM memory]
customDataset_valid = CustomDataset(elements[valid_from: valid_to])
customDataset_test = CustomDataset(elements[test_from: test_to])
customDataset_train = CustomDataset(elements[train_from: train_to])
loader_valid_1 = torch.utils.data.DataLoader(dataset=customDataset_valid, batch_size=BS, num_workers=0)
loader_test_1 = torch.utils.data.DataLoader(dataset=customDataset_test, num_workers=0)
loader_train_1 = torch.utils.data.DataLoader(dataset=customDataset_train, batch_size=BS, num_workers=0)
loader_valid_2 = torch.utils.data.DataLoader(dataset=customDataset_valid, batch_size=BS, num_workers=0)
loader_test_2 = torch.utils.data.DataLoader(dataset=customDataset_test, num_workers=0)
loader_train_2 = torch.utils.data.DataLoader(dataset=customDataset_train, batch_size=BS, num_workers=0)
net1 = NET_A()
net1 = net1.to(device_1, dtype=torch.float)
criterion1 = nn.CrossEntropyLoss().to(device_1, dtype=torch.float)
optimizer1 = optim.SGD(net1.parameters())
thread1 = threading.Thread(target=my_train, args=(device_1, criterion1, optimizer1, net1, loader_train_1, loader_valid_1))
net2 = NET_B()
net2 = net2.to(device_2, dtype=torch.float)
criterion2 = nn.CrossEntropyLoss().to(device_2, dtype=torch.float)
optimizer2 = optim.SGD(net2.parameters())
thread2 = threading.Thread(target=my_train, args=(device_2, criterion2, optimizer2, net2, loader_train_2, loader_valid_2))
thread1.start()
thread2.start()
thread1.join()
thread2.join()
my_train is a function that simply performs the training operations using the given arguments.
Thanks. |
st83934 | Hello, everyone! How can I define a function that takes a dataset and an index list as input and returns a tensor?
For example, given a dataset defined as below
import torch
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils

dataroot = "datasets/celebA/"
image_size = 64
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
I want to define a function
def minibatch(dataset, indices):
    """
    :param dataset:
    :param indices: an index list
    :return: a N*C*H*W tensor that contains feature vectors corresponding to samples with these indices
    """ |
st83935 | Solved by eltoto1219 in post #4
Once you load the dataset, create a dataloader, and since the dataloader and then you can do something like:
a = iter(dataloader)
b = next(a)
where b is a minibatch.
An example:
dataset = VQA_dataset(ROOT_DIR, train = False)
loader = torch.utils.data.DataLoader(
dat… |
st83936 | You can order dataloader indices as you want and use typicall pytorch dataloader. You will sequentially load batches with the mixtures that you want. |
st83937 | I found out this is actually very easy
def batch(batch_size=batch_size, lower=0, upper=n_sample):
    indices = np.random.randint(low=lower, high=upper, size=batch_size)
    batch_data = torch.FloatTensor(batch_size, nc, image_size, image_size)
    for i in range(batch_size):
        sample, target = dataset[indices[i]]
        batch_data[i] = sample
    return batch_data |
st83938 | Once you load the dataset and create a DataLoader, you can do something like:
a = iter(dataloader)
b = next(a)
where b is a minibatch.
An example:
dataset = VQA_dataset(ROOT_DIR, train=False)
loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=3,
    shuffle=True,
)
data = next(iter(loader))
v, bb, spat, obs, a, q, q_len, item = data
torch.save(data, DUMMY_DATA) |
st83939 | Training runs on a server with a single GPU (1080 Ti); however, it isn’t running on another workstation with two RTX 2080 Ti cards (NVLink), failing with this error:
Traceback (most recent call last):
File “”, line 1, in
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 114, in _main
prepare(preparation_data)
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 225, in prepare
_fixup_main_from_path(data[‘init_main_from_path’])
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 277, in _fixup_main_from_path
run_name=“mp_main”)
File “/opt/conda/lib/python3.6/runpy.py”, line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File “/opt/conda/lib/python3.6/runpy.py”, line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File “/opt/conda/lib/python3.6/runpy.py”, line 85, in _run_code
exec(code, run_globals)
File “/home/elib/Dev/Retinanet_PT/UseExample_train.py”, line 39, in
main.main(args)
File “/home/elib/Dev/Retinanet_PT/retinanet/main.py”, line 185, in main
torch.multiprocessing.spawn(worker, args=(args, world, model, state), nprocs=world)
File “/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 158, in spawn
process.start()
File “/opt/conda/lib/python3.6/multiprocessing/process.py”, line 105, in start
self._popen = self._Popen(self)
File “/opt/conda/lib/python3.6/multiprocessing/context.py”, line 284, in _Popen
return Popen(process_obj)
File “/opt/conda/lib/python3.6/multiprocessing/popen_spawn_posix.py”, line 32, in init
super().init(process_obj)
File “/opt/conda/lib/python3.6/multiprocessing/popen_fork.py”, line 19, in init
self._launch(process_obj)
File “/opt/conda/lib/python3.6/multiprocessing/popen_spawn_posix.py”, line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 143, in get_preparation_data
_check_not_importing_main()
File “/opt/conda/lib/python3.6/multiprocessing/spawn.py”, line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.’’’)
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable. |
st83940 | I figured it out:
a script that spawns processes for multiple GPUs has to guard its entry point with if __name__ == '__main__':. It works now |
st83941 | a group for discussing the problem of GNN, including paper reading, implement, ideas |
st83942 | I want to understand how the transformer works through a simple example, for instance predicting the next word. All the examples that I have found were too complex for me. If anyone has a simple example that I can understand, I would be very grateful. |
st83943 | for example:
fc = torch.nn.Linear(128, 512)
data1 = torch.randn(4, 512, 128)
data2 = data1[0:1, :200, :]  # the length of the data is changed
result1 = fc(data1)[0:1, :200, :]
result2 = fc(data2)
print(result1 - result2)  # result1 and result2 should be the same, however they differ by about 1e-7
I didn’t find any reason for this result; I hope someone can help me. Thank you. (torch version is 1.1.0)
The result is different under the conditions shown in the following image: |
st83944 | Floating point numbers have a limited precision, which results in these small differences.
If you need more precision for whatever reason, you could cast your tensors to .double().
Have a look at this nice blog post about floating point precision. |
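For illustration, reusing the snippet from the question (with the slicing written out) in float64:
fc = torch.nn.Linear(128, 512).double()
data1 = torch.randn(4, 512, 128, dtype=torch.float64)
result1 = fc(data1)[0:1, :200, :]
result2 = fc(data1[0:1, :200, :])
print((result1 - result2).abs().max())  # the remaining difference is now at double-precision round-off level (or exactly zero)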
st83945 | Hi guys,
I am working with ArcFaceLoss, code taken from: https://github.com/ronghuaiyang/arcface-pytorch/blob/master/models/metrics.py.
Here is the following snippet:
class ArcMarginProduct(nn.Module):
    r"""Implement of large margin arc distance:
        Args:
            in_features: size of each input sample
            out_features: size of each output sample
            s: norm of input feature
            m: margin
            cos(theta + m)
    """
    def __init__(self, in_features, out_features, s=30.0, m=0.50, easy_margin=False):
        super(ArcMarginProduct, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.s = s
        self.m = m
        self.weight = nn.Parameter(torch.FloatTensor(out_features, in_features))
        nn.init.xavier_uniform_(self.weight)

        self.easy_margin = easy_margin
        self.cos_m = math.cos(m)
        self.sin_m = math.sin(m)
        self.th = math.cos(math.pi - m)
        self.mm = math.sin(math.pi - m) * m

    def forward(self, input, label):
        # --------------------------- cos(theta) & phi(theta) ---------------------------
        cosine = F.linear(F.normalize(input), F.normalize(self.weight))
        sine = torch.sqrt((1.0 - torch.pow(cosine, 2)).clamp(0, 1))
        phi = cosine * self.cos_m - sine * self.sin_m
        if self.easy_margin:
            phi = torch.where(cosine > 0, phi, cosine)
        else:
            phi = torch.where(cosine > self.th, phi, cosine - self.mm)
        # --------------------------- convert label to one-hot ---------------------------
        # one_hot = torch.zeros(cosine.size(), requires_grad=True, device='cuda')
        one_hot = torch.zeros(cosine.size())
        print(one_hot.size(), label.size(), label.view(-1, 1).long().size())
        # torch.Size([32, 1108]) torch.Size([32]) torch.Size([32, 1])
        one_hot.scatter_(1, label.view(-1, 1).long(), 1)  # ERROR HERE
        # -------------torch.where(out_i = {x_i if condition_i else y_i) -------------
        output = (one_hot * phi) + ((1.0 - one_hot) * cosine)  # you can use torch.where if your torch.__version__ is 0.4
        output *= self.s
        # print(output)
        return output
After trying the above loss function,
metric_fc = ArcMarginProduct(512, NUM_CLASSES)
inputs = torch.randn(32, 512, 1, 1)
labels = torch.randn(32)
metric_fc(inputs.squeeze(), labels)
I am getting the error of Invalid index in scatter at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:551.
I check the dimensions by printing out and it seems that nothing is wrong. Any help is appreciated. Thank you. |
st83946 | Solved by ptrblck in post #2
label.view(-1, 1).long() contains invalid indices for the scatter_ operation.
Since one_hot has the shape [32, 1108], and you are trying to scatter into dim1, label should be clamped at [0, 1108]:
label = label.clamp(0, 1108)
one_hot.scatter_(1, label.view(-1, 1).long(), 1) |
st83947 | label.view(-1, 1).long() contains invalid indices for the scatter_ operation.
Since one_hot has the shape [32, 1108], and you are trying to scatter into dim1, label should be clamped at [0, 1108]:
label = label.clamp(0, 1108)
one_hot.scatter_(1, label.view(-1, 1).long(), 1) |
st83948 | Hi,
So say I have a 3D tensor [Channels x Width x Height] and I would like to perform softmax along the channel dimension. Do I specify the parameter dim=0 or dim=2?
Thank you |
st83949 | Solved by Nikronic in post #2
Hi,
We read dimensionality left to right, so in your case it have to be dim=0.
Good luck |
st83950 | Hi,
We read dimensionality left to right, so in your case it has to be dim=0.
Good luck |
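As a quick sanity check (the shapes are made up for illustration):
x = torch.randn(3, 32, 32)          # [Channels, Height, Width]
probs = torch.softmax(x, dim=0)     # softmax across the channel dimension
print(probs.sum(dim=0))             # all ones: the channel values at each spatial position sum to 1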
st83951 | Hi, I want to load my saved model as described in the tutorials and it worked very well a few days ago. But now I get the following error message
optimal_network.load_state_dict(t.load('trained_Model_Dropout_Test.pt'))
File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 751, in load_state_dict
state_dict = state_dict.copy()
AttributeError: 'function' object has no attribute 'copy'
The Code where it happens is:
def trace_Model(self):
    optimal_network = Network()
    optimal_network.load_state_dict(t.load('trained_Model_Dropout_Test.pt'))  # error here
    traced_model = t.jit.trace(optimal_network, t.rand(1, 1, 128, 128))
    print(f"is Training Mode: {optimal_network.training}")
    traced_model.save("traced_Model_Dropout_Test.pt")
It seems that there is a bug in the nn.Module class. How can I fix this? I also get the message that the code has changed; I tried the suggested advice, but it did not help.
SourceChangeWarning: source code of class '__main__.Network' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes. warnings.warn(msg, SourceChangeWarning) |
st83952 | I hope to rewrite a step function according to my own needs.
In this function, I need to create some tensors that have the same size as all the weight tensors (excluding the bias tensors).
I hope to get the sizes from the elements in param_groups, but I can't work out what this dict's elements mean.
Can someone explain this, or point me to where I can learn more about it? |
st83953 | When you initialize the optimizer using
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
or similar, pytorch creates one param_group. The learning rate is accessible via param_group['lr'] and the list of parameters is accessible via param_group['params']
If you want different learning rates for different parameters, you can initialise the optimizer like this.
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
This creates two parameter groups with different learning rates. That is the reason for having param_groups.
You might find reading the source for SGD to be useful. http://pytorch.org/docs/0.3.1/_modules/torch/optim/sgd.html#SGD |
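For instance, a common pattern (my own sketch, not from this thread) for decaying the learning rate of every parameter group during training:
for param_group in optimizer.param_groups:
    param_group['lr'] *= 0.1  # each group keeps its own, now scaled, learning rate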
st83954 | Thanks for your reply. I still have two questions about the source code:
for p in group['params']:
    if p.grad is None:
        continue
    d_p = p.grad.data
    if weight_decay != 0:
        d_p.add_(weight_decay, p.data)
    if momentum != 0:
        param_state = self.state[p]
Problem 1: Is each p in param_groups the weight or bias tensor itself (a kind of Variable), carrying its data, grad and other attributes? So p1 would be conv1's weight, p2 conv1's bias, p3 conv2's weight, p4 conv2's bias?
Problem 2: Is there only one group which includes all parameters (weights, biases, weight decay and so on)? If so, why is there an "s" in the name param_group(s)? |
st83955 | Please use the code formatting tool in future.
Each p is one of the parameter Variables of the model. p.grad is the Variable containing the gradients for that parameter.
There will be several param_groups if you specify different learning rates for different parameters when you initialize the optimizer (as explained above). |
st83956 | Thank you very much!
I will learn how to use that tool, thank you for your reply and advice! |
st83957 | What if I want to use different optimizers for different param groups?
Do I have to define two optimizers or is there any other way? |
st83958 | https://www.kaggle.com/shubhendumishra/recognizing-faces-in-the-wild-vggface-pytorch/
Here is a kernel I’m working on. Can somebody tell me why this is happening? During training the model outputs are fine, but during validation only one class is ever output. |
st83959 | Hi All,
I am struggling with a sampling problem.
I want to sample only some values in a matrix.
I would like to sample from a masked matrix.
For example, there is a 10x10 matrix like below.
[[0.894 0.107 0.94 0.801 0.793 0.992 0.437 0. 0. 0. ]
[0.444 0.956 0.002 0 0.043 0.504 0.904 0. 0. 0. ]
[0.978 0.22 0 0.918 0.342 0.168 0.927 0. 0. 0. ]
[0.334 0.166 0.094 0.853 0.619 0.58 0.745 0. 0. 0. ]
[0.021 0 0.23 0.232 0.339 0.794 0.351 0. 0. 0. ]
[0.758 0.566 0.01 0.141 0.306 0.766 0.757 0. 0. 0. ]
[0.71 0 0 0.738 0.6 0.154 0.917 0. 0. 0. ]
[0.757 0.573 0.324 0.04 0.075 0.216 0.872 0. 0. 0. ]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ]]
The masked entries are not necessarily zero.
I want to sample only in the non-masked parts.
I want to draw a sample for each row, e.g. with torch.multinomial.
Please give me some tips for this task.
Thanks. |
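One possible approach (my own sketch, not an answer given in the thread): zero out the masked weights and let torch.multinomial sample one index per row; every row passed to it must keep at least one unmasked entry.
import torch

probs = torch.rand(8, 10)                      # unnormalized sampling weights, one row per draw
mask = torch.zeros(8, 10, dtype=torch.uint8)   # 1 marks positions allowed to be sampled
mask[:, :7] = 1

masked_probs = probs * mask.float()                       # masked positions get weight 0
samples = torch.multinomial(masked_probs, num_samples=1)  # one sampled column index per row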
st83960 | Hi,
I am trying to implement a 1D CNN network for 1D signal processing. I managed to implement a simple network taking some input and giving me an output after processing in a conv1D layer followed by a fully connected relu output layer.
However, I wanted to apply MaxPool1d and ran into trouble with the size of its output, which is needed to calculate the input size of the fully connected output layer. It seems that if stride = kernel_size, then for odd input lengths my implementation of the formula provided in the docs does not calculate the output size of MaxPool1d correctly.
The following code illustrates this:
import numpy as np
import torch

# Some fake X input of dimension (100 samples, 1 channel, 1999 length)
X = torch.FloatTensor(np.ones((100, 1, 1999)))

# init parameters
input_channels = 1

# First conv layer parameters
out_channels_conv1 = 10  # number of kernels
Conv1_padding = 1
Conv1_dilation = 1
Conv1_kernel_size = 3
Conv1_stride = 1

# MaxPool parameters
MaxPool_kernel_size = 3
MaxPool_stride = 3
MaxPool_padding = 0
MaxPool_dilation = 1

input_size = X.shape[2]

# first conv layer
conv1 = torch.nn.Conv1d(input_channels, out_channels_conv1, Conv1_kernel_size, stride=Conv1_stride, padding=Conv1_padding)
out1 = conv1(X)

# calculating the output size of Conv1 with the formula from the PyTorch docs
L_out = ((input_size + 2*Conv1_padding - Conv1_dilation*(Conv1_kernel_size-1) - 1)/Conv1_stride + 1)
if int(L_out) == out1.shape[2]:
    print("Length at output of conv1 equals calculated length so everything looks good.\n Now pushing output of conv1 in MaxPool1d...")

# Now pushing data through MaxPool1d
MP1 = torch.nn.MaxPool1d(MaxPool_kernel_size, stride=MaxPool_stride)
out2 = MP1(out1)
print("Observed length is {}".format(out2.shape[2]))
Lout2 = (L_out + 2*MaxPool_padding - MaxPool_dilation*(MaxPool_kernel_size-1) - 1)/MaxPool_stride + 1
print("Calculated length with Pytorch doc is {}".format(Lout2))
Output highlights that observed length is 666 while calculated length with the formula from Pytorch doc is 666.33…
Did I miss something? Should stride always be smaller than kernel_size? Or is there something wrong with my calculation? |
st83961 | Solved by Joel_Wu in post #2
You are missing the round-down operation (⌊·⌋) in your calculation of Lout2 |
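For reference, applying the floor to the calculation from the question (same variable names) gives the observed 666:
import math
Lout2 = math.floor((L_out + 2*MaxPool_padding - MaxPool_dilation*(MaxPool_kernel_size - 1) - 1)/MaxPool_stride + 1)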
st83962 | Ah thanks, I did not notice the round-down operation; I misread it as brackets.
Thanks! |
st83963 | Solved by ptrblck in post #2
set_grad_enabled can be used as a function or context manager as an alternative for torch.no_grad and torch.enable_grad. |
st83964 | set_grad_enabled can be used as a function or context manager as an alternative for torch.no_grad and torch.enable_grad. |
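For illustration (is_train, model and x are placeholders), both usages look like:
torch.set_grad_enabled(False)            # as a function: disables gradient tracking globally
torch.set_grad_enabled(True)             # re-enable it

with torch.set_grad_enabled(is_train):   # as a context manager: conditional on a flag
    output = model(x)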
st83965 | Just as torch.std() exists for the standard deviation,
what are the built-in functions for covariance and correlation?
i.e.
cov = (a * b).mean() - a.mean() * b.mean()
cor = cov / (a.std() * b.std()) |
st83966 | I’m dealing with segmentation now.
I want to use cross_entropy_loss together with dice_loss. The target masks have shape [4, 512, 512] with class indices 0 and 1, and the model output has shape [4, 2, 512, 512].
masks_shape --> torch.Size([4, 512, 512])
output_masks_shape --> torch.Size([4, 2,512, 512])
I have got the dice_loss function from another site.
def dice_loss(input, target):
    smooth = 1.
    loss = 0.
    for c in range(n_classes):
        iflat = input[:, c].view(-1)
        tflat = target[:, c].view(-1)
        intersection = (iflat * tflat).sum()
        w = class_weights[c]
        loss += w * (1 - ((2. * intersection + smooth) /
                          (iflat.sum() + tflat.sum() + smooth)))
    return loss
How do I reshape the target of shape [4, 512, 512] (containing the indices 0, 1) to [4, 2, 512, 512]?
I also want to use class_weights because I’m dealing with unbalanced classes.
How do I calculate class_weights? |
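For the reshaping part, one common approach (my own sketch, not an answer given in the thread) is a one-hot encoding of the class indices via scatter_:
masks = torch.randint(0, 2, (4, 512, 512))   # stand-in for the target with class indices, shape [4, 512, 512]
one_hot = torch.zeros(4, 2, 512, 512)
one_hot.scatter_(1, masks.unsqueeze(1), 1)   # shape [4, 2, 512, 512], one channel per class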
st83967 | We don't support C, but we do support C++.
See documentation here:
https://pytorch.org/cppdocs/
See our examples here:
GitHub: pytorch/examples (A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.) |