st82668 | To start off, I’m not sure if this is a Windows-only issue or not, since many objects aren’t picklable under Windows. If anyone knows for certain, please let me know. I could get access to a Linux machine as well.
Nonetheless, I’m trying to piece together a dataloader for a large set of very long videos. I need to sample random frames from these videos, so sequential decoding (which would be the fastest and wouldn’t need multiprocessing) is out of the question. I’ve been profiling different approaches and found cv2 to be the fastest: cv2 (~3 seconds for 100 random indices), then pyav (~10 seconds), then ffmpeg (~20 seconds, using subprocesses). Unfortunately, running the dataloader with cv2 and multiple workers is not possible, as cv2.VideoCapture objects aren’t picklable. Creating the capture objects inside __getitem__ is slower than the ffmpeg method, though.
In short this is what the Dataset looks like:
class VideoDataset(Dataset):
    def __init__(self, vid_paths):
        self.caps = [cv2.VideoCapture(vid_path) for vid_path in vid_paths]

    def grab_idcs(self, idx):
        ...

    def __getitem__(self, idx):
        vid_idx, frame_idx = self.grab_idcs(idx)
        cap = self.caps[vid_idx]
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        frame = cap.read()
        return frame
The solution would be to be able to create the list of capture objects within each worker, but I doubt there is an option to do that. Does anyone have an idea of how to avoid the pickling error? |
st82669 | I found a solution, which is still not quite fast enough though. Just save the video paths in __init__, and initialize the capture objects on the first call of __getitem__:
class VideoDataset(Dataset):
    def __init__(self, vid_paths):
        self.vid_paths = vid_paths
        self.caps = None

    def grab_idcs(self, idx):
        ...

    def __getitem__(self, idx):
        vid_idx, frame_idx = self.grab_idcs(idx)
        if self.caps is None:
            self.caps = [cv2.VideoCapture(vid_path) for vid_path in self.vid_paths]
        cap = self.caps[vid_idx]
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        frame = cap.read()
        return frame |
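With the lazy initialization above, each DataLoader worker process builds its own capture objects on its first __getitem__ call, so nothing has to be pickled. A minimal usage sketch (the paths and batch size are just placeholders):
from torch.utils.data import DataLoader

dataset = VideoDataset(["clip_a.mp4", "clip_b.mp4"])  # hypothetical paths
loader = DataLoader(dataset, batch_size=8, num_workers=4, shuffle=True)
for batch in loader:
    pass  # each worker process ends up holding its own cv2.VideoCapture objects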
st82670 | I did something like this:
data = data.to(device)
label = label.to(device)
out1 = model1(data, label)
loss1 = model1.loss
loss1.backward()
out2 = model2(data, label)
loss2 = model2.loss
loss2.backward()
model1 and model2 are identical models. With the code above, model2 did not learn correctly at all.
But if I copy the batch to the GPU twice and train model1 and model2 on the respective copies, both model1 and model2 turn out fine.
I thought that once the backward of loss1 is done, the computational graph would be freed and have no effect on the following computations.
Can anyone explain this? |
st82671 | Are you manipulating data inplace in model1 somehow, so that model2 gets a different data distribution? |
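For illustration, a minimal sketch of how an in-place op inside the first model would corrupt the shared batch for the second one (the normalization step here is just a made-up example):
import torch

data = torch.randn(4, 3)

def model1_forward(x):
    x.sub_(x.mean())           # in-place: modifies the caller's tensor
    return x.sum()

def model2_forward(x):
    return x.sum()             # now sees the already-modified data

out1 = model1_forward(data)
out2 = model2_forward(data)    # no longer the original batch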
st82672 | Hi all,
I implemented an autoencoder with dense blocks in PyTorch. The total number of parameters is around 0.8M, the image size is 100*100, and the batch size for training is 100. However, when I run on CUDA, a "cuda out of memory" error occurs. Can anyone help me? My GPU card:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.14       Driver Version: 430.14       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 745     Off  | 00000000:01:00.0  On |                  N/A |
| 25%   59C    P0    N/A /  N/A |   4037MiB /  4043MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
Thanks. |
st82673 | Hi,
There is not much you can do to reduce GPU memory usage except reducing the batch size and using in-place operations for activations like ReLU where possible (if you see an error saying that a tensor needed for backward has been modified by an in-place operation, that means you cannot use them at that particular place). |
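For example, switching activations to their in-place variants is usually a one-line change (a sketch, not tied to your actual model):
import torch.nn as nn

act = nn.ReLU()              # out-of-place: keeps an extra copy of the activation input
act = nn.ReLU(inplace=True)  # in-place: overwrites the input tensor and saves that memory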
st82674 | Hi,
Thanks for your reply. My GPU card only has around 4 GB of memory. Will this error be resolved if I use a GPU card with more memory?
Best regards |
st82675 | This might be the case. If it’s possible to post the model definition and some dummy inputs (and targets) I could check the memory usage on larger GPUs. |
st82676 | Hi, ptrblck,
Thanks for your kind help. The following link is the github of this code:
https://github.com/xclmj/Dense-Convolutional-Encoder-Decoder-Network-with-Dense-Block.git
You can download the data from : https://drive.google.com/drive/folders/1VkYtS2oe-vwapUjwIG_0GdFzR_RMw2xW?usp=sharing.
best regard |
st82677 | Thanks for the code.
I’ve used the default arguments and this script to run the test:
device = 'cuda'
# enters time in latent
model = DenseEDT(1,
2,
blocks=(4, 9, 4),
times=(5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100),
growth_rate=24,
drop_rate=0,
bn_size=100,
num_init_features=48,
bottleneck=False,
time_channels=1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
weight_decay=5e-4)
criterion = nn.MSELoss()
data = torch.randn(10, 1, 224, 224, device=device)
times = torch.randn(10, device=device)
output = model(data, times)
target = torch.empty_like(output).normal_()
loss = criterion(output, target)
loss.backward()
print('mem allocated {:.3f}MB'.format(torch.cuda.memory_allocated()/1024**2))
> mem allocated 16.433MB
print('max mem allocated {:.3f}MB'.format(torch.cuda.max_memory_allocated()/1024**2))
> max mem allocated 2146.784MB
print('mem cached {:.3f}MB'.format(torch.cuda.memory_cached() / 1024**2))
> mem cached 2440.000MB
nvidia-smi shows a usage of approx. 3500MB using a Titan V GPU with 12GB memory.
The additional usage comes from the cuda context, which will take some space on the GPU. |
st82678 | Hello,
I’d like to do the following using element-wise operations:
I have a tensor a = [1,2,3,4]
And I have a bijection, that is represented by a dictionary for example
b = {1:4,2:1,3:2,4:3}
The transform would be :
f(a,b) = [4,1,2,3]
1 becomes 4, 2 becomes 1, and so forth…
Is this operation possible using only element-wise operations? |
st82679 | You could try to create an index tensor from b and index a directly:
a = torch.tensor([1, 2, 3, 4])
b = torch.tensor([3, 0, 1, 2])
a[b]
Note that the index starts at 0 in python/PyTorch. |
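If the mapping only exists as a Python dict, one possible approach (a sketch, assuming the keys are small non-negative integers) is to turn it into a lookup table first and then index it with a:
import torch

a = torch.tensor([1, 2, 3, 4])
b = {1: 4, 2: 1, 3: 2, 4: 3}

lut = torch.zeros(max(b) + 1, dtype=a.dtype)  # lookup table: lut[old_value] = new_value
for k, v in b.items():
    lut[k] = v

print(lut[a])  # tensor([4, 1, 2, 3])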
st82680 | I’m trying to use U-Net for segmentation, but an error appears in the cat layer.
This is the down-conv and up-conv part:
from torch.nn import Module
import torch.nn.functional as F

class DownConv(Module):
    def __init__(self, in_feat, out_feat, drop_rate=0.4, bn_momentum=0.1):
        super(DownConv, self).__init__()
        self.conv1 = nn.Conv2d(in_feat, out_feat, kernel_size=3, padding=1)
        self.conv1_bn = nn.BatchNorm2d(out_feat, momentum=bn_momentum)
        self.conv1_drop = nn.Dropout2d(drop_rate)
        self.conv2 = nn.Conv2d(out_feat, out_feat, kernel_size=3, padding=1)
        self.conv2_bn = nn.BatchNorm2d(out_feat, momentum=bn_momentum)
        self.conv2_drop = nn.Dropout2d(drop_rate)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.conv1_bn(x)
        x = self.conv1_drop(x)
        x = F.relu(self.conv2(x))
        x = self.conv2_bn(x)
        x = self.conv2_drop(x)
        return x

class UpConv(Module):
    def __init__(self, in_feat, out_feat, drop_rate=0.4, bn_momentum=0.1):
        super(UpConv, self).__init__()
        self.up1 = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.downconv = DownConv(in_feat, out_feat, drop_rate, bn_momentum)

    def forward(self, x, y):
        x = self.up1(x)
        x = torch.cat([x, y], dim=1)
        x = self.downconv(x)
        return x
This is the model part:
class Unet(Module):
    def __init__(self, drop_rate=0.4, bn_momentum=0.1):
        super(Unet, self).__init__()
        # Downsampling path
        self.conv1 = DownConv(1, 64, drop_rate, bn_momentum)
        self.mp1 = nn.MaxPool2d(2)
        self.conv2 = DownConv(64, 128, drop_rate, bn_momentum)
        self.mp2 = nn.MaxPool2d(2)
        self.conv3 = DownConv(128, 256, drop_rate, bn_momentum)
        self.mp3 = nn.MaxPool2d(2)
        # Bottleneck
        self.conv4 = DownConv(256, 256, drop_rate, bn_momentum)
        # Upsampling path
        self.up1 = UpConv(512, 256, drop_rate, bn_momentum)
        self.up2 = UpConv(384, 127, drop_rate, bn_momentum)
        self.up3 = UpConv(191, 64, drop_rate, bn_momentum)
        self.conv9 = nn.Conv2d(64, 1, kernel_size=3, padding=1)

    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.mp1(x1)
        x3 = self.conv2(x2)
        x4 = self.mp2(x3)
        x5 = self.conv3(x4)
        x6 = self.mp3(x5)
        # Bottom
        x7 = self.conv4(x6)
        # print(x7.size(), x5.size())
        # Up-sampling
        x8 = self.up1(x7, x5)
        print(x8.size(), x3.size())
        x9 = self.up2(x8, x3)
        x10 = self.up3(x9, x1)
        x11 = self.conv9(x10)
        preds = F.sigmoid(x11)
        return preds
In the last printed line I have no idea why the sizes have changed like this; my input was resampled from [2, 1, 512, 512] to [2, 1, 256, 256].
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 86, 86]) torch.Size([2, 128, 173, 173])
error exception
32 def forward(self, x, y):
33 x = self.up1(x)
---> 34 x = torch.cat([x, y], dim=1)
35 x = self.downconv(x)
36 return x
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 173 and 172 in dimension 2 |
st82681 | This is the last print line from x1 to x11,
the second is x before the up-convolution,
and the last is after the concatenation.
torch.Size([2, 64, 256, 256]) torch.Size([2, 64, 128, 128]) torch.Size([2, 128, 128, 128]) torch.Size([2, 128, 64, 64]) torch.Size([2, 256, 64, 64]) torch.Size([2, 256, 32, 32]) torch.Size([2, 256, 32, 32]) torch.Size([2, 256, 64, 64]) torch.Size([2, 127, 128, 128]) torch.Size([2, 64, 256, 256]) torch.Size([2, 1, 256, 256])
torch.Size([2, 256, 86, 86])
torch.Size([2, 256, 172, 172]) |
st82682 | I see, I think the input tensor shape is the problem.
I tried to run your code.
This works fine:
rand_tensor = torch.rand(8, 1, 256, 256)
output = model(rand_tensor)
This gives the same error as yours:
rand_tensor = torch.rand(8, 1, 255, 256)
output = model(rand_tensor)
So try adjusting the input shape to an even number or a power of 2.
PS: Your code has memory leaks; it won’t train for very long. |
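If the input size cannot be controlled, another common workaround (not from this thread, just a sketch) is to pad the upsampled tensor so it matches the skip connection right before the concatenation, e.g. a modified UpConv.forward:
import torch.nn.functional as F

def forward(self, x, y):
    x = self.up1(x)
    # pad x on the right/bottom so its spatial size matches the skip connection y
    dh = y.size(2) - x.size(2)
    dw = y.size(3) - x.size(3)
    x = F.pad(x, (0, dw, 0, dh))
    x = torch.cat([x, y], dim=1)
    x = self.downconv(x)
    return x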
st82683 | Of course, my input size is torch.Size([2, 1, 256, 256])
If I resample with my manual code there is no problem, but when I use the Resample from medicaltorch I get an error like that.
original size is torch.Size([2, 1, 512, 512])
train_transform = transforms.Compose([
mt_transforms.Resample(1.385, 1.385),
mt_transforms.ElasticTransform(alpha_range=(40.0, 60.0), sigma_range=(2.5, 4.0), p=0.3),
mt_transforms.ToTensor(),
After resampling I get torch.Size([2, 1, 256, 256]).
And how do I avoid the memory leak?
Thank you. |
st82684 | In the forward function of Unet you are using several variables (x1, x2, …) to store the intermediate results of the layers, which remain in memory; I would rather use only one variable like x, as in your other class. |
st82685 | Thank you, I get it. Anyway, do you have any idea why the concatenation fails and the size of y is off by 1?
Here is the Resample class from the library:
class Resample(MTTransform):
    def __init__(self, wspace, hspace,
                 interpolation=Image.BILINEAR,
                 labeled=True):
        self.hspace = hspace
        self.wspace = wspace
        self.interpolation = interpolation
        self.labeled = labeled

    def __call__(self, sample):
        rdict = {}
        input_data = sample['input']
        input_metadata = sample['input_metadata']

        # Voxel dimension in mm
        hzoom, wzoom = input_metadata["zooms"]
        hshape, wshape = input_metadata["data_shape"]

        hfactor = hzoom / self.hspace
        wfactor = wzoom / self.wspace

        hshape_new = int(hshape * hfactor)
        wshape_new = int(wshape * wfactor)

        input_data = input_data.resize((wshape_new, hshape_new),
                                       resample=self.interpolation)
        rdict['input'] = input_data

        if self.labeled:
            gt_data = sample['gt']
            gt_metadata = sample['gt_metadata']
            gt_data = gt_data.resize((wshape_new, hshape_new),
                                     resample=self.interpolation)
            np_gt_data = np.array(gt_data)
            np_gt_data[np_gt_data >= 0.5] = 1.0
            np_gt_data[np_gt_data < 0.5] = 0.0
            gt_data = Image.fromarray(np_gt_data, mode='F')
            rdict['gt'] = gt_data

        sample.update(rdict)
        return sample |
st82686 | I have made a simple MNIST digit recognition model using PyTorch.
Now I want an Android app in which the camera opens and takes a picture, and then that image is run through the MNIST model and the output is shown…
My question is how I can implement a PyTorch model on Android; do I need to convert it to TensorFlow/Keras first? I tried that as well but couldn’t do it…
Please either tell me how to run the model directly on Android or how I can convert the model to TensorFlow…
Kindly explain it step by step so that I can follow easily… Thank you in advance! |
st82687 | You can use ONNX to export your models to Caffe2 and run it on an Android device.
Have a look at this 1.4k or this 777 tutorial. |
st82688 | Thanks for replying, but one more issue I am facing:
I run all my Python code in a Jupyter notebook; I have created a virtual environment and installed all the libraries in it…
I have also installed onnx using pip… It says successfully installed.
But when I try to import onnx in Jupyter it says "No module named onnx", as if it is not installed. What should I do? Please help…
I have installed protobuf also… Still no luck! |
st82689 | Probably you didn’t select the right kernel. You can see your current selected kernel in the upper right.
To change it go to Kernel -> Change Kernel -> Select your kernel. |
st82690 | Hi Rahul,
I am experiencing the same problem. I started this project to document down all the steps, issues and problems I experience along the way in my attempt to ship a neural network developed in PyTorch 1.0 nightly to Android devices:
github.com
cedrickchee/data-science-notebooks/blob/master/notebooks/deep_learning/fastai_mobile/README.md 137
# Fast.ai Mobile Camera
:tada: Check out a working PyTorch-Caffe2 implementation on mobile: :tada:
- [Android Camera app demo (video)](https://youtu.be/TYkoaVNCMos)
**Guide - How I Shipped a Neural Network on Android/iOS Phones with PyTorch and Android Studio/Xcode**
I'll walk you through every step, from problem all the way to building and deploying the Android/iOS app to mobile phones.
- Learn how to ship SqueezeNet from PyTorch to Caffe2 + Android/iOS app. Please take a look at this [notebook](https://nbviewer.jupyter.org/github/cedrickchee/data-science-notebooks/blob/master/notebooks/deep_learning/fastai_mobile/shipping_squeezenet_from_pytorch_to_android.ipynb).
- Android project for AI Camera app tutorial in this [notebook](https://nbviewer.jupyter.org/github/cedrickchee/data-science-notebooks/blob/master/notebooks/deep_learning/fastai_mobile/shipping_squeezenet_from_pytorch_to_android.ipynb#Fast.ai-Mobile-Camera-Project).
- iOS project (TBD)
- Get started with an introduction to [Open Neural Network Exchange format (ONNX)](https://onnx.ai/) in this Jupyter [notebook](https://nbviewer.jupyter.org/github/cedrickchee/data-science-notebooks/blob/master/notebooks/deep_learning/fastai_mobile/onnx_from_pytorch_to_caffe2.ipynb).
[Source code for the Android app](https://github.com/cedrickchee/pytorch-android).
---
## Background
The specific error “No module named onnx” and the resolutions are available in this project. You can jump to this specific Jupyter notebook 24 and find your answer there.
I hope that is useful. |
st82691 | Hi Rahul,
you can also install pytorch directly on Android using Termux/Archlinux
then take picture with Termux-API [termux-camera-photo]
The performance is enough for testings https://youtu.be/ox9TxZhmJ30 169
I started this github to describe how to do it.
GitHub
smscryptor/pytorch-on-android 223
setup pytorch on android. Contribute to smscryptor/pytorch-on-android development by creating an account on GitHub.
Regards |
st82692 | Since we have PyTorch 1.0 now, should we install Caffe2 as well? What about if the project contains a PyTorch C++ layer? |
st82693 | In the meantime, I put up my patch to build libtorch for Android 257. I was going to write a tutorial, too, but it’s hard to find time for it.
Best regards
Thomas |
st82694 | @ptrblck
I have a chatbot model built with PyTorch.
How do I embed it in an Android app; how can I implement a PyTorch model on Android?
Please tell me how I can convert the model to TensorFlow…
Kindly explain it step by step so that I can follow easily… Thanks in advance! |
st82695 | The onnx --> caffe2 pipeline has never really succeeded for me. There is always a C++ compile error regarding some undefined Tensor function. Sad. |
st82696 | I have a feature map of size Bx3xDxHxW, with values in the range 0 to 1. I want to use it as a flow field and input it to the grid_sample function. As the documentation says:
grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1] . For example, values x = -1, y = -1 is the left-top pixel of input , and values x = 1, y= 1 is the right-bottom pixel of input
So, how can I normalize the feature map to the range [-1, 1] to use it as a grid? Thanks.
This is what I did
spatial_gds = torch.rand(1, 2, 3, 3, 3)
print(spatial_gds)
_, _, D, H, W = spatial_gds.size()
grids = [torch.cumsum(spatial_gds[:, i, ...], dim=i + 2)
         for i in range(2)]
# print(grids)
grids_norm = torch.stack([(grid / dim) * 2 - 1. for dim, grid in zip([D, H, W], grids)], dim=4)
print(grids_norm) |
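In case it helps, here is a minimal 2D sketch of turning per-pixel sampling positions in [0, 1] into a grid for grid_sample; it assumes the two channels already encode absolute x/y positions normalized to [0, 1], which may differ from the cumsum formulation above:
import torch
import torch.nn.functional as F

feat = torch.randn(1, 3, 8, 8)    # (B, C, H, W)
pos = torch.rand(1, 2, 8, 8)      # values in [0, 1]; channel 0 = x, channel 1 = y

grid = pos * 2 - 1                # map [0, 1] -> [-1, 1]
grid = grid.permute(0, 2, 3, 1)   # grid_sample expects (B, H, W, 2)
warped = F.grid_sample(feat, grid)
print(warped.shape)               # torch.Size([1, 3, 8, 8])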
st82697 | Adagrad and SparseAdam work great for sparse training because there are separate sums for each of the parameters.
Are there any other recommended optimizers for embedding training? |
st82698 | Hi, I know that argmax can work differently on CPU and GPU, as follows:
import torch
a=torch.tensor([[1,1,1,1,0],[1,1,0,0,0]])
ac=torch.tensor([[1,1,1,1,0],[1,1,0,0,0]]).cuda()
a.argmax(dim=1)
=>tensor([3, 1])
ac.argmax(dim=1)
=>tensor([0, 0], device='cuda:0')
But how do I get the index of the last max on the GPU? I need to get [3, 1] instead of [0, 0] as in the above example. |
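One possible workaround (just a sketch, since argmax does not guarantee which duplicate it returns): build a mask of the maxima and take the largest position, which behaves the same on CPU and GPU:
import torch

a = torch.tensor([[1, 1, 1, 1, 0], [1, 1, 0, 0, 0]])   # works the same after .cuda()
mask = a == a.max(dim=1, keepdim=True)[0]               # True wherever the row maximum sits
idx = torch.arange(a.size(1), device=a.device)
last_max = (mask.long() * idx).max(dim=1)[0]            # largest index among the maxima
print(last_max)  # tensor([3, 1])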
st82699 | def grid_minimal_dist(centroids, grids):
    r"""
    Get the minimal distance for each point
    in the grid to a centroid
    """
    # Number of samples
    t = centroids.size()[0]
    res = []
    for c, g in zip(centroids, grids):
        # Number of centroids
        n = c.size()[0]
        # Number of grid points
        m = g.size()[0]
        # Match the sizes of centroids and grid points
        c = c.repeat(m, 1)
        g = g.repeat_interleave(n, dim=0)
        # Compute the L2 norm
        distance = g - c
        distance = distance.pow(2)
        distance = torch.sum(distance, 1)
        distance = torch.sqrt(distance)
        # Regroup for each point of the grid
        distance = distance.view(m, n)
        # Get the minimal distance
        distance, _ = torch.min(distance, 1)
        # Append the result
        res.append(distance)
    res = torch.stack(res)
    return res
Let us denote by t the number of samples.
centroids is a collection (list, tuple, etc.) of t matrices of shape (n, 2); n is the same as in the code.
Example: centroids = [ [ [1,0], [0,2] ], [ [0,0] ] ]
grids is a (t, m, 2) tensor. There are t grids of m points in a 2-dimensional space.
The purpose of this function is to compute the distance to the nearest centroid for each point in grids.
So, take a single centroid matrix [[[1,0],[0,2]]] and a grid [[[0,0]]].
In this example, t = 1, m = 1, n = 2.
We want to compute the result vector of length m (the size of the grid). The result in this case will be
res = [[1]]
because the nearest point to (0,0) was (1,0) and their L2 distance is 1.
This is done inside the for loop.
Now I would like to parallelize this, so t can be as large as 1000, for example.
The only problem with using only element-wise operations is n, as it may vary for each value of t. Therefore, centroids cannot be a tensor (unless by using a mask perhaps?).
Sorry if this is confusing. Please ask any question for clarification purposes.
Best, |
st82700 | I don’t know if I got the point of your function, but as far as I understood, I think you can do the following:
import torch

def grid_minimal_dist(centroids, grids):
    z = (centroids[:, None, None, :] - grids).norm(dim=3)
    return z.min(0)[0].min(0)[0]

centroids = torch.Tensor([[1, 0], [0, 2], [0, 0]])
grids = torch.Tensor([[[0, 0]]])
print(grid_minimal_dist(centroids, grids))

n = 3
t = 4
m = 5
centroids = torch.rand(n, 2)
grids = torch.rand(t, m, 2)
print(grid_minimal_dist(centroids, grids))
Is this what you wanted? |
st82701 | Thanks for your answer.
Let’s define a function named g. g takes as input two tensors: centroid, which is of size (n, 2), and grid, which is of size (m, 2).
g will return a tensor ‘res’ of size (m), in which each value is the distance from the corresponding grid point to its nearest centroid.
And that is done inside the for loop.
The for loop is there because ideally I’d like to take multiple grids and multiple centroid sets and use element-wise operations to make the computation quick. The number of grids and centroid sets is denoted by t.
In your example, centroids would be
centroids = torch.rand(t, n, 2)
But the problem is that n may vary for each t, so I think centroids must be a list of tensors, or I need to find a way to use a mask somehow so that the function knows how many centroids it has to deal with.
Sorry if it’s still confusing… |
st82702 | Thanks, I think I got it better now. You can always repeat a centroid, say the last one, in order to stack all centroid sets into one single tensor. You just need to take n as the maximum number of centroids over all t. The snippet above can be easily adapted to this case. I guess the desired output should be of shape (t, m), is that right? |
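To make that concrete, here is a rough sketch of the padding idea (repeating the last centroid of each set up to the maximum n, which does not change the minimum distance; the function name and broadcasting are just one possible way to write it):
import torch

def grid_minimal_dist_batched(centroid_list, grids):
    # centroid_list: list of t tensors of shape (n_i, 2); grids: tensor of shape (t, m, 2)
    n_max = max(c.size(0) for c in centroid_list)
    padded = torch.stack([
        torch.cat([c, c[-1:].expand(n_max - c.size(0), 2)], dim=0) if c.size(0) < n_max else c
        for c in centroid_list
    ])                                                                   # (t, n_max, 2)
    dists = (grids[:, :, None, :] - padded[:, None, :, :]).norm(dim=3)   # (t, m, n_max)
    return dists.min(dim=2)[0]                                           # (t, m)

centroid_list = [torch.rand(3, 2), torch.rand(5, 2)]
grids = torch.rand(2, 7, 2)
print(grid_minimal_dist_batched(centroid_list, grids).shape)  # torch.Size([2, 7])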
st82703 | I have two vectors, each of length n, and I want the element-wise multiplication of the two vectors. The result will be a vector of length n. |
st82704 | ex. a = (a1, a2, … an) and b = (b1, b2, … bn)
I want c = (a1*b1, a2*b2, … , an*bn) |
st82705 | Well this works in my case. Do you get a scalar running this?
a = torch.randn(10)
b = torch.randn(10)
c = a * b
print(c.shape) |
st82706 | If both tensors are stored on the GPU, then yes.
The operations are written for CPU and GPU tensors, so as long as the data is pushed to the device, the GPU will be used. |
st82707 | Hi there,
Is there a rule of thumb regarding mixing CPU and GPU based variables? As in, if I have a tensor going through a function with a mix of CPU-stored (non-tensor) and GPU-stored variables, is it worth it to declare each variable as a torch tensor on the GPU at the beginning of the function call? |
st82708 | Do you mean plain Python variables by “CPU-stored (non-tensor) variables”, e.g. like x = torch.randn(1) * 1.0?
Generally you should transfer the data to the same device, if you are working with tensors.
However, you won’t see much difference, if you are using scalars, as the wrapping will be done automatically. |
st82709 | Yeah, exactly. I’m venturing a little off the path and wasn’t sure if it was reasonable or if the scalars would pull the tensor from the GPU for computation (outside of a module).
Thanks for taking the time! |
st82710 | The documentation 2 states:
[I]n order to make computations deterministic on your specific problem on one specific platform and PyTorch release, there are a couple of steps to take. […] A number of operations have backwards that use atomicAdd , in particular […] many forms of pooling, padding, and sampling. There currently is no simple way of avoiding non-determinism in these functions.
Does this mean if I follow the guidelines, I will get deterministic results between individual runs, given the soft- and hardware does not change? Or does “no simple way” mean “no currently implemented way”?
I’m asking, because I do the following
def make_reproducible(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
make_reproducible()
cnn_training_loop()
and the loss deviates between runs after a few iterations. This deviation is at first only present on the 4th or 5th significant digit of the loss, but builds up over time. Since it is only this small at first, I suspect that it stems from some non-deterministic behavior rather than a bug in my implementation. |
st82712 | Hi,
You will get deterministic results as long as you don’t use the functions that are listed.
Indeed for these functions, no deterministic alternative is implemented in pytorch. |
st82713 | Hi,
I am trying to figure out why my GPU is crashing. At first I was passing the model and the train/val dataloaders to cuda and got the message "cuda out of memory".
Now I am passing just the model and every batch in the train loop to cuda.
There is something I don’t understand. At the very beginning of my notebook I run this code:
print(torch.cuda.memory_allocated())
print(torch.cuda.memory_cached())
and get:
1024
2097152
I don’t understand where the 2097152 is coming from.
My training loop is as follows:
1- loop on the train loader and calculate the train loss.
2 -loop on the val loader and calculate the val loss.
3 - call a defined score() function on train loader (that loops train loader and predict using the model)
4 - call a defined score() function on val loader.
Everything works fine until step 3. During the score function I receive this error:
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.61 GiB already allocated; 2.10 MiB free; 354.91 MiB cached)
the memory state at the end of step 2 is:
print(torch.cuda.memory_allocated())
print(torch.cuda.memory_cached())
3274752
48234496
And finally, here’s what I’m doing inside the score() function:
predicted = torch.tensor([])
Y = torch.tensor([])
for i, (inputs, targets) in enumerate(train_loader):
    inputs, targets = inputs.cuda(), targets.cuda()
    pred = model(inputs)
    pred = pred.cpu()
    predicted = torch.cat((predicted, pred.float()), 0)
    Y = torch.cat((Y, targets.cpu()), 0)
y_true = Y.numpy()
y_pred = predicted.detach().numpy()
and here’s the GPU I have:
Any tips to train my models locally ? I really appreciate your help ! |
st82714 | At the beginning of the script no memory should be allocated.
import torch
print(torch.cuda.memory_allocated())
> 0
print(torch.cuda.memory_cached())
> 0
In your score function you are storing the computation graphs of each forward pass in predicted.
This will most likely cause the out of memory issue.
The usual work flow would be to calculate the loss for each batch and perform an optimization step.
Could you explain your use case a bit and why you are storing predicted just to detach it afterwards?
If you don’t need to calculate a loss and call backward on it, you should detach pred after calculating it or wrap the loop in a with torch.no_grad() block. |
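For reference, a sketch of the score loop with the graph detached (either of the two marked lines on its own is enough to avoid storing the graphs):
predicted = torch.tensor([])
Y = torch.tensor([])
with torch.no_grad():                        # option 1: no graph is built at all
    for inputs, targets in train_loader:
        inputs = inputs.cuda()
        pred = model(inputs).detach().cpu()  # option 2: detach right after the forward pass
        predicted = torch.cat((predicted, pred.float()), 0)
        Y = torch.cat((Y, targets), 0)
y_true = Y.numpy()
y_pred = predicted.numpy()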
st82715 | Sorry for the late reply! I just realized that
1024
2097152
is coming from these two lines:
pos_weights = class_weight["1"] * torch.ones(1, dtype = torch.float32, device = 'cuda')
neg_weights = class_weight["0"] * torch.ones(1, dtype = torch.float32, device = 'cuda')
Which is weird! It’s just a one-element tensor! Can you please explain this huge difference between cached and allocated memory?
You are right! Not calling detach() on pred was causing the problem. Apparently I missed the fact that the graphs were being saved for backprop on every iteration. Now everything works well.
In the score() function I am calculating y_pred and then returning some scores to my training loop (accuracy, roc-auc, f1, …).
One last question: now that everything is working well, I re-checked my GPU performance and found that utilization is between 3% and 6%. Even when I increase the batch size to 256 the range is 5% to 20%. Why is the GPU not fully used? |
st82716 | ilyes:
Which is weird! it’s just a one element tensor! Can you please explain why this huge difference between cache and allocated memory.
If I’m not mistaken, cudaMalloc rounds allocations to 2MB blocks on newer GPUs, and 2097152 / 1024**2 is exactly 2MB, which would explain the minimal cache size.
ilyes:
One last question: Now as everything is working well, I re-checked my GPU performance and found that utilization is between 3% and 6% Even when I increase the batchsize to 256 the range is 5% to 20% . Why the GPU is not fully used ?
You might have a data loading bottleneck.
Profile your data loading overhead using the code from the ImageNet example 8. If you see some overhead in loading the data, check this post 12 from @rwightman, where he explains some possible workarounds. |
st82717 | I read through the posts you shared with me, and many other forum questions regarding this issue. I changed my dataloaders to the following:
X_train, X_val, y_train, y_val = train_test_split(X.astype('float32'), Y.astype('float32'), test_size=0.1, random_state=2)
X_train = torch.tensor(X_train)
y_train = torch.tensor(y_train)
train = torch.utils.data.TensorDataset(X_train,y_train)
train_loader = torch.utils.data.DataLoader(train, batch_size = 256, shuffle = True, num_workers= 4, pin_memory= True )
and:
X_val = torch.tensor(X_val)
y_val = torch.tensor(y_val)
val = torch.utils.data.TensorDataset(X_val,y_val)
val_loader = torch.utils.data.DataLoader(val, batch_size =256, shuffle = True, num_workers= 4, pin_memory= True)
here’s my simple train loop:
for epoch in range(last_epoch, 100):
    end = time.time()
    for i, (inputs, targets) in enumerate(train_loader):
        inputs, targets = inputs.cuda(), targets.cuda()
        print("train: time for loading batch {} is {}".format(i + 1, time.time() - end))
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss_train += loss.data.item()
        end = time.time()

    end = time.time()
    for i, (inputs, targets) in enumerate(val_loader):
        inputs, targets = inputs.cuda(), targets.cuda()
        print("val: time for loading batch {} is {}".format(i + 1, time.time() - end))
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss_val += loss.data.item()
        end = time.time()
This resulted in my GPU utilization fluctuating between 1% and 34%. From the timing output I think loading the very first batch is causing the bottleneck:
epoch 1
train: time for loading batch 1 is 0.5566799640655518
train: time for loading batch 2 is 0.0019986629486083984
train: time for loading batch 3 is 0.0019989013671875
train: time for loading batch 4 is 0.001997709274291992
train: time for loading batch 5 is 0.0009999275207519531
train: time for loading batch 6 is 0.001999378204345703
train: time for loading batch 7 is 0.0019981861114501953
train: time for loading batch 8 is 0.001999378204345703
train: time for loading batch 9 is 0.0019991397857666016
train: time for loading batch 10 is 0.0019991397857666016
train: time for loading batch 11 is 0.0019991397857666016
train: time for loading batch 12 is 0.001999378204345703
train: time for loading batch 13 is 0.0009996891021728516
train: time for loading batch 14 is 0.001998424530029297
train: time for loading batch 15 is 0.0010004043579101562
train: time for loading batch 16 is 0.0019986629486083984
train: time for loading batch 17 is 0.0010006427764892578
train: time for loading batch 18 is 0.0019989013671875
train: time for loading batch 19 is 0.0019996166229248047
train: time for loading batch 20 is 0.0
val: time for loading batch 1 is 0.4817237854003906
val: time for loading batch 2 is 0.0019991397857666016
val: time for loading batch 3 is 0.0
epoch 2
train: time for loading batch 1 is 0.46173715591430664
train: time for loading batch 2 is 0.0019996166229248047
train: time for loading batch 3 is 0.0019996166229248047
train: time for loading batch 4 is 0.0009996891021728516
.
.
.
Any insights? PS: I played around with the batch size and num_workers, but they don’t seem to solve the issue!
Here’s another timing using non_blocking=True:
epoch 1
train: time for loading batch 1 is 0.6221673488616943
train: time for loading batch 2 is 0.0
train: time for loading batch 3 is 0.0
train: time for loading batch 4 is 0.0
train: time for loading batch 5 is 0.0009992122650146484
train: time for loading batch 6 is 0.0
train: time for loading batch 7 is 0.0
train: time for loading batch 8 is 0.0
train: time for loading batch 9 is 0.0
train: time for loading batch 10 is 0.0
train: time for loading batch 11 is 0.0
train: time for loading batch 12 is 0.0009992122650146484
train: time for loading batch 13 is 0.0
train: time for loading batch 14 is 0.00099945068359375
train: time for loading batch 15 is 0.0009996891021728516
train: time for loading batch 16 is 0.0
train: time for loading batch 17 is 0.0009992122650146484
train: time for loading batch 18 is 0.0010004043579101562
train: time for loading batch 19 is 0.0
train: time for loading batch 20 is 0.0
val: time for loading batch 1 is 0.46076011657714844
val: time for loading batch 2 is 0.0
val: time for loading batch 3 is 0.0
epoch 2
train: time for loading batch 1 is 0.48410654067993164
train: time for loading batch 2 is 0.00099945068359375
train: time for loading batch 3 is 0.00099945068359375
train: time for loading batch 4 is 0.0
train: time for loading batch 5 is 0.0 |
st82718 | The prefetching starts by entering the data loader loop and I’m not sure, if there are easy workarounds.
I played around with creating the iterators manually to force the prefetching manually, but I would consider this a hack. Here 22 is the gist I’ve created, but as I said, it’s not the cleanest way of using the DataLoader. |
st82719 | @ilyes @ptrblck There is a size of model + data below which you’re going to have a really hard time utilizing your GPU 100% with the combined overhead of Python, the framework, and getting data to/from the GPU, etc. I’ve seen a number of these sorts of posts where that is an issue.
If you can fit the whole dataset in GPU memory and you don’t have CPU augmentations, you might get some more utilization by preprocessing the data, moving it to the GPU, and manually indexing that GPU tensor for the batches instead of using a dataloader.
One other thing: try setting pin_memory=False and see how it compares. I’ve had nothing but issues with it on; I recently re-confirmed this while looking into another issue. Enabling pin_memory choked up all of my CPU cores with 30-40% utilization in the kernel (some sort of synchronization contention?).
EDIT: I posted a pretty picture of the CPU usage with pin_memory=True in another thread CPU usage extremely high 6 |
st82720 | I ran a few more quick what-if experiments to satisfy my curiosity.
I can achieve the highest GPU utilization on the MNIST demo, and see some expected performance scaling with higher batch size, with num_workers=0 and pin_memory=True … this is a little bit higher than num_workers=0 and pin_memory=False.
Setting pin_memory=True with any number of worker processes > 0 is an absolute disaster that pins all cores at 100%. |
st82721 | When I use the module:
class AngleLoss(nn.Module):
    def __init__(self, gamma=0):
        super(AngleLoss, self).__init__()
        self.gamma = gamma
        self.it = 0
        self.LambdaMin = 5.0
        self.LambdaMax = 1500.0
        self.lamb = 1500.0

    def forward(self, input, target):
        self.it += 1
        cos_theta, phi_theta = input
        target = target.view(-1, 1)  # size=(B,1)

        index = cos_theta.data * 0.0  # size=(B,Classnum)
        index.scatter_(1, target.data.view(-1, 1), 1)
        index = index.byte()
        index = Variable(index)

        self.lamb = max(self.LambdaMin, self.LambdaMax / (1 + 0.1 * self.it))
        output = cos_theta * 1.0  # size=(B,Classnum)
        output[index] -= cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
        output[index] += phi_theta[index] * (1.0 + 0) / (1 + self.lamb)

        logpt = F.log_softmax(output)
        logpt = logpt.gather(1, target)
        logpt = logpt.view(-1)
        pt = Variable(logpt.data.exp())

        loss = -1 * (1 - pt)**self.gamma * logpt
        loss = loss.mean()
        return loss
I will get error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 2]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
It seems that the error part is
output[index] -= cos_theta[index]*(1.0+0)/(1+self.lamb)
output[index] += phi_theta[index]*(1.0+0)/(1+self.lamb)
How to modify the code? |
st82722 | You can try:
output[index] = output[index] - cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
output[index] = output[index] + phi_theta[index] * (1.0 + 0) / (1 + self.lamb) |
st82723 | Thank you! I have tried this method, but it did not solve my problem.
I finally solved it by adding .clone() to output:
output = cos_theta * 1.0
output1 = output.clone()
output[index] = output1[index] - cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
output[index] = output1[index] + phi_theta[index] * (1.0 + 0) / (1 + self.lamb)
It seems that x = x + 1 is not an in-place operation, but x[index] = x[index] + 1 is an in-place operation.
To be honest, I don’t quite understand why. |
st82724 | Hi,
When you do x[index] = anything, it is an inplace operation because you modify only part of x. The same thing happens if you do this with a python list for example. |
st82725 | I have some function and its graph (take any as an example). My task is to find the parameters of this function such that its value is maximal. I can’t figure out which loss function to apply. Does anyone know how to solve this problem? |
st82726 | I created a dataset for a very small set of images (408 images; the download link is here 1).
It contains a csv file that has the image file names and labels, and this is the class I made:
# we use csv for reading csv file
import csv
# we use PIL.Image for reading an image
import PIL.Image as Image
import os
class AnimeMTLDataset(torch.utils.data.Dataset):
    def __init__(self, image_folder, csv_file_path, transformations, is_training_set=True):
        super().__init__()
        self.path = csv_file_path
        self.transforms = transformations
        self.is_training_set = is_training_set
        self.image_folder = image_folder
        self.length = -1
        if self.is_training_set:
            # read the csv file into a dictionary
            with open(csv_file_path, 'r') as csv_file:
                csv_reader = csv.reader(csv_file)
                # to skip header we simply do
                next(csv_reader)
                self.dataset = {}
                for i, line in enumerate(csv_reader):
                    self.dataset[i] = line
                self.length = len(self.dataset)
        else:
            self.image_folder = os.path.join(self.image_folder, 'test')
            self.length = len(os.listdir(self.image_folder))

    def _format_input(self, input_str, one_hot=False):
        one_hot_tensor = torch.tensor([float(i) for i in input_str])
        if one_hot:
            return one_hot_tensor
        if one_hot_tensor.size(0) > 1:
            return torch.argmax(one_hot_tensor)
        else:
            return one_hot_tensor[0].int()

    # lets create the corresponding labels for each category
    def _parse_labels(self, input_str):
        # white,red,green,black,blue,purple,gold,silver
        colors = self._format_input(input_str[4:11], True)
        # gender_Female,gender_Male
        genders = self._format_input(input_str[12:13])
        # region_Asia,region_Egypt, region_Europe, region_Middle East
        regions = self._format_input(input_str[14:17])
        # fighting_type_magic, fighting_type_melee, fighting_type_ranged
        fighting_styles = self._format_input(input_str[18:20])
        # alignment_CE, alignment_CG, alignment_CN, alignment_LE,
        # alignment_LG, alignment_LN, alignment_NE, alignment_NG, alignment_TN
        alignments = self._format_input(input_str[21:])
        return colors, genders, regions, fighting_styles, alignments

    def __getitem__(self, index):
        if self.is_training_set:
            img_path = self.dataset[index][1]
            labels = self._parse_labels(self.dataset[index])
            # image files must be read as bytes so we use 'rb' instead of simply 'r'
            # which is used for text files
            with open(os.path.join(self.image_folder, img_path), 'rb') as img_file:
                # since our datasets include png images, we need to make sure
                # we read only 3 channels and not more!
                img = Image.open(img_file).convert('RGB')
                print(img_path)
                # apply the transformations
                img = self.transforms(img)
                print(img.shape)
            return img, labels
        else:
            for img_path in os.listdir(self.image_folder):
                with open(os.path.join(self.image_folder, img_path), 'rb') as img_file:
                    img = Image.open(img_file).convert('RGB')
                    # apply the transformations
                    img = self.transforms(img)
                return img, None

    def __len__(self):
        return self.length
transformations = transforms.Compose([transforms.Resize(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
anime_dataset = AnimeMTLDataset(image_folder = 'mtl_dataset',
csv_file_path = r'mtl_dataset\fgo_multiclass_labels.csv',
transformations=transformations)
# lets test our dataset class and see if it works ok:
#unnormalize
def unnormalize(img):
    img = img.detach().numpy().transpose(1, 2, 0)
    return img * [0.229, 0.224, 0.225] + [0.485, 0.456, 0.406]
#training:
print('dataset size: {}'.format(len(anime_dataset)))
img, labels = anime_dataset[0]
plt.imshow(unnormalize(img))
This works, but when I try to use torch.utils.data.SubsetRandomSampler() to create a validation set as well, or even a plain simple dataloader with no sampler, it fails with the error message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
21
22 # test
---> 23 imgs, labels = next(iter(dataloader_train))
24 print(imgs[0].shape)
25 plt.imshow(unnormalize(imgs[0]))
~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
558 if self.num_workers == 0: # same-process loading
559 indices = next(self.sample_iter) # may raise StopIteration
--> 560 batch = self.collate_fn([self.dataset[i] for i in indices])
561 if self.pin_memory:
562 batch = _utils.pin_memory.pin_memory_batch(batch)
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
66 elif isinstance(batch[0], container_abcs.Sequence):
67 transposed = zip(*batch)
---> 68 return [default_collate(samples) for samples in transposed]
69
70 raise TypeError((error_msg_fmt.format(type(batch[0]))))
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in (.0)
66 elif isinstance(batch[0], container_abcs.Sequence):
67 transposed = zip(*batch)
---> 68 return [default_collate(samples) for samples in transposed]
69
70 raise TypeError((error_msg_fmt.format(type(batch[0]))))
~\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
41 storage = batch[0].storage()._new_shared(numel)
42 out = batch[0].new(storage)
---> 43 return torch.stack(batch, 0, out=out)
44 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
45 and elem_type.__name__ != 'string_':
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 224 and 316 in dimension 2 at ..\aten\src\TH/generic/THTensor.cpp:711
So the following snippet fails and causes the previous error message :
# lets create a validation and training set
import numpy as np
import torch.utils.data as data
samples_count = len(anime_dataset)
all_samples_indexes = list(range(samples_count))
np.random.shuffle(all_samples_indexes)
val_ratio = 0.2
val_end = int(samples_count * 0.2)
val_indexes = all_samples_indexes[0:val_end]
train_indexes = all_samples_indexes[val_end:]
assert len(val_indexes) + len(train_indexes) == samples_count , 'the split is not valid'
sampler_train = data.SubsetRandomSampler(train_indexes)
sampler_val = data.SubsetRandomSampler(val_indexes)
dataloader_train = data.DataLoader(anime_dataset, batch_size = 32, sampler = sampler_train)
dataloader_val = data.DataLoader(anime_dataset, batch_size = 32, sampler = sampler_val)
# test
imgs, labels = next(iter(dataloader_train))
print(imgs[0].shape)
plt.imshow(unnormalize(imgs[0]))
What is wrong and what am I missing?
Thank you all in advance |
st82728 | Yes, This is actually the whole code so far :
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torchvision import datasets, transforms, models
import matplotlib.pyplot as plt
%matplotlib inline
# we use csv for reading csv file
import csv
# we use PIL.Image for reading an image
import PIL.Image as Image
import os
class AnimeMTLDataset(torch.utils.data.Dataset):
    def __init__(self, image_folder, csv_file_path, transformations, is_training_set=True):
        super().__init__()
        self.path = csv_file_path
        self.transforms = transformations
        self.is_training_set = is_training_set
        self.image_folder = image_folder
        self.length = -1
        if self.is_training_set:
            # read the csv file into a dictionary
            with open(csv_file_path, 'r') as csv_file:
                csv_reader = csv.reader(csv_file)
                # to skip header we simply do
                next(csv_reader)
                self.dataset = {}
                for i, line in enumerate(csv_reader):
                    self.dataset[i] = line
                self.length = len(self.dataset)
        else:
            self.image_folder = os.path.join(self.image_folder, 'test')
            self.length = len(os.listdir(self.image_folder))

    def _format_input(self, input_str, one_hot=False):
        one_hot_tensor = torch.tensor([float(i) for i in input_str])
        if one_hot:
            return one_hot_tensor
        if one_hot_tensor.size(0) > 1:
            return torch.argmax(one_hot_tensor)
        else:
            return one_hot_tensor[0].int()

    # lets create the corresponding labels for each category
    def _parse_labels(self, input_str):
        # white,red,green,black,blue,purple,gold,silver
        colors = self._format_input(input_str[4:11], True)
        # gender_Female,gender_Male
        genders = self._format_input(input_str[12:13])
        # region_Asia,region_Egypt, region_Europe, region_Middle East
        regions = self._format_input(input_str[14:17])
        # fighting_type_magic, fighting_type_melee, fighting_type_ranged
        fighting_styles = self._format_input(input_str[18:20])
        # alignment_CE, alignment_CG, alignment_CN, alignment_LE,
        # alignment_LG, alignment_LN, alignment_NE, alignment_NG, alignment_TN
        alignments = self._format_input(input_str[21:])
        return colors, genders, regions, fighting_styles, alignments

    def __getitem__(self, index):
        if self.is_training_set:
            img_path = self.dataset[index][1]
            labels = self._parse_labels(self.dataset[index])
            # image files must be read as bytes so we use 'rb' instead of simply 'r'
            # which is used for text files
            with open(os.path.join(self.image_folder, img_path), 'rb') as img_file:
                # since our datasets include png images, we need to make sure
                # we read only 3 channels and not more!
                img = Image.open(img_file).convert('RGB')
                # apply the transformations
                img = self.transforms(img)
            return img, labels
        else:
            for img_path in os.listdir(self.image_folder):
                with open(os.path.join(self.image_folder, img_path), 'rb') as img_file:
                    img = Image.open(img_file).convert('RGB')
                    # apply the transformations
                    img = self.transforms(img)
                return img, None

    def __len__(self):
        return self.length
transformations = transforms.Compose([transforms.Resize(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
anime_dataset = AnimeMTLDataset(image_folder = 'mtl_dataset',
csv_file_path = r'mtl_dataset\fgo_multiclass_labels.csv',
transformations=transformations)
# lets test our dataset class and see if it works ok:
#unnormalize
def unnormalize(img):
    img = img.detach().numpy().transpose(1, 2, 0)
    return img * [0.229, 0.224, 0.225] + [0.485, 0.456, 0.406]
#training:
print('dataset size: {}'.format(len(anime_dataset)))
img, labels = anime_dataset[0]
plt.imshow(unnormalize(img))
#%%
anime_dataset_test = AnimeMTLDataset(image_folder = 'mtl_dataset',
csv_file_path = r'mtl_dataset\fgo_multiclass_labels.csv',
transformations=transformations,
is_training_set =False)
print('Test dataset test : ')
print('dataset size: {}'.format(len(anime_dataset_test)))
img, _ = anime_dataset_test[0]
plt.imshow(unnormalize(img))
#%%
# now lets create a dataloader and carry on!
# lets create a validation and training set
import numpy as np
import torch.utils.data as data
samples_count = len(anime_dataset)
all_samples_indexes = list(range(samples_count))
np.random.shuffle(all_samples_indexes)
val_ratio = 0.2
val_end = int(samples_count * 0.2)
val_indexes = all_samples_indexes[0:val_end]
train_indexes = all_samples_indexes[val_end:]
assert len(val_indexes) + len(train_indexes) == samples_count , 'the split is not valid'
sampler_train = data.SubsetRandomSampler(train_indexes)
sampler_val = data.SubsetRandomSampler(val_indexes)
dataloader_train = data.DataLoader(anime_dataset, batch_size = 32, sampler = sampler_train)
dataloader_val = data.DataLoader(anime_dataset, batch_size = 32, sampler = sampler_val)
dataloader_train2 = data.DataLoader(anime_dataset, batch_size = 32)
# test
imgs, labels = next(iter(dataloader_train2))
print(imgs[0].shape)
plt.imshow(unnormalize(imgs[0])) |
st82729 | Could you specify the size in Resize as a tuple?
transforms.Resize((224, 224))
as a single value will work differently, if your images are not quadratic. |
st82730 | Thank you very much! That solved the issue!
But would you mind explaining a bit about what went wrong in the first case? What other option do I have except resizing to a square? |
st82731 | If you pass a single value, only the smaller size will be matched to this value and the other one will be resized accordingly to keep the ratio as equal as possible.
From the docs:
size ( sequence or int ) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to this number. i.e, if height > width, then image will be rescaled to (size * height / width, size) |
st82732 | Thank you very much, but I meant: how is that relevant to the error I get? Why would it simply crash?
It’s not like I’m convolving or running any kind of operations except a couple of seemingly harmless transformations! This is strange to me. |
st82733 | The collate function cannot call torch.stack on tensors with different shapes in two or more dimensions.
Basically, this is crashing:
x = [torch.randn(3, 224, 224), torch.randn(3, 224, 300)]
torch.stack(x)
> RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 224 and 300 in dimension 3 at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/TH/generic/THTensor.cpp:689
, while this will work (as all image tensors are resized to the same size):
x = [torch.randn(3, 224, 224), torch.randn(3, 224, 224)]
torch.stack(x) # shape = [2, 3, 224, 224] |
st82734 | Thank you very very much. I really appreciate your kind reply.
Have a wonderful day sir:) |
st82735 | Hi! I’d like to highlight a feature request made on the GitHub repo for automatic tuning of batch_size and num_workers, and start some discussion around this topic.
Much like tensorflow has introduced atf.data.experimental.AUTOTUNE 3 flag to automatically tune these parameters, I think this feature would be very relevant for PyTorch users as well.
I have a couple of questions for the community to start building consensus:
Have you previously thought about this autotuning flag?
If you have thought about it before, what was the blocker to implementing it?
If this feature was introduced, would you use it?
What parameters do you use for batch_size and num_workers right now, and how do you set them? |
st82736 | It would be interesting to know, how many users are choosing the batch size based on the computation performance vs. the methodological perspective (e.g. bad training behavior for tiny or huge batch sizes).
E.g. if hypothetically a very small batch size would yield the best speedup, wouldn’t it also make some architectural changes necessary (replacing batchnorm layers for groupnorm etc.)? |
st82737 | I think a flag like this might be most useful for inference jobs, where we care exclusively about performance, without regard for training behavior. You’re right - adjusting batch size for users automatically will have side effects for training; but we can avoid this issue by at least narrowing the scope to inference.
Additionally, tuning num_workers would improve performance for both training and inference, especially for large scale (GB/TB) inference jobs.
Also, experimentally, it seems that large batch sizes tend to yield the best speedups, up until data can no longer fit in memory. |
st82738 | batch_size: use the largest batch size that fits in GPU memory. Scale learning rate and warmup according to the method of “Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.” 2
num_workers: how do you tune a Winnebago to get a good lap time at Le Mans? Step 1: replace it with a Ferrari. Step 2: tune the Ferrari. |
st82739 | Hey Andrew ! Your suggestion on batch_size makes a lot of sense.
For num_workers - are you suggesting a TF/other DL library solution or a different consumer of data within PyTorch? |
st82740 | @Sean_O_Bannon I think it makes sense to have an optimized implementation of image folder and transforms for pytorch. The current API is nice, but the implementation is inefficient, and it slows down the entire system when it needs to feed 8xV100s and a small network (eg mobilenet_v2). |
st82741 | Hi, I am also focusing on this research. Do you mean that I can follow this research by using DistributedParralleDataset?
I am very puzzled about this.
Thanks! |
st82742 | I am using the classic CamVid dataset (http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamSeq01/ 5) for object segmentation. It is basically a dataset with videoframes (think of it just as images) and their corresponding segmentation mask.
I would now like to try something else: I would like to crop each frame and its segmentation mask into e.g. 100 patches centered at random positions in the image with a given size (e.g. 224*224) and save the results in the corresponding image/label folders. Note that both the image and the label have to be cropped in the same way for this to work. Does anybody know how to achieve this with a few lines of code? |
st82743 | You could use the functional API of torchvision.transforms as described in this post 158. |
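A minimal sketch of that idea for this use case (PIL images are assumed, and the frames must be at least as large as the patch size; the patch count and size are the ones mentioned above):
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def random_paired_crops(image, mask, n_patches=100, size=(224, 224)):
    crops = []
    for _ in range(n_patches):
        # sample one set of crop parameters and apply it to both image and mask
        i, j, h, w = T.RandomCrop.get_params(image, output_size=size)
        crops.append((TF.crop(image, i, j, h, w), TF.crop(mask, i, j, h, w)))
    return crops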
st82744 | @ptrblck RandomResizeCrop works but RandomCrop returns me an error… (screenshot of the code and error omitted) |
st82745 | Your code snippet won’t crop the images, as
you have an assert statement, which checks that both image shapes are equal
you are calling get_params with the same input and output size, thus it will be static and return the parameters to use the complete image (line of code 11) |
st82746 | but @ptrblck both images (mask and image) do have the same size (for this reason the assert does not fail). Moreover, get_params is getting the new size passed (224, 320)… isn’t it? |
st82747 | Let tensor A have gradient information. If I run "B[:A.size()[0]] = A", does B[:A.size()[0]] contain the same gradient information as A?
I am working on implement bidirectional GRU layer from scratch. I want it can take PackedSequence as input x. To deal with the variable batch_size for each step, I write the following code for reverseRNN layer:
for batch_size in reversed(batch_sizes):
    step_state = global_state[:batch_size]
    step_input = input[input_offset - batch_size : input_offset]
    input_offset -= batch_size
    out, step_state = self.cell(step_input, step_state)
    outputs = [out] + outputs
    global_state[:batch_size] = step_state
The global_state stores states’ situation for all sequences. And I generate PackedSequence using pack_sequence with the default setting.
My idea is if global_state can always get up-to-date gradient information from the current iteration’s step_state, the backpropagation should work correctly. |
st82748 | Hi,
No it won’t. It will contain the gradient information associated to how you use this new copy.
Why is this global_state necessary here? Why not just pass step_state to the next iteration? |
st82749 | Because, in a batch, there is no guarantee that all sequences are the same length. |
st82750 | I’m confused about your example actually.
I’m not sure to understand why there is a global state that is associated with different samples in your batch. Why would the ith entry in one batch should be associated with the ith entry in another batch (through your use of the global_state)? |
st82751 | I have a use-case where I need to do other operations on a sliding window.
For example, regular convolution calculates the dot-product between the kernel weights and the sliding window, but now I want to calculate the euclidean distance (or sum, or …) of the kernel weights with the sliding window. Is there a nice way to do this?
That is, using the sliding-window functionality of the convolution layers but replacing the dot product with something else.
Thanks |
st82752 | I think I found an answer.
It seems like the right way to do it is to use unfold and fold functions.
With the unfold function one can create the same sliding window slices that a convolution operation slides over.
But, one thing that remains unclear is how I can apply my own function (e.g. euclidean distance) efficiently (e.g. without a python for-loop) over those slices. |
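In case it’s useful, a sketch of that idea for the euclidean-distance case (single kernel, no bias; the shapes follow the unfold documentation):
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8)       # (B, C, H, W)
weight = torch.randn(3, 5, 5)     # one "kernel" of shape (C, kH, kW)

patches = F.unfold(x, kernel_size=5, padding=2)   # (B, C*5*5, L), here L = H*W
w = weight.reshape(1, -1, 1)                      # broadcast against every patch
dist = (patches - w).pow(2).sum(dim=1).sqrt()     # euclidean distance per location, (B, L)
out = dist.view(2, 1, 8, 8)                       # back to a spatial map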
st82753 | I am trying to implement two similar network models with two LSTM layers whose parameters are not tied.
For example, if I create two instances of the same model as:
model1 = Model()
model2 = Model()
Would the parameters of the above networks be tied? Kindly suggest a way to solve this. |
st82754 | Thank you @JTiC .
So the parameters won’t be tied, but I didn’t understand why it is specified as 20 characters. |
st82755 | I tried comparing the parameters of the two models with the function described in the following post.
check if models have same weights 6
And I got the result as “mismatch”. The comparison of the parameters was done after inference and before training. Is this the proper method to check if the parameters are tied or not? Kindly suggest the correct approach for the same. |
st82756 | I am trying to replace my existing BN layers with GroupNorm. However, after doing so, my model needs nearly twice the GPU memory compared to before with BN layers. Is this supposed to be the case or might there be something wrong? Thanks! |
st82757 | File "/home/ka/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 179, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (30) : unknown error at /pytorch/aten/src/THC/THCGeneral.cpp:50
pytorch 1.2.0
cuda 9.2
python 3.6
Thanks in advance |
st82758 | Could you answer these questions to make debugging a bit easier:
Which GPU and NVIDIA driver are you using?
Was PyTorch working before?
What does nvidia-smi output? |
st82759 | GeForce GTX 1080
Yes earlier it used to work.
NVIDIA-SMI 396.54
Driver Version: 396.54 |
st82760 | Hello,
I have been searching around for a while, but still cannot solve the problem I have at hand. Could somebody point me to the solution, or give a suggestion for this question?
I have a tensor,
A = [[row1],      B = [1, 2, 1]
     [row2],
     [row3]]
Tensor B indicates which group each row of A belongs to (the group number).
Question:
How can I rearrange A such that rows of the same group are grouped together, like:
A_after_rearrange = [[row1],
                     [row3],
                     [row2]] |
st82762 | If I understood your question correctly, this would work. If you want a more general way, you can easily extend it with a loop. It is not really a matter of PyTorch, just Python programming…
import torch
a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]])
b = torch.tensor([1,2,1])
rearranged_a = torch.zeros(a.size())
rearranged_a[:2] = a[b==1]
rearranged_a[-1] = a[b==2] |
st82763 | This gives the correct answer! Thank you so much!
Excuse me, but a follow-up question:
Is there a quicker way instead of extending it with a for loop? |
st82764 | Try
a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]])
b = torch.tensor([1,2,1])
_, inds = torch.sort(b)
rearranged_a = a[inds] |
st82765 | You misunderstood me: I just meant that you should use a for loop, but only if you have to do more than two replacements and just for compactness. I would also say that the way vector b is built is rather counterintuitive, but I don’t know where it comes from; I can guess some class labels. However, if instead of having b like this you manage to have b = [0,2,1], which gives the indices where you want to place the rows, there is a builtin method that does the job. Try, with b = [0,2,1]:
rearranged_a = torch.index_select(a,0,b) |
st82766 | sorry, I’m back.
If I use torch.sort(b), then the internal order of all the 1s in b is disturbed. That means inds = [2, 0, 1], not inds = [0, 2, 1]. Is there a way to sort b without disturbing the elements’ internal ordering? |
st82767 | b is not modified by the sort method; inds is a new tensor, which gives the argsort of b. However, your question is not clear to me… |