st84668
|
Could you post a code snippet to reproduce this issue?
I’m not sure where and how self.RCNN_roi_pool is defined in your model etc.
|
st84669
|
I am using https://github.com/DebasmitaGhose/Multimodal_Influenza_Detection/tree/master/faster-rcnn.pytorch.
In order to remove the 4th max pooling layer, I modified lib/model/vgg16.py as follows:
list1 = list(vgg.classifier._modules.values())[:23]
list1.extend(list(vgg.classifier._modules.values())[24:-1])
list2 = list(vgg.features._modules.values())[:23]
list2.extend(list(vgg.features._modules.values())[24:-1])
vgg.classifier = nn.Sequential(*list1)
self.RCNN_base = nn.Sequential(*list2)
Moreover in https://github.com/DebasmitaGhose/Multimodal_Influenza_Detection/blob/master/faster-rcnn.pytorch/lib/model/faster_rcnn/faster_rcnn.py, I modified the following lines
self.RCNN_roi_pool = _RoIPooling(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0/8.0)
self.RCNN_roi_align = RoIAlignAvg(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0/8.0)
In the lib/model/utils/config.py I made the following modifications
__C.DEDUP_BOXES = 1. / 16.
__C.FEAT_STRIDE = [16, ]
to
__C.DEDUP_BOXES = 1. / 8.
__C.FEAT_STRIDE = [8, ]
|
st84670
|
Hi,
Suppose I have a weight tensor of shape (4,1), but I want to expand it along the second dimension so that it becomes (4,4) while the underlying storage does not change, and then use it as (16,1).
In code:
nn.parameter.Parameter(torch.Tensor(4,1).expand(4,4).view(16))
Note the above code will throw exception in the view() call, but is there (will there be) support for such usage?
Thanks!
|
st84672
|
You need to keep the Parameter of shape 4 x 1 and only do the expand in .forward.
You need reshape instead of view, as it’ll need to instantiate the larger matrix.
Best regards
Thomas
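For illustration, a minimal sketch of that pattern (module and names are illustrative): the learnable storage stays 4x1, and the expanded matrix is only materialized in forward.
import torch
import torch.nn as nn

class ExpandedWeight(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(4, 1))  # learnable storage stays 4x1

    def forward(self, x):
        # expand gives a non-contiguous view, so reshape (not view) is needed
        # to instantiate the 16x1 matrix used in the computation
        w = self.weight.expand(4, 4).reshape(16, 1)
        return x @ w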
|
st84673
|
I have some question about normalization technique.
I encountered a very weird phenomenon while doing normalization.
x = torch.rand(10, 10)
print((x - x.mean(dim=1, keepdim=True)).mean(dim=1, keepdim=True))
It doesn’t give a zero tensor! The result is very small, but it is not zero.
What is going on???
|
st84675
|
This is numerical precision - floating point arithmetic isn’t exact, so you will not get the exact mean. If you switch to double, you get from 5e-8ish to 5e-17ish in your example. This can be more substantial if you have larger tensors.
Best regards
Thomas
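For illustration, a quick way to see the effect (magnitudes are approximate):
import torch

x = torch.rand(10, 10)
res32 = (x - x.mean(dim=1, keepdim=True)).mean(dim=1)
res64 = (x.double() - x.double().mean(dim=1, keepdim=True)).mean(dim=1)
print(res32.abs().max())  # on the order of 1e-8
print(res64.abs().max())  # on the order of 1e-17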
|
st84676
|
Dear all,
I have a question concerning two different possibilities of initializing a nn.Parameter. In particular, would you expect a different behaviour of the following two possible initializations:
a = nn.Parameter(torch.zeros(5, requires_grad=True))
or
a = nn.Parameter(torch.zeros(5), requires_grad=True)
Thank you in advance!
|
st84677
|
nn.Parameter will enable gradients by default, so the right way to achieve the same thing is just a = nn.Parameter(torch.zeros(5)).
Your first variant works, but it is misleading - it suggests that changing the inner tensor to requires_grad=False would make a difference, when it would not.
The second variant just spells out the default parameter.
Best regards
Thomas
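For illustration:
import torch
import torch.nn as nn

a = nn.Parameter(torch.zeros(5))
print(a.requires_grad)  # True - gradients are enabled by default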
|
st84678
|
Summary: With a ~100mb model and a ~400mb batch of training data, model(x) causes an OOM despite having 16 GB of memory available.
I’ve been playing around with the Recursion Pharmaceuticals competition over on Kaggle, and I’ve noticed bizarre spikes in memory usage when I call models. I’ve tried to create a minimal example here. All of the code is present at that link, but here’s a summary of what I’m doing:
The data is 512x512 images with 6 channels. I’m using a pretty standard data loader to load them; the code is contained in ImagesDS in cell 3. (The images should be normalized, but that doesn’t seem relevant here.)
The model is a Resnet50 with pretrained weights. However, since I have 6 channels and 1108 outputs, I replace the first and last layer. (I’ve seen the same error with other different base models like Densenet.):
model = torchvision.models.resnext50_32x4d(pretrained=True)
model.conv1 = nn.Conv2d(6, 64, 7, 2, 3)
model.fc = nn.Linear(2048, classes, bias=True)
model.to(device)
Finally, I’m getting the out of memory error on the first pass through the training loop:
epochs = 10
tlen = len(loader)
for epoch in range(epochs):
    tloss = 0
    acc = np.zeros(1)
    for x, y in loader:
        print(f'Memory allocated after tensor load to cpu: {torch.cuda.memory_allocated() / 10 ** 6} MB')  # Gives ~100mb
        x = x.to(device)
        print(f'Memory allocated after tensor load to gpu: {torch.cuda.memory_allocated() / 10 ** 6} MB')  # Gives ~500mb
        optimizer.zero_grad()
        # Everything explodes when we call the model on the input.
        output = model(x)
        # More training code that is never reached....
With a batch size of 64 (~400mb of input data), the loop causes an OOM. With a batch size of 16 (~100mb), memory usage never goes above ~500mb.
I can see a few possibilities here, but I’m unsure what’s most likely:
I’m doing something wrong when I load the data. Maybe the input tensors need to have requires_grad=False explicitly set on them or something
There’s some kind of memory allocation bug in Pytorch that I’m seeing here
There’s something about Kaggle’s GPU set up that causes the memory error.
Any ideas? The full code is at the link above, and you can easily clone it if you have a Kaggle account.
|
st84679
|
Note that besides the parameters and input, the training will also use memory for the forward activations, which are needed to compute the gradients (which also need to be stored on the device).
Is your training working with a smaller batch size?
|
st84680
|
Thanks for the prompt reply! I think you’re right; I underestimated how large the intermediate results are. I assumed they’d be roughly proportional to the size of the input, but now that I think about it that’s silly. Since the network works fine with a batch size of 16 images, I don’t think there’s anything else going on.
If you don’t mind, how can I access those intermediate forward activations? I thought code like this would work to get the output of the first layer:
result = model(x)
conv1 = next(model.children())
for b in conv1.buffers():
    print(b)
However, there don’t seem to be any buffers associated with that layer. Is there a simple way to see those outputs?
|
st84681
|
Good to hear a batch size of 16 works.
Yeah, the intermediate activations can be quite huge, e.g. especially if you are using a lot of kernels in a conv layer.
.buffers is used for internal tensors, which do not require gradients, e.g. the running_mean and running_var in batchnorm layers.
If you want to get the intermediate outputs, you could register forward hooks as explained in this post.
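For illustration, a minimal sketch of the forward-hook approach (assuming the model and input x from the thread above):
activations = {}

def save_activation(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

handle = model.conv1.register_forward_hook(save_activation('conv1'))
output = model(x)  # the forward pass populates activations['conv1']
handle.remove()    # detach the hook when done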
|
st84682
|
I cannot successfully install “torch_scatter”.
When I run pip3 install torch_scatter, an error always occurs, as shown below. I tried to solve the problem, but I can’t find the correct method. Could you help me? Thanks a lot!
The error:
…
cpu/scatter.cpp:1:29: fatal error: torch/extension.h: No such file or directory
compilation terminated.
error: command ‘x86_64-linux-gnu-gcc’ failed with exit status 1
Failed building wheel for torch-scatter
Running setup.py clean for torch-scatter
Failed to build torch-scatter
Installing collected packages: torch-scatter
Running setup.py install for torch-scatter … error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize; __file__='/tmp/pip-build-ijd9s63n/torch-scatter/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-4tkx7v_s-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/mul.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/mean.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/sub.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/min.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/std.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/__init__.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/max.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/div.py -> build/lib.linux-x86_64-3.5/torch_scatter
copying torch_scatter/add.py -> build/lib.linux-x86_64-3.5/torch_scatter
creating build/lib.linux-x86_64-3.5/test
copying test/test_multi_gpu.py -> build/lib.linux-x86_64-3.5/test
copying test/utils.py -> build/lib.linux-x86_64-3.5/test
copying test/test_std.py -> build/lib.linux-x86_64-3.5/test
copying test/__init__.py -> build/lib.linux-x86_64-3.5/test
copying test/test_forward.py -> build/lib.linux-x86_64-3.5/test
copying test/test_backward.py -> build/lib.linux-x86_64-3.5/test
creating build/lib.linux-x86_64-3.5/torch_scatter/utils
copying torch_scatter/utils/ext.py -> build/lib.linux-x86_64-3.5/torch_scatter/utils
copying torch_scatter/utils/__init__.py -> build/lib.linux-x86_64-3.5/torch_scatter/utils
copying torch_scatter/utils/gen.py -> build/lib.linux-x86_64-3.5/torch_scatter/utils
running build_ext
building ‘torch_scatter.scatter_cpu’ extension
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/cpu
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/zgh/.local/lib/python3.5/site-packages/torch/lib/include -I/home/zgh/.local/lib/python3.5/site-packages/torch/lib/include/TH -I/home/zgh/.local/lib/python3.5/site-packages/torch/lib/include/THC -I/usr/include/python3.5m -c cpu/scatter.cpp -o build/temp.linux-x86_64-3.5/cpu/scatter.o -Wno-unused-variable -DTORCH_EXTENSION_NAME=scatter_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
cpu/scatter.cpp:1:29: fatal error: torch/extension.h: No such file or directory
compilation terminated.
error: command ‘x86_64-linux-gnu-gcc’ failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize; __file__='/tmp/pip-build-ijd9s63n/torch-scatter/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-4tkx7v_s-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ijd9s63n/torch-scatter/
You are using pip version 8.1.1, however version 19.1.1 is available.
You should consider upgrading via the ‘pip install --upgrade pip’ command.
|
st84683
|
Hi Guanghui,
did you end up resolving this issue? I’ve been stuck on it for a while, and found no solution…
|
st84684
|
Hi,
Given an m*n input matrix (tensor),
A B
C D
I want to repeat it diagonally into an (m*p) x (n+p-1) matrix
A B 0 0
0 A B 0
0 0 A B
C D 0 0
0 C D 0
0 0 C D
Where (m,n,p) are provided during init().
How to do it in Pytorch with autograd support? Thank you!
|
st84685
|
Ok, so I couldn’t find a super easy or efficient way to do this, and if people have better solutions I’d love to see them, but this is what I came up with:
def block_diag(x, n):
    """Repeats a tensor diagonally n times as specified by @zfzhang"""
    outputs = []
    for row in x:
        out = torch.zeros(n, n + x.shape[1] - 1)
        for ii, elem in enumerate(row):
            d = torch.diag(elem.expand(n))
            padded = torch.nn.functional.pad(d, pad=(ii, x.shape[1] - ii - 1))
            out = out + padded
        outputs.append(out)
    return torch.cat(outputs)
This method works, but is… not fast:
In [1]: x = torch.rand(10, 10)
In [2]: %timeit block_diag(x, 2)
1.78 ms ± 53.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [3]: %timeit block_diag(x, 10)
1.76 ms ± 60.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [4]: x = torch.rand(50, 50)
In [5]: %timeit block_diag(x, 2)
43.2 ms ± 2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [6]: %timeit block_diag(x, 10)
44.9 ms ± 1.65 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
The method should work with autograd, but may be infeasibly slow, depending on the size of your matrix! Hope that helps a bit. Again, if someone else has a better solution, please share, I’m very curious!
|
st84686
|
I’m trying to update some old code using PyTorch 0.4, and it uses torch.utils.trainer.plugins Logger. It’s not supported by more recent versions of PyTorch, and therefore I need to get rid of it. Are there any old docs showing what its purpose is and how I can replace it?
|
st84687
|
Unfortunately it looks like trainer was an undocumented portion of PyTorch that has been removed (see this forum post and this GitHub comment). The same forum post recommends TNT and also points to an older forum post.
However, all of this is from 2017. I’m not entirely sure what Logger did, but you could check out fast.ai's codebase, which allows for metrics and callbacks. I hope that helps a bit!
|
st84688
|
Thank you for your reply!
Maybe you can help me out with the other ones as well? They’re called Monitor and LossMonitor, and there is also something about checkpoints. How are checkpoints handled in more recent versions of PyTorch?
|
st84689
|
Of course! So all of torch.utils.trainer was an undocumented and since-removed portion of PyTorch. As such, Monitor and LossMonitor are in the same category as Logger, to the best of my knowledge. A lot of people have worked on tools to monitor training in PyTorch:
there are ways to hook up Tensorboard to PyTorch
or use TensorboardX
the aforementioned fast.ai metrics and callbacks (which I only mention again because it is what I use, but not necessarily best)
a Facebook tool called Visdom
and even commercial products like Weights and Biases (which seems pretty cool, I’ll have to check it out!)
There are probably a ton more options out there, and it’ll depend on what your use case is, how in-depth you need to go, etc.
As for checkpointing, I’m not sure if you’re talking about torch.utils.checkpoint, a tool that has existed (and still exists!) in PyTorch to trade more computing time for less memory usage, or simply saving the model, which is commonly called checkpointing both in other systems (like Tensorflow) and in codebases (like my own…).
If you’re talking about the former, I’ve never used it but a quick glance makes it look like checkpointing is mostly the same since 0.4.0.
If you’re talking about the latter, that is done via torch.save, which “checkpoints” your model by saving the model weights to a pickled file.
Hope that helps!
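For illustration, a minimal sketch of that kind of checkpointing with torch.save (the file name is illustrative, assuming a model and optimizer already exist):
# saving
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pth')

# restoring later
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])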
|
st84690
|
Thank you so much for the detailed answer! This is going to be very useful; I’m pretty sure I was thinking of “checkpointing”. The code I have runs with torch 0.4.0, but it was created in an earlier version, and therefore I can only guess that they had to create their own checkpointing method. torch.save seems a lot less trouble!
Once again, thank you for being kind and helpful with a newcomer; it makes you want to stay with the community.
|
st84691
|
Glad I could help! I’ve been using PyTorch for a while now but just recently got into the community, it’s a pretty welcoming place – glad I could pass it on! Good luck with your version updating project, I must say I don’t envy you
|
st84692
|
Hello, I have a database of images where every image is represented by a number, e.g. the color entropy calculated for that image. I am trying to train a CNN by giving the network an image and the corresponding number, and I’m expecting, after training is done, to have a CNN that will see an image and predict a number (e.g. color entropy). Can anyone give me ideas? (I am new to PyTorch and I can’t find anything similar to this in the tutorials.) How do I train the network so that it can estimate a number after the CNN gets an image as input?
|
st84694
|
Hi Public!
Aww:
Hello, I have a database of images where every image is represented by a number, e.g. the color entropy calculated for that image. I am trying to train a CNN by giving the network an image and the corresponding number, and I’m expecting, after training is done, to have a CNN that will see an image and predict a number (e.g. color entropy). Can anyone give me ideas? (I am new to PyTorch and I can’t find anything similar to this in the tutorials.) How do I train the network so that it can estimate a number after the CNN gets an image as input?
Let’s say your images are 256x256 pixels, and each pixel is
given by an rgb value made up of three 8-bit bytes. Therefore
each image is given by 196608 numerical (8-bit) values. Convert
these to floats.
Because your example of color entropy doesn’t care about the
structure of the image, that is, whether two given pixels are
close to one another or far apart, convolutional layers won’t add
any value. (However, the fact that the three color components
of a given pixel are naturally grouped together is something you
might want to build into the structure of your network, rather than
having your network learn.)
So you might try a simple, single-hidden-layer network – something
like this:
Have 512 hidden neurons. So you start with a fully-connected
layer (nn.Linear (196608, 512)), followed by nonlinear
activations (e.g., nn.ReLU() or nn.Tanh()), followed by a
second fully-connected layer with a single output
(nn.Linear (512, 1)). You would most likely use the square
of the difference between your predicted and known color entropy
as your loss function (nn.MSELoss()).
My gut tells me that color entropy might be hard for a network to
learn. As a practical matter, if what you need is the color entropy
of an image, it will be much more satisfactory – faster, more
accurate – simply to calculate it directly, rather than predict it
with a network.
Good luck.
K. Frank
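For illustration, a minimal sketch of the network described above (assuming 256x256 RGB inputs flattened to 196608 values):
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(196608, 512),  # 256*256*3 flattened inputs -> 512 hidden neurons
    nn.ReLU(),               # nonlinear activation
    nn.Linear(512, 1),       # single regression output
)
loss_fn = nn.MSELoss()  # squared difference between predicted and known entropy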
|
st84695
|
Afternoon,
I’m hoping someone can help me.
I am using a DataLoader to load 16 1x2048 vectors into PyTorch. The problem is that when I view what the DataLoader is loading, I find that the values have all been shuffled up. I have a tensor which is 16x1x2048 and all the values are present, just not in the correct order (to quote the late, great Eric Morecambe :-)…)
Does anybody know how I can extract 16 1x2048 vectors which maintain the order in the original file, please?
Thanks for your help
Chaslie
|
st84696
|
There’s an argument to DataLoader() called shuffle; just set shuffle = False. Hope this helps.
|
st84697
|
Hi Prerna,
I tried this and it doesn’t affect the makeup of the vector. I would like the order of the vector to be maintained, i.e.:
(1 2 3 4 5 6 7 8)
but pytorch dataloader is loading the vector as:
(1 4 2 6 7 3 8 5)
|
st84698
|
That’s strange, it shouldn’t shuffle along the feature dimension like that. Can you post some code?
Edit : Follow up question, along what dimension is the dataloader shuffling the values? Batch or feature?
|
st84699
|
Hi Prerna,
I am loading the data from a 42000x2048 array and then creating a 16x1x2048 tensor called data; when I view this, it has lost the order from the 42000x2048 array
train_dataset = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=BATCHSIZE, shuffle=False)
for epoch in range(num_epochs):
    for data, target in train_loader:
        # print("data=", data.shape)
        np.savetxt(f, data.numpy())
        np.savetxt(f2, target.numpy())
        data = data.unsqueeze(1)
        print("data_sq=", data.shape)
        data = data.cuda()
I think it’s shuffling along the feature dimension, though looking at the data I would guess that it’s generating a 16x1x2048 tensor of numbers randomly taken from the X_train array
|
st84700
|
I think the issue might be that it is shuffling when you are reshaping it, because the data loader itself never shuffles along the feature dimension.
|
st84701
|
So was TensorDataset() a custom dataset that you defined?
Edit: Looks like it’s a predefined Dataset class; I think you can find the source code online for it.
|
st84702
|
Yes, X_train is the 42000x2048 input data and y_train is the labels, a 1x42000 array numbering from 1 to 42000
|
st84703
|
Here - https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset
|
st84704
|
OK, so being a bit slow (Friday night over here!): is the problem with the y_train array calling numbers 1 to 42000, and if so, what’s the best way to solve it? Should I leave y_train empty?
|
st84705
|
Okay, can you try this in the for loop:
ctr = 0
for idx, (batch, target) in enumerate(train_loader):
    if ctr == 0:
        print(batch)
    ctr += 1
Now check if it shuffles.
I did this and it didn’t shuffle along the feature dimension for me.
|
st84706
|
Actually even when I do
for (batch, target) in train_loader:
It doesn’t shuffle it.
I’m not sure what the issue might be in your case.
|
st84707
|
It’s still shuffling. I wish I could view the contents of:
train_dataset = TensorDataset(X_train, y_train)
and
train_loader = DataLoader(train_dataset, batch_size=BATCHSIZE, shuffle=True)
|
st84708
|
You can index the Dataset directly and check for equal values:
x = torch.randn(42, 2048)
y = torch.randn(42, 1)
dataset = TensorDataset(x, y)
for idx in range(x.size(0)):
    data, target = dataset[idx]
    assert (data==x[idx]).all(), "data shuffled"
    assert (target==y[idx]).all(), "target shuffled"
chaslie:
X_Train is the 42000x2048 input data and y_train is the labels 1x42000 array numbering from 1 to 42000
That would raise an exception, since the number of samples is different for X_Train and y_train (42000 vs. 1). Could you check it again and post the code showing how you are creating the [16, 1, 2048]-shaped tensor?
|
st84709
|
Hi Ptrblck,
apologies for the delay in getting back to you. I ran your code on a sample deck, and by comparing the output of the print data statement to the print x statement, I can see that the tensor data doesn’t get shuffled…
from __future__ import print_function
import torch
import torch.nn.parallel
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
x = torch.randn(42, 2048)
y = torch.randn(42, 1)
print("x=",x)
#print("y=",y)
dataset = TensorDataset(x, y)
train_loader = DataLoader(dataset, batch_size=1, shuffle=True)
for idx in range(x.size(0)):
    data, target = dataset[idx]
    assert (data==x[idx]).all(), "data shuffled"
    assert (target==y[idx]).all(), "target shuffled"
for epoch in range(1):
    for (data, target) in train_loader:
        #print("data=",data.shape)
        print("data", data)
        #np.savetxt(f2,target.numpy())
however when I repeat the same exercise with my dataset the contents of data are not in the same order as the input. The code is:
X_train=train_array.transpose([1,0,2]).reshape(42000,2048) #the input array size is [42000,2048,1]
X_train=torch.Tensor(X_train)
y_train=np.arange(1,42001,1)
y_train=y_train.reshape(42000,1)
y_train=torch.from_numpy(y_train)
train_dataset = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=BATCHSIZE, shuffle=True)
for idx in range(X_train.size(0)):
    data, target = train_dataset[idx]
    assert (data==X_train[idx]).all(), "data shuffled"
    assert (target==y_train[idx]).all(), "target shuffled"
for epoch in range(num_epochs):
    for (data, target) in train_loader:
        #print("data=",data.shape)
        np.savetxt(f, data.numpy())
        np.savetxt(f2, target.numpy())
        data = data.unsqueeze(1)
        print("data_sq=", data.shape)
I’m beginning to think that I have done something silly, but I really can’t see what I have done
Chaslie
|
st84710
|
Currently, what is state of the art for running PyTorch models on WASM for inference (not training)?
Is there something similar to https://www.tensorflow.org/js/guide/layers_for_keras_users, but for PyTorch instead?
|
st84711
|
When I look at GroupNorm’s definition in PyTorch 1.1.0, I cannot find the decorators called “weak_module” and “weak_script_method”. I really want to know what they do, since I have to use GroupNorm in PyTorch 0.3.0, where it is not supported.
Could anyone give me some help?
|
st84712
|
Hello, these are flags introduced for the torch.jit module, indicating that it should create a static computation graph. So if you are just trying to implement GroupNorm, you shouldn’t worry about them.
|
st84713
|
How should I initialize the weights of nn.ConvTranspose2d? Like nn.Conv2d? Is there anything special about this in PyTorch?
Another question: does PyTorch require manual weight initialization, or do PyTorch layers initialize automatically? That is, if I don’t initialize the weight or bias, are they all zero or random values?
for m in self.modules():
    if isinstance(m, nn.Conv2d):
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))
    elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1)
        m.bias.data.zero_()
Inference: the bias will automatically be initialized with random values. Is that right?
The work I want to do is to build an FCN based on a caffemodel FCN. Now I want to initialize the network as follows.
My initialization:
def weights_initG(m):
    for p in m.modules():
        if isinstance(p, nn.Conv2d):
            n = p.kernel_size[0] * p.kernel_size[1] * p.out_channels
            p.weight.data.normal_(0, math.sqrt(2. / n))
        elif isinstance(p, nn.BatchNorm2d):
            p.weight.data.normal_(1.0, 0.02)
            p.bias.data.fill_(0)
        elif isinstance(p, nn.ConvTranspose2d):
            n = p.kernel_size[1]
            factor = (n + 1) // 2
            if n % 2 == 1:
                center = factor - 1
            else:
                center = factor - 0.5
            og = np.ogrid[:n, :n]
            weights_np = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
            p.weight.data.copy_(torch.from_numpy(weights_np))
Question one: how should I initialize the bias of conv and deconv layers?
Question two: since PyTorch image data is in [0,1] and Caffe image data is in [0,255], does the weight initialization method differ from Caffe’s?
|
st84714
|
Have a look at the DCGAN example:
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)
netG.apply(weights_init)
it should work.
|
st84715
|
@chenyuntc Does PyTorch require manual weight initialization, or do PyTorch layers initialize automatically? I noticed there is .reset_parameters() in the base _ConvNd class, but I didn’t see where this function is called.
|
st84716
|
The parameters are initialized automatically. If you want to use a specific initialization strategy take a look at torch.nn.init. I’ll need to add that to the docs.
|
st84717
|
Hello, thank you very much for your help. May I ask another question: the above code uses the normal distribution to initialize the weights. If I want normally distributed values within a certain range, what can be done? Do I need to customize a distribution function?
Or is this called a truncated normal distribution?
|
st84718
|
OK, thank you! By the way, if I want to clamp to [-1, -0.1] and [0.1, 1], how do I do that?
|
st84719
|
Can anyone please explain how exactly this snippet works?
I’m not able to understand how the m.weight.data.normal_ line works,
and if I set bias=False while defining my network, is it necessary to call m.bias.data.fill_(0) explicitly again?
|
st84720
|
.normal_ fills the tensor inplace with values drawn from the normal distribution using the specified mean and standard deviation (docs).
If you set bias=False, you don’t have to and in fact cannot call m.bias.data, since bias will be None.
Note, that I would recommend to wrap the parameter manipulations in a torch.no_grad block and avoid using the .data attribute directly:
lin = nn.Linear(10, 10, bias=False)
with torch.no_grad():
    lin.weight.normal_(0.0, 1.0)
|
st84721
|
@ptrblck thank you! This helped me a lot
Just one question: what is the difference between using torch.no_grad() and using .data?
|
st84722
|
torch.no_grad will disable gradient calculation, so that all operations in this block won’t be tracked by Autograd.
Using the underlying .data attribute will most likely work in this case.
However, I consider it dangerous, as it might silently introduce errors in the gradient calculations.
|
st84723
|
Hi,
I am trying to develop a sequential model (using LSTMs) that has to predict several features at each time-step. I originally wanted to use NLLLoss as the loss function, but I am not sure how to handle such a multi-feature case.
The input of the model corresponds to the different one-hot encoded features concatenated together.
In the docs, it is said that NLLLoss can accept an input of the form (N,C,d1,d2,...,dK), where I assume the different d’s are dimensions you have to create for each of the features to predict? I tried it but it didn’t work.
Am I using the wrong loss function, or is there a proper way to use NLLLoss for this specific case?
I googled a lot, but am still lost …
|
st84724
|
Hello,
I have been trying to use PyTorch to speed up some simple embarrassingly parallel computations with little success. I am looking for some guidance as to how to speed up the following simple code. Any help would be very much appreciated.
The following functions are to create data to use in the simple example further below:
import numpy
import math
import torch
import pandas
import timeit
from timeit import default_timer as timer
def assetPathsCPU(S0, mu, sigma, T, nRows, nPaths):
    dt = T / nRows
    nudt = (mu - 0.5 * sigma**2) * dt
    sidt = sigma * math.sqrt(dt)
    increments = nudt + sidt * numpy.random.randn(int(nRows), int(nPaths))
    x = numpy.concatenate((math.log(S0) * numpy.ones((1, int(nPaths))), increments))
    pricePaths = numpy.exp(numpy.cumsum(x, axis=0))
    return pricePaths

def assetPathsGPU(S0, mu, sigma, T, nRows, nPaths, dtype, device):
    dt = T / nRows
    nudt = (mu - 0.5 * sigma**2) * dt
    sidt = sigma * torch.sqrt(dt)
    pricePaths = torch.exp(torch.cumsum(torch.cat((torch.log(S0) * torch.ones(1, nPaths, dtype=dtype, device=cuda0),
        torch.distributions.Normal(nudt, sidt).sample((nRows, nPaths)).squeeze()), dim=0), dim=0))
    return pricePaths
These are the simple functions - one for the CPU and one for the GPU:
def emaNPathsCPU(pricePaths, lookback):
    # find T and nPaths
    T, nPaths = pricePaths.shape
    # create output array
    ema = numpy.zeros([int(T), int(nPaths)])
    # compute the smoothing constant
    a = 2.0 / (lookback + 1.0)
    # iterate over each price path
    for pathIndex in range(0, int(nPaths)):
        ema[0, pathIndex] = pricePaths[0, pathIndex]
        # iterate over each point in time and compute the EMA
        for t in range(1, T):
            ema[t, pathIndex] = a * (pricePaths[t, pathIndex] - ema[t-1, pathIndex]) + ema[t-1, pathIndex]
    return ema

def emaNPathsGPU(pricePaths, lookback, dtype, device):
    # find T and nPaths
    T, nPaths = pricePaths.shape
    # create output array
    #ema = numpy.zeros([int(T), int(nPaths)])
    ema = torch.zeros(T, nPaths, dtype=dtype, device=device)
    # compute the smoothing constant
    a = 2.0 / (lookback + 1.0)
    ema[0, :] = pricePaths[0, :]
    # iterate over each price path
    for pathIndex in range(nPaths):
        # iterate over each point in time and compute the EMA
        for t in range(1, T):
            ema[t, pathIndex] = a * (pricePaths[t, pathIndex] - ema[t-1, pathIndex]) + ema[t-1, pathIndex]
    return ema
Here is how I call them:
cuda0 = torch.device('cuda:0')
dtype=torch.float64
lookbackGPU=torch.tensor(90.0,dtype=dtype,device=cuda0)
# start timer (EMA)
ts_emaGPU = timer()
# EMA on paths
emaGPU=emaNPathsGPU(pricePathsGPU[:,0:1000],lookbackGPU,dtype,cuda0)
# end timer (prices)
te_emaGPU = timer()
# compute time elapsed
timeElapsed_emaGPU = te_emaGPU - ts_emaGPU
# display time elapsed
print('EMA Time Elapsed (GPU): ' + str(timeElapsed_emaGPU))
pricePathsGPU_CPU=pricePathsGPU.cpu().numpy()
lookbackCPU=90.0
# start timer (EMA)
ts_emaCPU = timer()
# EMA on paths
emaCPU=emaNPathsCPU(pricePathsGPU_CPU[:,0:1000],lookbackCPU)
# end timer (EMA)
te_emaCPU = timer()
# compute time elapsed
timeElapsed_emaCPU = te_emaCPU - ts_emaCPU
# display time elapsed
print('EMA Time Elapsed (CPU): ' + str(timeElapsed_emaCPU))
The link to the example notebook is here:
https://github.com/dnokes/pytorch_examples/blob/master/simpleGpuVsCpuExample.ipynb
Why is the GPU version so slow?
I was expecting the ‘path’ loop to run in parallel because it is completely independent. The utilization on the Titan GPU is about 25% while it is running, but is very slow.
What can I do to make the PyTorch version faster?
Any help would be greatly appreciated.
Many thanks,
Derek
|
st84725
|
The parallelism offered by the GPU is primarily per-op parallelism (there is something to be said about cuda streams, but that probably isn’t what you want to do here). The best way to speed this up is to vectorize over your Monte Carlo paths (possibly serializing over time).
As you appear to be looking into financial mathematics: I did something like that for the Hull-White model a long time ago (when we had Variable…).
Best regards
Thomas
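For illustration, a minimal sketch of that vectorization (assuming the tensors from the question): compute the EMA for all paths with one tensor op per time step, serializing only over time.
def emaNPathsGPUVectorized(pricePaths, lookback):
    T, nPaths = pricePaths.shape
    a = 2.0 / (lookback + 1.0)
    ema = torch.empty_like(pricePaths)
    ema[0] = pricePaths[0]
    for t in range(1, T):  # serialize over time, parallelize over all paths
        ema[t] = a * (pricePaths[t] - ema[t - 1]) + ema[t - 1]
    return ema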
|
st84726
|
Thank you Thomas for your response. The referenced notebook is helpful.
I was hoping that PyTorch would function in a way similar to numba.
|
st84727
|
Numba is great, but you have to think about the order of your computations nonetheless.
What you would need is the “holy grail of all polyhedral optimizers” or so, which would rewrite the order of your entire computation. There are projects like TVM or TensorFlow MLIR that are working on pushing some boundaries, but so far, spending a few thoughts yourself is the best way to get these things faster.
Best regards
Thomas
|
st84728
|
I’m trying to convert the model in the run_classifier.py example from the pytorch-pretrained-BERT repo to ONNX format but run into a problem with a tensor size mismatch. The details are, hopefully, covered in this case on Stack Overflow.
I notice that libtorch with the PyTorch JIT is another option… perhaps they are the path forward? I would like to get this into a c++ server-side component. I’ve tried asking in the jit category but they appear to be solely focused on existing JIT issues.
Thanks!
|
st84729
|
Did you manage to get your model converted? I’ve been trying to convert the bert-base-uncased model to ONNX, but I’m hitting what appears to be a memory-related error: builtins.RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0. Looking into it with free -m I noticed that it happens with as little as 6GB of RAM usage (running on cpu), which seems very odd, since the machine has 24GB installed. Any help appreciated.
For reference, my conversion is super straightforward:
output_dir = '/home/james/src/pytorch-pretrained-BERT/outputs'
max_seq_length = 128
num_labels = 128
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased')
model_state_dict = os.path.join(output_dir, 'pytorch_bert.bin')
model.load_state_dict(torch.load(model_state_dict))
print('Model loaded: ', model_state_dict)
# Save ONNX
msl = max_seq_length
dummy_input = torch.randn(1, msl, msl, msl, num_labels).long()
output_onnx_file = os.path.join(output_dir, "bert_test.onnx")
torch.onnx.export(model, dummy_input, output_onnx_file)
|
st84730
|
This question has already been asked before (DistributedDataParallel deadlock) but it does not seem active anymore.
I am running DistributedDataParallel and both with nccl and gloo backends on Pytorch 1.1 my code freezes on the following line:
model = torch.nn.parallel.DistributedDataParallel(model)
I am not using built-in dataloader so problem with workers not being set to 0 should not be a problem.
Is there any other place where this problem might be coming from?
|
st84731
|
Same problem here, with PyTorch v1.0.1 and the NCCL backend. I am running one of the distributed examples from Ignite: https://github.com/pytorch/ignite/blob/master/examples/mnist/mnist_dist.py
Mine freezes on the line:
model = DistributedDataParallel(model, [args.gpu])
Where model is an instance of LeNet5 and args.gpu is 0.
|
st84732
|
I have opened up a new issue on this to discuss since I am facing the same problem with DDP.
github.com/pytorch/pytorch
Issue: Script freezes with no output when using DistributedDataParallel (opened by shoaibahmed on 2019-07-13, module: distributed)
🐛 Bug: I was trying to evaluate the performance of the system with static data but different models, batch sizes and AMP...
st84733
|
Does anyone know what the issue is please?
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-130-4a1832287d27> in <module>()
1 model, criterion, optimizer = build_model()
----> 2 train_model(config, model, criterion, optimizer)
3
4 if config.config_type == 'Q1_4' or config.config_type == 'Q1_5':
5 dropout_validates = []
<ipython-input-127-85332b119cd8> in train_model(config, model, criterion, optimizer)
24 optimizer.zero_grad()
25 # compute loss
---> 26 loss = criterion(model(inputs), targets)
27 loss.backward()
28 optimizer.step()
/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
355 result = self._slow_forward(*input, **kwargs)
356 else:
--> 357 result = self.forward(*input, **kwargs)
358 for hook in self._forward_hooks.values():
359 hook_result = hook(self, input, result)
<ipython-input-123-94533970db9c> in forward(self, x)
9
10 def forward(self, x):
---> 11 x = nn.ReLU(self.fc1(x))
12 x = nn.ReLU(self.fc2(x))
13 x = nn.ReLU(self.dropout(x))
TypeError: 'tuple' object is not callable
My model is defined like this:
class MLPb(nn.Module):
    def __init__(self):
        super(MLPb, self).__init__()
        self.config = config
        self.fc1 = nn.Linear(784, 600),
        self.fc2 = nn.Linear(600, 200),
        self.dropout = nn.Dropout(p=0.5), #last layer dropout
        self.fc3 = nn.Linear(200, 10)
    def forward(self, x):
        x = nn.ReLU(self.fc1(x))
        x = nn.ReLU(self.fc2(x))
        x = nn.ReLU(self.dropout(x))
        x = nn.Softmax(self.fc3(x))
        return x
EDIT: the model was defined with commas between the layers
|
st84734
|
You should initialize the nn.ReLU modules in your __init__ function and just call them in forward or use the functional API: x = F.relu(x). Now you are creating a nn.ReLU module with the arguments self.fc1(x).
EDIT: Did fixing the comma issue solve your problem? It still gives an error for me for the aforementioned reason.
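For reference, a corrected sketch along those lines (applying dropout directly rather than wrapping it in an activation):
import torch.nn as nn
import torch.nn.functional as F

class MLPb(nn.Module):
    def __init__(self):
        super(MLPb, self).__init__()
        self.fc1 = nn.Linear(784, 600)   # no trailing commas, so these stay modules
        self.fc2 = nn.Linear(600, 200)
        self.dropout = nn.Dropout(p=0.5)
        self.fc3 = nn.Linear(200, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        return F.softmax(self.fc3(x), dim=1)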
|
st84735
|
I’m no expert in distributed systems and CUDA. But there is one really interesting feature that PyTorch supports, which is nn.DataParallel and nn.DistributedDataParallel. How are they actually implemented? How do they separate common embeddings and synchronize data?
Here is a basic example of DataParallel.
import numpy as np
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 10)
        self.rnn = nn.Linear(10, 10)

    def forward(self, x):
        x = self.embedding(x)
        x = self.rnn(x)
        return x

model = nn.DataParallel(Model())
model(torch.from_numpy(np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)).cuda()).cpu()
PyTorch can split the input and send them to many GPUs and merge the results back.
How does it manage embeddings and synchronization for a parallel model or a distributed model?
I wandered around PyTorch’s code but it’s very hard to know how the fundamentals work.
I’ve posted this question on StackOverflow.
|
st84736
|
I am not sure about DistributedParallel but in DataParallel each GPU gets a copy of the model, so, the parallelization is done via splitting the minibatches, not the layers/weights.
Here’s a sketch of how DataParallel works, assuming 4 GPUs where GPU:0 is the default GPU.
[Figure: dataparallel.png – sketch of the DataParallel steps across 4 GPUs]
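For reference, a sketch of those steps using the functional primitives that DataParallel is built on (this mirrors the official multi-GPU tutorial):
import torch.nn as nn

def data_parallel(module, inputs, device_ids, output_device=None):
    if output_device is None:
        output_device = device_ids[0]
    replicas = nn.parallel.replicate(module, device_ids)        # step 1: copy the model to each GPU
    scattered = nn.parallel.scatter(inputs, device_ids)         # step 2: split the minibatch
    replicas = replicas[:len(scattered)]
    outputs = nn.parallel.parallel_apply(replicas, scattered)   # steps 3-4: parallel forward passes
    return nn.parallel.gather(outputs, output_device)           # step 5: gather on the default GPU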
|
st84737
|
Dear @rasbt,
I’m sorry, but I can’t make out the handwritten words in step 5. Could you please re-type them here? Thanks a lot.
|
st84738
|
Haha no problem
Compute the loss with respect to the network outputs on the default GPU and scatter the losses back to the individual GPUs to compute the gradients with respect to the leaf nodes
|
st84739
|
ranger:
Can every network be ported for multi gpu if the model fits on single gpu?
Yes, I believe so. If your model does not fit on a single GPU though, there’s even another method that you can use (instead of DataParallel). E.g.,
class Network(nn.Module):
    def __init__(self, split_gpus):
        super().__init__()
        self.module1 = (some layers)
        self.module2 = (some layers)
        self.split_gpus = split_gpus
        if self.split_gpus:
            self.module1.to("cuda:0")
            self.module2.to("cuda:1")

    def forward(self, x):
        x = self.module1(x)
        if self.split_gpus:
            x = x.to("cuda:1")
        return self.module2(x)
|
st84740
|
ranger:
How to parallelise a pytorch model on GPU?
Why gather all outputs onto GPU 0 in step 5? It merely calculates the distance of each output_i to target_i, so why not calculate it within each GPU separately?
One possible reason: all targets (ground truth) are stored only on GPU 0. However, the targets could also be scattered along with the minibatch, like the data chunks in step 1.
st84741
|
Why gather all outputs onto GPU 0 in step 5? It merely calculates the distance of each output_i to target_i, so why not calculate it within each GPU separately?
It’s more like an implementation side-effect. In fact, you can do what you proposed, but you would have to rewrite your code then to compute the loss inside the model – usually, the loss is computed in the training loop based on the outputs of the model.
I mean, it’s not a problem at all to do what you propose, but it’s a bit less convenient, because when you decide to use DataParallel (occasionally), you would have to modify your model code as well.
Also, computing the loss is super cheap, so there is not much speed gain in distributing it across the devices, I suppose.
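As an illustrative sketch of the rewrite described above (a hypothetical wrapper, assuming a classification loss):
import torch.nn as nn
import torch.nn.functional as F

class ModelWithLoss(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x, targets):
        logits = self.model(x)
        # each replica computes the loss on its own chunk of the batch;
        # unsqueeze so DataParallel can gather one value per GPU
        return F.cross_entropy(logits, targets).unsqueeze(0)

# wrapped = nn.DataParallel(ModelWithLoss(model))
# loss = wrapped(inputs, targets).mean()
# loss.backward()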
|
st84742
|
Sorry, I can’t agree with this explanation about saving the distributed cluster’s compute energy, because transporting data is not cheaper than computing within the chip, in my opinion.
Is there a link or chart to support this?
|
st84743
|
Sorry, I can’t agree with this explanation about saving the distributed cluster’s compute energy, because transporting data is not cheaper than computing within the chip, in my opinion.
Is there a link or chart to support this?
I don’t have a chart handy, but when I ran the code on my workstation (only 4x 1080Tis), I didn’t notice any difference between letting the GPU handle the loss computation versus distributing the results, calculating the results on the individual GPUs, and then gathering them. Either way, I get the same ~3x speedup when using 4 instead of 1 GPU. This is a small hardware setup, and the scenario is probably different for large datacenters, like you said… Also, the way I implemented it,
model = MyModel(num_features=num_features, num_classes=num_classes)
if torch.cuda.device_count() > 1:
    print("Using", torch.cuda.device_count(), "GPUs")
    model = nn.DataParallel(model)
and then
for epoch in range(NUM_EPOCHS):
    model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):
        features = features.to(DEVICE)
        targets = targets.to(DEVICE)
        ### FORWARD AND BACK PROP
        logits, probas = model(features)
        cost = cost_fn(logits, targets)
        optimizer.zero_grad()
        cost.backward()
        ### UPDATE MODEL PARAMETERS
        optimizer.step()
Is basically such that I don’t have to rewrite my model when I decide to use DataParallel, which is most convenient for my use cases since I do not always run DataParallel.
|
st84744
|
Thanks for the detailing here.
How can we put the computation of the losses on another GPU (GPU:4) rather than GPU:0, which happens by default?
It would be great to have such a method, as many networks are designed to fill ~11 GB, and this holds you back while parallelizing such a network.
|
st84745
|
It doesn’t make sense. If the loss aggregation is the bottleneck, what’s the benefit of computing it on a particular GPU? The computation of an addition shouldn’t get faster because of the device.
|
st84746
|
Computation of the addition is not the roadblock. There is no space left for the computation on the GPU. That is the issue I am trying to solve.
I hope it’s clear now.
|
st84747
|
You actually can use a different device as default device for data parallelism. However, this must be one of the GPUs that is also used by DataParallel. You can set it via the output_device parameter.
Regarding using a GPU that is not wrapped by DataParallel, that’s currently not possible (see the discussion here: Uneven GPU utilization during training backpropagation)
|
st84748
|
I finally did the code reading on DataParallel part.
Blog – 5 Mar 19
How PyTorch implements DataParallel?
PyTorch can send batches and models to different GPUs automatically with DataParallel(model). How is it possible? I assume you know PyTorch uses a dynamic computational graph. And the PyTorch version is v1.0.1.
|
st84749
|
Hello,
Is there any analogous version of DataParallel in C++/Libtorch?
Whatever I do on an exported traced model, it always uses only a single GPU.
Refer to: "Automatic parallelization of models onto multiple GPUs"
|
st84750
|
As shown in the diagram you posted earlier on, “Step 5” says that “loss [is computed] with respect to network outputs on default GPU”. My question is, are losses computed for each network eventually averaged or summed over? I am not sure what exactly my loss function is printing.
|
st84751
|
I have no idea how to solve this problem.
I implemented torch.autograd.grad to get the gradient penalty loss, but this error just shows up again and again. Has anyone had the same problem?
Thanks,
Peter
|
st84752
|
Unfortunately, double backward for cudnn RNNs is not supported; there’s an upstream issue tracking this: https://github.com/pytorch/pytorch/issues/5261. The recommended way of doing what you want is writing custom RNNs with TorchScript: https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/
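For illustration, a minimal sketch of a scripted RNN step in the spirit of that blog post (shapes and names are illustrative; scripted ops decompose into autograd primitives, so double backward works):
from typing import List
import torch

@torch.jit.script
def rnn_loop(x, h, w_ih, w_hh, b):
    # x: (seq_len, batch, input_size), h: (batch, hidden_size)
    outputs = torch.jit.annotate(List[torch.Tensor], [])
    for t in range(x.size(0)):
        h = torch.tanh(x[t] @ w_ih.t() + h @ w_hh.t() + b)
        outputs.append(h)
    return torch.stack(outputs), h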
|
st84753
|
Hi, I’m completely new to PyTorch, having only experience with Tensorflow.
I have datasets with shape [?, 128, 2048], with ? being the number of data points (this number can change).
The image size is [128, 2048], and I have saved them in chunks of shape [500, 128, 2048].
I want to feed these data into the neural network using PyTorch.
I heard that there is a common way to do this in PyTorch that most people use, and I need some help with it…
I think that putting thousands of [128, 2048] arrays in RAM at once will trigger a memory error, so I want a way to avoid this problem.
Thanks in advance.
|
st84754
|
Hi P_C,
If I understand correctly, you have several .npy files saved, each with five hundred 128x2048 arrays, and would like to use all of these .npy files in a dataset? Have you tried something akin to this link?
Best,
Adam
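For illustration, a hypothetical lazy-loading Dataset along those lines, assuming each .npy file holds exactly 500 arrays (class and parameter names are illustrative):
import numpy as np
import torch
from torch.utils.data import Dataset

class NpyChunkDataset(Dataset):
    def __init__(self, paths, chunk_size=500):
        self.paths = paths          # list of .npy files, each of shape [500, 128, 2048]
        self.chunk_size = chunk_size

    def __len__(self):
        return len(self.paths) * self.chunk_size

    def __getitem__(self, idx):
        file_idx, local_idx = divmod(idx, self.chunk_size)
        # mmap_mode='r' reads only the requested slice from disk,
        # so the whole chunk never has to sit in RAM at once
        arr = np.load(self.paths[file_idx], mmap_mode='r')
        return torch.from_numpy(np.array(arr[local_idx]))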
|
st84755
|
I am new to using CUDA. I am using the following code for seeding:
use_cuda = torch.cuda.is_available()
if use_cuda:
    device = torch.device("cuda:0")
    torch.cuda.manual_seed(SEED)
    cudnn.deterministic = True
    cudnn.benchmark = False
Is it correct if I use this code for the network?
if use_cuda:
    net.cuda()
    net = torch.nn.DataParallel(net)
    cudnn.benchmark = True
I mean setting cudnn.benchmark to False for seeding and then to True for the network. Are they related?
|
st84757
|
cudnn.benchmark is a global option, so if you first disable and later enable it, it will be used again if you pass some input to your model, which might yield non-deterministic results.
|
st84758
|
Thank you for your response, @ptrblck. I am feeding input to my model; does this mean cudnn.benchmark should be True for the network?
|
st84759
|
If you would like to use the benchmark mode, then yes.
If activated, cudnn will perform some benchmarking internally using the current input shape and your model to determine the best performing algorithms to use for the operations.
This will most likely slow down the first iteration, but should generally yield better performance for the following iterations.
However, if you are dealing with varying input shapes, cudnn will benchmark each of them, thus slowing down your code.
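For illustration, since the flag is global, it is typically set once before training:
import torch.backends.cudnn as cudnn

cudnn.benchmark = True    # let cudnn benchmark algorithms per input shape
# for reproducibility instead:
# cudnn.benchmark = False
# cudnn.deterministic = True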
|
st84760
|
Hello. I am currently trying to implement asynchronous GPU pre-fetching with Pytorch’s DataLoader.
I am doing this because data I/O from disk is a major bottleneck in my task. I therefore want to have a dedicated process reading data from disk at all times, with no gaps.
The idea that I have now is to transfer the data to GPU in Pytorch Dataset’s __getitem__ method. Inside __getitem__, I have code that looks like the following.
tensor = tensor.to('cuda:1', non_blocking=True) (I have 2 GPUs)
Meanwhile, I am using Pytorch’s multiprocessing as follows.
multiprocessing.set_start_method(method='spawn').
This line comes before the DataLoaders are initialized.
Also, I have set DataLoader(num_workers=1).
It is my hope that this will allow the DataLoader to create a dedicated process that will transfer data from host (CPU) to device (GPU) while it is reading data from disk. Meanwhile, memory transfer should be non-blocking so that the worker created by the DataLoader can read data from disk at all times without having to wait while data is being transferred from CPU to GPU through the PCIe bus, which is a high-latency data transfer in its own right.
However, I am not at all sure whether this is how Pytorch is doing it. There are several things I do not know.
First, does non_blocking=True work inside DataLoader when multiprocessing is being used? I do not know how CUDA memcpyAsync works with multiprocessing. This is especially confusing since data is being transferred between processes too. I do not know when the Pytorch implementation of non_blocking=True is triggered to block.
Second, does Pytorch DataLoader wait until all processes are finished with their work at each iteration? If so, does it wait until data transfer is finished? I do not see why this should be the case, but I do not know if transferring tensors between processes causes such behavior.
I should mention that I am using a very simple custom collate_fn in Dataloader that just returns the input unpacked into its separate components, without transferring to shared memory, etc. I found that this was necessary for GPU pre-fetching on multiple GPUs.
Finally, if the method that I have proposed does not work, is there any way to implement having a dedicated disk reading process using Pytorch’s DataLoader? Hopefully it should not be interrupted in its data reading task.
Many thanks in advance for anyone who can help out. I know that this requires a lot of in-depth Pytorch and CUDA knowledge. But I think that many people would be interested since data I/O is often a problem for large datasets, especially for those of us who cannot afford specialized hardware at the necessary scale.
|
st84761
|
I am trying to run a topk of a 2d matrix
x = torch.tensor([[1, 2,9,87],[6, 32,8,1],[4,6,7,2],[3,6,2,6]])
print("x",x)
values, indices = torch.topk(x,k=2,dim=0)
print("values",values)
print("indices",indices)
output:
x tensor([[ 1, 2, 9, 87],
[ 6, 32, 8, 1],
[ 4, 6, 7, 2],
[ 3, 6, 2, 6]])
values tensor([[ 6, 32, 9, 87],
[ 4, 6, 8, 6]])
indices tensor([[1, 1, 0, 0],
[2, 2, 1, 3]])
but I am expecting output for values as:
values tensor([[ 6, 32, 8, 1],
[ 4, 6, 7, 2]])
|
st84762
|
The topk works column by column, and you get the top values there.
From what you write, do you expect the rows with the largest leading values? I’m afraid there isn’t an easy way to get those…
Depending on what you know about your values, just running topk on the first column and using those indices on x might work.
Best regards
Thomas
|
st84763
|
Thank you… I ended up doing the following:
I needed the last 3 elements of each array, so I used ranges:
x = torch.tensor([[1, 0,9,87,1],[6, 0,8,1,8],[4,0,7,2,2],[3,0,2,6,3]])
print("x",x)
values, indices = torch.topk(x,k=2,largest=True,dim=0)
print("indices",indices)
#2d data samples
newdataset = torch.zeros(2, 3)
print("newdataset",newdataset)
newdataset[0] = x[indices[0,0].item(),2:5]
newdataset[1] = x[indices[1,0].item(),2:5]
print("newdataset",newdataset)
|
st84764
|
For what it’s worth, a slightly more concise way to do this (that also avoids creating a new tensor, which I find can be cumbersome for autograd purposes) would be
>>> x = torch.tensor([
[1, 0, 9, 87, 1],
[6, 0, 8, 1, 8],
[4, 0, 7, 2, 2],
[3, 0, 2, 6, 3]
])
>>> _, indices = torch.topk(x[:, 0], 2)
>>> indices
tensor([1, 2])
>>> new_dataset = x[indices, 2:5]
>>> new_dataset
tensor([[8., 1., 8.],
[7., 2., 2.]])
|
st84765
|
It’s been months that I’ve been trying to use pack_padded_sequence with an LSTM. In my current setup, I’m working with data that is in a Python list of tensors of shape 2x(some variable length), such as torch.Size([2, 2466]).
I have a data loader with a custom collate_fn that is pretty much the same as found here: Use PyTorch’s DataLoader with Variable Length Sequences for LSTM/GRU, with the exception that I don’t sort in order of length. I’m using PyTorch 1.1.0.
def padSequence(batch):
    # each element in "batch" is a tuple (data, label)
    # Get each sequence and pad it
    sequences = [x[0] for x in batch]  # sorted_batch]
    sequences_padded = torch.nn.utils.rnn.pad_sequence(sequences, batch_first=True)
    # Also need to store the length of each sequence
    # This is later needed in order to unpad the sequences
    seq_lengths = [len(x) for x in sequences]
    # need to pad labels too
    labels = [x[1] for x in batch]
    labels_padded = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True)
    label_lengths = [len(x) for x in labels]
    return sequences_padded, seq_lengths, labels_padded, label_lengths
The data loader’s output with batch=5 gives:
tensor([[8.2430, 8.2793, 8.3186, ..., 0.0000, 0.0000, 0.0000],
[6.6331, 6.6288, 6.6292, ..., 0.0000, 0.0000, 0.0000],
[4.2062, 4.2408, 4.2675, ..., 3.4694, 3.4807, 3.4933],
[3.5047, 3.5154, 3.5313, ..., 0.0000, 0.0000, 0.0000],
[5.4685, 5.4138, 5.3533, ..., 0.0000, 0.0000, 0.0000]],
dtype=torch.float64)
And a list of sequence lengths: [474, 473, 1160, 533, 555]
Feeding it to torch.nn.utils.rnn.pack_padded_sequence(sequences_padded, seq_lengths, batch_first=True, enforce_sorted=False) I get an output something like this:
PackedSequence(data=tensor([4.2062, 5.4685, 3.5047, ..., 3.4694, 3.4807, 3.4933],
dtype=torch.float64), batch_sizes=tensor([5, 5, 5, ..., 1, 1, 1]), sorted_indices=tensor([2, 4, 3, 0, 1]), unsorted_indices=tensor([3, 4, 0, 2, 1]))
And then passing to my lstm I get the error:
RuntimeError: input must have 2 dimensions, got 1
Which I found similar in this post: Pytorch passing PackSequence argument to LSTM
Now I tried making my data in the same shape with my collate_fn:
def padSequence(batch):
    sequences = [x[0].reshape((x[0].shape[0], 1)) for x in batch]
    sequences_padded = torch.nn.utils.rnn.pad_sequence(sequences, batch_first=True)
    seq_lengths = [len(x) for x in sequences]
    labels = [x[1].reshape((x[1].shape[0], 1)) for x in batch]
    labels_padded = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True)
    label_lengths = [len(x) for x in labels]  # torch.LongTensor([len(x) for x in labels])
    return sequences_padded, seq_lengths, labels_padded, label_lengths
However, then I keep getting the error: RuntimeError: input.size(-1) must be equal to input_size. Expected 2, got 1
I’ve tried changing the shape of the original data, making it a list of tensors of shape (variable length) x 2, and so many other things, getting the same errors. At this point I don’t know what is wrong and where to go from here in understanding what input PyTorch wants for a simple LSTM with a packed padded sequence… I’ve read through so many posts and searched a lot these past few months with no help at all. Any direction would help.
Including this if could give any clues:
input_size = 1
hidden_size = 1
output_size = 1
num_layers = 2
num_classes = 4
class Model(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(Model, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, bidirectional=False)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x, X_lengths):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda()
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda()
        packed = torch.nn.utils.rnn.pack_padded_sequence(x, X_lengths, batch_first=True, enforce_sorted=False)
        # Forward propagate LSTM
        X, _ = self.lstm(packed, (h0, c0))
        #X, _ = torch.nn.utils.rnn.pad_packed_sequence(X, batch_first=True)
        out = self.fc(X)  #[:, -1, :])
        return nn.functional.softmax(out, dim=2)
Thanks!
|
st84766
|
Turns out I didn’t know how to assign variables to cuda or cpu and things got all out of whack: my packed padded sequence was not assigned to a cuda device. Here is a simple example I used to test things that helped me figure it out.
import torch
import torch.nn.utils.rnn as rnn_utils
a = torch.Tensor([[1], [2], [3]])
b = torch.Tensor([[4], [5]])
c = torch.Tensor([[6]])
d = torch.Tensor([[7],[8],[9],[10]])
batch = [a,b,c,d]
padded= rnn_utils.pad_sequence(batch, batch_first=True)
sorted_batch_lengths = [len(x) for x in batch]  # lengths must come from the unpadded sequences, not the padded tensor
packed = rnn_utils.pack_padded_sequence(padded, sorted_batch_lengths, batch_first=True, enforce_sorted=False).cuda()
lstm = torch.nn.LSTM(input_size=1, hidden_size=3, batch_first=True).cuda()
lstm(packed)
Hopefully this will help someone else out in the future so you don’t spend months stuck.
|
st84767
|
I have a binary classification problem with imbalanced data (1:17 ratio). I want to implement early stopping, but I’m not sure which metric value to use as the decider. Actually, if you look at the graph attached, you’ll see accuracy and specificity are similar and sensitivity is the opposite. MCC is pretty low, and I’m wondering if I should just use MCC as the decider for early stopping. Or loss? Maybe all hope is not lost and I could train for many more epochs? And how many epochs should early stopping check and roll back on? Please help, I’m new to early stopping. The plots below represent validation scores for each epoch, trained with unbalanced data and validated on unbalanced data.
When I train with balanced data (undersampled) and validate with unbalanced data, I get these scores:
Please help with some advice. Thank you.
|