st80068 | Solved by tom in post #2
I think your basic timing setup makes sense (congrats!).
You’d have to figure in things such as batch size, number of synchronizations during training, etc.
For MNIST, the main thing might not be the actual compute, but the latency of getting things to the GPU and getting results back to the CPU. If y… |
st80069 | I think your basic timing setup makes sense (congrats!).
You’d have to figure in things such as batch size, number of synchronizations during training, etc.
For MNIST, the main thing might not be the actual compute, but the latency of getting things to the GPU and getting results back to the CPU. If you then have the same batch size, you would expect proportional results regardless of the extra computation the backward takes.
Best regards
Thomas |
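As a concrete illustration of the synchronization point mentioned above, here is a minimal timing sketch; model and data are placeholders for whatever network and batch are being measured, and a CUDA device is assumed:
import time
import torch

torch.cuda.synchronize()              # make sure previously queued work is done
start = time.time()
output = model(data)                  # forward pass (kernels are queued asynchronously)
loss = output.sum()
loss.backward()                       # backward pass
torch.cuda.synchronize()              # wait for all queued GPU kernels to finish
print('elapsed: {:.4f}s'.format(time.time() - start))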
st80070 | I just updated my PyTorch to 1.3 and it takes a long time (5~10 mins) to call cuda() on my quite large model. Before the update, it was almost instantaneous. I am using a Titan X Pascal with cudatoolkit 10.1.168, if that helps.
Once the model is loaded, the training itself seems fine. |
st80071 | I am also experiencing this issue. I posted an issue to the pytorch github. The quick fix is to downgrade to CUDA 10.0
conda install -c pytorch pytorch cudatoolkit=10.0 |
st80072 | not entirely sure why yet, but we’re looking into it.
tracking in: https://github.com/pytorch/pytorch/issues/27807 |
st80073 | I am trying to experiment with low-precision training - INT8. Is there support for this in PyTorch? Also, what quantization methods are supported? |
st80074 | Training with int8 - no chance due to numerical stability limitations I guess, but inference on int8 is very interesting. |
st80075 | PyTorch does not support efficient INT8 scoring, and if you do not have Volta you will not see any speedup for either training or scoring in fp16. If you want fast scoring in INT8, consider using TensorRT; you can get up to 3x faster scoring on ResNet-like nets in INT8 with “slightly” lower accuracy |
st80076 | I am not aware of any native 8 bit or lower training, or for that matter, inference, as compared with something like tflite, which only supports it in specific instances. Partially I assume this has to do with the fact that there is no canonical method for quantization. There are a number of implementation examples though, see, e.g., https://github.com/eladhoffer/quantized.pytorch, or Glow. |
st80077 | If you look at the github issues or PRs or even the git tree’s test directory, you’ll find there is good progress towards a comprehensive solution of the various quantisation strategies.
Best regards
Thomas |
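For reference, a minimal inference-only sketch of the dynamic quantization API that landed in PyTorch 1.3 (the toy model below is just an illustration; INT8 training is still not covered by this):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
# replaces supported modules (here nn.Linear) with INT8 dynamically quantized versions
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
out = qmodel(torch.randn(1, 128))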
st80078 | I have tried to transform my PyTorch model into an ONNX model and then into a TensorRT model, but I hit an unexpected error (using yolov3.onnx downloaded from the official web). It said “ERROR: Network must have at least one output”. Have you ever met this problem? |
st80079 | Hello, I am downloading a model via Dropbox url and saving the data as a system string. In order to load the model into Pytorch I have to convert the data from string to byte data. When I use .encode() with ‘UTF-8’ codec to convert the string data to byte data and try to load it into BytesIO I get UnpicklingError: invalid load key, ‘<’
I wish I could just use Python request and use the method .content to get the raw byte info but I am stuck with string data on the platform I am using.
Any other ideas on how to convert string data to byte?
data_str = self.Download("https://www.dropbox.com/s/examplepath/checkpoint.pth?dl=1")
data_byte = data_str.encode()
self.agent.local.load_state_dict(torch.load(BytesIO(data_byte), map_location=lambda storage, loc: storage))
UnpicklingError: invalid load key, ‘<’ |
st80080 | Solved by tom in post #2
There are plenty of examples and utility functions for downloading and opening models in the utils for the torch model hub. The key likely is to download them as an octet stream into bytes. You are probably best off using the api rather than reinventing the implementation.
Best regards
Thomas |
st80081 | There are plenty of examples and utility functions for downloading and opening models in the utils for the torch model hub. The key likely is to download them as an octet stream into bytes. You are probably best off using the api rather than reinventing the implementation.
Best regards
Thomas |
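A minimal sketch of the “octet stream” idea, assuming the platform allows the requests library; the URL is the placeholder one from the question:
import io
import requests
import torch

url = "https://www.dropbox.com/s/examplepath/checkpoint.pth?dl=1"
resp = requests.get(url)
resp.raise_for_status()
# resp.content is raw bytes, so no string/encode round trip is needed
state_dict = torch.load(io.BytesIO(resp.content), map_location="cpu")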
st80082 | I have a few models defined in list:
G = [ SomeResNet().cuda() for i in range(no_models)]
I need to define the Adam optimizer for all these models, ie G[0], G[1],...
Is the following correct?
x = []
for i in range(no_models):
    x = itertools.chain(x, G[i].parameters())
optimizer = torch.optim.Adam(
    x, lr=0.01, betas=(0.8, 0.9)
) |
st80083 | If you want to get the parameters of all modules, you could also use an nn.ModuleList and just pass the .parameters() of this ModuleList to the optimizer.
Using this approach would also make sure to push all submodules to the CPU/GPU if needed. |
st80084 | Great idea! Thank you @ptrblck; this worked for me:
optimizer = torch.optim.Adam( nn.ModuleList(G).parameters(), lr= 0.01, betas=(0.8, 0.9) ) |
st80085 | Is there any existing implementation of hierarchical attention for image classification, or hierarchical attention for text, that could be applied to images, that does not use LSTM, or GRU, or RNN, only attention?
How should I approach this problem?
till now I have done something like this,
def conv_block(in_channels, out_channels, k):
    # set_trace()
    # inpp = nn.TransformerEncoderLayer(512, 2)
    return nn.Sequential(
        AttentionStem(in_channels, out_channels, kernel_size=k, padding=1),
        # nn.TransformerEncoder(inpp, 1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2)
    )

from IPython.core.debugger import set_trace

class Top(nn.Module):
    def __init__(self):
        super().__init__()
        # set_trace()
        self.encoder = conv_block(3, 16, 3)
        self.lin = nn.Linear(20, 10)
        self.childone = Second()
        self.childtwo = Second()

    def forward(self, x):
        # set_trace()
        # x = self.encoder(x)
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('top', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

class Second(nn.Module):
    def __init__(self):
        super().__init__()
        # set_trace()
        self.encoder = conv_block(16, 32, 3)
        self.lin = nn.Linear(20, 10)
        self.childone = Middle()
        self.childtwo = Middle()

    def forward(self, x):
        # set_trace()
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('middle', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

class Middle(nn.Module):
    def __init__(self):
        super().__init__()
        # set_trace()
        self.encoder = conv_block(32, 64, 1)
        self.lin = nn.Linear(20, 10)
        self.childone = Bottom()
        self.childtwo = Bottom()

    def forward(self, x):
        # set_trace()
        a = self.childone(self.encoder(x))
        b = self.childtwo(self.encoder(x))
        # print('middle', a.shape, b.shape)
        out = torch.cat((a, b), dim=-1)
        return self.lin(out)

# class AboveBottom(nn.Module):
#     def __init__(self):
#         super().__init__()
#         # set_trace()
#         self.encoder = conv_block(64, 128, 1)
#         self.lin = nn.Linear(20, 10)
#         self.childone = Bottom()
#         self.childtwo = Bottom()
#     def forward(self, x):
#         # set_trace()
#         a = self.childone(self.encoder(x))
#         b = self.childtwo(self.encoder(x))
#         # print('middle', a.shape, b.shape)
#         out = torch.cat((a, b), dim=-1)
#         return self.lin(out)

class Bottom(nn.Module):
    def __init__(self):
        super().__init__()
        # set_trace()
        self.encoder = conv_block(64, 128, 1)
        self.lin_one = nn.Linear(512, 10)

    def forward(self, x):
        # set_trace()
        # print('bottom', x.shape)
        out = self.encoder(x)
        return self.lin_one(out.view(out.size(0), -1))

model = Top()
is this a correct way to create a hierarchy?
where AttentionStem is from
github.com/leaderj1001/Stand-Alone-Self-Attention/blob/master/attention.py

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import math

class AttentionConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, groups=1, bias=False):
        super(AttentionConv, self).__init__()
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.groups = groups

        assert self.out_channels % self.groups == 0, "out_channels should be divided by groups. (example: out_channels: 40, groups: 4)"

        self.rel_h = nn.Parameter(torch.randn(out_channels // 2, 1, 1, kernel_size, 1), requires_grad=True)
        # (file truncated) |
st80086 | Typically CNNs have decreasing spatial resolution, so the typical thing would be to use some of the resolution levels as hierarchy levels. The next thing is how to formulate the attention. The classic K. Xu et al.: Show, attend and tell uses “positional” attention masks while Lu et al.: Knowing when to look have a query-based attention.
It would be interesting to hear about your results and experiences.
Best regards
Thomas |
st80087 | I have a batch of upper triangular matrices of shape (B, N, N) and I want to calculate a batch of positive definite matrices (covariance matrices) using the Cholesky decomposition, such that each matrix in the resultant batch comes from the corresponding upper triangular matrix in the given batch. |
st80088 | Hi Bhavya!
Bhavya_Bhatt:
I have a batch of upper triangular matrices of shape (B, N, N) and I want to calculate a batch of positive definite matrices (covariance matrices) using the Cholesky decomposition, such that each matrix in the resultant batch comes from the corresponding upper triangular matrix in the given batch.
Let me say what I think your problem is:
You have a batch (batch-size B) of NxN upper triangular matrices
that are stored as full NxN matrices, for which the elements in
the lower triangle are zero.
We think of these matrices as having come from the Cholesky
decomposition of symmetric-positive-definite (SPD) matrices,
and you wish to recover those SPD matrices as explicit, full
NxN matrices.
Let S be the desired full SPD matrix, and L be its upper-triangular
Cholesky decomposition. Then, by definition, S = L x L-transpose.
torch.matmul() does matrix multiplication on a batch basis, and you
can use torch.transpose() to perform matrix transposition on a batch
basis by specifying the dimensions to swap.
So, letting l be your (B, N, N) batch of upper-triangular matrices,
s = torch.matmul (l, torch.transpose (l, 1, 2))
should give you s, the (B, N, N) batch of your SPD matrices.
Good luck.
K. Frank |
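A quick self-contained check of the batched formula above (random upper-triangular factors built with torch.triu, just for illustration):
import torch

B, N = 4, 3
l = torch.triu(torch.randn(B, N, N))             # batch of upper-triangular factors
s = torch.matmul(l, torch.transpose(l, 1, 2))    # batched L @ L-transpose
print(s.shape)                                   # torch.Size([4, 3, 3])
print(torch.allclose(s, s.transpose(1, 2)))      # symmetric -> True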
st80089 | hello:
I want to install PyTorch 1.1.0, but my Linux machine’s CUDA version is 8.0.44 (I checked it with nvcc --version).
I found that PyTorch 1.1.0 does not have .whl files that meet my need, so I have to install it from source (with GPU support).
So I want to ask whether I can install PyTorch 1.1.0 from source with NVIDIA CUDA 8.0 GPU acceleration support. I know that CUDA 8.0 is very old now.
yours sincerely,
MingFei Wang. |
st80090 | As far as I know, the min CUDA version is CUDA9 to install PyTorch from source.
Do you really need this older version or could you update to a newer one? |
st80091 | Can I do
with torch.no_grad():
    optimizer.zero_grad()
    loss.backward()
    optimizer.step() |
st80092 | No that won’t work, as this disables gradient calculation.
From the docs:
Context-manager that disabled gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward() . It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.
What is your intention? |
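To make the distinction concrete, a minimal sketch (model, criterion, data, target and val_data are placeholders): gradients stay enabled for the update step, and no_grad() is reserved for evaluation.
optimizer.zero_grad()
loss = criterion(model(data), target)   # graph is built here
loss.backward()                         # needs the graph, so no_grad() must NOT wrap this
optimizer.step()

with torch.no_grad():                   # inference only: no graph, less memory
    preds = model(val_data)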
st80093 | I just wanted to implement a dataloader on the fly, but I found a better solution without much headache. It’s a DataLoader setup from PyTorch that handles all the problems of creating inputs and targets when you provide a folder whose subfolders are the categories. |
st80094 | Yeah I know the one you are talking about - DL4J also has a similar dataloading capability if you ever need it. |
st80095 | Hi.
Can I compute matrix derivatives with PyTorch. I am programming some Gaussian processes, setting the hyperparameters by marginal likelihood and I was wondering if the derivatives can be taken automatically with PyTorch.
For instance a simple example would be computing the derivative of:
N = 100
val = teta**2 * numpy.eye(N)
The derivative is:
N = 100
val = 2 * teta * numpy.eye(N)
Thanks |
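Autograd can indeed do this; a minimal sketch for the toy example above (note that autograd returns the derivative contracted with grad_outputs rather than a full matrix-valued derivative, so summing over all elements of theta**2 * I gives 2 * theta * N here):
import torch

N = 100
theta = torch.tensor(0.5, requires_grad=True)
val = theta ** 2 * torch.eye(N)
grad, = torch.autograd.grad(val, theta, grad_outputs=torch.ones_like(val))
print(grad)    # 2 * theta * N = 100.0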
st80096 | https://github.com/pytorch/pytorch/issues/14367 also mentioned this problem.
Due to some limitation, -Wl,--whole-archive option cannot be used when linking. The executable binary generated fails to run, with error “DeviceGuardImpl for cpu is not available”.
Below are two lines added to register the Device. There are new errors with aten::mul missing. After checking the source code under jit/ and aten/, I did not get a clue of how to register all operators.
Could someone please provide some standard initialization code for static library users?
#include <torch/script.h> // One-stop header.
#include <ATen/detail/CPUGuardImpl.h>  // ADDED first line
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }
  C10_REGISTER_GUARD_IMPL(CPU, at::detail::CPUGuardImpl);  // ADDED second line
  // Deserialize the ScriptModule from a file using torch::jit::load().
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
  assert(module != nullptr);
  std::cout << "ok\n";
}
Below is the error message:
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
what():
unknown builtin op: aten::mul
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript
:
def mul(a : float, b : Tensor) -> Tensor:
return b * a
~~~~~ <--- HERE
def add(a : float, b : Tensor) -> Tensor:
return b + a
def ne(a : float, b : Tensor) -> Tensor:
return b != a
def eq(a : float, b : Tensor) -> Tensor:
return b == a
def lt(a : float, b : Tensor) -> Tensor:
return b > a
def le(a : float, b : Tensor) -> Tensor:
Aborted |
st80097 | I have the same issue: Unknown builtin op: aten::mul
Created an issue https://github.com/pytorch/pytorch/issues/27726 |
st80098 | Hello everyone,
I’m trying to figure out how a fully convolutional network for image segmentation works.
I found the following image (FCN-8 architecture diagram):
(https://www.researchgate.net/figure/Fully-convolutional-neural-network-architecture-FCN-8_fig1_327521314)
I understand the structure up to the last blue rectangle (7x7x4096), because this is just a normal CNN structure.
After that we are doing a 1x1 convolution to reduce the number of feature maps, right? To be precise, according to the image the number of filters is reduced to the number of classes?
After that the upsampling process brings it back to the same resolution as the input image.
Two questions now:
I don’t understand why we are using K filters, if K is the number of classes. How is the output of 224x224xK interpreted? How do we get the colored/segmented output image like in this picture?
What does the train data look like? What is the ground truth and what is the loss function? How do we calculate the loss between the segmented image and ground truth?
Thanks for helping. |
st80099 | This video explains segmentation in NNs intuitively: https://www.youtube.com/watch?v=NzY5IJodjek&t=1357s . Your first question is answered at the end. The loss is found by flattening out the output array and summing a classification loss for each point in the array, or by using dice loss, which is a ratio of areas. |
st80100 | thanks for your answer.
I watched the video; just so I get it right: according to my image above, at the end I have a 240x240xK image, where for every pixel there are K probabilities. The highest probability gives the class of the pixel, right?
With regard to your second answer, that would mean e.g. just using some kind of cross-entropy: the truth should be [0, 0, 1, 0] (class 3) but the FCN predicts [0.2, 0.8, 0.4, 0.1], right? |
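Roughly, yes; a minimal sketch of the per-pixel loss with nn.CrossEntropyLoss (which expects raw logits of shape B x K x H x W and an index target of shape B x H x W; toy shapes used here):
import torch
import torch.nn as nn

B, K, H, W = 2, 4, 240, 240
logits = torch.randn(B, K, H, W)                # raw network output: one score per class per pixel
target = torch.randint(0, K, (B, H, W))         # ground truth: a class index for every pixel
loss = nn.CrossEntropyLoss()(logits, target)    # mean per-pixel cross entropy
pred = logits.argmax(dim=1)                     # predicted class map, shape (B, H, W)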
st80101 | Hi,
I need to fill images with other image patches during training. Since I’m training with mini-batch, is there any efficient way to do this?
For example, I have a mini-batch of images of size [B, 3, 128, 128]. I also have patches of size [B, 4, 3, 32, 32], where 4 is the number of patches. Besides, I have the bounding box indicating the location of patches of size [B, 4, 4]. How can I fill in the patches in a batch-wise way? You can ignore the overlap between patches.
Thanks! |
st80102 | Solved by Zhihao_Wu in post #3
Hi @ptrblck, you’re right. I answered this question on Stack Overflow, How to do batch filling in pytorch. But it looks like the for-loop isn’t very efficient. Any ideas for improvement? |
st80103 | Could you explain a bit more about how you’ve defined the bounding boxes?
Does each patch have 4 values, corresponding to the upper left x, y and lower right x, y coordinates? |
st80104 | Hi @ptrblck, you’re right. I answered this question on Stack Overflow, How to do batch filling in pytorch. But it looks like the for-loop isn’t very efficient. Any ideas for improvement? |
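A hedged sketch of the loop-based filling referred to above; it assumes the boxes hold [x1, y1, x2, y2] pixel coordinates and that every patch is 32x32 as in the question:
import torch

B, P = 8, 4
images = torch.randn(B, 3, 128, 128)
patches = torch.randn(B, P, 3, 32, 32)
tl = torch.randint(0, 96, (B, P, 2))        # random top-left corners, illustration only
boxes = torch.cat([tl, tl + 32], dim=-1)    # x1, y1, x2, y2

for b in range(B):
    for p in range(P):
        x1, y1, x2, y2 = boxes[b, p].tolist()
        images[b, :, y1:y2, x1:x2] = patches[b, p]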
st80105 | I think the problematic part in getting rid of the loops would be creating the range indexing in a batched way.
Creating the linear indices beforehand and indexing the result tensor directly would probably work.
However, how performance critical is this code?
E.g. if you just run it once in your script, I wouldn’t invest a lot of time improving its performance, but focus on other parts of the code. |
st80106 | Thanks guys! Actually I just use the for loop suggested by @Zhihao_Wu, because I found out that my GPU can only afford a batch size of 8, so the for loop does not decrease performance. |
st80107 | Hi,
I have two sparse (or even more but 2 for demonstration) float tensors, Say,
i_A = torch.LongTensor([[0, 1], [0, 0]])
v_A = torch.FloatTensor([3, 4])
A = torch.sparse.FloatTensor(i_A, v_A, torch.Size([2,2])).to_dense()
A = [[3, 4],
[0, 0]]
i_B = torch.LongTensor([[0, 1], [0, 1]])
v_B = torch.FloatTensor([6, 6])
B = torch.sparse.FloatTensor(i_B, v_B, torch.Size([2,2])).to_dense()
B = [[6, 0],
[0, 6]]
Now I would like to create a new sparse tensor C in which A and B lie on the diagonal, with everywhere else filled with 0:
C = [[3, 4, 0, 0],
[0, 0, 0, 0],
[0, 0, 6, 0],
[0, 0, 0, 6]] |
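One way to build this directly in sparse form (a sketch, reusing the sparse A and B from the question before calling to_dense()): shift B's indices by A's shape and concatenate indices and values.
def sparse_block_diag(a, b):
    a, b = a.coalesce(), b.coalesce()
    n_a, m_a = a.shape
    n_b, m_b = b.shape
    ib = b.indices() + torch.tensor([[n_a], [m_a]])    # shift B past A's block
    indices = torch.cat([a.indices(), ib], dim=1)
    values = torch.cat([a.values(), b.values()])
    return torch.sparse.FloatTensor(indices, values, torch.Size([n_a + n_b, m_a + m_b]))

C = sparse_block_diag(torch.sparse.FloatTensor(i_A, v_A, torch.Size([2, 2])),
                      torch.sparse.FloatTensor(i_B, v_B, torch.Size([2, 2])))
print(C.to_dense())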
st80108 | Is there a slicing function to slice an image into square patches of size p? Right now I am doing this in a few lines of code, but if there is a function that does this I would just use it instead. |
st80109 | Look into torchvision.transforms.*crop methods. http://pytorch.org/docs/master/torchvision/transforms.html
Afaik, there is no default method for an arbitrary location crop, but it is easily implementable using torchvision.transforms.Lambda. |
st80110 | Quite late but for reference for others, you can use the unfold function:
patches = img_t.data.unfold(0, 3, 3).unfold(1, 8, 8).unfold(2, 8, 8)
Here is the demo code to test:
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
%matplotlib inline

transt = transforms.ToTensor()
transp = transforms.ToPILImage()
img_t = transt(Image.open('cifar/train/10000_automobile.png'))

# torch.Tensor.unfold(dimension, size, step)
# slices the image into 8x8 patches
patches = img_t.data.unfold(0, 3, 3).unfold(1, 8, 8).unfold(2, 8, 8)
print(patches[0][0][0].shape)

def visualize(patches):
    """Imshow for Tensor."""
    fig = plt.figure(figsize=(4, 4))
    for i in range(4):
        for j in range(4):
            inp = transp(patches[0][i][j])
            inp = np.array(inp)
            ax = fig.add_subplot(4, 4, ((i * 4) + j) + 1, xticks=[], yticks=[])
            plt.imshow(inp)

visualize(patches) |
st80111 | Is it still possible to get layer parameters like kernel_size, pad and stride in grad_fn in torch 1.2? |
st80112 | kernel_size, pad, and stride are attributes of a convolution layer, not the grad_fn.
You could directly access them using the layer instance.
Could you explain your use case using grad_fn a bit? |
st80113 | @ptrblck
I actually want to do some computations that require me to know the entire chain of layers (and their parameters) leading to each leaf l_i of the network graph. This is possible to do in frameworks like Caffe that require users to specify input layer name and output layer name for each layer.
However, I can’t find an easy way to do this in PyTorch.
The only way I can think of right now is to do this via grad_fn which contains next_functions. Getting the chain then is just a matter of tracing the next_functions recursively.
Do you think there is a better way to do this?
I am actually trying to implement this https://github.com/BVLC/caffe/blob/master/python/caffe/coord_map.py, which is used in FCN for cropping. Currently, what others do to reimplement this kind of cropping is to manually specify the entire chain for each leaf like this (as in https://github.com/xwjabc/hed/blob/master/networks.py):
def prepare_aligned_crop(self):
    """ Prepare for aligned crop. """
    # Re-implement the logic in deploy.prototxt and
    # /hed/src/caffe/layers/crop_layer.cpp of official repo.
    # Other reference materials:
    #   hed/include/caffe/layer.hpp
    #   hed/include/caffe/vision_layers.hpp
    #   hed/include/caffe/util/coords.hpp
    #   https://groups.google.com/forum/#!topic/caffe-users/YSRYy7Nd9J8
    def map_inv(m):
        """ Mapping inverse. """
        a, b = m
        return 1 / a, -b / a

    def map_compose(m1, m2):
        """ Mapping compose. """
        a1, b1 = m1
        a2, b2 = m2
        return a1 * a2, a1 * b2 + b1

    def deconv_map(kernel_h, stride_h, pad_h):
        """ Deconvolution coordinates mapping. """
        return stride_h, (kernel_h - 1) / 2 - pad_h

    def conv_map(kernel_h, stride_h, pad_h):
        """ Convolution coordinates mapping. """
        return map_inv(deconv_map(kernel_h, stride_h, pad_h))

    def pool_map(kernel_h, stride_h, pad_h):
        """ Pooling coordinates mapping. """
        return conv_map(kernel_h, stride_h, pad_h)

    x_map = (1, 0)
    conv1_1_map = map_compose(conv_map(3, 1, 35), x_map)
    conv1_2_map = map_compose(conv_map(3, 1, 1), conv1_1_map)
    pool1_map = map_compose(pool_map(2, 2, 0), conv1_2_map)

    conv2_1_map = map_compose(conv_map(3, 1, 1), pool1_map)
    conv2_2_map = map_compose(conv_map(3, 1, 1), conv2_1_map)
    pool2_map = map_compose(pool_map(2, 2, 0), conv2_2_map)

    conv3_1_map = map_compose(conv_map(3, 1, 1), pool2_map)
    conv3_2_map = map_compose(conv_map(3, 1, 1), conv3_1_map)
    conv3_3_map = map_compose(conv_map(3, 1, 1), conv3_2_map)
    pool3_map = map_compose(pool_map(2, 2, 0), conv3_3_map)

    conv4_1_map = map_compose(conv_map(3, 1, 1), pool3_map)
    conv4_2_map = map_compose(conv_map(3, 1, 1), conv4_1_map)
    conv4_3_map = map_compose(conv_map(3, 1, 1), conv4_2_map)
    pool4_map = map_compose(pool_map(2, 2, 0), conv4_3_map)

    conv5_1_map = map_compose(conv_map(3, 1, 1), pool4_map)
    conv5_2_map = map_compose(conv_map(3, 1, 1), conv5_1_map)
    conv5_3_map = map_compose(conv_map(3, 1, 1), conv5_2_map)

    score_dsn1_map = conv1_2_map
    score_dsn2_map = conv2_2_map
    score_dsn3_map = conv3_3_map
    score_dsn4_map = conv4_3_map
    score_dsn5_map = conv5_3_map

    upsample2_map = map_compose(deconv_map(4, 2, 0), score_dsn2_map)
    upsample3_map = map_compose(deconv_map(8, 4, 0), score_dsn3_map)
    upsample4_map = map_compose(deconv_map(16, 8, 0), score_dsn4_map)
    upsample5_map = map_compose(deconv_map(32, 16, 0), score_dsn5_map)

    crop1_margin = int(score_dsn1_map[1])
    crop2_margin = int(upsample2_map[1])
    crop3_margin = int(upsample3_map[1])
    crop4_margin = int(upsample4_map[1])
    crop5_margin = int(upsample5_map[1])

    return crop1_margin, crop2_margin, crop3_margin, crop4_margin, crop5_margin
This is problematic because it does not allow easy switching of backbones. I hope to be able to compute this kind of crop offsets automatically. |
st80114 | I design my lstm model like this:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class LSTMpred(nn.Module):
    def __init__(self, input_size, hidden_dim):
        super(LSTMpred, self).__init__()
        self.input_dim = input_size
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTM(input_size, hidden_dim)
        self.hidden2out = nn.Linear(hidden_dim, 1)
        self.hidden = self.initHidden()

    def initHidden(self):
        return (Variable(torch.zeros(1, 1, self.hidden_dim)),
                Variable(torch.zeros(1, 1, self.hidden_dim)))

    def forward(self, *input):
        x = input[0]
        lstm_out, self.hidden = self.lstm(
            x.view(len(x), 1, -1), self.hidden
        )
        outdat = self.hidden2out(lstm_out.view(len(x), -1))
        return outdat
I use model = LSTMModel.LSTMpred(1,40).to(device) to initialize my model, but when I try to train it, it just gets stuck on the line modelout = model(indata) and the whole program exits after a few seconds with Process finished with exit code -1073741819 (0xC0000005).
However, when I set the device to CPU, like device = torch.device('cpu'), it starts to work.
I'm confused why this happens. I would appreciate your help. |
st80115 | It’s possible that CUDA itself is not working correctly on your machine.
Does any other program in CUDA mode work correctly? |
st80116 | Yep, if I use the model provided by pytorch itself, using MNIST dataset, and set as ‘CUDA’, it works. |
st80117 | One thing I do see is that model = LSTMModel.LSTMpred(1,40).to(device) does not move the self.hidden over to the GPU, because it’s not registered as a parameter or a buffer.
I think you want to move self.initHidden() into the forward function, rather than keeping it in the constructor.
Something like:
def forward(self, *input):
    hidden = self.initHidden()
The reason self.hidden doesn’t get moved to the GPU is that it’s not an instance of nn.Parameter, nor has it been declared as a buffer via https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer, so PyTorch doesn’t know to move it onto the GPU when you do .to('cuda'). |
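A minimal sketch of that change, reusing the module from the question: build the hidden state inside forward() on the same device as the input, so it always follows the model's device.
def forward(self, *input):
    x = input[0]
    h0 = torch.zeros(1, 1, self.hidden_dim, device=x.device)
    c0 = torch.zeros(1, 1, self.hidden_dim, device=x.device)
    lstm_out, _ = self.lstm(x.view(len(x), 1, -1), (h0, c0))
    return self.hidden2out(lstm_out.view(len(x), -1))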
st80118 | Still doesn’t work.
I tried to debug to find the problem, and it gets stuck on the following code (file: torch/nn/_functions/rnn.py):
output, hy, cy, reserve, new_weight_buf = torch._cudnn_rnn(
    input, weight_arr, weight_stride0,
    flat_weight,
    hx, cx,
    mode, hidden_size, num_layers,
    batch_first, dropout, train, bool(bidirectional),
    list(batch_sizes.data) if variable_length else (),
    dropout_ts)
These are the parameters used above (screenshot omitted).
Can you figure out the reason? |
st80119 | Hi,
I have a multilabel classification problem, which I am trying to solve with CNNs in Pytorch. I have 80,000 training examples and 7900 classes; every example can belong to multiple classes at the same time, mean number of classes per example is 130. Here is the plot that shows numbers of samples per each class:
As you can see the data is very imbalanced. For some classes, I have only ~900 examples, which is around 1%. For “overrepresented” classes I have ~12000 examples (15%). When I train the model I use BCEWithLogitsLoss with a positive weights parameter. I calculate the weights the same way as described in the documentation: the number of negative examples divided by the number of positives.
As a result, my model overestimates almost every class… For both minor and major classes I get almost twice as many predictions as true labels. And my AUPRC is just 0.18. Even though it’s much better than no weighting at all, since in this case the model predicts everything as zero.
So my question is, how do I improve the performance? Is there anything else I can do? I tried different batch sampling techniques (to oversample minority class), but they don’t seem to work. |
st80120 | I’d recommend trying out using softmax with cross entropy with each target class set to 1/num_target_labels. That has usually worked better for me than binary cross entropy and sigmoid outputs.
There are a few papers from facebook that also mention doing that (https://arxiv.org/abs/1805.00932) |
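A hedged sketch of that soft-target formulation (shapes from the question; the positive labels per sample below are made up): each row of the target sums to 1, and the loss is the cross entropy against log-softmax outputs.
import torch
import torch.nn.functional as F

logits = torch.randn(8, 7900)                 # raw scores for a batch of 8 samples
targets = torch.zeros(8, 7900)
targets[0, [3, 17, 42]] = 1.0 / 3             # hypothetical sample with 3 positive labels

loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()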
st80121 | Try to augment the under-represented class and undersample the dominant class to some extent. I think augmentation + BCEWithLogitsLoss(and without weights assigned) should improve the accuracy by a significant amount. |
st80122 | hi @michalwols
in case of softmax with CE with the ground truth set to 1/num_target_labels, what will be the threshold above which you choose the class during inference? Won’t it vary depending on num_target_labels for every example? |
st80123 | Can you give me an overall picture of the problem that you’re trying to solve? You have mentioned that every example has an average of 130 class labels.
Do you think detection could help in such a case?(but annotating 80K samples can be expensive)
Can you use existing detectors to run on the train set, extract individual class objects and add them(along with augmentation) to your train set? This may help in tackling class imbalance. |
st80124 | Hi, I am a newbie with PyTorch. Could anyone help me with this problem? Thanks a lot~
I just found that the batch size of the output is 1/2 of the input when running my model on two GPUs. So, how do I get the whole batch of output?
part of my code is as following:
device_ids = [0, 1]
model = nn.DataParallel(model, device_ids=device_ids)
model = model.cuda()
print('start')
start = time.time()
for i in range(1):
    inputs = torch.zeros(100, 64, 256).cuda()
    outputs = model(inputs)
    print(outputs.shape)
end = time.time()
print('time cost:', end - start)
where the second dim ‘64’ represents the batch_size.
Here is the results:
torch.Size([260, 50, 10000])
time cost: 3.447650671005249
Obviously the printed dimension of batch is 1/2 of the input.
I am sure the two GPUs are both used in this situation. |
st80125 | I found the tutorial here: https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html (screenshot of the tutorial omitted)
So, I guess there are no particularly serious errors in my situation, right? But I don’t know how to retrieve the output completely…
p.s.: The inputs in my case are mel-spectrograms of speech clips with different durations. And I don’t know how to use dataloader with data of different dimension when global padding is not feasible. |
st80126 | nn.DataParallel will chunk the input data in dim0, so you should make sure to permute your data and adapt the model if necessary. |
st80127 | @ptrblck Thank you very much for your reply.
ptrblck:
nn.DataParallel will chunk the input data in dim0, so you should make sure to permute your data and adapt the model if necessary.
Is dim0 the dimension of batch? If so, the inputs in my code no longer need to be permuted, right? because ‘100’ in dim0 indicates the batch_size.
marathon:
inputs = torch.zeros(100, 64, 256).cuda() |
st80128 | I thought dim1 represents the batch size:
marathon:
where the second dim ‘64’ represents the batch_size |
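If the batch really sits in dim1, a minimal sketch of the permute suggested above (the model is assumed to accept batch-first input, e.g. via batch_first=True in its recurrent layers):
inputs = torch.zeros(100, 64, 256)          # (time, batch, features) as in the question
inputs = inputs.permute(1, 0, 2).cuda()     # -> (64, 100, 256): batch in dim0 for DataParallel
outputs = model(inputs)                     # each of the two GPUs now receives 32 full sequences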
st80129 | Hi,
I am trying to install a Github library and pytorchtools and I keep getting this error:
!pip install git+https://github.com/Bjarten/early-stopping-pytorch
Collecting git+https://github.com/Bjarten/early-stopping-pytorch
  Cloning https://github.com/Bjarten/early-stopping-pytorch to /tmp/pip-req-build-txarset_
  Running command git clone -q https://github.com/Bjarten/early-stopping-pytorch /tmp/pip-req-build-txarset_
ERROR: Complete output from command python setup.py egg_info:
ERROR: Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/nita/anaconda3/lib/python3.7/tokenize.py", line 447, in open
    buffer = builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-req-build-txarset/setup.py'
----------------------------------------
ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-txarset_/
Does anyone has previous experience on this. thank you in advance |
st80130 | Usually, pip install requires a setup.py file in the directory containing the library to install.
I haven’t used https://github.com/Bjarten/early-stopping-pytorch but I don’t see any setup script there. |
st80131 | I saw the following example in the torch.quantization announcement:
model = ResNet50()
model.load_state_dict(torch.load("model.pt"))
qmodel = quantization.prepare(model, {"": quantization.default_qconfig})
qmodel.eval()
for batch, target in data_loader:
    model(batch)
qmodel = quantization.convert(qmodel)
Is there a way to get an example like this to work with a v1.3.0 JIT traced/exported model? |
st80132 | Hi,
I want to install matchzoo-py that is based on pytorch but I receive this error:
Could not find a version that satisfies the requirement torch>=1.2.0 (from matchzoo-py) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
No matching distribution found for torch>=1.2.0 (from matchzoo-py)
I upgrade the version by the following command:
pip install https://download.pytorch.org/whl/cu90/torch-1.2.0-cp36-cp36m-win_amd64.whl
pip install torchvision
after that I even could not import torch and I receive an error:
from torch._C import * (ImportError:...)
I would appreciate it if you guide me.
Thanks |
st80133 | Solved by zahra in post #4
I removed torch and then install torch by the following command:
pip3 install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
Now, I could import torch. Also, I could install matchzoo-py successfully but when I import torch and then import matchzoo, I receive … |
st80134 | I removed torch and then install torch by the following command:
pip3 install torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
Now, I could import torch. Also, I could install matchzoo-py successfully but when I import torch and then import matchzoo, I receive this error that is related to pandas version I think:
AttributeError: module 'pandas' has no attribute 'core'
I would appreciate it if you could guide me.
Thanks |
st80135 | I removed pandas and installed version 0.24.2, and now I do not have any problems.
For those who read my question: this pandas version seems to be compatible with matchzoo-py (the package I wanted to install); other versions may also work for your application.
Many thanks |
st80136 | Hello,
i wrote my own Dataset and tried to put it in DataLoader. All seems to work, but the loaded data doesn’t get the batch size.
I have a 3x64x64 RGB image and a 1x64x64 grayscale image and concatenate them in my Dataset to get a 4x64x64. After using the Dataloader the output should have the shape 64x4x64x64 (batchsize=64) but it still has 4x64x64. Any suggestions or ideas?
class MyDataset(Dataset):
    def __init__(self, path_grain, path_mask, transform=None):
        self.data_grain = dset.ImageFolder(
            root=path_grain,
            transform=transforms.Compose([
                transforms.ToTensor(),
                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
            ]))
        self.data_mask = dset.ImageFolder(
            root=path_mask,
            transform=transforms.Compose([
                transforms.Grayscale(num_output_channels=1),
                transforms.ToTensor(),
                transforms.Normalize((0.5,), (0.5,)),
            ]))

    def __getitem__(self, index):
        x_grain, y = self.data_grain[index]
        x_mask, _ = self.data_mask[index]
        x = torch.cat((x_grain, x_mask), dim=0)
        return x

    def __len__(self):
        return len(self.data_grain)

dataset = MyDataset("..\data512to64_grain", "..\data512to64_mask")
dataloader = torch.utils.data.DataLoader(dataset, batch_size=opt.batchSize,
                                         shuffle=True, num_workers=int(opt.workers))
Greetings:) |
st80137 | Your code works fine using dummy data:
class MyDataset(Dataset):
    def __init__(self):
        self.data_grain = torch.randn(100, 3, 64, 64)
        self.data_mask = torch.randn(100, 1, 64, 64)

    def __getitem__(self, index):
        x_grain = self.data_grain[index]
        x_mask = self.data_mask[index]
        x = torch.cat((x_grain, x_mask), dim=0)
        return x

    def __len__(self):
        return len(self.data_grain)

dataset = MyDataset()
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=64, shuffle=True, num_workers=0)
data = next(iter(dataloader))
print(data.shape)
Could you check the opt.batchSize again?
Also, could you print the shape of the output of dataset.data_grain[0] as well as dataset.data_mask[0]? |
st80138 | Here’s the output.
len(dataset.data_grain[0]): 2
len(dataset.data_mask[0]): 2
(dataset.data_grain[0][0]).shape
torch.Size([3, 64, 64])
(dataset.data_mask[0][0]).shape
torch.Size([1, 64, 64])
batchsize: 064
dataset.data_grain[0] is a tuple
I took a look at another example of yours, and there was a y being returned in the __getitem__ function.
I put it in as well, and the code worked correctly after that…
def __getitem__(self, index):
    x_grain, y = self.data_grain[index]
    x_mask, _ = self.data_mask[index]
    x = torch.cat((x_grain, x_mask), dim=0)
    return x, y |
st80139 | Hi
i am going to install Bindsnet package with this command:
pip install bindsnet -U
but when i run it , it get this error:
ERROR: Could not find a version that satisfies the requirement torch>=1.2.0 (from
bindsnet) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch>=1.2.0 (from bindsnet)
I would be grateful if someone could give me some help |
st80140 | Is it possible to get 100% accuracy on the training data? It’s relatively small data, 3k data points, trained on a GRU for 100 epochs. Is it okay (apart from overtraining), or is it not okay? |
st80141 | Overfitting would be the most critical part of perfectly fitting the training dataset. What other concerns are you considering? |
st80142 | I’m looking for a way to transform a multidimensional list of tensors that I assemble during one forward pass into a single tensor for further computations.
In three loops (over the direction (D), height (H) and width (W) of an image) I compute hidden state tensors of shape (Batch (B) x Hidden Units (H)), add them to a list, and at the end of this loop add this list to another list in the outside loop. In the end I have a list of shape (D x H x W) containing the (B x H) tensors.
How do I efficiently convert this to one tensor of shape (D x H x W x B x H) so I can apply the next weight matrix to it? Or is there a better way to collect the states in the first place? I thought about just concatenating the states together with torch.cat, but then I would need one initial tensor and always append to different dimensions.
(Also if I’m at it, what would be the way to combine the first and the last dimension of the result? Say I have 4 directions and hidden size of 16, how do I get (H x W x B x 64) from (4 x H x W x B x 16)? Just torch.view?) |
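One possible approach, sketched under the assumption that the nested list is rectangular (D x H x W, each entry a B x Hidden tensor): nested torch.stack calls build the big tensor, and permute/reshape merges the direction axis into the hidden axis.
import torch

D, H, W, B, HID = 4, 5, 6, 2, 16
states = [[[torch.randn(B, HID) for _ in range(W)] for _ in range(H)] for _ in range(D)]

t = torch.stack([torch.stack([torch.stack(row) for row in plane]) for plane in states])
print(t.shape)                                       # (D, H, W, B, HID)

merged = t.permute(1, 2, 3, 0, 4).reshape(H, W, B, D * HID)
print(merged.shape)                                  # (H, W, B, 64) for 4 directions x 16 units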
st80143 | Just need to convert this list of tensors to an ordinary list.
From:
[tensor([33]), tensor([7]), tensor([8]), tensor([11]), tensor([17]), tensor([20]), tensor([23]), tensor([24]), tensor([25]), tensor([26])]
To:
x = [33, 7, 8,..., 26] |
st80144 | Solved by Tony-Y in post #2
import torch
x = [torch.LongTensor([i]) for i in [33, 7, 8, 11, 17, 20, 23, 24, 25, 26] ]
torch.cat(x).tolist() |
st80145 | import torch
x = [torch.LongTensor([i]) for i in [33, 7, 8, 11, 17, 20, 23, 24, 25, 26] ]
torch.cat(x).tolist() |
st80146 | torch.matmul(torch.randn(16, 4, 7056, 10), torch.randn(16, 4, 10, 7056))
gives error
RuntimeError: CUDA out of memory. Tried to allocate 11.87 GiB (GPU 0; 10.91 GiB total capacity; 99.79 MiB already allocated; 9.00 GiB free; 4.21 MiB cached)
any alternative? |
st80147 | The output size is
torch.Size([16, 4, 7056, 7056])
So, it really requires 11.87 GB on GPU. If you want whole output at once, you need a GPU with more memory than 12 GB. |
st80148 | You don’t give much context; maybe this is obvious, but in case not:
You can allocate the matrices on the CPU, then iterate over the first two dimensions, and send the sliced tensors to the GPU for the actual matmul (which only computes on the last two dimensions). |
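A minimal sketch of that chunked approach; note the full result still needs roughly 12 GB of host RAM as float32.
import torch

a = torch.randn(16, 4, 7056, 10)
b = torch.randn(16, 4, 10, 7056)
out = torch.empty(16, 4, 7056, 7056)        # result lives in CPU RAM

for i in range(a.size(0)):
    for j in range(a.size(1)):
        out[i, j] = torch.matmul(a[i, j].cuda(), b[i, j].cuda()).cpu()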
st80149 | I am quite new to Tensorboard. I was trying to visualize my time series data but I don’t know how to do it; also, most tutorials are for visualizing image data. I will appreciate your help. |
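A minimal sketch with SummaryWriter.add_scalar (available via torch.utils.tensorboard from PyTorch 1.1 onwards, with the tensorboard package installed; the series below is a placeholder):
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/timeseries_demo')
series = torch.randn(100).cumsum(0)          # placeholder time series
for step, value in enumerate(series):
    writer.add_scalar('my_series', value.item(), step)
writer.close()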
st80150 | Hello Everyone.
I have a question about torchvision.transforms.ToTensor.
In the Documentation of this Command has said that :
Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0].
My question is: how does this command scale the values in the range [0, 255] to the range [0.0, 1.0]? |
st80151 | What if I want to convert an image into a tensor but not in the range [0, 1]? Is there any solution to that? |
st80152 | You could most likely use torch.from_numpy, as most libraries load images as numpy arrays. |
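ToTensor simply divides the uint8 values by 255; to keep the original [0, 255] range, a minimal sketch (the file path is a placeholder):
import numpy as np
import torch
from PIL import Image

img = Image.open('example.png')                        # placeholder path
arr = np.array(img)                                    # H x W x C, uint8 in [0, 255]
t = torch.from_numpy(arr).permute(2, 0, 1).float()     # C x H x W, values still in [0, 255]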
st80153 | Hi
I have some doubts about mapping colors to class indices.
I have label images (raw pixel values ranging from 0 to 1) and visually there are three classes (black, green, red). I want to create masks from these label images to feed to my segmentation model (which uses cross entropy loss).
After looking at some code from a forum post
# Create mapping
# Get color codes for dataset (maybe you would have to use more than a single
# image, if it doesn't contain all classes)
target = torch.from_numpy(target)
colors = torch.unique(target.view(-1, target.size(2)), dim=0).numpy()
Implementing this, I got colors with a shape of (7824, 3), meaning that there are 7824 different colors, right?
Could I have some guidance on how to use this to create masks for all my label images containing only the class index (black: class 0, green: class 1, red: class 2)? |
st80154 | I assume you are referring to this post.
Try to adapt the code to your use case and make sure you are dealing with the same data shapes (i.e. check for the same dimension layout etc.).
One possible reason for a lot of unique colors could be the usage of an interpolation method while resizing other than nearest neighbors. |
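A hedged sketch of the mapping step, assuming the labels are loaded as H x W x 3 uint8 tensors and that the three classes use pure black/green/red (scale accordingly if your labels really are floats in [0, 1]):
import torch

mapping = {(0, 0, 0): 0, (0, 255, 0): 1, (255, 0, 0): 2}   # colour -> class index (assumed)

def colors_to_mask(label):                  # label: (H, W, 3) uint8 tensor
    mask = torch.zeros(label.shape[:2], dtype=torch.long)
    for color, idx in mapping.items():
        matches = (label == torch.tensor(color, dtype=label.dtype)).all(dim=-1)
        mask[matches] = idx
    return mask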
st80155 | I have the following code:
x = (torch.arange(16)+1).reshape(1,1,4,4).float()
im2col = F.unfold(x, (2,2))
with the x being:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
I expect this to have an output of
[[1,2,5,6],
[2,3,6,7],
[3,4,7,8],
.....]
but it is giving me the output of kernel size 3:
[[1,2,3,5,6,7,9,10,11],
[2,3,4,6,7,8,10,11,12],
.....]
Am I thinking wrong about the unfold operation? and is this the expected behavior? When so, can you elaborate? |
st80156 | Solved by albanD in post #2
Hi,
The only difference between what you expect and what you get is a transpose. The expected use for convolution is to do mm(w, unfold(input)).
You can see here that the unfold is called and here the mm is done with the weights. |
st80157 | Hi,
The only difference between what you expect and what you get is a transpose. The expected use for convolution is to do mm(w, unfold(input)).
You can see here that the unfold is called and here the mm is done with the weights. |
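A small check of that mm(w, unfold(input)) recipe against F.conv2d, using the 4x4 input from the question and a random 2x2 kernel:
import torch
import torch.nn.functional as F

x = (torch.arange(16) + 1).reshape(1, 1, 4, 4).float()
w = torch.randn(1, 1, 2, 2)

cols = F.unfold(x, (2, 2))                              # (1, 4, 9): one column per 2x2 window
out = w.view(1, -1).matmul(cols).view(1, 1, 3, 3)       # mm(w, unfold(input)), reshaped
print(torch.allclose(out, F.conv2d(x, w)))              # True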
st80158 | I am trying to implement a custom activation function (code attached below). Before using the custom activation function, everything works well. However, as soon as it is used, the server throws the error:
Segmentation fault
The error always appears at the first epoch.
I am using
Pytorch 1.1.0
Cuda compilation tools, release 9.2, V9.2.148
the codes
def mg(x):
    c = 1.33
    b = 0.4
    p = 6.88
    input_size = x.shape
    num = torch.numel(x)  # the element number of the input tensor
    x = x.view(num)
    out = torch.zeros(len(x))
    for i in range(len(x)):
        if x[i] < 0:
            out[i] = 0
        else:
            out[i] = (c * x[i]) / (1 + torch.mul(b * p, torch.pow(x[i], p)))
    out = out.view(input_size[0], input_size[1], input_size[2], input_size[3])
    return out |
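Independently of the segfault, the Python loop above will be very slow; a hedged vectorized sketch of the same piecewise function (clamping first avoids NaNs from negative inputs raised to a fractional power):
def mg_vectorized(x, c=1.33, b=0.4, p=6.88):
    xc = x.clamp(min=0)                                  # negative inputs map to zero output
    return (c * xc) / (1 + b * p * torch.pow(xc, p))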
st80159 | Solved by sirius in post #3
Hi AlbanD,
Thanks a lot for your reply. The issue has been solved by modifying the version of PyTorch. Thanks |
st80160 | Hi AlbanD,
Thanks a lot for your reply. The issue has been solved by modifying the version of PyTorch. Thanks |
st80161 | Hello,
I am currently trying to incorporate a scheduler in my training loop, using the loss itself as the validation measurement. I noticed that the scheduler always reduces the learning rate, independently of the value of the loss. Setting the patience to 20, for example, I get outputs like:
0m 6s (5 0%) 3838.608
0m 8s (10 0%) 68974.180
0m 11s (15 1%) 50144.387
0m 14s (20 1%) 46037.418
Epoch 21: reducing learning rate of group 0 to 5.0000e-03.
0m 16s (25 1%) 43242.031
0m 19s (30 2%) 48685.941
0m 21s (35 2%) 51668.488
0m 24s (40 2%) 48751.176
Epoch 42: reducing learning rate of group 0 to 2.5000e-03.
0m 26s (45 3%) 48641.238
0m 28s (50 3%) 52856.855
0m 31s (55 3%) 55549.047
0m 33s (60 4%) 49968.438
Epoch 63: reducing learning rate of group 0 to 1.2500e-03.
0m 35s (65 4%) 40710.168
0m 38s (70 4%) 40792.227
0m 40s (75 5%) 38300.871
0m 43s (80 5%) 44388.047
Epoch 84: reducing learning rate of group 0 to 6.2500e-04.
0m 45s (85 5%) 76917.484
0m 48s (90 6%) 62983.559
0m 50s (95 6%) 45617.375
0m 53s (100 6%) 39899.105
0m 55s (105 7%) 94773.094
Epoch 105: reducing learning rate of group 0 to 3.1250e-04.
Where at the beginning it makes sense to reduce the LR, but then it also happens when the loss starts decreasing until the LR reaches the minimum allowed value …
I initialized the scheduler like this :
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20, verbose=True)
taking a step with :
scheduler.step(loss)
Did I forget something? |
st80162 | You didn’t set a minimum value…
https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
min_lr (float or list) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0. |
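So the fix is just passing min_lr when building the scheduler; a sketch with an example floor of 1e-4 (the value itself is arbitrary):
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5,
                              patience=20, min_lr=1e-4, verbose=True)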
st80163 | I am trying to develop a gesture recognition system using deep learning in pytorch.
I get the following error:
data = torch.cat(imgs,dim=1)
RuntimeError: expected a non-empty list of Tensors
Could anyone help on this?
Thanks! |
st80164 | Ran into this myself today. I guess I’d like it to create an empty tensor, but maybe that’s problematic. Will just do an if/else I guess. Anyone got a prettier solution than
if list_of_tensors:
    t = torch.stack(list_of_tensors)
else:
    t = torch.tensor([]) |
st80165 | Hi,
In Torchtext,
train, test = datasets.IMDB.splits(TEXT, LABEL)
divides the data between train and test 50:50; is there any way to change this ratio to 80:20? |
st80166 | Solved by ptrblck in post #2
The ratio for splitting the IMDB dataset originates from the data itself, as 25,000 reviews are provided for training and 25,000 for testing.
From the dataset website:
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. |
st80167 | The ratio for splitting the IMDB dataset originates from the data itself, as 25,000 reviews are provided for training and 25,000 for testing.
From the dataset website:
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. |
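If an 80:20 split is still wanted, one option (a sketch; .split() is available on torchtext datasets in the legacy API) is to re-split the training portion yourself:
train, test = datasets.IMDB.splits(TEXT, LABEL)
train_subset, rest = train.split(split_ratio=0.8)   # 80% / 20% of the original 25,000 training reviews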