id | text
---|---
st80168 | Personally I don't like torchtext all that much; I would recommend you use a custom DataLoader and divide the data as you want. I was working on this very dataset; I will upload the code to my GitHub by Sunday. You can take a look if you want, I will link it once it's up. |
st80169 | May I ask about the test accuracy for the SST dataset (train_data, valid_data, test_data = datasets.SST.splits(TEXT, LABEL)) with an LSTM? Is it around 60%? |
st80170 | Hello,
I want to segment my .nrrd data. I use the following code, but I get this error:
nibabel.filebasedimages.ImageFileError: Cannot work out file type of …
ROOT_DIR_GMCHALLENGE = "/media/elahe/d/data/dataset"
mri_input_filename = os.path.join(ROOT_DIR_GMCHALLENGE, 'train')
mri_gt_filename = os.path.join(ROOT_DIR_GMCHALLENGE, 'labels')
pair = mt_datasets.SegmentationPair2D(mri_input_filename, mri_gt_filename)
I would appreciate it if you could help me.
Thanks |
st80171 | Hi, I am implementing a UNet for semantic segmentation and I have my dataset of images and label images (three classes). I have confused myself about the label images. Should the label images be a tensor of the class index (like 1, 2, 3) or their raw pixel values?
I used the raw images (loaded the images and labels and converted them to tensors) and fed them to the UNet with a cross entropy loss. It first said the labels needed to be long tensors (so I converted them). Now I am getting this error.
Any guidance on what I am missing or doing wrong is much appreciated.
Thanks in advance
(error screenshot attached: Capture.PNG, 1722×499) |
st80172 | Hello,
I think that your targets should be something like B x 1 x H x W, where B is your batch size and H and W are the height and width of your image.
The target should be a long tensor with the class value for every pixel (0 if the pixel is class 1, 1 if it is class 2, etc.). |
st80173 | Hi. Thank you for your quick response.
This makes sense. So I have my labels, which are images in black, green and red (three classes), but they are raw pixels at the moment; this is why I get B x 3 x H x W.
So I need to make it B x 1 x H x W. That means I should iterate through all my label images, convert the three channels to the class index (e.g. black 0, green 1 and red 2), and create masks.
Am I understanding it right?
And if so, is there a way to convert the raw pixels to masks (containing the class index)? |
st80174 | Exactly. You have your three classes, let's say black, green and red. You should transform your B x 3 x H x W in a way to get a B x 1 x H x W. At the end the output should look something like:
[B, B, R, B] [0, 0, 2, 0]
[B, B, R, B] [0, 0, 2, 0]
[B, G, R, R] [0, 1, 2, 2]
(For example, let's say the image is a red L surrounded by black background and a single green pixel.)
I imagine your 3 channels look like this (for my example and the three classes black, green, red respectively):
[1, 1, 0, 1] [0, 0, 0, 0] [0, 0, 1, 0]
[1, 1, 0, 1] [0, 0, 0, 0] [0, 0, 1, 0]
[1, 1, 0, 0] [0, 1, 0, 0] [0, 0, 1, 1]
If you have something like that, you can easily select all the positions containing 1 in the different masks, set them to three different values, and then add the three masks together to fuse them.
I am not sure I am being very clear. |
st80175 | Something like:
def create_mask(black, green, red):
    black[black == 1] = 1
    green[green == 1] = 2
    red[red == 1] = 3
    mask = black + green + red
    return mask
Note that, if I remember well, CrossEntropyLoss expects class indices that start at 0, so be careful about which values you give it.
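For reference, a minimal variant of the same idea (an added sketch, assuming black, green and red are binary 0/1 masks of equal shape) that writes the 0-based indices CrossEntropyLoss expects directly:
import torch

def create_mask(black, green, red):
    # black pixels -> class 0, green -> class 1, red -> class 2
    mask = torch.zeros_like(black, dtype=torch.long)
    mask[green == 1] = 1
    mask[red == 1] = 2
    return mask
Keeping the background at 0 avoids having to shift the values afterwards. |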
st80176 | Again, thank you for the quick responses and the help, I really appreciate it.
My labels look like this:
[[[0.00392157 0.06274509 0. ]
[0.15686274 0.01176471 0.03921568]
[0.15686274 0. 0.01176471]
…
[0.7411765 0.00392157 0.00784314]
[0.7529412 0. 0.01568621]
[0.74509805 0. 0.02352941]]
[[0.00392157 0. 0.03529412]
[0.15686274 0. 0.09411764]
[0.16470587 0. 0.0862745 ]
…
[0.7411765 0.00392157 0.00784314]
[0.7529412 0. 0.01568621]
[0.74509805 0. 0.02352941]]
[[0.03921568 0. 0.08235294]
[0.14509803 0. 0.11764705]
[0.14509803 0. 0.0862745 ]
…
[0.74509805 0. 0.00784314]
[0.7529412 0. 0.01568621]
[0.74509805 0. 0.02352941]]
…
[[0.0078432 0.00392157 0.06666666]
[0. 0.00784314 0.05882347]
[0. 0.00784314 0.05882347]
…
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]]
[[0.04313725 0. 0.05882347]
[0.03921568 0. 0.05882347]
[0.03921568 0. 0.05490196]
…
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]]
[[0.05490196 0. 0.01176471]
[0.05098039 0. 0.01176471]
[0.05098039 0. 0.00392157]
…
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]
[0.74509805 0. 0.00784314]]]
I think I need some sort of color mapping where I find the red pixels and call them class 2, the green pixels class 1, and the black pixels class 0.
I am unsure how to read the raw pixel values and create a function for that color mapping; any guidance on that would be really helpful |
st80177 | No problem.
So if I understand right, your target is just an RGB image with pixel values between 0 and 1, containing only 3 colors? |
st80178 | Yes @SoucheChapich, my labels are images with values between 0 and 1 containing only 3 colors. |
st80179 | Ok, I got it.
I am not an expert, but my first idea would be to convert it back to the [0, 255] range and set three thresholds. Then for each pixel, look at which range it belongs to and attribute it one of the 3 class values. For example, in RGB the black value is (0, 0, 0), so you could set a threshold for pixels close to that value; the final/target value will be 0 for class 0 (black).
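As an illustration of that idea (a minimal sketch, not code from this thread), one way to map a float RGB label of shape 3 x H x W to an H x W tensor of class indices, assuming the three label colors are roughly pure black, green and red, and that 0.5 is a reasonable threshold:
import torch

def rgb_label_to_classes(label, threshold=0.5):
    # label: float tensor of shape [3, H, W] with values in [0, 1]
    red, green = label[0], label[1]
    classes = torch.zeros(label.shape[1:], dtype=torch.long)  # default: class 0 (black)
    classes[green > threshold] = 1   # mostly-green pixels -> class 1
    classes[red > threshold] = 2     # mostly-red pixels -> class 2
    return classes
Stacked over a batch this gives a [B, H, W] long tensor, which is the target shape nn.CrossEntropyLoss accepts for 2D outputs. |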
st80180 | This page outlines that the multiprocessing module can be used with CUDA:
http://pytorch.org/docs/master/notes/multiprocessing.html
However, CUDA generally has issues running multiple processes in parallel on one GPU; see the Stack Overflow question “Running more than one CUDA applications on one GPU” (cuda, gpu, gpgpu, nvidia; asked by cache on 12:55AM - 27 Jul 15).
Do these limitations apply to the pytorch multiprocessing module?
Thanks in advance |
st80181 | See http://pytorch.org/docs/master/notes/multiprocessing.html#sharing-cuda-tensors
You can use CUDA + multiprocessing if you use python3 and a different start method.
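For illustration only (an added sketch, not code from the linked docs), a minimal example of CUDA with the spawn start method via torch.multiprocessing:
import torch
import torch.multiprocessing as mp

def worker(rank):
    # each process touches CUDA only after it has been spawned
    x = torch.ones(2, 2, device="cuda")
    print(rank, x.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn")  # fork would fail once CUDA is initialized
    procs = [mp.Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
set_start_method should be called once, under the __main__ guard, before any process is created. |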
st80182 | Thanks, I see how to use CUDA with multiprocessing. However, I would guess the most common use case of CUDA multiprocessing is utilizing multiple GPUs (i.e. with one process on each GPU). If you want to train multiple small models in parallel on a single GPU, is there likely to be a significant performance improvement over training them sequentially? |
st80183 | GPUs don't do very well with multiple workloads / multiple threading models. So it's almost always better to use one GPU per model, unless your model is REALLY REALLY tiny… (to the point where the CPU is probably faster). |
st80184 | Hi,
I am trying to use multiprocessing in my Python program. I created two processes and passed a neural network to one process and some heavy computational function to the other. I wanted the neural net to run on the GPU and the other function on the CPU, so I defined the neural net using the cuda() method. But when I ran the program I got the following error:
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
So I tried the spawn as well as the forkserver start method, but then I got this other error:
RuntimeError: cuda runtime error (71) : operation not supported at …/torch/csrc/generic/StorageSharing.cpp:245
I have tried both python3 multiprocessing and torch.multiprocessing, but nothing worked for me. |
st80185 | I’ve been working on a notebook where I try to implement VGG-16 from scratch using PyTorch. As a sanity check I tried to train my model on CIFAR-10, and it was training very slowly (around 20 minutes per epoch, which I think is too long for CIFAR-10). I was running this on Google Colab with GPU enabled.
So I tried to train on a simple 3-layer CNN just to see if I had screwed something up while implementing VGG. The result was that the validation loss decreased for the first few epochs and then increased and stayed roughly constant.
Here’s the loss for the small CNN:
cuda:0
Training begin!
[1, 500] training loss: 2.160
[1, 1000] training loss: 2.090
[1, 1500] training loss: 2.063
[1, 2000] training loss: 2.053
[1, 2500] training loss: 2.043
Validating!
[1, 625] val loss: 3.293, val acc: 0.151
[2, 500] training loss: 2.026
[2, 1000] training loss: 2.001
[2, 1500] training loss: 2.011
[2, 2000] training loss: 2.009
[2, 2500] training loss: 2.006
Validating!
[2, 625] val loss: 3.229, val acc: 0.215
[3, 500] training loss: 1.989
[3, 1000] training loss: 1.974
[3, 1500] training loss: 1.980
[3, 2000] training loss: 1.977
[3, 2500] training loss: 1.975
Validating!
[3, 625] val loss: 3.185, val acc: 0.253
[4, 500] training loss: 1.961
[4, 1000] training loss: 1.977
[4, 1500] training loss: 1.960
[4, 2000] training loss: 1.957
[4, 2500] training loss: 1.955
Validating!
[4, 625] val loss: 3.219, val acc: 0.223
[5, 500] training loss: 1.953
[5, 1000] training loss: 1.951
[5, 1500] training loss: 1.952
[5, 2000] training loss: 1.938
[5, 2500] training loss: 1.947
Validating!
[5, 625] val loss: 3.269, val acc: 0.164
[6, 500] training loss: 1.944
[6, 1000] training loss: 1.943
[6, 1500] training loss: 1.942
[6, 2000] training loss: 1.931
[6, 2500] training loss: 1.939
Validating!
[6, 625] val loss: 3.294, val acc: 0.149
[7, 500] training loss: 1.929
[7, 1000] training loss: 1.927
[7, 1500] training loss: 1.937
[7, 2000] training loss: 1.924
[7, 2500] training loss: 1.927
Validating!
[7, 625] val loss: 3.286, val acc: 0.160
[8, 500] training loss: 1.919
[8, 1000] training loss: 1.931
[8, 1500] training loss: 1.924
[8, 2000] training loss: 1.921
[8, 2500] training loss: 1.924
Validating!
[8, 625] val loss: 3.307, val acc: 0.139
[9, 500] training loss: 1.915
[9, 1000] training loss: 1.919
[9, 1500] training loss: 1.913
[9, 2000] training loss: 1.912
[9, 2500] training loss: 1.918
Validating!
[9, 625] val loss: 3.283, val acc: 0.163
[10, 500] training loss: 1.908
[10, 1000] training loss: 1.907
[10, 1500] training loss: 1.906
[10, 2000] training loss: 1.917
[10, 2500] training loss: 1.912
Validating!
[10, 625] val loss: 3.244, val acc: 0.203
[11, 500] training loss: 1.905
[11, 1000] training loss: 1.908
[11, 1500] training loss: 1.904
[11, 2000] training loss: 1.910
[11, 2500] training loss: 1.916
Validating!
[11, 625] val loss: 3.356, val acc: 0.106
[12, 500] training loss: 1.914
[12, 1000] training loss: 1.900
[12, 1500] training loss: 1.903
[12, 2000] training loss: 1.895
[12, 2500] training loss: 1.894
Validating!
[12, 625] val loss: 3.315, val acc: 0.131
[13, 500] training loss: 1.902
[13, 1000] training loss: 1.898
[13, 1500] training loss: 1.895
[13, 2000] training loss: 1.904
[13, 2500] training loss: 1.904
Validating!
[13, 625] val loss: 3.337, val acc: 0.115
[14, 500] training loss: 1.896
[14, 1000] training loss: 1.905
[14, 1500] training loss: 1.904
[14, 2000] training loss: 1.891
[14, 2500] training loss: 1.889
Validating!
[14, 625] val loss: 3.284, val acc: 0.169
[15, 500] training loss: 1.898
[15, 1000] training loss: 1.896
[15, 1500] training loss: 1.889
[15, 2000] training loss: 1.892
[15, 2500] training loss: 1.890
Validating!
[15, 625] val loss: 3.356, val acc: 0.106
[16, 500] training loss: 1.891
[16, 1000] training loss: 1.887
[16, 1500] training loss: 1.887
[16, 2000] training loss: 1.899
[16, 2500] training loss: 1.886
Validating!
[16, 625] val loss: 3.350, val acc: 0.107
[17, 500] training loss: 1.891
[17, 1000] training loss: 1.883
[17, 1500] training loss: 1.884
[17, 2000] training loss: 1.889
[17, 2500] training loss: 1.885
Validating!
[17, 625] val loss: 3.332, val acc: 0.121
[18, 500] training loss: 1.878
[18, 1000] training loss: 1.886
[18, 1500] training loss: 1.886
[18, 2000] training loss: 1.884
[18, 2500] training loss: 1.880
Validating!
[18, 625] val loss: 3.341, val acc: 0.117
[19, 500] training loss: 1.873
[19, 1000] training loss: 1.877
[19, 1500] training loss: 1.883
[19, 2000] training loss: 1.887
[19, 2500] training loss: 1.883
Validating!
[19, 625] val loss: 3.351, val acc: 0.108
[20, 500] training loss: 1.876
[20, 1000] training loss: 1.881
[20, 1500] training loss: 1.882
[20, 2000] training loss: 1.879
[20, 2500] training loss: 1.885
Validating!
[20, 625] val loss: 3.333, val acc: 0.124
Here’s the random CNN I threw together to see whether the reason was that VGG was too complex for CIFAR-10:
class MyCNN(nn.Module):
    def __init__(self):
        super(MyCNN, self).__init__()
        # nn.Conv2d(input_channels, output_channels, kernel_size, padding)
        self.conv1_1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv1_2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        self.conv1_3 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        # nn.MaxPool2d(kernel_size, stride)
        self.maxpool = nn.MaxPool2d(2, stride=2)
        self.fc6 = nn.Linear(16*16*32, 128)
        self.bn_fc = nn.BatchNorm1d(128)
        self.fc7 = nn.Linear(128, 10)
        self.dropout = nn.Dropout()

    def forward(self, x):
        # conv block 1
        x = self.conv1_1(x)
        x = F.relu(self.bn1(x))
        x = self.conv1_2(x)
        x = F.relu(self.bn1(x))
        x = self.conv1_3(x)
        x = F.relu(self.bn1(x))
        x = self.maxpool(x)
        # Now we need to flatten the tensor so that it'll fit into the FC layer
        x = x.reshape(-1, 16 * 16 * 32)
        # fc6
        x = self.fc6(x)
        x = F.relu(self.bn_fc(x))
        x = self.dropout(x)
        # output layer
        x = self.fc7(x)
        x = F.softmax(x, dim=1)
        return x
Here’s my implementation of VGG-16
class MyVGG16(nn.Module):
    def __init__(self):
        super(MyVGG16, self).__init__()
        # nn.Conv2d(input_channels, output_channels, kernel_size, padding)
        self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        # VGG originally didn't have batch norm, since it was before batch norm
        # was invented. But adding it provides additional performance, so might
        # as well.
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(128)
        self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm2d(256)
        self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
        self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn4 = nn.BatchNorm2d(512)
        self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn5 = nn.BatchNorm2d(512)
        # nn.MaxPool2d(kernel_size, stride)
        self.maxpool = nn.MaxPool2d(2, stride=2)
        self.fc6 = nn.Linear(7*7*512, 4096)
        self.bn_fc = nn.BatchNorm1d(4096)
        self.fc7 = nn.Linear(4096, 4096)
        # Here we change the final output from 1000 to 10. This is because the
        # number of outputs here correspond to the number of classes. VGG was
        # originally trained for ImageNet, which has 1000 classes. For our purposes,
        # we only have 10 classes, so we will put 10 here.
        self.fc8 = nn.Linear(4096, 10)
        self.dropout = nn.Dropout()

    def forward(self, x):
        # conv block 1
        x = self.conv1_1(x)
        x = F.relu(self.bn1(x))
        x = self.conv1_2(x)
        x = F.relu(self.bn1(x))
        x = self.maxpool(x)
        # conv block 2
        x = self.conv2_1(x)
        x = F.relu(self.bn2(x))
        x = self.conv2_2(x)
        x = F.relu(self.bn2(x))
        x = self.maxpool(x)
        # conv block 3
        x = self.conv3_1(x)
        x = F.relu(self.bn3(x))
        x = self.conv3_2(x)
        x = F.relu(self.bn3(x))
        x = self.conv3_3(x)
        x = F.relu(self.bn3(x))
        x = self.maxpool(x)
        # conv block 4
        x = self.conv4_1(x)
        x = F.relu(self.bn4(x))
        x = self.conv4_2(x)
        x = F.relu(self.bn4(x))
        x = self.conv4_3(x)
        x = F.relu(self.bn4(x))
        x = self.maxpool(x)
        # conv block 5
        x = self.conv5_1(x)
        x = F.relu(self.bn5(x))
        x = self.conv5_2(x)
        x = F.relu(self.bn5(x))
        x = self.conv5_3(x)
        x = F.relu(self.bn5(x))
        x = self.maxpool(x)
        # Now we need to flatten the tensor so that it'll fit into the FC layer
        x = x.reshape(-1, 7 * 7 * 512)
        # fc6
        x = self.fc6(x)
        x = F.relu(self.bn_fc(x))
        x = self.dropout(x)
        # fc7
        x = self.fc7(x)
        x = F.relu(self.bn_fc(x))
        x = self.dropout(x)
        # output layer
        x = self.fc8(x)
        x = F.softmax(x, dim=1)
        return x
Here’s my training code
def train(model, criterion, optimizer, epochs):
    print("Training begin!\n")
    for epoch in range(epochs):  # loop over the dataset multiple times
        running_loss = 0.0
        model.train()
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data[0].to(device), data[1].to(device)
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward + backward + optimize
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            # print statistics
            running_loss += loss.item()
            if i % 100 == 99:  # print every 100 mini-batches
                print('[%d, %5d] training loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 100))
                running_loss = 0.0
        print("Validating!\n")
        val_loss = 0.0
        val_total = 0
        val_correct = 0
        model.eval()
        with torch.no_grad():
            # validation step after every epoch
            for i, data in enumerate(valloader, 0):
                inputs, labels = data[0].to(device), data[1].to(device)
                outputs = model(inputs)
                loss = criterion(outputs, labels)
                score, predictions = torch.max(outputs.data, 1)
                val_total += labels.size(0)
                val_correct += (predictions == labels).sum().item()
                val_loss += loss.item()
        print('[%d, %5d] val loss: %.3f, val acc: %.3f' %
              (epoch + 1, i + 1, val_loss / i + 1, val_correct / val_total))
        val_loss = 0.0
        val_total = 0
        val_correct = 0
    print('Finished Training')
Here are my hyperparameters:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

model = MyVGG16()
model.to(device)

def weights_init(m):
    if isinstance(m, nn.Conv2d):
        torch.nn.init.xavier_uniform_(m.weight.data)

model.apply(weights_init)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
train(model, criterion, optimizer, 20)
I’m not exactly sure where I went wrong, so any advice would be greatly appreciated. |
st80186 | Solved by Ce_Wang in post #5
Thanks for your advice! I followed it and seemed to have found the source of the problem, which was that I was trying to train VGG from scratch, which apparently is very hard to do. As for the small toy CNN, it was because of the batch normalization. When I removed the batch normalization, the netwo… |
st80187 | Hi! I ran the code on Google Colab with GPU enabled. I’m not exactly sure which GPU is being used.
As for the hyperparameters, I have updated the post with them. Here’s a summary: I used cross entropy loss and SGD with learning rate 0.001 and momentum 0.9, and I trained for 20 epochs. The dataloader used a batch size of 16. I also initialized the hidden layer weights with Xavier initialization.
Thanks! |
st80188 | The first thing you should try is to overfit the network on just a single sample and see if your loss goes to 0. Then gradually increase the sample size (100, 1000, …).
Also, instead of printing the losses, try to visualize them on a single graph (both train and val). This will give you better insights.
And try using learning rate decay after a few epochs; maybe it could help (see the sketch below).
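For reference, a minimal sketch of the learning-rate-decay suggestion with a standard scheduler (illustrative only; the tiny stand-in model, step size and gamma are placeholders, not values from this thread):
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(10, 2)                                 # stand-in for the real network
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = StepLR(optimizer, step_size=5, gamma=0.1)    # lr *= 0.1 every 5 epochs

for epoch in range(20):
    # ... run the usual training/validation loop for this epoch ...
    scheduler.step()                                     # then decay the learning rate
Plotting the train and validation curves together, as suggested above, makes the effect of each decay step easy to spot. |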
st80189 | Thanks for your advice! I followed it and seemed to have found the source of the problem, which was that I was trying to train VGG from scratch, which apparently is very hard to do. As for the small toy CNN, it was because of the batch normalization. When I removed the batch normalization, the network was able to converge, although I’m not exactly sure why batch normalization affected it that much. |
st80190 | First post here, forgive me if I’m breaking any conventions…
I’m trying to train a simple LSTM on time series data where the input (x) is 2-dimensional and the output (y) is 1-dimensional. I’ve set the sequence length at 60 and the batch size at 30, so that x is of size [60,30,2] and y is of size [60,30,1]. Each sequence is fed through the model one timestep at a time, and the resulting 60 losses are averaged. I am hoping to backpropagate the gradient of this loss to do a parameter update.
for i in range(num_epochs):
    model.hidden = model.init_hidden()
    for j in range(data.n_batches):
        x, y = data.next_batch(0)
        lst = torch.zeros(1, requires_grad=True)
        for t in range(x.shape[0]):
            y_pred = model(x[t:t+1,:,:])
            lst = lst + loss_fn(y_pred, y[t].view(-1))
        lst /= x.shape[0]
        optimizer.zero_grad()
        lst.backward()
        optimizer.step()
This gives me the error of trying to backward through the graph a second time, and that I must specify retain_graph=True. My questions are:
Why is retain_graph=True necessary? To my understanding, I am “unfolding” the network 60 timesteps and only doing a backward pass on the last timestep. What exactly needs to be remembered from batch to batch?
Is there a more optimal/“better” way of doing truncated backpropagation? I was thinking I could backpropagate the loss every time one timestep is unfolded, but am not sure if that would be a big improvement. See here (https://r2rt.com/styles-of-truncated-backpropagation.html) for what I mean - specifically the picture before the section titled “Experiment design”.
Any other comment or suggestion on code is appreciated… I’m relatively new to PyTorch so not sure what best practices are.
Thanks! |
st80191 | Solved by vdw in post #2
Try moving init_hidden into the inner loop. You need to initialize your hidden state for each batch, not just for each epoch. |
st80192 | Try moving init_hidden into the inner loop. You need to initialize your hidden state for each batch, not just for each epoch. |
st80193 | Thanks, that works. Though for others who are reading, a better solution would be to detach the hidden state at the start of each batch rather than re-initialize it, allowing the RNN to “carry over” the final state from the previous batch - this is standard truncated BPTT (see the sketch below).
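A minimal sketch of that detach pattern based on the loop above (illustrative; it assumes model.hidden is the usual (h, c) tuple of an LSTM):
for i in range(num_epochs):
    model.hidden = model.init_hidden()               # fresh state once per epoch
    for j in range(data.n_batches):
        # keep the carried-over values, but cut the autograd history
        # so backward() stops at the batch boundary
        model.hidden = tuple(h.detach() for h in model.hidden)
        x, y = data.next_batch(0)
        loss = torch.zeros(1)
        for t in range(x.shape[0]):
            y_pred = model(x[t:t+1, :, :])
            loss = loss + loss_fn(y_pred, y[t].view(-1))
        loss = loss / x.shape[0]
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
Only the graph is cut; the numerical values of the hidden state still flow from one batch to the next. |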
st80194 | Good point with detach()!
What is actually the benefit and intuition of carrying the hidden state over? Say I just do sentence classification, and the sentences are independent. Why should one batch depend on the final hidden state of a previous batch? Here, re-initialization seems to me the more consistent method.
Sure, for a language model where you break long documents into chunks, so that the chunks depend on each other, carrying the hidden state over seems the more natural way to go. |
st80195 | I agree, the benefits aren’t as clear when batches are truly independent from one another. In my case, I’m doing time series prediction where the batches are created sequentially, so carrying the hidden state over does seem more natural, as you suggested.
However, in sentence classification one could also argue that the hidden states capture syntactic rules that are universal in how most sentences are formed, and thus few sentences are truly “independent” from one another, so carrying the hidden state over would be beneficial. I’m sure there has been research done to answer that. |
st80196 | Hi everyone,
I am trying to apply a 1d conv filter to my torch.Size([65536, 94]) data, however I ran into this error:
RuntimeError                         Traceback (most recent call last)
in ()
----> 1 z = m(y)
1 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    198                         _single(0), self.dilation, self.groups)
    199         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 200                         self.padding, self.dilation, self.groups)
    201
    202
RuntimeError: Expected 3-dimensional input for 3-dimensional weight 10 1, but got 2-dimensional input of size [65536, 94] instead
Here is my code:
m = nn.Conv1d(1, 10, 5)
z = m(y)
Could anyone help me? |
st80197 | Based on the definition of your conv layer, it seems you are dealing with single-channel input data.
If that’s the case, make sure to add the channel dimension in dim1 to your input:
y = y.unsqueeze(1)
z = m(y)
This would create a tensor with the dimensions [batch_size=65536, channels=1, seq_length=94]. |
st80198 | Hi.
Whenever I normalize my input images with their mean and std, and then plot the images to visualize them, I get this message along with plots of images which are distorted in a way:
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
If anyone could help me understand why this is happening, it would be great. |
st80199 | Solved by ptrblck in post #4
The easiest way would be to plot them before normalizing.
However, if that’s not possible, you could also undo the normalization:
x = torch.empty(3, 224, 224).uniform_(0, 1)
mean = (0.5, 0.5, 0.5)
std = (0.5, 0.5, 0.5)
norm = transforms.Normalize(mean, std)
x_norm = norm(x)
x_restore = x_norm * … |
st80200 | torchvision.transforms.Normalize will use the mean and std to standardize the inputs, so that they would have a zero mean and unit variance.
Your current library to show these images (probably matplotlib) will clip the values of these float image arrays to [0, 1], which will distort them. |
st80201 | Thank you very much @ptrblck. Yes, I am using matplotlib to show the images. Getting distorted images, does it mean that I am doing something wrong? If not, is there a way I could plot the images without them being distorted? |
st80202 | The easiest way would be to plot them before normalizing.
However, if that’s not possible, you could also undo the normalization:
x = torch.empty(3, 224, 224).uniform_(0, 1)
mean = (0.5, 0.5, 0.5)
std = (0.5, 0.5, 0.5)
norm = transforms.Normalize(mean, std)
x_norm = norm(x)
x_restore = x_norm * torch.tensor(std).view(3, 1, 1) + torch.tensor(mean).view(3, 1, 1)
print((x_restore - x).abs().max())
> tensor(0.) |
st80203 | Hello, my work to customize the linear multiplication y = wx in a neural network requires manipulating the shapes of the w and x matrices. For example, previously w and x had the shapes m x k and k x n respectively; now I need to temporarily expand them to m x 2k and 2k x n to do some processing. Making changes on w and x directly doesn’t seem to be a good idea, since it would change the parameters to update.
Therefore I created temporary tensors w1 and x1 with the expanded shapes and values, and compute y using these two, i.e.
y = w1x1, where w1 has the shape m x 2k and x1 has the shape 2k x n
However, the neural network parameters do not get updated properly by doing this. My guess is this customization broke the computational graph (i.e. y is not properly linked to w and x). Is there a way to set any properties of y to fix this broken path? |
st80204 | I think to get a useful answer, you’d have to say a bit more what you want the relationship between w1 and x1 to be (e.g. do you want w1 to have two blocks with the entries of w or something completely different or so).
Best regards
Thomas |
st80205 | Hi, thanks for your reply. One example of the relationship between w1, x1 and w, x could be to round each element in w and x to an integer first, then expand them to a bit representation. For instance, if x = [[1,0], [1,2]], then x1 would be [[0,0], [1,0], [0,1], [1,0]]. Then the result y would be w1x1 plus some post-processing.
In this case, y is not directly computed from wx, and hence backpropagation would not flow properly. I wonder if there is a way to tell the neural network to treat y as computed from wx? |
st80206 | One trick that often helps for “pretend it has been calculated with x even when I used x1” is to use y = wx + (w1x1 - wx).detach_(): In the forward wx is cancelled out, so y = w1x1, but in the backward, the detach_ causes gradients to only flow to wx.
Would that work for you?
Best regards
Thomas
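As an illustration (an added sketch, not Thomas's code), a tiny numeric check of this trick with made-up shapes:
import torch

w = torch.ones(2, 2, requires_grad=True)
x = torch.ones(2, 2)
w1 = torch.cat([w, w], dim=1)        # some expanded version of w
x1 = torch.cat([x, x], dim=0)        # matching expansion of x

wx = w.mm(x)
y = wx + (w1.mm(x1) - wx).detach()   # forward value equals w1 @ x1
y.sum().backward()                   # but gradients flow only through wx
print(y)                             # entries of 4 (the w1 @ x1 value)
print(w.grad)                        # entries of 2, as if y had been w @ x
The out-of-place detach() behaves the same here as the in-place detach_() on the temporary. |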
st80207 | Hi, I’ve been investigating this issue these days. The approach you provided seems to work properly. Thank you so much!!!
However, I also tried something like this:
y = wx
y1 = w1x1
y.data = y1.data
From my understanding, this should also work. But it actually led to a different result from your approach. Do you know why that is? |
st80208 | Autograd does not trace this operation (manually changing the data attribute) and hence different result. |
st80209 | Hi, thanks for your reply. I still don’t get what you mean by “not tracing this operation”. I tried to create a simple network with only 2 parameters, w1 and w2. The middle layer is given by y1 = w1x, and the output layer is given by y2 = w2y1. Now if I simply do
y1 = w1x
temp = torch.tensor([1.0, 2.0])
y1.data = temp.data
the backward gradient seems to get calculated properly. How and in which cases does .data cause problems? |
st80210 | As @InnovArul said, the manipulation of data won’t be traced and thus might lead to a wrong result.
Here is a simple example demonstrating this issue.
In the first part of the code, we just calculate the loss for our operations and apply the gradient on w.
We expect values of [[17., 17.]] after the update.
In the second example, I’ve manipulated the underlying data of w after the gradient calculation.
Autograd did not trace this manipulation and the gradients for the original w are now applied on the manipulated w.
# Standard approach
x = torch.ones(1, 2)
w = torch.ones(2, 1, requires_grad=True)
target = torch.full((1,), 10.)
optimizer = torch.optim.SGD([w], lr=1.)
output = x.mm(w)
loss = (output - target)**2
loss.backward()
print(w.grad)
> tensor([[-16.],
[-16.]])
optimizer.step()
print(w)
> tensor([[ 17.],
[ 17.]])
# Now manipulate the underlying data
w = torch.ones(2, 1, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=1.)
output = x.mm(w)
loss = (output - target)**2
loss.backward()
print(w.grad)
> tensor([[-16.],
[-16.]])
w.data = torch.full((2, 1), -100.)
optimizer.step()
print(w)
> tensor([[-84.],
[-84.]]) |
st80211 | Oh, I see. So in my example above, the backward computation will use the manipulated value of y1 (i.e. temp) instead of the original one (i.e. w1x), right?
But for the code below:
# method 1
y = wx
temp = w1x1
y.data = temp.data
# method 2
y = wx
temp = w1x1
y += (temp - y).detach_()
I still think they are equivalent, since during the backward computation, the value of y will be changed to temp in both cases. Please correct me if I’m wrong. |
st80212 | Of course, there are many ways to make the operation equivalent in calculating gradients. It’s just not advised to access .data if you want to completely depend on autograd for the accuracy of your gradients.
From your question at the top, I feel that your use case would be achievable by using the .repeat() function of tensors. Can you have a look?
For example, you can say
y = w.repeat(1,2) * x.repeat(1,2) |
st80213 | Hi, thank you for the help!! This could work, but instead of this one line of code, can I split it into different lines?
w1 = w.repeat(1,2)
x1 = x.repeat(1,2)
y = w1*x1
Since w has .requires_grad set to True, the PyTorch documentation indicates w1 will also have .requires_grad set to True. Will this change the network topology or increase the number of parameters? |
st80214 | You can absolutely do this and it will not increase the parameters, i.e. w is the only parameter in your code (assuming x is data).
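For illustration, a small check (an added sketch with made-up 2x1 shapes, which may differ from the actual ones in this thread) that repeat() keeps w as the only leaf parameter and sums the gradients from both copies back into it:
import torch

w = torch.ones(2, 1, requires_grad=True)   # the only parameter
x = torch.ones(2, 1)                       # data

w1 = w.repeat(1, 2)                        # intermediate tensor, not a new parameter
x1 = x.repeat(1, 2)
y = w1 * x1

y.sum().backward()
print(w.grad)        # each entry is 2: gradients from both copies accumulate in w
print(w1.is_leaf)    # False: w1 is just part of the graph
So only w needs to be handed to the optimizer. |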
st80215 | I would generally recommend against using .data at all.
There are two parts:
If you want to break the graph at a point, use .detach() instead.
If you want a calculation without autograd, use with torch.no_grad():.
Using .data is a bit like using those two, but except in very special situations (e.g. the optimizer updates internally) there isn’t a good reason to use it, except if you like headaches (then it is much cheaper than too much beer at the Oktoberfest, though).
Best regards
Thomas |
st80216 | Hi, these days I went back to this problem for some reason, and it seems autograd did trace the data attribute. Here is an example based on yours:
x = torch.ones(1, 2)
w = torch.ones(2, 1, requires_grad=True)
target = torch.full((1,), 10.)
# manipulate data attribute
x.data = torch.cat((x,x), dim=1)
w.data = torch.cat((w,w), dim=0)
output = x.mm(w)
optimizer = torch.optim.SGD([w], lr=1.)
loss = (output - target)**2
loss.backward()
print(w.grad)
> tensor([[-12.],
[-12.],
[-12.],
[-12.]])
If autograd did not trace data attribute, should w.grad have a size of 2x1 instead of 4x1? Please correct me if I’m wrong. Thx! |
st80217 | Thanks for the example!
The manipulation of the .data attribute will work in your example, as you are manipulating it before performing any computation.
I would still discourage the usage of it, as it might still break, if you manipulate the data after some of the computation graph was already created. |
st80218 | AFAIK, torch.nn.Sigmoid calls torch.nn.functional.sigmoid in the background, and according to this answer, the functional and torch.xxx calls differ in their backwards implementation (which is more efficient and GPU-capable in the torch.nn case). |
st80219 | Hi Rohan!
Rohan_Kumar:
what is the difference between these 2 sigmoid functions?
torch.nn.Sigmoid (note the capital “S”) is a class. When you instantiate it, you get a function object, that is, an object that you can call like a function. In contrast, torch.sigmoid is a function. From the source code for torch.nn.Sigmoid, you can see that it calls torch.sigmoid, so the two are functionally the same.
(Why even have such a class / function object? Because, although it isn’t the case for Sigmoid, in many cases when you construct a pytorch function object you can pass in parameters to the constructor that control the behavior of the function. This is useful in cases where the caller isn’t able (or it might just be annoying) to pass in those parameters when actually calling the function.)
As far as Alex’s comment, he references torch.nn.functional.sigmoid, which is (probably) different than torch.sigmoid. (Again, in any event, directly from the source code, torch.nn.Sigmoid calls torch.sigmoid, so the two are functionally the same.)
As for the post Alex links to, about torch.nn.functional.sigmoid having a different backwards implementation than torch.sigmoid, I suspect that this is out of date (or perhaps just incorrect). Its documentation shows that it’s deprecated in favor of torch.sigmoid() and that it calls input.sigmoid(). I very much doubt that torch.nn.functional.sigmoid and torch.sigmoid do anything different (but since I can’t find any code for torch.sigmoid or tensor.sigmoid it’s hard to tie this up into a neat, little package).
Good luck.
K. Frank
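As a quick empirical check (an illustration added here, not part of the original reply), the three spellings give the same values on the same input:
import torch
import torch.nn.functional as F

x = torch.randn(4)
s1 = torch.nn.Sigmoid()(x)   # class: instantiate, then call
s2 = torch.sigmoid(x)        # plain function
s3 = F.sigmoid(x)            # deprecated alias; may emit a warning
print(torch.allclose(s1, s2) and torch.allclose(s2, s3))  # True
Any difference is thus the deprecation warning, not the numerical result. |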
st80220 | I was wondering how to convert the last linear layer of the architecture below for a classification task (in my case 6 classes), as there is no fc attribute like in resnext. Any help is appreciated (one possible approach is sketched after the printed architecture).
This is the entire architecture:
ResNet(
(net): Sequential(
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(4): Sequential(
(0): ResNetBlock(
(net): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(256, 16, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(16, 256, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): ResNetBlock(
(net): Sequential(
(0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(256, 16, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(16, 256, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(2): ResNetBlock(
(net): Sequential(
(0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(256, 16, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(16, 256, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
)
(5): Sequential(
(0): ResNetBlock(
(net): Sequential(
(0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(32, 512, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): ResNetBlock(
(net): Sequential(
(0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(32, 512, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(2): ResNetBlock(
(net): Sequential(
(0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(32, 512, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(3): ResNetBlock(
(net): Sequential(
(0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(32, 512, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
)
(6): Sequential(
(0): ResNetBlock(
(net): Sequential(
(0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(2): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(3): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(4): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(5): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(1024, 64, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(64, 1024, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
)
(7): Sequential(
(0): ResNetBlock(
(net): Sequential(
(0): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(2048, 128, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(128, 2048, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): ResNetBlock(
(net): Sequential(
(0): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(2048, 128, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(128, 2048, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
(2): ResNetBlock(
(net): Sequential(
(0): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): SEBlock(
(net): Sequential(
(0): AdaptiveAvgPool2d(output_size=1)
(1): Conv2d(2048, 128, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace=True)
(3): Conv2d(128, 2048, kernel_size=(1, 1), stride=(1, 1))
(4): Sigmoid()
)
)
)
(relu): ReLU(inplace=True)
)
)
(8): AdaptiveAvgPool2d(output_size=(1, 1))
(9): Dropout(p=0.0, inplace=False)
(10): Flatten()
(11): Linear(in_features=2048, out_features=1000, bias=True)
)
)
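One possible way to adapt this for 6 classes (a sketch under the assumption, read off the printout above, that the whole network lives in a Sequential called net whose entry (11) is the final Linear; double-check the index on the real model):
import torch.nn as nn

# swap the 1000-way ImageNet head for a 6-class one
in_features = model.net[11].in_features   # 2048 according to the printout
model.net[11] = nn.Linear(in_features, 6)
After the swap, only the new layer is randomly initialized; the rest of the weights are untouched. |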
st80221 | Finally got fed up with TensorFlow and am in the process of piping a project over to PyTorch. So far I’ve found PyTorch to be different but MUCH more intuitive.
One of my nets is a good old-fashioned autoencoder I use for anomaly detection on unlabelled data. I’ve set it up to periodically report my current training and validation loss and have come across a head-scratcher. My training loss improves about as I’d expect (although faster would be great), but my validation loss remains essentially the same. I’ve perused the forums here and can’t find anything that helps. Admittedly, while I can build nets pretty well in TensorFlow, this could just be a stupid error on my part. I know it isn’t the data, as the same data, formatted exactly the same way, performs well in TensorFlow. And I have… a lot of data. So no issues there. I’ll post a sample below of the minimum working code that reproduces the error.
If you want machine specs, I’d be happy to post them if they’re relevant (three GPUs).
Thanks!
Formatting data and defining the autoencoder
train_data = torch.from_numpy(df[fullData].values[:train])
val_data = torch.from_numpy(df[fullData].values[validate:])
batchsize = 1024
train_iter = DataLoader(dataset=train_data, batch_size=batchsize, shuffle=True)
val_iter = DataLoader(dataset=val_data, batch_size=batchsize, shuffle=True)

class Model(nn.Module):
    def __init__(self, input_size, output_size, droprate):
        super(Model, self).__init__()
        self.en1 = nn.Linear(input_size, 640)
        self.dp1 = nn.Dropout(droprate)
        self.en2 = nn.Linear(640, 320)
        self.dp2 = nn.Dropout(droprate)
        self.en3 = nn.Linear(320, 160)
        self.dp3 = nn.Dropout(droprate)
        self.en4 = nn.Linear(160, 80)
        self.dp4 = nn.Dropout(droprate)
        self.dec1 = nn.Linear(80, 160)
        self.dp5 = nn.Dropout(droprate)
        self.dec2 = nn.Linear(160, 320)
        self.dp6 = nn.Dropout(droprate)
        self.dec3 = nn.Linear(320, 640)
        self.dp7 = nn.Dropout(droprate)
        self.dec4 = nn.Linear(640, output_size)

    def forward(self, ins):
        x = F.elu(self.en1(ins))
        x = self.dp1(x)
        x = F.elu(self.en2(x))
        x = self.dp2(x)
        x = F.elu(self.en3(x))
        x = self.dp3(x)
        x = F.elu(self.en4(x))
        x = self.dp4(x)
        x = F.elu(self.dec1(x))
        x = self.dp5(x)
        x = F.elu(self.dec2(x))
        x = self.dp6(x)
        x = F.elu(self.dec3(x))
        x = self.dp7(x)
        output = F.elu(self.dec4(x))
        return output
Aaaaand doing the training
model = Model(input_size, output_size, 0.5)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

optimizer = optim.Adam(model.parameters(), lr=0.0001, weight_decay=0.00001)
num_epochs = 10
iters = 0
model = model.double()
validate = 100
best_val_loss = 100
vals = []
losses = []
iterations = []

for epoch in range(num_epochs):
    for batch_idx, batch in enumerate(train_iter):
        model.train()
        optimizer.zero_grad()
        iters += 1
        inputs = batch.to(device)
        output = model(inputs)
        train_loss = criterion(output, inputs)
        train_loss.backward()
        optimizer.step()
        if iters % validate == 0:
            val_loss = 0
            model.eval()
            with torch.no_grad():
                for val in val_iter:
                    val = val.to(device)
                    answer = model(val)
                    val_loss = criterion(answer, val)
            vals.append(val_loss.item())
            iterations.append(iters)
            losses.append(train_loss.item()) |
st80222 | Solved by rtkaratekid in post #6
Yeah, not sure why, but pirating this code got me the kind of performance one would expect from a neural net. I’ll have to look into the differences a little more to understand why it’s working while my model doesn’t. |
st80223 | Each time iters % validate == 0, you append only the last train_loss to losses, and the last val_loss to vals.
Maybe you’d want to accumulate them and append their means, for example. |
st80224 | @phan_phan thanks for the reply!
I went ahead and modified my code to accumulate the averages of the values as you suggested. My results look a little better, but I think it maybe just confirms the problem further, haha.
For reference, my:
starting training loss was 0.016 and validation was 0.0019,
final training loss was 0.004 and validation loss was 0.0007.
And here’s a viz of the losses over ten epochs of training. Based on this, I think the model is improving and I’m not calculating validation loss correctly, but I can’t figure out anything I’m doing wrong! |
st80225 | What is curious is that the validation loss seems to converge 10 times faster than the training loss.
To better understand what is going on, could you:
Do validation at a higher frequency; for example at validate = 20
Try to train it without dropout… just to see if something changes
By the way, an nn.Dropout() layer has no parameters. So you can define a single layer instead of seven: self.dp = nn.Dropout(droprate)
And in forward call this layer multiple times: x = self.dp(x) (see the sketch below).
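For clarity, a minimal sketch of that single-dropout pattern (illustrative, with much smaller layers than the model above):
import torch.nn as nn
import torch.nn.functional as F

class TinyAE(nn.Module):
    def __init__(self, droprate=0.5):
        super().__init__()
        self.en = nn.Linear(64, 16)
        self.dec = nn.Linear(16, 64)
        self.dp = nn.Dropout(droprate)   # one stateless layer, reused everywhere

    def forward(self, x):
        x = self.dp(F.elu(self.en(x)))
        x = self.dp(F.elu(self.dec(x)))
        return x
Since dropout has no weights, reusing one instance is equivalent to defining seven with the same rate. |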
st80226 | It’s funny that you mention that about the dropout, because right before reading your comment I had a little facepalm moment when I realized just that.
I also tried taking out the dropout, and what happened was the validation accuracy had the same behavior (where it was essentially the same over time), but the accuracy improved to the range of 1e-5.
When I validate more often (which was a good suggestion), I think it’s uncovered a little more granular view of it not improving, haha.
This is validating every 20 without dropout for 10 epochs.
This is the same, but with a dropout rate of 0.5 between each layer.
What I might need to do is just copy some other project, try to run it and see if I can replicate their results. The validation scores are good; my worry is just that since they aren’t improving, 1) something is wrong, and 2) that limits how accurate my model can be in the long run |
st80227 | Yeah, not sure why, but pirating this code got me the kind of performance one would expect from a neural net. I’ll have to look into the differences a little more to understand why it’s working while my model doesn’t. |
st80228 | Hi,
I am using data parallel across two GPUs.
How should I set my batch size and learning rate? My loss is not decreasing, while on one GPU it decreased.
Are there any rules of thumb to do this?
Specifically, for the learning rate and batch size.
Also, my second GPU is probably not being used:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.88 Driver Version: 418.88 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... On | 00000000:26:00.0 Off | N/A |
| 27% 62C P2 76W / 280W | 9523MiB / 11175MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... On | 00000000:27:00.0 Off | N/A |
| 0% 31C P8 11W / 280W | 10MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 24 C python 9513MiB |
+-----------------------------------------------------------------------------+
How can I make sure the second GPU is also used? I am wrapping my model in data parallel. |
st80229 | Not sure what I am doing wrong but I have had some weird issues where if I use torch.utils.data.RandomSampler with torch.utils.data.DataLoader (i.e. DataLoader(shuffle=True)) I get dramatically different F1 scores when I load identical data from different locations. One location is a local drive and the other is a NFS drive. Both locations contain exactly the same files when checked with md5sum over each one. I found that if I create my own version of RandomSampler (below) I no longer see the difference in performance. I think the issue might be with randperm if my code is correct (the issue exists on both CPU and GPU).
import numpy as np
import torch

class RandomSampler(torch.utils.data.RandomSampler):
    def __iter__(self):
        n = len(self.data_source)
        if self.replacement:
            # sample n indices with replacement (np.random.randint; the original post had a typo here)
            return iter(np.random.randint(0, n, size=n).tolist())
        arr = np.arange(n)
        np.random.shuffle(arr)
        return iter(arr.tolist())
I would post a minimal working example here but not sure how to reduce my working code (dataset is >1GB as well) to reproduce. |
st80230 | Is this effect reproducible, i.e. how many times have you trained your model from the local drive and the NFS drive? |
st80231 | I’ve been able to reproduce it dozens of times now (took me a long time to figure out this was the difference). Very puzzled as to why. |
st80232 | pytorch’s Variable is too confusing to use well, and I got an error that took my whole day to solve, but I still have no idea where the problem is.
Here is part of my code; ‘train_batches’ is just an iterator over my train set:
for batch in train_batches:
loss=0
Encoder_optimizer.zero_grad()
Attention_optimizer.zero_grad()
Score_optimizer.zero_grad()
for idx in range(args.batch_size):
question=Variable(torch.LongTensor(batch['question_token_ids'][idx]))
answer_passage=batch['answer_passage'][idx]
label=torch.zeros(args.max_paragraph_num)
label[answer_passage]=1
label=Variable(label)
label.requires_grad=True
scores = Variable(torch.zeros(args.max_paragraph_num))
Encoder.init_hidden()
_,question=Encoder(question)
j=0
for pidx in range(idx*args.max_paragraph_num,(idx+1)*args.max_paragraph_num):
passage=Variable(torch.LongTensor(batch['passage_token_ids'][pidx]))
Encoder.init_hidden()
passage,_=Encoder(passage)
passage=Attention(passage,question)
score=Score(passage,question)
scores[j]=score
j+=1
scores=F.softmax(scores,0)
loss+=loss_func(label,scores.view(1,5))
the error is:
Traceback (most recent call last):
File “/home/k/PycharmProjects/PassageRanking/run.py”, line 154, in
run()
File “/home/k/PycharmProjects/PassageRanking/run.py”, line 146, in run
train(args)
File “/home/k/PycharmProjects/PassageRanking/run.py”, line 134, in train
loss+=loss_func(label,scores.view(1,5))
File “/home/k/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “/home/k/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py”, line 677, in forward
_assert_no_grad(target)
File “/home/k/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py”, line 11, in _assert_no_grad
"nn criterions don’t compute the gradient w.r.t. targets - please "
AssertionError: nn criterions don’t compute the gradient w.r.t. targets - please mark these variables as volatile or not requiring gradients |
st80233 | Take the following as an example,
>>> loss = nn.L1Loss()
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> target = autograd.Variable(torch.randn(3, 5))
>>> output = loss(input, target)
>>> output.backward()
target is the second parameter, not the first. So you need to swap the parameters. |
st80234 | But the same error still occurs after I swap the positions of input and target :sob: |
st80235 | Because you set label.requires_grad=True, delete this line. Remember we only need to compute the gradient w.r.t the input. In many situations, the gradient w.r.t the target is useless. |
st80236 | But why throw an error?
I came across a use case where I needed to minimize the MSE between intermediate features of an auto-encoder: both input and target need to be differentiated there.
I had to trade nn.MSELoss(encoder_i, decoder_i) for torch.sum((encoder_i - decoder_i)**2), which also does the job. However, I’m not 100% sure I didn’t lose something with this fix (efficiency?). I don’t understand why such uses of nn losses are not permitted. |
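A minimal sketch of that manual workaround (post-0.4 tensors, no Variable wrapper needed; .mean() is used so the scale matches nn.MSELoss’s default reduction):
import torch

encoder_i = torch.randn(8, 128, requires_grad=True)
decoder_i = torch.randn(8, 128, requires_grad=True)

# hand-written MSE: gradients flow into *both* tensors
loss = ((encoder_i - decoder_i) ** 2).mean()
loss.backward()
print(encoder_i.grad.shape, decoder_i.grad.shape)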
st80237 | So, how do you solve this problem when both the input and the target need to be differentiated? |
st80238 | Hi everyone,
I have 2 multivariate Gaussians and I want to compute the KL-divergence between them. The shape of mu1, mu2, std1, std2 is (batch_size, 128). Currently I am computing this with a for loop. Can this be done in a vectorized way?
def compute_kl_div(mu1, mu2, std1, std2):
kl_sum = 0
for i in range(batch_size):
cov1 = torch.diag(std1[i])
cov2 = torch.diag(std2[i])
a = torch.logdet(cov1) - torch.logdet(cov2)
cov2_inv = torch.matrix_power(cov2, -1)
b = torch.trace(torch.mm(cov2_inv, cov1))
c = torch.mm(torch.mm((mu2[i] - mu1[i]).unsqueeze(0), cov2_inv), (mu2[i] - mu1[i]).unsqueeze(1))
kl_sum += (a - 64 + b + c)
return 0.5*kl_sum |
st80239 | Every one of the functions seems to be batchable or to have a batchable equivalent, so you could just convert it line by line.
But then, it seems that the computation by explicitly constructing the diagonal matrices is excessively inefficient – logdet and inverse are easy for diagonal matrices and hard for general ones – and I would recommend revisiting the maths here. (The GP people do a lot of these calculations, so GPyTorch or my ancient CandleGP will have plenty of examples, even if the latter doesn’t quite have what you want.)
It may be convenient to use einsum for the summations.
Best regards
Thomas |
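To make the diagonal-covariance shortcut concrete, a vectorized sketch could look like the following. It assumes std1/std2 hold the diagonal entries of the covariance (i.e. variances), as the loop above uses them; square them first if they are standard deviations. It follows the standard KL(N1 ‖ N2) formula, so check the sign convention of the logdet term against the original loop:
import torch

def kl_diag_gaussians(mu1, var1, mu2, var2):
    # KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), one value per batch row
    log_ratio = torch.log(var2) - torch.log(var1)      # contributes log|cov2| - log|cov1| after the sum
    trace_term = var1 / var2                           # tr(cov2^-1 cov1)
    quad_term = (mu2 - mu1).pow(2) / var2              # (mu2 - mu1)^T cov2^-1 (mu2 - mu1)
    kl = 0.5 * (log_ratio + trace_term + quad_term - 1.0).sum(dim=1)
    return kl.sum()                                    # or .mean(), depending on the objective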
st80240 | I have 2 tensors that have shapes like this:
(15, 720, 30)
(15, 720, 19, 2)
I want to gather the tensors into one tensor with the shape like this : (15, 720, 19, 30), how can I do this?
From the other topic, How to do the tf.gather_nd in pytorch?, I see that there is a way, but it’s tricky for me to understand how to use it. |
st80241 | Hi,
What should your output matrix contain? I fail to see what is the operation that you want to do here. |
st80242 | Hi,
Thanks for your reply.
If you look at this line of the repo: https://github.com/bdqnghi/bi-tbcnn/blob/master/bi-tbcnn/bi-tbcnn/network.py#L155, it performs a bunch of complicated matrix operations on the tree structure.
These lines do the job:
vector_lookup = tf.concat([zero_vecs, nodes[:, 1:, :]], axis=1)
....
children = tf.concat([batch_indices, children], axis=3)
return tf.gather_nd(vector_lookup, children, name='children')
the shape of the vector lookup will be sth like: (15, 720, 30)
and the shape of the children will be sth like: (15, 720, 19, 2)
and my goal is to do something similar to the return line.
I’m trying to port this to PyTorch, but it seems quite tricky in some parts. |
st80243 | I do not know tensorflow very well, so I am not sure if I got what you are asking correctly. Can you check if that’s what you are looking for:
>>> lookup=torch.ones((15, 720, 30))
>>> children=torch.randint(0,15,(15, 720, 19, 2),dtype=torch.long)
>>> lookup[children[:,:,:,0],children[:,:,:,1],:].shape
torch.Size([15, 720, 19, 30]) |
st80244 | @enisberk
you’re the lifesaver !!!, thanks a lot
But I don’t understand how did you come up with the solution? Do you mind sharing how the code works?
The scenario can be described like this:
lookup=torch.ones((15, 720, 30))
means: batch_size x num_nodes x feature_size
where num nodes are the number of nodes in a tree and each node is represented by an embedding with size = feature_size.
children= (15, 720, 19, 2)
means: batch_size x num_nodes x num_children x 2. I don’t really understand the meaning of 2 here.
Since every node in a tree is represented by an embedding, what the tensorflow code does is “merge” the 2 tensors into one: the fourth dimension means that each child in dimension 3 gets the corresponding embedding, which makes the later steps more convenient.
In your code, I don’t really get how it works, but it did the job… |
st80245 | I am glad it helped.
Check this out first to understand gather_nd: explanation of gather_nd from stackoverflow.
So basically gather_nd needs a set of indices (children in our case) to use.
In our case children has shape [15, 720, 19, 2], so you can think of it as 15*720*19 index tuples with two elements each. Say one of those tuples equals (19, 22): it corresponds to lookup[19, 22, :], a feature vector of length 30. Since there are 15*720*19 such tuples, combining them all gives [15, 720, 19, 30].
When you translate that to pytorch, all those index tuples correspond to:
children[:,:,:,0],children[:,:,:,1]
You need to use them as indexing elements into lookup, but you do not have an index for the last dimension, so you take all of it:
lookup[children[:,:,:,0],children[:,:,:,1],:]
I hope that makes it clearer. |
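If you want the same trick without hard-coding the number of index columns, a small helper along these lines should work (a sketch; it assumes the last dimension of indices addresses the leading dimensions of params, as in tf.gather_nd):
import torch

def gather_nd(params, indices):
    # split the last dimension into one index tensor per indexed dimension,
    # then let advanced indexing broadcast them together
    idx = indices.unbind(dim=-1)
    return params[idx]

lookup = torch.ones(15, 720, 30)
children = torch.randint(0, 15, (15, 720, 19, 2), dtype=torch.long)
print(gather_nd(lookup, children).shape)   # torch.Size([15, 720, 19, 30])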
st80246 | Almost all major open source Python packages now support both Python 3.x and Python 2.7, and many projects have been supporting these two versions of the language for several years. While we have developed tools and techniques to maintain compatibility efficiently, it is a small but constant friction in the development of a lot of code.
We are keen to use Python 3 to its full potential, and we currently accept the cost of writing cross-compatible code to allow a smooth transition, but we don’t intend to maintain this compatibility indefinitely. Although the transition has not been as quick as we hoped, we do see it taking place, with more and more people using, teaching and recommending Python 3.
The developers of the Python language extended support of Python 2.7 from 2015 to January 1, 2020, recognising that many people were still using Python 2. We believe that the extra 5 years is sufficient to transition off of Python 2, and our projects plan to stop supporting Python 2 when upstream support ends in 2020, if not before. We will then be able to simplify our code and take advantage of the many new features in the current version of the Python language and standard library.
In addition, significantly before 2020, many of our projects will step down Python 2.7 support to only fixing bugs, and require Python 3 for all new feature releases. Some projects have already made this transition. This too parallels support for the language itself, as Python 2.7 releases only include bugfixes and security improvements.
Third parties may offer paid support for our projects on old Python versions for longer than we support them ourselves. We won’t obstruct this, and it is a core principle of free and open source software that this is possible. However, if you enjoy the free, first party support for many projects including the Scientific Python stack, please start planning to move to Python 3.
For all of these reasons, we have pledged to drop support for Python 2.7 no later than Jan 1st, 2020, coinciding with the Python development team’s timeline for dropping support for Python 2.7. |
st80247 | What is the most efficient way of zero padding a multi dimensional signal before using torch.fft?
The respective function does not take such an argument, so I am wondering whether placing the signal into a zero tensor at the right positions is the most suitable way of mimicking zero-padding.
Can somebody share their thoughts on this please? |
st80248 | Placing the tensor into another tensor initialized with zeros should work.
However, the more straightforward way would probably be to use F.pad instead.
Would that work for you? |
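For completeness, F.pad pads with zeros by default and takes the pad amounts for the last dimensions first, so a sketch for a batch of 2-D signals might look like this (sizes are made up):
import torch
import torch.nn.functional as F

x = torch.randn(4, 128, 128)            # batch of 2-D signals
x_padded = F.pad(x, (0, 64, 0, 64))     # (left, right, top, bottom) -> 192 x 192
print(x_padded.shape)                   # torch.Size([4, 192, 192])
# hand x_padded to whichever FFT call you are using afterwards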
st80249 | I want to implement my own quantized and clipped ReLU. This is how I implemented it:
class _quantAct(torch.autograd.Function):
@staticmethod
def forward(ctx, input, clip_low=0., clip=6., bits=8, inplace=False):
if inplace:
ctx.mark_dirty(input)
output = input
else:
output = input.clone()
output[output<clip_low]=clip_low
output[output>clip]=clip
output = output.div(clip).mul((2**bits)-1).round().div((2**bits)-1).mul(clip)
ctx.save_for_backward(output.eq(clip_low)+output.eq(clip))
return output
@staticmethod
def backward(ctx, grad_output):
# saved tensors - tuple of tensors with one element
mask, = ctx.saved_tensors
grad_input = grad_output.masked_fill(mask,0)
return grad_input, None, None, None, None
class quantReLU(nn.ReLU):
def __init__(self, clip=6., bits=8, inplace=False):
super(quantReLU, self).__init__()
self.clip = clip
self.bits = bits
self.inplace = inplace
def forward(self, inputs):
return _quantAct().apply(inputs, 0, self.clip, self.bits, self.inplace)
How many grads do I have to return from the static backward method of my torch.autograd.Function inherited class? Why does it expect me to return 5 of them?
Appreciate your inputs, thanks! |
st80250 | You need to return as many values from backward as were passed to forward, and this includes any non-tensor arguments (like clip_low etc.). For non-Tensor arguments that don’t have an input gradient you can return None, but you still need to return a value. So, as there were 5 inputs to forward, you need 5 outputs from backward. Technically, I gather that if the user only passed input and left the others as default you could just return one gradient, but then you’d have to track that; instead you can simply return extra Nones, which will be ignored.
Explained in the docs on extending autograd. |
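To make the “one return value per forward argument” rule concrete, a minimal (made-up) Function with two non-tensor arguments could look like this:
import torch

class ScaleAndShift(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, scale, shift):
        ctx.scale = scale                  # plain Python numbers can be stashed on ctx
        return input * scale + shift

    @staticmethod
    def backward(ctx, grad_output):
        # one slot per forward argument: a gradient for `input`,
        # then None for `scale` and None for `shift`
        return grad_output * ctx.scale, None, None

y = ScaleAndShift.apply(torch.randn(3, requires_grad=True), 2.0, 1.0)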
st80251 | Hi All,
I’m struggling to convert an image classifier trained with fast.ai to coreml for iphone and hoping to find someone who can help. Full detail is listed in my stack overflow question here: https://stackoverflow.com/questions/58276161/problem-when-converting-pytorch-image-classifier-to-mlmodel-returns-same-softma 33
Any suggestions or ideas would be awesome!
Thanks,
Steve |
st80252 | I am trying to normalize the weight that I get from the embedding layer using the F.normalize function, but I am getting the error “IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)”.
Not sure why.
def forward(self, imgs):
### complete the forward path --------------------
cls_scores = []
## YOUR CODE HERE
for i in range(len(self.classes)):
v = self.backbone(imgs)
class_out = random.choice(self.classes)
class_out = self.classes.index(class_out)
class_out = torch.tensor(class_out)
wt = self.embeddings(class_out) # not sure what should be the input
wt_normalized = F.normalize(wt, p=2, dim=1)
self.dc.weight = nn.Parameter(wt_normalized)
v_out = self.dc(v)
out = nn.Upsample(size=(8, 8), mode='bilinear')(v_out)
out = nn.Upsample(size=(64, 64), mode='bilinear')(v_out)
score = self.mlp(out)
cls_scores.append(score)
### ----------------------------------------------
return cls_scores # Dim: [batch_size, 10] |
st80253 | Based on the error message, it seems wt has only a single dimension.
If you want to normalize using dim=1, make sure your tensor has at least two valid dims:
F.normalize(torch.randn(10, 10), p=2, dim=1) # works
F.normalize(torch.randn(10), p=2, dim=1) # throws your error |
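In the code above, wt likely comes from indexing the embedding with a single scalar class index, which yields a 1-D tensor. Either of these (a sketch with made-up sizes) would avoid the error:
import torch
import torch.nn.functional as F

emb = torch.nn.Embedding(10, 16)
wt = emb(torch.tensor(3))                        # scalar index -> shape [16], only one dim

wt_a = F.normalize(wt, p=2, dim=0)               # normalize along the only existing dim
wt_b = F.normalize(wt.unsqueeze(0), p=2, dim=1)  # or add a leading dim so dim=1 exists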
st80254 | Hi everyone,
I’m iterating through a dataset in the following way:
from data_loader.hci_benchmark import HCIDataset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
evalset = HCIDataset('test', False)
eval_loader = DataLoader(evalset,
batch_size=2,
pin_memory=False,
num_workers=2)
for i, sample in enumerate(eval_loader):
print(i)
for k in sample.keys():
print(k)
break
When I run the code above, on the start of the for loop, I get the following error:
Traceback (most recent call last):
File "testDataLoaders.py", line 18, in <module>
for i, sample in enumerate(eval_loader):
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
return self._process_data(data)
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
data.reraise()
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/_utils.py", line 369, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 75, in default_collate
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 75, in <dictcomp>
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 53, in default_collate
numel = sum([x.numel() for x in batch])
File "/home/***/anaconda3/envs/epinet/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 53, in <listcomp>
numel = sum([x.numel() for x in batch])
AttributeError: 'str' object has no attribute 'numel'
I am doing a fairly simple thing, yet I can’t seem to get my head around the error. The dataset (evalset) follows the Dataset class documentation.
I’m using Python 3.7.4, PyTorch 1.2.0 and CUDA V10.0.130.
Googling the error doesn’t provide any insights. Has anyone experienced this? Any suggestions on how to proceed? |
st80255 | The error seems to have to do with the fact that I am sorting the keys of a dictionary when returning it in the __getitem__(self, idx) method of HCIDataset. |
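The traceback suggests that, for at least one key, some samples may return a tensor while others return a string, which default_collate cannot stack. The real fix is to make __getitem__ return consistent types for every key, but a custom collate_fn is one way to sidestep it (a sketch; it reuses evalset from the question above):
import torch
from torch.utils.data import DataLoader

def collate_keep_strings(batch):
    # batch is a list of dicts; stack values that are all tensors, keep the rest as plain lists
    out = {}
    for key in batch[0]:
        values = [sample[key] for sample in batch]
        if all(isinstance(v, torch.Tensor) for v in values):
            out[key] = torch.stack(values)
        else:
            out[key] = values
    return out

eval_loader = DataLoader(evalset, batch_size=2, num_workers=2,
                         collate_fn=collate_keep_strings)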
st80256 | I am getting
no module named hypothesis
error when I ran the command
python test/run_test.py
Running test_expecttest … [2019-10-07 05:34:41.949386]
Traceback (most recent call last):
File “test_expecttest.py”, line 8, in
import hypothesis
ModuleNotFoundError: No module named ‘hypothesis’
Traceback (most recent call last):
File “test/run_test.py”, line 458, in
main()
File “test/run_test.py”, line 450, in main
raise RuntimeError(message)
RuntimeError: test_expecttest failed! |
st80257 | If you use conda, run conda install hypothesis; for pip, I believe it should be pip install hypothesis |
st80258 | Hey guys, I was wondering if it’s possible to create a convnet with variable size input images as training data. If so can someone provide a simple example on how to achieve sth like that. For instance in tensorflow I would go and simply define the input shape on the first conv layer as (None, None, 3). Can we do sth like that in pytorch?
Thanks! |
st80259 | it depends on what you want to do at the end.
If the convnet is for image classification (you want one output for an image, regardless of size), then you can use nn.AdaptiveAvgPool2d right before the fully connected layers.
If you want dense classification (larger image = larger output), you can replace your nn.Linear layers at the end with nn.Conv2d with kernel size 1x1 |
st80260 | Hi Soumith, thanks a lot for the answer. Ultimately the goal is to do classification on image ROIs, and since they are of different sizes, that’s why I was asking. So for instance, if we have a training loop like this:
for epoch in epochs:
for sample in nb_samples:
outputs = convnet(sample)
where each sample represents a batch and it actually is a list of images [img_1, img_2,...,img_n]. And, img_1, img_2,...,img_n represent ROIs extracted from the original images.
Would that work or do I need to specify sth beforehand in the convnet architecture to make it work with this kind of data?
I’m not quite sure about the benefits of nn.AdaptiveAvgPool2d in this case. Would you mind elaborating a little bit? How could it help in terms of making the convnet deal with data where each sample has a variable size?
For instance:
img1.shape = 238, 126, 3
img2.shape = 68, 234, 3
img3.shape = 225, 98, 3
. . .
img_n = ... |
st80261 | AdaptiveAvgPool2d will take a variable-sized input and downsample it to a fixed size. It does this by adaptively changing the pooling window size based on the input size. |
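A minimal sketch of that idea (channel counts and the 4x4 output size are arbitrary; differently sized ROIs are fed one at a time, since they cannot be stacked into one batch tensor):
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d((4, 4))      # any H x W in, always 4 x 4 out
        self.fc = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x))
        return self.fc(x.flatten(1))

net = TinyNet()
print(net(torch.randn(1, 3, 238, 126)).shape)   # torch.Size([1, 10])
print(net(torch.randn(1, 3, 68, 234)).shape)    # torch.Size([1, 10])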
st80262 | @kirk86 Can you share the key code that loads different-size input images?
I read the source code of the dataloader and found that torch.stack() is used. This function expects all elements in the batch to have exactly the same size. How do you handle that?
@smth
If I set the batch size to greater than 1, how can I use a dataloader with different-size inputs?
Thanks. |
st80263 | Same problem here. It would be much more convenient if we could do something like torch.stack with tensors of different sizes. |
st80264 | As a workaround, for batch size 1, you can manually accumulate the loss, take the average, then call backward and update the weights. |
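A rough sketch of that accumulation loop (net, criterion, optimizer and roi_list are placeholders for the poster’s own objects):
# net, criterion, optimizer and roi_list stand in for your own model, loss, optimizer and data
optimizer.zero_grad()
total_loss = 0.0
for img, target in roi_list:                  # each ROI has its own H x W
    out = net(img.unsqueeze(0))               # forward one ROI at a time (batch of 1)
    total_loss = total_loss + criterion(out, target.unsqueeze(0))
loss = total_loss / len(roi_list)
loss.backward()                               # gradients of the averaged loss
optimizer.step()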
st80265 | Thanks for the workaround, Jiedong. Is there a more optimized method for GPU utilization? Calling .cuda() on each data example tensor is not improving training performance over the CPU alone. |
st80266 | I might be missing something, but how does a kernel size of 1x1 solve this problem? How do we go from the 1x1 convolution of the preceding feature maps to a classification? |
st80267 | The number of kernels in the last conv layer (out_channels) will specify the number of classes, while the 1x1 kernel does not change the spatial size.
Using this approach you can easily output the class logits for each pixel location e.g. in a segmentation use case. |
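As a tiny illustration of that (channel counts and spatial size are arbitrary):
import torch
import torch.nn as nn

num_classes = 5
head = nn.Conv2d(32, num_classes, kernel_size=1)   # 32 feature maps -> per-pixel class logits
features = torch.randn(1, 32, 17, 29)              # any spatial size works
logits = head(features)
print(logits.shape)                                # torch.Size([1, 5, 17, 29])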