st84568
|
I ran it in python’s unit test with no errors. You sure it does raise an error in yours with that simple/trivial example?
|
st84569
|
You sure it does raise an error in yours with that simple/trivial example?
Yes. Notice that it prints idx = 3. The evaluation of self.bob[3] raises an IndexError. The Python interpreter catches this error and stops the for loop.
This doesn’t happen if you wrap it in a DataLoader like in @Prerna_Dhareshwar’s example because DataLoader uses __len__.
Here is the full runnable example:
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self):
        self.bob = [0, 1, 2]

    def __len__(self):
        print('len')
        return 0

    def __getitem__(self, idx):
        print(f'idx = {idx}')
        return self.bob[idx]

dataset = MyDataset()
for i, data in enumerate(dataset):
    print(i)
    print(data)
Here is the output:
idx = 0
0
0
idx = 1
1
1
idx = 2
2
2
idx = 3
|
st84570
|
Hello everyone,
I’m trying to build an LSTM model to predict whether a customer will qualify for a loan, given multiple data points accumulated over a 5-day window (the customer is discarded on day 6). My target variable is binary. Below is a snapshot of the data set for reference.
(screenshot of the sample data set omitted)
As you can see, “age” is available upon lead submission while credit score might get pulled anytime between day 2 and day 5. My ultimate goal is to have a model that can predict the outcome of a lead based on the data available at any point in time. For example, a lead with “age” = 25 and no credit score pulled on day 4 will have a low likelihood to convert (even lower, close to 0, if there’s still no credit score on day 5) but if the same lead had the credit score pulled on day 2 - assuming the credit score is good - it will indicate high intent by the consumer which would result in high likelihood to close. Basically, I’m looking to build a lead scoring model that updates its scores after each day passes and as new data is collected.
The PyTorch issue that I ran into is that I can’t understand how to reshape the input data in a way that makes sense for what I’m trying to do. I read this thread but it didn’t help: Understanding LSTM input
I understand that I have to reshape the data to be of shape (batch, time-steps, input_size). I tried using this method:
df = pd.read_csv("sample_data.csv")
a = torch.Tensor(df.values)
a.unsqueeze_(-1)
a = a.expand(100,5,5)
However the result is that each data point is repeated 5 times along the X axis as you can see below.
tensor([[[100., 100., 100., 100., 100.],
[ 1., 1., 1., 1., 1.],
[ 50., 50., 50., 50., 50.],
[ 0., 0., 0., 0., 0.],
[ 1., 1., 1., 1., 1.]],
[[100., 100., 100., 100., 100.],
[ 2., 2., 2., 2., 2.],
[ 50., 50., 50., 50., 50.],
[700., 700., 700., 700., 700.],
[ 1., 1., 1., 1., 1.]],
But my understanding is that each block should contain the 5 time-steps for each lead:
tensor([[[100., 100., 100., 100., 100.],
[ 1., 2., 3., 4., 5.],
[ 50., 50., 50., 50., 50.],
[ 0., 700., 700., 700., 700.],
[ 1., 1., 1., 1., 1.]],
Any help and possibly some starter code would be highly appreciated
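As a starting point, here is a minimal sketch of one way to obtain a (batch, time-steps, input_size) tensor, under the assumption that the CSV holds one row per lead per day, sorted by lead and then by day, with 5 feature columns (the file name and layout are assumptions):
import pandas as pd
import torch

df = pd.read_csv("sample_data.csv")
a = torch.tensor(df.values, dtype=torch.float32)  # shape: (num_leads * 5, 5)
num_leads = a.shape[0] // 5
a = a.view(num_leads, 5, 5)  # (batch, time-steps, input_size), ready for nn.LSTM(batch_first=True)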
|
st84571
|
I have some questions about the usage of local variables in the forward function. Say I have the following model:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 1)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, 128, 1)
        self.bn2 = nn.BatchNorm2d(128)
        self.relu = nn.ReLU(inplace=True)
I can write the forward function in 2 ways:
(1)
def forward(self, x):
    x = self.relu(self.bn1(self.conv1(x)))
    x = self.relu(self.bn2(self.conv2(x)))
    return x
(2)
def forward(self, x):
    x = self.relu(self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x))))))
    return x
Does (2) use less memory than (1)?
Also, when relu is inplace, I get an error if I do:
x = conv1(x)
x = bn1(x)
x = relu(x)
However, it is fine if I do:
x = relu(bn1(conv1(x)))
Is it because relu is in-place that I can’t assign its result to a local variable?
|
st84572
|
Hello. I am recently trying to load tfrecords using pytorch. However, it seems that if I load tf.data.TFRecordDataset in pytorch datasets and use dataloader with num_workers > 0, the program won’t work properly. I am wondering if there is any better ways to load tfrecords or other better ways to store large scale datasets.
Here are the example codes:
import numpy as np
import tensorflow as tf
import torch
from torch.utils.data import Dataset, DataLoader

class TestDataset(Dataset):
    def __init__(self, record_path):
        self.record_path = record_path
        # `decoder` is the TFRecord parsing function (not shown here)
        self.reader = tf.data.TFRecordDataset(self.record_path).map(decoder)
        self._records_iter = self.reader.make_one_shot_iterator()

    def __len__(self):
        return 100

    def _parser(self, img):
        image_arr = np.frombuffer(img, dtype=np.uint8)
        sample = torch.tensor(image_arr)
        return sample

    def __getitem__(self, item):
        sample = next(self._records_iter).numpy()
        return self._parser(sample)

dataset = TestDataset(path)
loader = DataLoader(dataset, batch_size=1, num_workers=1)
for i in loader:
    print(i)
|
st84573
|
OK, I fixed the problem using tf.python_io.tf_record_iterator although it is deprecated in tensorflow.
|
st84574
|
I’m trying to implement an autoencoder in pytorch but all my outputs are zero and i don’t know why
here is my code for the autoencoder:
class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(686, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(64, 256),
            nn.ReLU(),
            nn.Linear(256, 686),
            nn.ReLU())

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
and here is the training process:
iterations = 10
learning_rate = 0.98
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(
    net.parameters(), lr=learning_rate, weight_decay=1e-5)

for epoch in range(iterations):
    runningLoss = 0.0
    for i, data in enumerate(train_dl, 0):
        inputs, labels = data
        if use_gpu:
            inputs = Variable(inputs.view(-1, 686).double()).cuda()
        else:
            inputs = Variable(inputs.view(-1, 686).double())
        outputs = net(inputs)
        loss = criterion(outputs, inputs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        runningLoss += loss.data.item()
    print(f'at iteration: {epoch+1}/{iterations}; BC Error: {runningLoss}')
print('Finished Training')
|
st84575
|
OK, I tried it. This time the outputs aren’t zero anymore, but it seems like it doesn’t converge, and the error is very high.
|
st84576
|
That’s a good starting point to play around with some hyperparameters, e.g. lowering the learning rate etc.
|
st84577
|
I am encountering some very strange behavior with my model. I am trying to take the feature extraction layers of ResNet34 and add a final classifier layer onto the end. When I change the final nn.Linear layer from a binary output to a multiclass output, the gradient disappears and my model fails to update its weights.
PyTorch version 1.0.1
Let me know if any additional information is useful, I’ve never made a post like this before.
Example code:
General setup
import torch
import torch.nn as nn
import torchvision.models as models
class ResNet34_Model(nn.Module):
    def __init__(self, original_model, num_classes):
        super(ResNet34_Model, self).__init__()
        linear_size = 512
        self.modelName = 'resnet'  # assumed; needed by the branches in forward below
        self.features = nn.Sequential(*list(original_model.children())[:-1])
        self.classifier = nn.Sequential(
            nn.Linear(linear_size, num_classes)
        )
        # Freeze those weights
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, x):
        f = self.features(x)
        if self.modelName == 'alexnet':
            f = f.view(f.size(0), 256 * 6 * 6)
        elif self.modelName == 'vgg16':
            f = f.view(f.size(0), -1)
        elif self.modelName == 'resnet':
            f = f.view(f.size(0), -1)
        elif self.modelName == "densenet":
            # f = f.relu(f, inplace=True)
            # f = f.avg_pool2d(f, kernel_size=7).view(f.size(0), -1)
            f = f.view(f.size(0), -1)
        y = self.classifier(f)
        return y
Binary code (gradient is fine)
original_model = models.__dict__["resnet34"](pretrained=True)
model = ResNet34_Model(original_model, 1)
model = model.cuda()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr = 3e-4, weight_decay=0)
criterion = nn.BCEWithLogitsLoss().cuda()
X_batch = X_batch.cuda() # X_batch and y_batch are pulled from the first iteration of a data loader
y_batch = y_batch.cuda()
# train for 1 epoch
model.train()
output = model(X_batch).squeeze()
loss = criterion(output.float(), y_batch.float())
output_var = torch.sigmoid(output).data
output_class = [ 0 if a<=0.5 else 1 for a in output_var]
optimizer.zero_grad()
loss.backward()
optimizer.step()
# check that gradients exist
for n, p in model.named_parameters():
    print(n)
    if p.requires_grad and ("bias" not in n):
        print(p.grad)
Output
features.0.weight
features.1.weight
features.1.bias
features.4.0.conv1.weight
features.4.0.bn1.weight
features.4.0.bn1.bias
features.4.0.conv2.weight
features.4.0.bn2.weight
features.4.0.bn2.bias
features.4.1.conv1.weight
features.4.1.bn1.weight
features.4.1.bn1.bias
features.4.1.conv2.weight
features.4.1.bn2.weight
features.4.1.bn2.bias
features.4.2.conv1.weight
features.4.2.bn1.weight
features.4.2.bn1.bias
features.4.2.conv2.weight
features.4.2.bn2.weight
features.4.2.bn2.bias
features.5.0.conv1.weight
features.5.0.bn1.weight
features.5.0.bn1.bias
features.5.0.conv2.weight
features.5.0.bn2.weight
features.5.0.bn2.bias
features.5.0.downsample.0.weight
features.5.0.downsample.1.weight
features.5.0.downsample.1.bias
features.5.1.conv1.weight
features.5.1.bn1.weight
features.5.1.bn1.bias
features.5.1.conv2.weight
features.5.1.bn2.weight
features.5.1.bn2.bias
features.5.2.conv1.weight
features.5.2.bn1.weight
features.5.2.bn1.bias
features.5.2.conv2.weight
features.5.2.bn2.weight
features.5.2.bn2.bias
features.5.3.conv1.weight
features.5.3.bn1.weight
features.5.3.bn1.bias
features.5.3.conv2.weight
features.5.3.bn2.weight
features.5.3.bn2.bias
features.6.0.conv1.weight
features.6.0.bn1.weight
features.6.0.bn1.bias
features.6.0.conv2.weight
features.6.0.bn2.weight
features.6.0.bn2.bias
features.6.0.downsample.0.weight
features.6.0.downsample.1.weight
features.6.0.downsample.1.bias
features.6.1.conv1.weight
features.6.1.bn1.weight
features.6.1.bn1.bias
features.6.1.conv2.weight
features.6.1.bn2.weight
features.6.1.bn2.bias
features.6.2.conv1.weight
features.6.2.bn1.weight
features.6.2.bn1.bias
features.6.2.conv2.weight
features.6.2.bn2.weight
features.6.2.bn2.bias
features.6.3.conv1.weight
features.6.3.bn1.weight
features.6.3.bn1.bias
features.6.3.conv2.weight
features.6.3.bn2.weight
features.6.3.bn2.bias
features.6.4.conv1.weight
features.6.4.bn1.weight
features.6.4.bn1.bias
features.6.4.conv2.weight
features.6.4.bn2.weight
features.6.4.bn2.bias
features.6.5.conv1.weight
features.6.5.bn1.weight
features.6.5.bn1.bias
features.6.5.conv2.weight
features.6.5.bn2.weight
features.6.5.bn2.bias
features.7.0.conv1.weight
features.7.0.bn1.weight
features.7.0.bn1.bias
features.7.0.conv2.weight
features.7.0.bn2.weight
features.7.0.bn2.bias
features.7.0.downsample.0.weight
features.7.0.downsample.1.weight
features.7.0.downsample.1.bias
features.7.1.conv1.weight
features.7.1.bn1.weight
features.7.1.bn1.bias
features.7.1.conv2.weight
features.7.1.bn2.weight
features.7.1.bn2.bias
features.7.2.conv1.weight
features.7.2.bn1.weight
features.7.2.bn1.bias
features.7.2.conv2.weight
features.7.2.bn2.weight
features.7.2.bn2.bias
classifier.0.weight
tensor([[-4.0103e-02, -3.0650e-03, -1.7690e-02, 1.8196e-02, 1.8940e-02,
-5.2432e-02, 4.0119e-02, 1.6359e-02, -1.6112e-02, 3.6596e-02,
1.1273e-02, 1.9971e-02, 2.4494e-02, -4.9495e-02, -2.4485e-02,
-2.7787e-02, -4.4333e-02, 3.5293e-02, -1.1992e-02, -4.7242e-02,
2.5577e-02, -3.0916e-02, -8.1142e-03, -2.4182e-03, -1.7207e-02,
1.7690e-02, 3.6298e-02, -3.3379e-03, 1.5704e-02, 2.7359e-02,
-1.6443e-04, 2.8785e-02, -4.1732e-03, 1.2785e-02, 6.6502e-02,
2.3740e-02, -1.9601e-02, 2.0654e-02, -1.5341e-02, 5.1119e-02,
-9.3292e-03, -4.6272e-02, 4.0541e-03, -1.6780e-02, 2.7696e-02,
-2.8260e-02, -2.7696e-02, -2.6938e-02, -5.7942e-02, -1.7714e-02,
1.1813e-02, -3.6892e-02, 3.2378e-02, -7.0157e-02, 2.9828e-03,
4.5477e-02, 9.4537e-03, 1.4507e-02, 2.3831e-02, 1.3459e-02,
5.2202e-02, -3.2785e-02, -1.7972e-02, 3.4436e-02, -1.7274e-02,
-5.2432e-02, 3.5817e-02, -3.7003e-02, 3.9805e-02, -3.2142e-02,
4.8118e-02, 3.0372e-02, 3.1019e-02, 2.0520e-02, -2.4969e-02,
1.9197e-02, -5.7083e-02, 1.9971e-02, 4.1865e-03, -1.6170e-02,
-2.7157e-02, 7.6236e-02, 7.7632e-04, 1.2858e-02, 6.0151e-02,
4.4184e-02, -2.4115e-02, 3.8711e-03, 3.1806e-02, -7.6332e-03,
-2.3273e-02, -3.6383e-02, -3.8611e-02, 4.8423e-02, 1.4127e-03,
2.1603e-02, 5.7787e-03, 1.6463e-02, 2.6809e-02, -1.7455e-02,
-3.9449e-03, -2.3069e-03, 6.2234e-02, -2.9278e-02, -9.0541e-03,
4.0190e-02, 3.4797e-02, -3.7103e-03, -2.7790e-02, -4.0176e-02,
4.5264e-03, 1.3271e-02, -3.0030e-03, 2.4259e-02, -2.2459e-02,
-8.1706e-03, 2.9898e-02, -4.2108e-02, -3.8830e-03, -1.6584e-03,
-2.5688e-03, 5.5268e-02, 2.5715e-03, -8.0469e-03, -5.3614e-03,
3.2183e-02, -1.6894e-02, 4.4844e-02, -5.0618e-03, 3.0681e-03,
3.4562e-02, 2.0505e-02, 5.2180e-02, -4.3831e-02, 4.5276e-02,
-8.6777e-03, 4.2830e-02, -2.6109e-02, -4.2990e-03, -7.8125e-02,
2.6773e-03, -4.4095e-03, -6.4931e-02, -1.2023e-02, 3.0119e-02,
6.6624e-02, 3.2416e-02, 1.4566e-02, -6.7606e-02, -4.9717e-02,
-2.2283e-02, -1.9339e-02, -8.6594e-02, -9.5567e-03, -3.6641e-02,
-2.4269e-02, 6.0891e-03, 3.5978e-02, -1.3062e-02, -2.4939e-02,
-2.6925e-02, -2.6563e-03, -1.3851e-02, -5.0669e-03, 6.0252e-02,
-4.9383e-02, 4.5053e-02, 1.6863e-02, 2.6193e-02, -3.0816e-02,
3.9407e-02, -5.0734e-02, -2.5574e-02, 1.5975e-02, 3.2121e-02,
-2.1032e-02, -1.1800e-02, 2.0957e-02, 1.7678e-02, 3.2595e-02,
2.7446e-02, 2.3684e-02, 4.6127e-03, -1.3901e-02, -1.4728e-02,
5.2495e-02, -3.1069e-02, 1.1383e-02, 5.4902e-02, -9.8808e-03,
5.5881e-02, 3.7348e-02, 4.9416e-03, -5.1515e-02, -6.1780e-02,
2.5301e-02, -2.6315e-02, -1.8703e-02, 2.9316e-02, -2.3085e-02,
4.1748e-03, 5.2638e-02, -2.1772e-02, -1.4431e-03, 9.2481e-03,
-4.8283e-02, 3.1777e-03, 8.0173e-02, 4.5029e-03, -1.4431e-02,
-3.3560e-02, 4.6829e-02, -9.9793e-03, -3.2405e-03, -1.1975e-02,
5.5930e-02, 2.2951e-02, -3.9283e-02, 2.4410e-03, 3.8763e-03,
-2.1583e-02, -4.4375e-03, -3.1224e-02, 2.5172e-02, 5.3837e-03,
-4.2036e-02, 3.9306e-02, -1.9016e-02, 8.3942e-02, 1.7185e-02,
-3.3008e-02, 1.2389e-02, -6.3560e-02, 2.6117e-02, -1.6864e-02,
5.5821e-02, -1.2640e-03, -4.0874e-02, -1.3435e-02, 2.2319e-02,
-3.9246e-02, 2.1261e-02, -2.4980e-02, -5.5888e-02, -5.4517e-03,
2.5954e-02, 4.3477e-02, 5.3093e-02, -1.7187e-03, 5.0169e-02,
-3.7164e-02, 2.7800e-02, 1.2954e-02, 3.6849e-03, -1.0594e-02,
-1.0012e-02, -1.7303e-02, -2.1361e-02, 2.9390e-02, -2.0674e-02,
3.5865e-02, 7.5633e-03, 2.3617e-02, -4.1382e-02, 2.9991e-02,
3.0470e-04, -2.7890e-02, -1.8794e-02, -1.6211e-02, 3.3249e-02,
1.9253e-02, 1.9588e-02, 2.2323e-02, 1.7697e-02, -7.8543e-03,
-7.8463e-03, 2.2645e-02, -3.9645e-02, -3.3896e-02, 2.1476e-02,
3.9840e-02, -7.6785e-03, -1.7353e-02, -5.0593e-02, 2.9839e-02,
9.3777e-03, -5.1932e-05, 4.8221e-02, 1.5305e-02, 1.1562e-02,
1.7175e-02, 1.9763e-02, 2.1498e-02, -6.4569e-03, 4.4490e-02,
-2.1948e-02, 3.3371e-02, -2.6169e-02, 2.1575e-02, 5.0593e-02,
-7.6582e-03, 7.3652e-02, -9.0516e-02, 3.0139e-03, 3.1726e-02,
-5.5917e-02, 2.1198e-02, -2.0204e-02, -3.1504e-02, 8.3176e-03,
6.8362e-02, -1.7631e-02, 3.2006e-02, -3.2219e-02, -2.3685e-02,
2.5310e-02, 2.4241e-02, 1.3161e-02, 4.5711e-02, 6.0400e-02,
3.2823e-02, 3.7996e-02, -6.6305e-03, 6.0794e-03, 3.6651e-02,
-7.5762e-03, 2.3375e-03, 1.8069e-02, -3.9109e-02, 3.8477e-02,
-2.9035e-02, -1.5453e-02, -6.7045e-03, 3.4121e-02, -9.8876e-03,
-8.4192e-06, 5.2210e-03, 1.6493e-02, -6.2000e-02, -6.0531e-04,
7.3830e-03, -4.0898e-05, -6.0627e-03, -2.8498e-02, -7.6559e-03,
1.7166e-02, -1.6621e-02, 3.0013e-02, -2.9750e-02, 5.7559e-02,
-1.9507e-02, 2.1800e-02, -6.5081e-02, 2.9435e-02, -9.1154e-03,
1.4963e-02, 2.7812e-02, 3.7519e-02, 1.0546e-02, -5.9419e-02,
-2.2688e-02, 2.9650e-02, 3.1546e-02, 2.7190e-03, 4.2752e-02,
-9.8702e-03, 4.0616e-02, -8.7285e-03, 4.1171e-02, -2.8747e-03,
-4.7521e-02, 1.4819e-02, 3.4308e-02, -2.3178e-02, 6.8566e-04,
6.5807e-02, -1.5936e-02, -5.6867e-02, -2.0194e-02, 1.0089e-02,
3.2515e-02, 2.7668e-02, -8.3925e-02, -4.6546e-02, 3.0311e-02,
-4.4808e-02, -6.0378e-02, 2.9398e-02, -4.5278e-03, -3.3444e-02,
1.7838e-02, 1.8011e-02, 1.1978e-02, 6.9284e-03, -4.7839e-02,
6.3141e-03, -2.7778e-03, -6.6707e-02, 3.5588e-02, 3.3485e-02,
4.3899e-02, -4.5265e-02, 6.8920e-03, -1.4840e-02, 2.2699e-02,
-5.0180e-02, -5.5397e-02, 1.2932e-02, -1.6373e-02, -1.4470e-03,
-3.1263e-02, 1.2203e-02, 6.5744e-02, 5.2748e-02, 4.9446e-02,
1.1454e-02, -1.2506e-02, 1.6448e-02, 6.4812e-03, 9.1295e-03,
4.7307e-02, -1.0583e-02, 2.6514e-02, -2.7043e-02, 2.1754e-02,
-7.4832e-03, 8.8365e-03, 3.6107e-02, 1.0179e-02, 3.0531e-02,
2.9151e-02, 2.7932e-02, 2.9156e-02, 2.0762e-02, -2.9590e-02,
-8.0620e-04, -5.6288e-02, 4.2960e-02, 2.9789e-02, 1.9852e-02,
-3.9940e-02, -1.6577e-03, -9.1024e-03, 1.2403e-03, 3.8063e-02,
5.4522e-02, 3.3541e-02, 1.8009e-02, -2.9072e-03, -2.2482e-04,
-6.2176e-03, -7.1625e-02, -2.9141e-02, 2.6403e-02, -3.7902e-02,
7.6599e-03, 9.9294e-03, 4.3861e-02, 2.1696e-02, 3.6313e-03,
5.7046e-02, 8.5943e-02, -1.0694e-02, -6.1576e-02, 3.4494e-02,
-1.4768e-02, -1.5311e-03, 5.9259e-02, 2.4781e-02, 2.1675e-02,
7.6399e-02, -9.0391e-03, -1.2334e-02, -2.4607e-02, 5.1892e-02,
-2.1849e-02, -8.1642e-02, 2.2554e-02, -1.6767e-02, 2.3100e-02,
1.6989e-02, 9.1284e-04, 1.9720e-02, -3.2603e-02, 3.9477e-03,
-2.7493e-02, 2.6490e-02, 1.4810e-02, -9.9203e-02, -4.7352e-03,
8.6435e-03, 7.8281e-02, -3.9165e-02, 3.1929e-02, 2.9405e-02,
4.3515e-02, 4.6316e-02, 1.0432e-03, 3.8957e-02, -2.6859e-02,
-4.3090e-03, 1.5592e-02, 5.6056e-02, -3.0031e-02, -7.8511e-03,
-3.4687e-02, 1.6892e-02, -2.0491e-03, -1.4476e-02, 4.8247e-02,
-1.3725e-02, -2.7866e-02]], device='cuda:0')
classifier.0.bias
Multiclass code (gradient is none)
original_model = models.__dict__["resnet34"](pretrained=True)
model = ResNet34_Model(original_model, 3)
model = model.cuda()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr = 3e-4, weight_decay=0)
criterion = nn.CrossEntropyLoss(reduction='mean').cuda()
X_batch = X_batch.cuda() # X_batch and y_batch are pulled from the first iteration of a data loader
y_batch = y_batch.cuda()
# train for 1 epoch
model.train()
output = model(X_batch).squeeze()
output_var = torch.softmax(output, dim=1)
_, output_class = torch.max(output_var, 1)
loss = torch.tensor(criterion(output_var, y_batch), requires_grad=True)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# check that gradients exist
for n, p in model.named_parameters():
    print(n)
    if p.requires_grad and ("bias" not in n):
        print(p.grad)
Output
features.0.weight
features.1.weight
features.1.bias
features.4.0.conv1.weight
features.4.0.bn1.weight
features.4.0.bn1.bias
features.4.0.conv2.weight
features.4.0.bn2.weight
features.4.0.bn2.bias
features.4.1.conv1.weight
features.4.1.bn1.weight
features.4.1.bn1.bias
features.4.1.conv2.weight
features.4.1.bn2.weight
features.4.1.bn2.bias
features.4.2.conv1.weight
features.4.2.bn1.weight
features.4.2.bn1.bias
features.4.2.conv2.weight
features.4.2.bn2.weight
features.4.2.bn2.bias
features.5.0.conv1.weight
features.5.0.bn1.weight
features.5.0.bn1.bias
features.5.0.conv2.weight
features.5.0.bn2.weight
features.5.0.bn2.bias
features.5.0.downsample.0.weight
features.5.0.downsample.1.weight
features.5.0.downsample.1.bias
features.5.1.conv1.weight
features.5.1.bn1.weight
features.5.1.bn1.bias
features.5.1.conv2.weight
features.5.1.bn2.weight
features.5.1.bn2.bias
features.5.2.conv1.weight
features.5.2.bn1.weight
features.5.2.bn1.bias
features.5.2.conv2.weight
features.5.2.bn2.weight
features.5.2.bn2.bias
features.5.3.conv1.weight
features.5.3.bn1.weight
features.5.3.bn1.bias
features.5.3.conv2.weight
features.5.3.bn2.weight
features.5.3.bn2.bias
features.6.0.conv1.weight
features.6.0.bn1.weight
features.6.0.bn1.bias
features.6.0.conv2.weight
features.6.0.bn2.weight
features.6.0.bn2.bias
features.6.0.downsample.0.weight
features.6.0.downsample.1.weight
features.6.0.downsample.1.bias
features.6.1.conv1.weight
features.6.1.bn1.weight
features.6.1.bn1.bias
features.6.1.conv2.weight
features.6.1.bn2.weight
features.6.1.bn2.bias
features.6.2.conv1.weight
features.6.2.bn1.weight
features.6.2.bn1.bias
features.6.2.conv2.weight
features.6.2.bn2.weight
features.6.2.bn2.bias
features.6.3.conv1.weight
features.6.3.bn1.weight
features.6.3.bn1.bias
features.6.3.conv2.weight
features.6.3.bn2.weight
features.6.3.bn2.bias
features.6.4.conv1.weight
features.6.4.bn1.weight
features.6.4.bn1.bias
features.6.4.conv2.weight
features.6.4.bn2.weight
features.6.4.bn2.bias
features.6.5.conv1.weight
features.6.5.bn1.weight
features.6.5.bn1.bias
features.6.5.conv2.weight
features.6.5.bn2.weight
features.6.5.bn2.bias
features.7.0.conv1.weight
features.7.0.bn1.weight
features.7.0.bn1.bias
features.7.0.conv2.weight
features.7.0.bn2.weight
features.7.0.bn2.bias
features.7.0.downsample.0.weight
features.7.0.downsample.1.weight
features.7.0.downsample.1.bias
features.7.1.conv1.weight
features.7.1.bn1.weight
features.7.1.bn1.bias
features.7.1.conv2.weight
features.7.1.bn2.weight
features.7.1.bn2.bias
features.7.2.conv1.weight
features.7.2.bn1.weight
features.7.2.bn1.bias
features.7.2.conv2.weight
features.7.2.bn2.weight
features.7.2.bn2.bias
classifier.0.weight
None
classifier.0.bias
|
st84579
|
You are creating a new tensor as your loss, thus detaching the original output of your criterion from the computation graph:
loss = torch.tensor(criterion(output_var, y_batch), requires_grad=True)
Instead just call:
loss = criterion(output, y_batch)
loss.backward()
Note, that nn.CrossEntropyLoss expects raw logits as the model output, so don’t apply the softmax on your output tensor.
|
st84580
|
I have two tensors with shapes [n, d, d] and [n, 1], respectively, and I would like to add the latter to the diagonals of the matrices in the former. What’s the most straightforward way of doing it? It shouldn’t be in-place.
LE: I’m curious if there’s a better way than stacking torch.eyes.
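For comparison, a minimal out-of-place sketch using torch.diag_embed instead of stacking torch.eye (shapes follow the question; the concrete sizes are assumptions):
import torch

a = torch.randn(5, 4, 4, requires_grad=True)   # [n, d, d]
b = torch.randn(5, 1, requires_grad=True)      # [n, 1]

# broadcast b across the d diagonal entries and embed it as diagonal matrices
c = a + torch.diag_embed(b.expand(-1, a.size(-1)))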
|
st84582
|
I think inplace is the best way, but I’ll throw in a .clone(), so you get to keep the input:
a = torch.randn(5,4,4, requires_grad=True)
b = torch.randn(5,1, requires_grad=True)
c = a.clone()
c.diagonal(dim1=-2, dim2=-1)[:] += b
# backward works as expected:
c.sum().backward()
print(a.grad, b.grad) # ones_like(a) and full_like(b, 4)
Best regards
Thomas
|
st84583
|
Thanks, @tom, looks good. BTW, the reason I wanted it to not be inplace was because I need it to be differentiable. Does backward work even if it’s inplace?
|
st84584
|
The rule of thumb is that inplace works unless it does not.
So the two things that usually break are:
you move a leaf tensor into the graph (if you remove the cloning in the above example - cloning helps),
when a isn’t a leaf and whatever computed a needs a to compute the backward (cloning helps here, too).
So the conventional wisdom is to not use inplace ops, but looking deeper, it can usually be made to work. I always joke about writing a non-deep-learning PyTorch book with @ptrblck where we would have a section on inplace ops.
Best regards
Thomas
|
st84585
|
I am looking for an example:
Datum, Visitors, Holiday, Weekend
1.7.2019 , 100, 0, 0
2.7.2019 , 110, 0, 0
3.7.2019 , 180, 1, 0
3.7.2019 , 110, 0, 0
4.7.2019 , 120, 0, 0
5.7.2019 , 130, 0, 0
6.7.2019 , 200, 0, 1
7.7.2019 , 180, 0, 1
8.7.2019 , 150, 0, 0
To predict: 9.7.2019
Most examples I found use random data or pictures, so I am searching for a practical example, e.g. visitors of a homepage or a shop, or birds when using (rain, sun) instead of weekends.
I have problems with the date object; I tried using a Unix timestamp, …
Maybe someone has a simple example with just some random numbers, without being too complicated.
Hope someone can help.
|
st84586
|
Just to clarify, are you looking for an example in which you predict a date given features? Or are you trying to use a date feature in prediction?
|
st84587
|
Perhaps you could try encoding the date values into some ordinal values (like 1.7.2019 as 0, 2.7.2019 as 1, etc.) to preserve some sort of order within the dates or one-hot-encoding if you’d prefer to leave the dates as categorical values.
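For illustration, a minimal sketch of the ordinal idea with pandas (the file name and column names are assumptions based on the sample above):
import pandas as pd

df = pd.read_csv("visitors.csv")                    # hypothetical file with Datum, Visitors, Holiday, Weekend
dates = pd.to_datetime(df["Datum"], dayfirst=True)
df["day_index"] = (dates - dates.min()).dt.days     # 1.7.2019 -> 0, 2.7.2019 -> 1, ...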
|
st84588
|
If I remember correctly, the default setting of DataParallel splits the minibatch equally into smaller chunks, but in an NLP model the minibatches are sorted by length in decreasing order, which puts the first GPU under heavier load. Ideally we could explicitly control the split batch sizes so that the total number of tokens is roughly balanced. Is there any way to do this efficiently?
|
st84589
|
Hey,
If you are using Dataset and DataLoader and you set the argument shuffle=True in DataLoader(), it will automatically shuffle the examples.
|
st84590
|
Does anyone know how to visualize a tensor feature map with the exact pixels from the original image?
|
st84591
|
Does anyone know how it is possible to find the most relevant (most similar) parts between two 51277 tensor feature maps?
Actually, in PyTorch I defined a two-layer neural network that calculates similarity, but it doesn’t seem to work efficiently.
Any help please?
|
st84592
|
Hi all,
I’m looking at the Learning PyTorch with Examples page (see example code below).
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules
I’m a little confused about where to go from here in terms of testing my model now. It is unclear to me how I apply my new model/linear relationship to “forecasting” to hindcasting on my data.
I also am not sure how to extract the relevant weights for each input layer.
Any help would be appreciated. Thanks!
Callum
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred
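As for applying the trained model and reading its weights, here is a minimal sketch (the trained model instance, the sizes and a test tensor x_test are assumptions, not part of the tutorial code):
model = TwoLayerNet(D_in, H, D_out)   # assume this has already been trained
model.eval()
with torch.no_grad():
    y_hindcast = model(x_test)        # apply the fitted model to held-out inputs

# the learned weights of each linear layer are ordinary parameters
w1 = model.linear1.weight.detach()    # shape (H, D_in)
w2 = model.linear2.weight.detach()    # shape (D_out, H)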
|
st84593
|
Hello, I have a modified AlexNet model trained with PyTorch. The trained file has a .model extension. I want to convert it to an ONNX model and export it to C++.
I tried the basic conversion code, but it gives an error:
import torch.onnx
device = torch.device('cpu')
model = torch.load('SiamRPN.model', map_location=device)
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input,"my_onnx_model.onnx")
When I run the last line with Python 2, it gives the error below. What is the problem here?
Traceback (most recent call last):
File "torch_to_onnx.py", line 7, in <module>
torch.onnx.export(model, dummy_input,"my_onnx_model.onnx")
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/__init__.py", line 32, in export
return utils.export(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 170, in export
example_outputs=example_outputs, strip_doc_string=strip_doc_string, dynamic_axes=dynamic_axes)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 423, in _export
_retain_param_name, do_constant_folding)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 317, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 267, in _trace_and_get_graph_from_model
orig_state_dict_keys = _unique_state_dict(model).keys()
File "/usr/local/lib/python2.7/dist-packages/torch/jit/__init__.py", line 244, in _unique_state_dict
state_dict = module.state_dict(keep_vars=True)
AttributeError: 'OrderedDict' object has no attribute 'state_dict'
|
st84594
|
I have a pretty complex model; can I find exactly which operation the ONNX model exporter did not like?
RuntimeError: Failed to export an ONNX attribute, since it's not constant, please try to make things (e.g., kernel size) static if possible
|
st84595
|
Hi,
I have trained a model, and at the moment I would like to use that model as a layer in my new model.
How would it be possible to initialize my layer with the pretrained model I trained before?
Thanks,
|
st84596
|
class newmodel(nn.Module):
    def __init__(self, *args, **kwargs):
        super(newmodel, self).__init__()
        self.model_as_layer = model_constructor(*args, **kwargs)
        self.model_as_layer.load_state_dict(pretrained_weights)
This just loads the weights while you instantiate your layer.
|
st84597
|
Reading this, bear in mind that I’m a beginner with PyTorch. I think saving the model went fine; I did it using:
torch.save({
    'epoch': self.epochs,
    'model_state_dict': self.model.state_dict(),
    'optimizer_state_dict': self.optimizer.state_dict()
}, 'saved')
inside the for loop on epochs.
Then, to load the model, I do this after initializing the model and the optimizer.
if os.path.exists('saved'):
    print('Loading saved model')
    checkpoint = torch.load('saved')
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    epoch = checkpoint['epoch']
Here’s the error :
Missing key(s) in state_dict: "frame_level_rnns.0.h0", "frame_level_rnns.0.input_expand.bias", "frame_level_rnns.0.input_expand.weight_g", "frame_level_rnns.0.input_expand.weight_v", "frame_level_rnns.0.rnn.weight_ih_l0", "frame_level_rnns.0.rnn.weight_hh_l0", "frame_level_rnns.0.rnn.bias_ih_l0", "frame_level_rnns.0.rnn.bias_hh_l0", "frame_level_rnns.0.rnn.weight_ih_l1", "frame_level_rnns.0.rnn.weight_hh_l1", "frame_level_rnns.0.rnn.bias_ih_l1", "frame_level_rnns.0.rnn.bias_hh_l1", "frame_level_rnns.0.upsampling.bias", "frame_level_rnns.0.upsampling.conv_t.weight_g", "frame_level_rnns.0.upsampling.conv_t.weight_v"
and then shortly after
Unexpected key(s) in state_dict: "model.frame_level_rnns.0.h0", "model.frame_level_rnns.0.input_expand.bias", "model.frame_level_rnns.0.input_expand.weight_g", "model.frame_level_rnns.0.input_expand.weight_v", "model.frame_level_rnns.0.rnn.weight_ih_l0", "model.frame_level_rnns.0.rnn.weight_hh_l0", "model.frame_level_rnns.0.rnn.bias_ih_l0", "model.frame_level_rnns.0.rnn.bias_hh_l0", "model.frame_level_rnns.0.rnn.weight_ih_l1", "model.frame_level_rnns.0.rnn.weight_hh_l1", "model.frame_level_rnns.0.rnn.bias_ih_l1", "model.frame_level_rnns.0.rnn.bias_hh_l1", "model.frame_level_rnns.0.upsampling.bias"
So instead of loading ‘a’ it loads ‘model.a’ I guess, but I don’t get what I am doing wrong. Can you help me out please ?
|
st84598
|
Also, I’d like to add that the optimizer loads just fine (when putting the line about the optimizer just before the line for the model) and also the epochs.
|
st84599
|
So I hacked it, basically removing the “model.” part just before each key. Still, I’m not tagging it as solved because I want to know why I had this error, and also to find a more elegant way to deal with it!
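For reference, a minimal sketch of that renaming hack (the checkpoint layout follows the save/load code above; the rest is an assumption):
checkpoint = torch.load('saved')
state_dict = checkpoint['model_state_dict']
# drop the leading "model." from every key so it matches the bare model
stripped = {k[len('model.'):] if k.startswith('model.') else k: v
            for k, v in state_dict.items()}
model.load_state_dict(stripped)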
|
st84600
|
I guess the model attribute is added, as it seems you are using another class to store your self.model, self.optimizer and probably other objects.
Are you working with some high-level API or did you create this wrapper manually?
|
st84601
|
Hi there. I’d like to keep track of some statistical properties of activations in a network, in an output-dependent fashion. I do not see how I could use a forward hook, because the output of intermediate layers is not given as an argument to such hooks. Of course, I could also register output-saving forward hooks for each layer, but my understanding is that I would then be replicating context-saving functionality that is already implemented for backward passes. How can I access that functionality for my custom needs?
|
st84602
|
The model is available on GitHub. It is for the manipulation of multiple face attributes.
Here is the link
GitHub
Prinsphield/ELEGANT
ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes - Prinsphield/ELEGANT
It generates four files
Enc-iter.pth
Dec-iter.pth
D1-iter.pth
D2-iter.pth
torch_out = torch.onnx.export(model, x, "model.onnx", verbose=True)
The export function gives the graph, but when I print torch_out it gives None, whereas it should print a tensor value.
And this is causing an AttributeError when comparing torch_out with the Caffe2 model:
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
Here I am getting the AttributeError:
NoneType object has no attribute ‘data’
I have asked this on GitHub/onnx, but they say to ask it on the PyTorch forum; the issue is in the PyTorch code.
|
st84603
|
torch.onnx.export shouldn’t return anything, but store a binary protobuf file at the specified location.
This file can then be loaded using onnx.
Have a look at the doc for some example code.
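For example, a minimal sketch of loading and checking the exported file with the onnx package (the file name is an assumption):
import onnx

onnx_model = onnx.load("my_onnx_model.onnx")
onnx.checker.check_model(onnx_model)                  # raises if the exported graph is malformed
print(onnx.helper.printable_graph(onnx_model.graph))  # human-readable view of the graph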
|
st84604
|
Thank you for the response.
# Verify the numerical correctness up to 3 decimal places
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
Here we are comparing two multidimensional arrays. For this I ran the super-resolution example, and here is the output:
import io
import numpy as np
from torch import nn
import torch.utils.model_zoo as model_zoo
import torch.onnx

torch_model = SuperResolutionNet(upscale_factor=3)

# Input to the model
x = torch.randn(batch_size, 1, 224, 224, requires_grad=True)

# Export the model
torch_out = torch.onnx._export(torch_model,              # model being run
                               x,                        # model input (or a tuple for multiple inputs)
                               "super_resolution.onnx",  # where to save the model (can be a file or file-like object)
                               export_params=True, verbose=True)  # store the trained parameter weights inside the model file
print("torch_out", torch_out)
This is the output, i.e. the graph, and then the value of torch_out, i.e. the tensor:
torch_out tensor([[[[ 6.8571e-01, 9.3982e-02, 1.8855e-01, …, 1.1247e-01,
3.9000e-01, -3.4897e-01],
[ 8.2722e-02, -6.9161e-01, 1.0651e-01, …, 1.6725e-01,
8.2437e-02, -1.2075e-02],
[ 5.9089e-02, -1.4693e-01, -2.1743e-01, …, -1.1221e-01,
9.0878e-02, 2.8896e-01],
…,
[-9.9215e-02, 6.8443e-02, 3.1096e-01, …, 6.0085e-05,
6.3814e-02, 5.8048e-02],
[ 8.1212e-02, -8.7285e-02, 3.3845e-02, …, -4.4617e-03,
-7.5614e-02, 2.7088e-01],
[-1.1385e-01, 3.1267e-01, -3.3085e-01, …, -2.5969e-01,
5.3383e-01, -2.2986e-01]]]], grad_fn=)
and

import onnx
import onnx_caffe2.backend

# Load the ONNX ModelProto object. model is a standard Python protobuf object
model = onnx.load("super_resolution.onnx")

# Prepare the caffe2 backend for executing the model. This converts the ONNX model into a
# Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK, will be
# available soon.
prepared_backend = onnx_caffe2.backend.prepare(model)

# Run the model in Caffe2.
# Construct a map from input names to Tensor data.
# The graph of the model itself contains inputs for all weight parameters, after the input image.
# Since the weights are already embedded, we just need to pass the input image.
# Set the first input.
W = {model.graph.input[0].name: x.data.numpy()}

# Run the Caffe2 net:
c2_out = prepared_backend.run(W)[0]
print("c2_out", c2_out)

# Verify the numerical correctness up to 3 decimal places
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Exported model has been executed on Caffe2 backend, and the result looks good!")
Here the output of c2_out is a tensor, which is:
c2_out [[[[ 0.19614297 -0.14641185 0.63868225 … -0.12567244 -0.09246139
0.1514894 ]
[ 0.2977414 0.30597484 0.07216635 … 0.3880918 0.31021857
-0.00942096]
[ 0.29691827 0.26158085 0.67230344 … 0.5879603 -0.22218955
-0.04444573]
…
[-0.40999678 -0.31105435 -0.9089918 … 0.965673 -0.32325625
0.47099867]
[-0.34081718 -0.12496098 -0.7071778 … -0.48262012 -0.28351355
0.305478 ]
[-0.10641275 -0.42513472 -0.41691086 … 0.01121937 0.36214358
0.11300252]]]]
|
st84605
|
The model which I am trying to export does not output a tensor value.
When I try to print torch_out, it gives the output as None,
and that’s why it is giving the attribute error.
|
st84606
|
torch.onnx.export does not return anything, thus will yield an error, if you try to access the return value.
In your code snippet, you are using torch.onnx._export (note the underscore), which seems to be an internal method, and will yield an output.
I’m not sure, which method you are currently using.
|
st84607
|
Thank you for the response.
I was using torch.onnx.export, and I was supposed to use torch.onnx._export.
Silly mistake.
Thanks a ton.
|
st84608
|
How can I set an adaptive learning rate for different classes in multi-class classification during training? For example, when some classes’ loss reaches a threshold, multiply their learning rate by 0.1, while the other classes keep their learning rate unchanged.
|
st84609
|
What exactly do you want to do? The LR is inherent to the model parameters, but it seems you want to modify it depending on your output. A mini-batch may contain different classes, so could you explain it in more depth?
You can set a threshold for a class and modify the learning rate the mini-batch will be backpropagated with before backpropagating, but that still does not solve the fact that the batch will contain different classes and they will be backpropagated with the same LR.
You may consider penalizing the loss depending on this threshold; this way the gradients will be penalized too (see the sketch below).
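A minimal sketch of per-class loss penalties via class weights (the number of classes and the weight values are assumptions):
import torch
import torch.nn as nn

# e.g. damp the contribution of class 1 once its loss has reached the threshold
class_weights = torch.tensor([1.0, 0.1, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 3, requires_grad=True)   # stand-in for the model output
targets = torch.randint(0, 3, (8,))
loss = criterion(logits, targets)
loss.backward()                                  # gradients from class-1 samples are scaled down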
|
st84610
|
Hi,
According to the V1.1.0 docs it’s possible to modify a module output using forward hooks.
However sourcecode is not consistent:
def __call__(self, *input, **kwargs):
    for hook in self._forward_pre_hooks.values():
        hook(self, input)
    if torch._C._get_tracing_state():
        result = self._slow_forward(*input, **kwargs)
    else:
        result = self.forward(*input, **kwargs)
    for hook in self._forward_hooks.values():
        hook_result = hook(self, input, result)
        if hook_result is not None:
            raise RuntimeError(
                "forward hooks should never return any values, but '{}'"
                "didn't return None".format(hook))
As you can see if hook_result is not None, it throws an error.
What did I miss?
|
st84611
|
I have tried building PyTorch from source. I print torch.__config__.show() and ‘MKLDNN’ is on. For my model, training one epoch takes 5 min.
However, I also tried conda install pytorch 1.1. I also print torch.__config__.show() and ‘MKLDNN’ is also on. But for the same model, training one epoch takes 8 min.
How can I speed up training on the CPU when I conda install pytorch?
|
st84612
|
Is there an existing pytorch.distribution.constraint for unit norms of the innermost dimension? The simplex constraint is quite nice - would be great to have something just like that. Seems like Keras has a UnitNorm constraint. Might be a nice addition to the core set.
Thanks!
|
st84613
|
The official doc on torch.numel notes that its input must be a torch.Tensor. I tried torch.Size as input and the result might be kind of unexpected.
The code snippet is shown as:
import torch
a = torch.Tensor(3, 4)
print(a.numel(), a.shape.numel())
The returned values of both a.numel() and a.shape.numel() are 12.
But there is no documentation or explanation for the input type torch.Size. Is this a potential bug or a feature?
|
st84614
|
Hi, all. I’m trying to get PyTorch up and running locally on a Win10 laptop, and I’ve been having a fair bit of difficulty; everything crashes and burns when I hit a call to torch._C._cuda_init() with a rather unhelpful runtime error (RuntimeError: CUDA error: unknown error). torch.cuda.is_available() returns true, and I am able to set the device to “cuda:0” .
Here’s how my laptop is currently set up:
GPU: GeForce GTX 1050ti (Max-Q variant)
GPU driver version: 431.36
CUDA Version: 10.1 (I’ve installed the 10.0 archival version from nVidia’s site, but 10.1 shows up when I run nvidia-smi. I did originally start with the 10.1 installer, but tried to uninstall all the components; is there something I need to do to get a clean uninstall?)
PyTorch + torchvision were installed via conda using conda install pytorch torchvision cudatoolkit=10.0 -c pytorch ; pytorch shows as version 1.1.0 with build py3.7_cuda100_cudnn7_1 .
I tried uninstalling my drivers entirely and just installing the CUDA 10.0 toolkit, but on doing so nvidia-smi reported that it couldn’t communicate with drivers. Do I need to try to roll back to an earlier GPU driver version? Is there something in particular I need to do to get the vestiges of the CUDA 10.1 toolkit off of my machine? Should I take off and nuke it all from orbit?
|
st84615
|
Could you try to install the latest pytorch nightly build as suggested in this issue?
Let us know, if that doesn’t help and you still get this error.
|
st84616
|
Hm. I uninstalled the stable version and installed the nightly, but now things appear to be broken further. The torch module seems to be mostly empty when I inspect it with help('torch'), showing only the submodules nn and utils - no cuda, which means that the second we hit a torch.cuda reference it falls over.
If it helps, I installed torch-nightly via conda install pytorch-nightly cudatoolkit=10.0 -c pytorch , and it installed pytorch/win-64::pytorch-nightly-1.2.0.dev20190714-py3.7_cuda100_cudnn7_0.
Having said that, the nightly that was mentioned as working in that thread is from February. I’m going to see if I can pull an older version of pytorch (either one of the older nightlies or 1.0.0, which was mentioned to be working in that thread) and see where that takes me.
edit: It’s a hack, but making a call to torch.cuda.current_device() (as mentioned in that thread) appears to resolve this issue.
|
st84617
|
cdrouin:
It’s a hack, but making a call to torch.cuda.current_device() (as mentioned in that thread) appears to resolve this issue.
Without this line of code you cannot call any CUDA functions without raising an error?
CC @peterjc123: could this be related to the linked issue (which should have been resolved)?
|
st84618
|
I don’t think an incomplete python package will get uploaded. We will run some basic smoke tests before uploading these packages. Apparently, importing torch.cuda is one of them. Also, from the size of the package, it is normal, which is around ~500MB. However, I’ll check it later. As for the problem, have you completely removed the old installation? What if you do conda uninstall pytorch, pip uninstall torch, conda uninstall pytorch-nightly and pip uninstall torch-nightly in a row and then install it again? Also, would you please check if there is any pytorch installation in your PYTHONPATH?
|
st84619
|
I was able to call torch.cuda.is_available() and torch.device() (not strictly a CUDA function, I think, but was using it to set the device to “cuda:0”) without anything blowing up.
@peterjc123 , I’d uninstalled pytorch + torchvision + pytorch-nightly before attempting the (re)install operation. Note that I’m using conda, not pip, in case that has anything to do with the issue. To get torchvision (re)installed on top of pytorch-nightly, I had to use the --no-deps flag since it otherwise requires pytorch to be installed.
It appears I don’t actually have a PYTHONPATH environment variable on this system (FWIW - installed using the Anaconda graphical installer); I don’t see pytorch referenced in the normal PATH variable either.
|
st84620
|
Yes, I know. I just want to ensure torch is uninstalled. Using pip uninstall is harmless since it will ask for your confirmation. If you ensure the package is completely removed, then it is likely that the package you downloaded is incomplete or broken. Please remove the cache file in [Anaconda Root]\pkgs and try again. Alternatively, you can download the file from https://anaconda.org/pytorch/pytorch-nightly/files and install it locally.
|
st84621
|
Hey all,
I keep getting this error when trying to use nn.Linear with cuda:
cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216
On the CPU it runs fine though.
PyTorch is 1.2.0, python 3.7, Cuda 10.1, CuDNN 7.6.1, and my GPU is a GTX 970. I’m running macOS and I installed PyTorch from source.
Anyone know why this might be happening or how to debug this further? I’m a bit new to these tools.
|
st84622
|
Thank you for the reply! It appears that most of the info in that thread is about two GPU systems, which mine is not. The only potentially relevant suggestion I’m seeing is about removing the .nv file, which I don’t seem to have on my system.
I’m thinking my issue may be something like an incomplete install or missing dependencies or something?
If I use
linear = nn.Linear(2, 2)
x = torch.randn(2, 2)
print(linear(x))
everything works fine, but if instead I use
linear = nn.Linear(2, 2).cuda()
x = torch.randn(2, 2).cuda()
print(linear(x))
I get
RuntimeError Traceback (most recent call last)
<ipython-input-3-81612b3daf91> in <module>
4 linear = nn.Linear(2, 2).cuda()
5 x = torch.randn(2, 2).cuda()
----> 6 print(linear(x))
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
537 result = self._slow_forward(*input, **kwargs)
538 else:
--> 539 result = self.forward(*input, **kwargs)
540 for hook in self._forward_hooks.values():
541 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
85
86 def forward(self, input):
---> 87 return F.linear(input, self.weight, self.bias)
88
89 def extra_repr(self):
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1363 if input.dim() == 2 and bias is not None:
1364 # fused op is marginally faster
-> 1365 ret = torch.addmm(bias, input, weight.t())
1366 else:
1367 output = input.matmul(weight.t())
RuntimeError: cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216
|
st84623
|
Hello, I hit this exact error trying to get pytorch/examples/mnist/main.py to work. The above print(linear(x)) code also generates the same error. I’m on a 2014 MacBook Pro with a GTX 750M. Built pytorch from GitHub src a couple days ago, rebuilding just now to see if it makes any difference. Would love to see this fixed! Thanks!
print(sys.version)
print(torch.__version__)
print(torchvision.__version__)
print(torch.cuda.is_available())
print(torch.backends.cudnn.enabled)
print(torch.version.cuda)
print(torch.backends.cudnn.version())
3.7.3 (default, Mar 30 2019, 03:44:34)
[Clang 9.1.0 (clang-902.0.39.2)]
1.2.0a0+5395db2
0.3.0a0+487c9bf
True
True
10.1
7601
|
st84624
|
John_G, are you still encountering this? This is an ongoing problem for me. Perhaps I should open an issue on GitHub.
|
st84625
|
I’m trying to debug my code. I predict two variables A, B and compute losses L1(A) and L2(B) on them. However, the way the network currently behaves is as if L2 is affecting A, even though it shouldn’t according to my code. I want to make absolutely sure that during backpropagation, A is not affected by L2. I thought one easy way of doing this is to check if A is part of the graph which produces L2.
How would one proceed to check this?
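One way to check this is to ask autograd for the gradient of L2 with respect to A; a minimal sketch (the tiny tensors here are stand-ins for your real A, B and losses):
import torch

A = torch.randn(3, requires_grad=True)
B = torch.randn(3, requires_grad=True)
L2 = (B ** 2).sum()          # a loss that should not depend on A

# allow_unused=True returns None instead of raising an error if A is not in L2's graph
grad_A = torch.autograd.grad(L2, A, retain_graph=True, allow_unused=True)[0]
print(grad_A is None)        # True -> L2 does not backpropagate into A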
|
st84626
|
Hi,
Suppose there is a case like this:
x1, x2, lb1, lb2 = dataloader.next
logits1 = model(x1)
logits2 = model(x2)
loss = criteria1(logits1, lb1) + criteria2(logits2, lb2)
...
The problem is that I need to update the bn moving-average statistics only on x1, and I do not need to update the moving-average statistics on the other forward computations. How could I do this with PyTorch?
|
st84627
|
You could call .eval() on all batch norm layers in your model after passing x1 to the model and before using x2. After it just reset these layers to .train() again.
If you don’t use any dropout layers, you could also just call model.eval()/.train().
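A minimal sketch of that toggling (model, x1 and x2 are taken from the question above):
import torch.nn as nn

def set_bn_mode(model, train):
    # switch only the batch norm layers between train/eval
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train(train)

logits1 = model(x1)            # running stats updated here
set_bn_mode(model, False)      # freeze running stats
logits2 = model(x2)            # batch norm now uses the running stats, no update
set_bn_mode(model, True)       # restore training behaviour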
|
st84628
|
thanks for replying!!
If I call model.eval(), the input tensor would be normalized with the running mean and running var rather than the batch statistics, can this be avoided ?
|
st84629
|
You could disable the running stats completely by setting track_running_stats=False.
However, your use case seems to be:
for x1 use batch stats and update running estimates
for x2 just use batch stats
What about the affine parameters?
Should they also be updated using the loss from x2 or just x1?
In the latter case, you could use two different batchnorm layers, pass a flag to forward, and switch between these layers depending if x1 or x2 was passed.
|
st84630
|
Thanks for replying!!
The affine parameters are trained from both x1 and x2. Two different batchnorm layers can solve the running estimates problem, but the affine parameters are not shared between these two batchnorm layers in this way. Any suggestions ?
|
st84631
|
You could use a hacky way of setting track_running_stats=False for the x2 input and reset the running stats manually.
Here is a small example:
bn = nn.BatchNorm2d(3, track_running_stats=True)
bn.eval()
print(bn.running_mean) # zeros
print(bn.running_var) # ones
x = torch.randn(2, 3, 4, 4)
out = bn(x)
print(out.mean()) # should NOT be perfectly normal, since running stats used
print(out.std())
print(bn.running_mean) # not updated
print(bn.running_var)
# Disable running_states
# internally buffers are still valid, so we need to reset them manually
bn.track_running_stats = False
out2 = bn(x)
print(out2.mean()) # should be normal now
print(out2.std())
print(bn.running_mean) # unfortunately updated
print(bn.running_var)
# Reset running stats
with torch.no_grad():
    bn.running_mean = (bn.running_mean - x.mean([0, 2, 3]) * bn.momentum) / (1 - bn.momentum)
    bn.running_var = (bn.running_var - x.var([0, 2, 3]) * bn.momentum) / (1 - bn.momentum)
print(bn.running_mean) # back to values before last update
print(bn.running_var)
# enable running stats again
bn.track_running_stats = True
out3 = bn(x)
print(out3.mean() == out.mean()) # compare to initial output
print(out3.std() == out.std())
I would recommend to test this approach in your model and make sure you’ll get the desired outputs, gradients and updates.
|
st84632
|
So I know my GPU is close to being out of memory with this training, and that’s why I only use a batch size of two, and it seems to work alright.
The problem arises when I first load the existing model using torch.load, and then resume training. When resuming training, it instantly says :
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 1.96 GiB total capacity; 1.36 GiB already allocated; 46.75 MiB free; 38.25 MiB cached)
I don’t know how to get rid of this error. When using nvidia-smi right after the error, I obtain this :
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 991 G /usr/lib/xorg/Xorg 120MiB |
| 0 1350 G cinnamon 29MiB |
| 0 2878 G ...incent/anaconda3/envs/newtor/bin/python 2MiB |
+-----------------------------------------------------------------------------+
So it’s definitely something about loading the model that makes it break. Can you help me out ? Maybe hack it and reset Cuda memory usage after loading the model ? Thanks for helping me out.
|
st84633
|
del checkpoint
was the solution, though once again I’m not sure why. CUDA magic, I guess.
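For reference, a minimal sketch of loading the checkpoint so it doesn’t linger on the GPU (the key names follow the save code from the earlier post; the rest is an assumption about your setup):
checkpoint = torch.load('saved', map_location='cpu')  # keep the checkpoint tensors on the CPU
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
del checkpoint                # drop the reference so the memory can actually be freed
torch.cuda.empty_cache()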
|
st84634
|
Hi,
I read the documentation of torch.nn.functional.softmax, but I am not very clear upon the usage of the dim argument.
My output is of the following dimensions :
(batchsize, num_classes, length, breadth, height) - so it is a 5D output
I am doing a multi class semantic segmentation problem, so I want the pixel-wise softmax of corresponding pixels in (length, breadth, height) along all classes. So do I use dim = 1 while using torch.nn.functional.softmax(output, dim = 1) , this way?
Please advise.
Thanks a lot!
|
st84635
|
torch.nn.functional.softmax(output, dim = 1) will yield an output, where the probabilities for each pixel sum to 1 in the class dimension (dim1).
This probability tensor can be used as a sanity check or for visualization purposes.
However, note that e.g. nn.CrossEntropyLoss expects raw logits as the model’s output, since internally F.log_softmax(output, 1) and nn.NLLLoss will be used, so that you shouldn’t use the softmax method manually.
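For example, a minimal sketch with a 5D output of the shape described above (the sizes are arbitrary):
import torch
import torch.nn.functional as F

output = torch.randn(2, 4, 8, 8, 8)   # (batch, num_classes, length, breadth, height)
probs = F.softmax(output, dim=1)
print(probs.sum(dim=1))               # ~1.0 at every spatial location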
|
st84636
|
Hi PyTorchers,
I am currently attempting to implement a network similar to Network in Network (2014; M. Lin, Q. Chen), which takes the output of each particular sliding window of a 2D convolutional layer and passes it through multiple linear layers prior to the pooling downsampling.
I am curious if there is a way to manually extract all of the features from each window as the Conv2d creates feature maps, so I can add my linear MLP layers prior to the downsampling.
|
st84637
|
Hello everyone,
I’m currently working with an LSTM to predict the remaining lifetime of mechanical parts. I get a problem when I try to use the trained network to predict multiple timesteps ahead. I noticed that every time I use the predicted output as input, I get this error message after a few timesteps (mostly between 50-200 timesteps):
cuda out of memory. tried to allocate 2.50 mib (gpu 0; 8.00 gib total capacity; 6.28 gib already allocated; 1.55 mib free; 1.55 mib cached
This is the code:
inp, label = Validset[1]
print(inp.shape)
y = []
for i in range(100):
    print(i)
    torch.no_grad()
    inp = torch.tensor(inp)
    inp = inp.view(1, -1, 1)
    inp = inp.to(device)
    inp = inp.float()
    out, hn_cn = model.forward(inp)
    del inp
    out = out.reshape(-1)
    y.append(out[-1])
    inp = out.clone().detach()
    del out
    torch.cuda.empty_cache()
I used a gtx1070 and a gtx1080 sli for this task but it seems like both setups fail to predict enough timesteps.
I also noticed this problem while training an LSTM. There I tried to use an output tensor (1x1) as an input as well, and I get the same error message. Is there a way to free memory after I have predicted one timestep? Or does anyone have a different solution?
Thank you in advance
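One likely culprit in a loop like this is that torch.no_grad() on its own line does nothing; it has to be used as a context manager, otherwise the whole prediction history can stay in the autograd graph. A minimal sketch of the loop body under that assumption (inp, model and device come from the code above):
y = []
with torch.no_grad():                  # actually disables autograd inside the block
    for i in range(100):
        inp = inp.view(1, -1, 1).float().to(device)
        out, hn_cn = model(inp)
        out = out.reshape(-1)
        y.append(out[-1].item())       # store a plain Python number, not a graph node
        inp = out.detach()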
|
st84638
|
Hello Forum!
Are things likely to go smoothly if I try to run pytorch on an external
gpu on a laptop? (Assume recent mainstream hardware.) Or are
things likely to be finicky or not work?
Would I expect to be able to download a pre-built version of
pytorch? I imagine that matching my gpu, os, and cuda version
should be enough, and that pytorch will neither know nor care
that the gpu is external. Is this right?
As I understand it, external gpus are typically connected through
a thunderbolt port. Would I expect to get (nearly) full gpu
performance, or would the thunderbolt port introduce some kind
of bottleneck?
And, of course, if anyone has actually done this, what hardware
and os did you have success with, and did you encounter any
noteworthy problems?
Thanks for any advice!
K. Frank
|
st84639
|
When I directly define an nn.Module with fixed layers and use torch.utils.tensorboard.SummaryWriter, I get a nice visualization of my computational graph like this:
class NeuralNetStrategy(nn.Module):
    def __init__(self, input_length):
        super(NeuralNetStrategy, self).__init__()
        self.fc1 = nn.Linear(input_length, 10)
        self.fc_out = nn.Linear(10, 1)

    def forward(self, x):
        x = self.fc1(x).tanh()
        x = self.fc_out(x).relu()
        return x

model = NeuralNetStrategy(1)
with SummaryWriter(logdir) as writer:
    writer.add_graph(model, some_input_tensor)
My goal is to get a similar output graph but for a dynamic number of layers created at runtime, but I’m having trouble to make this work.
I’ve tried using nn.ModuleDict() or building an OrderedDict and passing it to nn.Sequential in order to create named layers, but my tensorboard graph does not display the names. My current class definition and tensorboard output looks something like this:
class NeuralNetStrategy(nn.Module):
    def __init__(self, input_length, hidden_nodes: Iterable[int], hidden_activations: Iterable):
        super(NeuralNetStrategy, self).__init__()
        self.layers = nn.ModuleDict()
        self.layers['fc_0'] = nn.Linear(input_length, hidden_nodes[0])
        self.layers['activation_0'] = hidden_activations[0]
        for i in range(1, len(hidden_nodes)):
            self.layers['fc_' + str(i)] = nn.Linear(hidden_nodes[i-1], hidden_nodes[i])
            self.layers['activation_' + str(i)] = hidden_activations[i]
        self.layers['fc_out'] = nn.Linear(hidden_nodes[-1], 1)
        self.layers['activation_out'] = nn.ReLU()

    def forward(self, x):
        for layer in self.layers.values():
            x = layer(x)
        return x
When I initialize my model, it looks fine in pytorch (and does perform the desired computations), yet the tensorboard graph does not respect the named layers:
How can I fix this?
|
st84640
|
Hey everyone,
I’ve looked into this quite some time now, and I haven’t found any explanation, yet. I’ve trained a few GANs with different hyperparameters and saved the checkpoints every epoch. Now, some of the runs I’ve done have checkpoints I can load and generate data without a problem, and some just generate rubbish, as if the generator has not been trained at all or collapsed at some point.
I have changed the network structure at one point, doubling the number of filters in the first layer of the generator, but I also reverted to the matching structure when loading those checkpoints. I also don't get any error messages when loading the state dict, which should mean that everything is loaded correctly.
I trained the network using PyTorch 0.4.1 and evaluate on PyTorch 0.4.0, but deleted the num_batches_tracked entries of the batch norm layers accordingly. This also works for some checkpoints.
I’m utterly confused on why that would happen, has anyone experienced something of that kind?
I load my generator checkpoint like this:
generator = dcgm.GeneratorNe(nc, ngf=128, ngpu=ngpu)
dict = torch.load(opt.loadG, map_location='cuda:0' if torch.cuda.is_available() else 'cpu')
if torch.__version__ == '0.4.0':
    del dict['net.1.num_batches_tracked']
    del dict['net.4.num_batches_tracked']
    del dict['net.7.num_batches_tracked']
    del dict['net.10.num_batches_tracked']
    del dict['net.13.num_batches_tracked']
generator.load_state_dict(dict)
generator.to(gpu)
generator.eval()
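One sanity check I'm considering (a rough sketch, reusing the names from the snippet above) is to compare a crude parameter checksum before and after loading, to confirm the checkpoint really overwrites the random initialization:
def param_checksum(model):
    # crude sum over all parameter magnitudes, only meant for a before/after comparison
    return sum(p.detach().abs().sum().item() for p in model.parameters())

print(param_checksum(generator))   # with the freshly initialized weights
generator.load_state_dict(dict)    # 'dict' is the loaded state dict from above
print(param_checksum(generator))   # should differ if the checkpoint was actually applied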
|
st84641
|
Hi sami,
I have the same issue. I loaded the weights file and generated images from it, but the results look as if random weights, not the learnt ones, were used. Have you figured out why that is? I haven't been able to generate a single good image so far.
I’m using torch 0.4.1.
The weights file was saved via torch.save(generator.state_dict(), path) during the training phase.
During the testing phase, I did:
model = generator()
checkpoint = torch.load('path/001_G.pth', map_location=str(device))
model.load_state_dict(checkpoint, strict=False)
model.to(device)
model.float()
model.eval()

def label_sampel():
    label = torch.LongTensor(batch_size, 1).random_() % n_class
    one_hot = torch.zeros(batch_size, n_class).scatter_(1, label, 1)
    print(device)
    return label.squeeze(1).to(device), one_hot.to(device)

z = torch.randn(batch_size, z_dim).to(device)
z_class, z_class_one_hot = label_sampel()
fake_images = model(z, z_class_one_hot)
save_image(denorm(fake_images.data), os.path.join(path, '1_generated.png'))
|
st84642
|
In PyTorch, if both tensors are long, an integer division is performed:
torch.tensor([242240, 226320, 186240, 171840, 165680]) / torch.tensor(694)
Out : tensor([349, 326, 268, 247, 238])
I need a regular division, so I tried converting my second tensor to float, but it yields the same result:
torch.tensor([242240, 226320, 186240, 171840, 165680]) / torch.tensor(694).float()
Out : tensor([349, 326, 268, 247, 238])
However, if the first tensor is of size 1, this works :
torch.tensor(10) / torch.tensor(10).float()
Out : tensor(1.0)
This seems really weird. Any thoughts?
|
st84643
|
It's not the same to do:
import torch
a = torch.tensor(10) / torch.tensor(10).float()
b = torch.tensor([10]) / torch.tensor(10).float()
print(a, 'Dimensions: %s' % a.ndimension(), ' Type %s' % a.type())
print(b, 'Dimensions: %s' % b.ndimension(), ' Type %s' % b.type())
tensor(1.) Dimensions: 0 Type torch.FloatTensor
tensor([1]) Dimensions: 1 Type torch.LongTensor
That's why the result is different.
|
st84644
|
I agree with you that the number of dimensions is different. That doesn’t explain why the type is different though.
|
st84645
|
The constructor generates different data types depending on whether the input is a number or an array. I edited my previous reply.
The short answer is that these kinds of operations depend on the dimensionality: a 0-dimensional tensor is not treated the same way as a 1-dimensional tensor with a single element.
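As a small illustration of the workaround (a sketch; on the PyTorch version discussed in this thread the plain long/long case performs integer division, while casting the non-scalar tensor gives a regular division):
import torch

a = torch.tensor([242240, 226320, 186240, 171840, 165680])
b = torch.tensor(694)

print(a / b)          # integer division here, since both operands are LongTensors
print(a.float() / b)  # casting the 1-dimensional tensor yields a regular float division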
|
st84646
|
Hi,
Suppose I have a matrix tensor
a b
c d
and want to repeat it into
a 0 b 0
0 a 0 b
c 0 d 0
0 c 0 d
In NumPy this is y = np.kron(x, np.eye(2)); I want to know the PyTorch equivalent.
My actual use case has more than two dimensions, with additional dimensions at the tail, though.
Thank you very much!
|
st84647
|
I don't think there is a ready-made function for this, but starting with zeros and copying x into y[::2, ::2] and y[1::2, 1::2] should do the trick.
If your x isn't tiny and the "stencil matrix" is reasonably small, a for loop over the stencil isn't a performance headache.
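To make the idea concrete, a minimal sketch for the 2x2 identity stencil (extra trailing dimensions would simply be appended to the shapes and left out of the strided indexing):
import torch

x = torch.tensor([[1., 2.],
                  [3., 4.]], requires_grad=True)
k = 2                          # size of the identity stencil
rows, cols = x.shape
y = x.new_zeros(rows * k, cols * k)
for i in range(k):
    y[i::k, i::k] = x          # copy x onto the i-th strided block pattern
print(y)                       # reproduces the interleaved pattern from the question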
Best regards
Thomas
|
st84648
|
Hi Tom,
I'm operating on module parameters; will this copying & looping approach provide autograd support?
And how can I use broadcasting semantics in the indexed assignment?
Thanks!
|
st84649
|
Yes
What would broadcasting look like here?
P.S.: Please don’t ask the same question twice.
|
st84650
|
I had a quick question about best practices for device agnostic coding. Some context: I prototype my code on my laptop (CPU only), before training in the cloud.
Right now, I follow the following pattern.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
for everything. This works quite nicely, and I’m happy that this functionality has been added to PyTorch.
But there are situations like the following where everything will work well on my local CPU environment, then fail in a GPU environment.
a = torch.tensor([1]).to(device)
b = torch.tensor([1])
c = a+b
which is entirely correct, and the behaviour I would expect.
What I would like is to be able to impose restrictions on myself which ensure that code like the above fails both in my local CPU environment and in the GPU environment where I do my training.
Are there any existing solutions to this? If not, I think what I would like is an option I could enable which would force me to specify the device on which all leaf tensors live. Does such an option exist? Am I the only person who would be interested in this?
|
st84651
|
atiyo:
Are there any existing solutions to this? If not, I think what I would like is an option I could enable which would force me to specify the device on which all leaf tensors live. Does such an option exist? Am I the only person who would be interested in this?
The canonical thing is to test in an environment that resembles the target better.
There are 19 other people who want this. However, there are reservations about the feature request because it would replace an obvious way to get things wrong with a more subtle one.
Best regards
Thomas
|
st84652
|
Thanks for the reply. I agree that setting the default to the GPU could be problematic, so I was careful in my original post not to suggest this.
I posted what’s below already in your linked GitHub issue, but posting it here too for the sake of continuity of this thread:
What about things like torch.set_default_device() and torch.get_default_device(), where the default device type could initially be the CPU?
There is already similar functionality for dtypes with torch.set_default_tensor_type() and torch.get_default_dtype().
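For reference, the existing dtype machinery I have in mind works roughly like this (a small sketch; the proposed set_default_device()/get_default_device() pair is hypothetical and would follow the same pattern):
import torch

print(torch.tensor([1.]).dtype)                    # torch.float32 by default
torch.set_default_tensor_type(torch.DoubleTensor)  # change the default floating point type
print(torch.get_default_dtype())                   # torch.float64
print(torch.tensor([1.]).dtype)                    # new float tensors now default to float64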
|
st84653
|
I'm trying to install the PyTorch package using the command below, but it fails to install with an error.
Command:
pip3 install https://download.pytorch.org/whl/cu90/torch-1.1.0-cp36-cp36m-win_amd64.whl
ERROR: torch-0.4.0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
Based on a related post I updated pip and checked the Python version… it was fine, but I still could not figure it out. You can refer to the image below for the details.
torcherror.PNG1205×262 11.7 KB
|
st84654
|
I'm not sure if I'm interpreting the message in your Python terminal correctly, but are you using 32-bit Windows?
If so, the binaries won't work on your system, as they are built for a 64-bit system.
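A quick way to check which interpreter is actually being used (the operating system can be 64-bit while the installed Python is still 32-bit; a small sketch):
import sys
import struct

print(sys.version)                 # interpreter version; the wheel tag has to match it (cp36, cp37, ...)
print(struct.calcsize("P") * 8)    # 64 for a 64-bit interpreter, 32 for a 32-bit one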
|
st84655
|
No, I am using 64-bit.
In the image you can see that I first ran the python command and it shows something like win32, but to validate that I executed the second command (the one followed by the exit() command). It shows the machine architecture: it is 64-bit, and I also checked in the system configuration that it is 64-bit.
I have a similar setup on my desktop and it's working fine there.
working.PNG1490×177 8.2 KB
Just for your reference, below is a link to the same problem where they stated the reason. I followed it but could not get the required results.
Windows - not a supported wheel on this platform
I was using peter123’s pytorch with Anaconda on Windows platform successfully. With the new windows support I am trying to install pytorch but I keep getting not a supported wheel.
I’m trying
pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.0-cp36-cp36m-win_amd64.whl
python --version : Python 3.6.3
pip --version : pip 10.0.1
pip3 --version : pip 10.0.1
np.version.version : 1.14.2
Windows 7 - 64 bit without any CUDA supported GPU
|
st84656
|
You are using Python 3.7 but trying to install a package built for Python 3.6. Check the output in your first picture carefully.
|
st84657
|
Please help.
I trained a model, which was constructed in PyTorch.
But there is an error, shown in the figure.
problem.JPG1027×451 63.6 KB
The environment is:
Ubuntu 16.04
Python 3.6
Pytorch 1.1.0
My GPU is GeForce GTX 1080.
|
st84658
|
The first thing you should do is set the mini-batch size to a smaller number, e.g. half of your current size.
This error occurs when your network needs more memory than is available on your GPU.
Hope this helps!
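As a hypothetical illustration (the dataset and batch sizes here are made up), halving the batch size in a typical DataLoader setup and keeping an eye on the allocated memory:
import torch
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)   # e.g. reduced from 64

# after a forward/backward pass, a rough check of the memory currently in use:
print(torch.cuda.memory_allocated() / 1024**2, 'MiB allocated')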
|
st84659
|
Hi all, I'm working with the MNIST dataset available through torchvision and am trying to visualize the data (average digit, overlaid digits, etc.). While I am able to extract/transform/load the dataset, I can't seem to find a way to get at the data itself to perform operations such as slicing and subsetting.
Any clarification would be greatly appreciated!
|
st84660
|
Solved by ptrblck in post #2
|
st84661
|
You could access the underlying .data and .targets attributes:
import torchvision.transforms.functional as TF
from torchvision import datasets, transforms

dataset = datasets.MNIST(root='./data',
                         transform=transforms.ToTensor())
data = dataset.data
print(data.shape)  # torch.Size([60000, 28, 28])
img = TF.to_pil_image(data[0])
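For the slicing/subsetting and "average digit" part of the question, a possible follow-up sketch (this assumes .targets holds the labels as a tensor):
targets = dataset.targets
threes = data[targets == 3]                 # all images of the digit 3, shape [N, 28, 28]
print(threes.shape)

avg_three = threes.float().mean(dim=0)      # 28x28 "average" digit, values in [0, 255]
img = TF.to_pil_image(avg_three.byte())     # back to uint8, same as the data[0] call above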
|
st84662
|
Hi
I was wondering whether, when running an RNN over multiple batches, the hidden state is kept across batches, or whether there is a way of switching statefulness on and off (like Keras' stateful=True).
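For context, what I mean by stateful is manually carrying the hidden state from one batch to the next, roughly like this (a sketch; rnn and batches are placeholders, and for an LSTM the state would be an (h, c) tuple):
import torch.nn as nn

rnn = nn.GRU(input_size=10, hidden_size=20)    # hypothetical module
hidden = None                                  # None lets the first call start from zeros
for batch in batches:                          # each batch: [seq_len, batch_size, 10]
    out, hidden = rnn(batch, hidden)           # reuse the state left over from the previous batch
    hidden = hidden.detach()                   # keep the values but cut the graph between batches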
Thanks!
|
st84663
|
Hi there,
I'm quite new to PyTorch, so maybe it's something simple. I've trained my net, which is:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(4, 4)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(44896, 4096)
        self.fc2 = nn.Linear(4096, 1024)
        self.fc3 = nn.Linear(1024, 251)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), 44896)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
And now I want to remove the last layer (fc3) to turn my net into a feature extractor. So I'm loading the trained model and removing the last layer:
# loading the model
model = torch.load('./net_100_test_train')
model.eval()
# loading, preparing and normalizing a test image
img = mpimg.imread('./0298_0040_0001.png')
img = np.array(img*2-1)
img = np.tile(img, (1, 1, 1))
img = torch.from_numpy(img)
img = torch.unsqueeze(img, 1)
# removing the last layer
new_model = nn.Sequential(*list(model.children())[:-1])
with torch.no_grad():
    output = new_model(img.to(device))
which is giving me an error:
RuntimeError: size mismatch, m1: [1952 x 93], m2: [44896 x 4096] at c:\a\w\1\s\windows\pytorch\aten\src\thc\generic/THCTensorMathBlas.cu:266
Is this because I'm using Sequential? How should I remove the last layer properly and still be able to extract features?
edit:
I think I worked around it by replacing the last layer with an Identity layer, but the layer itself is still there.
|
st84664
|
Solved by ptrblck in post #2
|
st84665
|
If you try to create a new nn.Sequential model using the layers from Net, the flatten operation will be missing.
Additionally, you are currently using only one pooling layer, as both are defined as self.pool.
Probably the most straightforward way would be to derive another class from Net and just manipulate the forward method so that it returns your desired features tensor.
However, if you would like to use the nn.Sequential approach, you could define a Flatten layer and add it to the right place:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(44896, 4096)
        self.fc2 = nn.Linear(4096, 1024)
        self.fc3 = nn.Linear(1024, 251)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        print(x.shape)
        x = x.view(x.size(0), 44896)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

x = torch.randn(1, 1, 259, 199)
model = Net()
output = model(x)

class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        return x.view(x.size(0), -1)

new_model = nn.Sequential(*[*list(model.children())[:4], Flatten(), *list(model.children())[4:-1]])
new_model(x)
|
st84666
|
Thanks!
I’ll probably go with the Flatten solution.
Indeed I defined these pools wrong, thanks for finding that! I've retrained the net with the correct pooling and, interestingly, the accuracy has improved.
|
st84667
|
I am trying to remove the 4th max pooling layer in VGG16 using:
list1 = list(vgg.classifier._modules.values())[:23]
list1.extend(list(vgg.classifier._modules.values())[24:-1])
list2 = list(vgg.features._modules.values())[:23]
list2.extend(list(vgg.features._modules.values())[24:-1])
vgg.classifier = nn.Sequential(*list1)
self.RCNN_base = nn.Sequential(*list2)
I have also changed
self.RCNN_roi_pool = _RoIPooling(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0/8.0)
from
self.RCNN_roi_pool = _RoIPooling(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0/16.0)
However, I am getting a size mismatch error:
RuntimeError: size mismatch, m1: [256 x 1000], m2: [4096 x 8] at /opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/THC/generic/THCTensorMathBlas.cu:249
Can you please help me with this?
|