st30368
|
Could you show me an example of what you were thinking?
torch.gather can do something like this (https://pytorch.org/docs/master/torch.html?highlight=gather#torch.gather 213) but it might not be exactly what you’re looking for.
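For reference, a minimal sketch of what torch.gather does along one dimension (the values here are made up for illustration):
import torch

src = torch.tensor([[1, 2, 3],
                    [4, 5, 6]])
index = torch.tensor([[2, 0],
                      [1, 1]])
# for dim=1: out[i][j] = src[i][index[i][j]]
out = torch.gather(src, 1, index)
# out: tensor([[3, 1],
#              [5, 5]])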
|
st30369
|
Hmm, it might work. I only figured out this naive solution:
the indices must be a 2D tensor with indices.size(1) == len(params.size()).
def gather_nd(params, indices, name=None):
    '''
    The input indices must be a 2d tensor in the form of [[a,b,..,c],...],
    which represents the location of the elements.
    '''
    indices = indices.t().long()
    ndim = indices.size(0)
    idx = torch.zeros_like(indices[0]).long()
    m = 1
    for i in range(ndim)[::-1]:
        idx += indices[i] * m
        m *= params.size(i)
    return torch.take(params, idx)
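As a quick sanity check, a small usage sketch of the gather_nd above (example values are mine):
params = torch.arange(12).view(3, 4)
indices = torch.tensor([[0, 1],
                        [2, 3]])
print(gather_nd(params, indices))  # tensor([ 1, 11]), i.e. params[0, 1] and params[2, 3]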
|
st30370
|
Could you provide a more general version that fulfills tf.gather_nd, not limited to 2D tensors?
For example, if I want to gather from a 3D tensor of size CxHxW given indices of size Nx2, where the index values are coordinates in the HxW grid, how can I get a result of size CxN?
Actually, in TensorFlow this can easily be done with tf.gather_nd…
|
st30371
|
I think you can use advanced indexing for this:
C, H, W = 3, 4, 4
x = torch.arange(C*H*W).view(C, H, W)
print(x)
idx = torch.tensor([[0, 0],
                    [1, 1],
                    [2, 2],
                    [3, 3]])
print(x[list((torch.arange(x.size(0)), *idx.chunk(2, 1)))])
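One note on this sketch: because of the broadcasting, the result comes out as N x C (here 4 x 3). If you want C x N you can transpose it, or index directly with the two coordinate columns (variable names reuse the snippet above):
rows, cols = idx[:, 0], idx[:, 1]  # each of shape (N,)
out = x[:, rows, cols]             # shape (C, N)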
|
st30372
|
@ptrblck Thanks! I have found your previous answer 122 which solved my problem.
The following codes are used for validation:
import torch

batch_size = 2
c, h, w = 256, 38, 65
nb_points = 784
nb_regions = 128

img_feat = torch.randn(batch_size, c, h, w).cuda()
x = torch.empty(batch_size, nb_regions, nb_points, dtype=torch.long).random_(h).cuda()
y = torch.empty(batch_size, nb_regions, nb_points, dtype=torch.long).random_(w).cuda()

# method 1
result_1 = img_feat[torch.arange(batch_size)[:, None], :, x.view(batch_size, -1), y.view(batch_size, -1)]
result_1 = result_1.view(batch_size, nb_regions, nb_points, -1)

# method 2
result_2 = img_feat.new(batch_size, nb_regions, nb_points, img_feat.size(1)).zero_()
for i in range(batch_size):
    for j in range(x.shape[1]):
        for k in range(x.shape[2]):
            result_2[i, j, k] = img_feat[i, :, x[i, j, k].long(), y[i, j, k].long()]

print((result_1 == result_2).all())
|
st30373
|
Thank you for this solution. How can I do this if I have images in batches, N x C x H x W, and the index tensor is also batched, N x H*W x 2, so that my output is N x C x H x W? It would be really great.
|
st30374
|
Could you post a minimal example, how the index tensor should be used to index which values in your input?
|
st30375
|
I have been achieving this using map following is a code and output
N, C, H, W = 2 , 3 , 2, 2
img = torch.arange(N*C*H*W).view(N,C, H, W)
idx = torch.randint(0,2,(N,H*W,2))
maskit = lambda x,idx: x[list((torch.arange(x.size(0)), *idx.chunk(2, 1)))]
masked = torch.stack([*map(lambda x:maskit(x[0],x[1]),zip(img,idx))])
final = masked.expand(1,*masked.shape).permute(1,3,2,0).view(*img.shape)
OUTPUT
print(img)
**out :** tensor([[[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11]]],
[[[12, 13],
[14, 15]],
[[16, 17],
[18, 19]],
[[20, 21],
[22, 23]]]])
print(idx)
**out :** tensor([[[0, 1],
[1, 0],
[1, 0],
[1, 1]],
[[1, 1],
[0, 1],
[1, 0],
[0, 0]]])
print(masked)
**out :** tensor([[[ 1, 5, 9],
[ 2, 6, 10],
[ 2, 6, 10],
[ 3, 7, 11]],
[[15, 19, 23],
[13, 17, 21],
[14, 18, 22],
[12, 16, 20]]])
print(final)
**out :** tensor([[[[ 1, 2],
[ 2, 3]],
[[ 5, 6],
[ 6, 7]],
[[ 9, 10],
[10, 11]]],
[[[15, 13],
[14, 12]],
[[19, 17],
[18, 16]],
[[23, 21],
[22, 20]]]])
print(final.shape)
**out :** torch.Size([2, 3, 2, 2])
Can’t we do this without using map?
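A possible fully vectorized sketch without map, using batched advanced indexing (my own variant of the above, reusing the same img and idx; not tested beyond this toy case):
n_idx = torch.arange(img.size(0))[:, None]        # (N, 1), broadcasts over the H*W points
masked = img[n_idx, :, idx[..., 0], idx[..., 1]]  # (N, H*W, C)
final = masked.permute(0, 2, 1).view(*img.shape)  # back to (N, C, H, W)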
|
st30376
|
I’m very new to PyTorch and I’m stuck with the model not converging. It seems to me it is not learning, since the loss/r2 do not improve.
What I’ve checked/tried based on the suggestions I found here:
changed / rewrote the loss function from scratch
set “loss.requires_grad = True”
tried to feed the data without a DataLoader / just straight manual batches
played with 2d data / mean pooled data!!! I got decent results for mean pooled data in Random Forest and SVM regressors, but not in the NN, which confuses me and makes me think that the data is OK and the net is NOT ok!
played with learning rate, batch size
etc.
About the data: input is BERT embeddings from a letter-sequence; each data point is 43 rows of 1024 features (for Conv1d I transpose it to 1024x43). Total >40K data points in train, batch size = 64.
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F  ## relu, tahn
import torch.utils.data as DataLoader  # helps create batches to train on
from scipy.stats import pearsonr
import numpy as np
import torch.utils.data as data_utils

torch.set_printoptions(precision=10)

# Hyperparameters
learning_rate = 0.001
batch_size = 64
num_epochs = 100

data_train1 = torch.Tensor(data_train)
targets_train1 = torch.Tensor(targets_train)
dataset_train = data_utils.TensorDataset(data_train1, targets_train1)
train_loader = DataLoader.DataLoader(dataset=dataset_train, batch_size=batch_size, shuffle=True)

class NN(nn.Module):
    def __init__(self):  # input_size = 43x1024
        super(NN, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(1024, 512, kernel_size=4),  # I tried different in and out here
            nn.ELU(),
            nn.BatchNorm1d(512),
            nn.Flatten(),
            nn.Linear(512*40, 512),
            nn.ReLU(),
            nn.Linear(512, 1)
        )

    def forward(self, x):
        return self.layers(x)

torch.manual_seed(100)

# Initialize network
model = NN().to(device)

# Loss and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)  # to check

# Training the model
metric_name = 'r2'
for epoch in range(num_epochs):
    score = []
    loss_all = []
    print(f"Epoch: {epoch+1}/{num_epochs}")
    model.train()
    for batch_idx, (data, targets) in enumerate(train_loader):
        data = data.to(device=device)
        targets = targets.to(device=device)
        optimizer.zero_grad()
        # forward
        predictions = model(data)
        loss = criterion(predictions, targets).to(device)
        loss.requires_grad = True
        # backward
        loss_all.append(loss.item())
        loss.backward()
        # gradient descent or adam step
        optimizer.step()
        # computing r squared
        output = predictions.detach().cpu().numpy()
        target = targets.detach().cpu().numpy()
        output = np.squeeze(output)
        target = np.squeeze(target)
        score.append(pearsonr(target, output)[0]**2)
    total_score = sum(score)/len(score)
    print(f'training {metric_name}: {total_score}, mean loss: {sum(loss_all)/len(loss_all)}')
Output for (10) first Epochs:
Epoch: 1/100 training r2: 0.0026224905802415955, mean loss: 0.5084380856556941
Epoch: 2/100 training r2: 0.0026334153423518466, mean loss: 0.5082988155293148
Epoch: 3/100 training r2: 0.002577073836564485, mean loss: 0.5085703951569392
Epoch: 4/100 training r2: 0.002633483899689855, mean loss: 0.5081870414129565
Epoch: 5/100 training r2: 0.0025642136678393776, mean loss: 0.5083346445680192
Epoch: 6/100 training r2: 0.0026261540869286933, mean loss: 0.5084220717277274
Epoch: 7/100 training r2: 0.002614604670602339, mean loss: 0.5082813335398275
Epoch: 8/100 training r2: 0.0024826257263258784, mean loss: 0.5086268588042153
Epoch: 9/100 training r2: 0.00261018096876641, mean loss: 0.5082496945227619
Epoch: 10/100 training r2: 0.002542892071836945, mean loss: 0.5088265852086478
The response is float64 in range (-2, 2).
With the response scaled to float64 in range [-1, 1] and tanh, it is still not converging. I feel like something general is missing. BTW, when I use non-shuffled batches and straightforward data for the batches (first batch with indices 0-63, second batch 64-127, etc.) I get the same score results in each epoch!
I tried to add (2) more sequences with Conv1d, BatchNorm1d, ELU (1024->512, 512->256, 256->128, kernel size 4) and the result is worse :( it is not learning at all!
I also tried mean pooling the data to have only 1024 features for the input, with the same poor result for the NN.
To me it seems like I’m missing something very general and the model just does not learn.
Hope you can help! Thank you in advance for your time!
|
st30377
|
Hello,
Since the loss is changing from 0.0026 to 0.0024, I think it is learning, even if it is not decreasing at the beginning; maybe at the end it will decrease. If not, try taking more epochs (more than 100) to give it enough time for convergence.
Otherwise, maybe I didn’t understand your question.
|
st30378
|
Thank you, I will try to run 10K epochs today to see if it converges.
Yes, my question was why the model is not converging (I expect to see r^2 in about the 0.5-0.6 range, as I got that result from Random Forest).
|
st30379
|
I think a classic test to try here is to decrease the amount of data in the dataset and verify that your model can overfit it.
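A possible sketch of that test, training on a tiny fixed subset via torch.utils.data.Subset (dataset_train is the name from the code above; the subset size is arbitrary):
from torch.utils.data import Subset, DataLoader

tiny_train = Subset(dataset_train, range(16))  # just 16 samples
tiny_loader = DataLoader(tiny_train, batch_size=16, shuffle=True)
# run the same training loop on tiny_loader; if the model/optimization is wired up
# correctly, the loss should drop towards 0 and r2 towards 1 on these few samples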
|
st30380
|
Thank you for the idea!
But the result is the same; I just ran it on 10K data points.
Not converging. Something is wrong with its learning ability…
|
st30381
|
Right, what happens when you reduce this to an extreme, say just a few or even a single data point?
|
st30382
|
this is what i got:
5 data points:
Epoch: 1/100
training r2: 0.13436292977865993, mean loss: 0.33448025584220886
…
Epoch: 100/100
training r2: 0.1343612927439954, mean loss: 0.33448025584220886
2 data points:
Epoch: 1/100
training r2: 1.0, mean loss: 0.6390723586082458
Epoch: 100/100
training r2: 1.0, mean loss: 0.6390724778175354
1 data point:
ValueError: x and y must have length at least 2.
|
st30383
|
Interesting, I’ve tried your model code with random training data and it seems to train fine. However, one difference is that I removed the loss.requires_grad = True line as that causes
Traceback (most recent call last):
File "tempsequence.py", line 77, in <module>
loss.requires_grad = True
RuntimeError: you can only change requires_grad flags of leaf variables.
in my environment.
Output:
Epoch: 1/100
training r2: 0.009594063912129953, mean loss: 5.682353660464287
Epoch: 2/100
training r2: 0.16930472195690563, mean loss: 26.084057807922363
Epoch: 3/100
training r2: 0.2011665253106225, mean loss: 11.448175303637981
Epoch: 4/100
training r2: 0.09114844879538331, mean loss: 2.279003791511059
Epoch: 5/100
training r2: 0.3172596296459851, mean loss: 0.7606077138334513
Epoch: 6/100
training r2: 0.4809859426056538, mean loss: 0.5154492985457182
Epoch: 7/100
training r2: 0.5868341407534547, mean loss: 0.4187624929472804
Epoch: 8/100
training r2: 0.649767417816389, mean loss: 0.34969478007405996
Epoch: 9/100
training r2: 0.7071613647219848, mean loss: 0.29750150814652443
Epoch: 10/100
training r2: 0.7642161760963712, mean loss: 0.2450719503685832
Epoch: 11/100
training r2: 0.8191470208232381, mean loss: 0.18541160179302096
Epoch: 12/100
training r2: 0.866612613881297, mean loss: 0.13777912221848965
Epoch: 13/100
training r2: 0.9005741764866068, mean loss: 0.09871353628113866
Epoch: 14/100
training r2: 0.9297122659354237, mean loss: 0.07004990486893803
Epoch: 15/100
training r2: 0.9539838535204129, mean loss: 0.047599957906641066
Epoch: 16/100
training r2: 0.9682029463044977, mean loss: 0.03334217533119954
Epoch: 17/100
training r2: 0.9779388192414571, mean loss: 0.022128620053990744
Epoch: 18/100
training r2: 0.9855749007781102, mean loss: 0.014580676259356551
Epoch: 19/100
training r2: 0.9902337714874327, mean loss: 0.011297925255348673
Epoch: 20/100
training r2: 0.9925735608177005, mean loss: 0.007077659640344791
Epoch: 21/100
training r2: 0.9948008373417658, mean loss: 0.004619108618499013
Epoch: 22/100
training r2: 0.9967165985752503, mean loss: 0.003157338052915293
Epoch: 23/100
training r2: 0.9978223330030351, mean loss: 0.002256347268485115
Epoch: 24/100
training r2: 0.9982408967003787, mean loss: 0.001730209368361102
Epoch: 25/100
training r2: 0.9987114706485303, mean loss: 0.0013457234599627554
Epoch: 26/100
training r2: 0.9987114128846947, mean loss: 0.001346579964774719
Epoch: 27/100
training r2: 0.9985214830016457, mean loss: 0.0015366310344688827
Epoch: 28/100
training r2: 0.9979733118207817, mean loss: 0.0020749795767187607
Epoch: 29/100
training r2: 0.9976363805488251, mean loss: 0.0022758882041671313
Epoch: 30/100
training r2: 0.9969927904895611, mean loss: 0.0030411550051212544
Epoch: 31/100
training r2: 0.995648326719438, mean loss: 0.005057528265751898
Epoch: 32/100
training r2: 0.9922587923077028, mean loss: 0.008088470793154556
Epoch: 33/100
training r2: 0.9887959956127246, mean loss: 0.012593698847922496
Epoch: 34/100
training r2: 0.9831638414540375, mean loss: 0.018604541895911098
Epoch: 35/100
training r2: 0.9737390985477979, mean loss: 0.027635501814074814
Epoch: 36/100
training r2: 0.9641899244039341, mean loss: 0.038666998385451734
Epoch: 37/100
training r2: 0.951262654867677, mean loss: 0.052705009758938104
Epoch: 38/100
training r2: 0.9371287246394816, mean loss: 0.06455437233671546
Epoch: 39/100
training r2: 0.9568622813468258, mean loss: 0.04678037913981825
Epoch: 40/100
training r2: 0.9684106859661721, mean loss: 0.03428023273590952
Epoch: 41/100
training r2: 0.9769530552075255, mean loss: 0.024533933261409402
Epoch: 42/100
training r2: 0.986115178609799, mean loss: 0.01612667564768344
Epoch: 43/100
training r2: 0.9883539015220295, mean loss: 0.012954877864103764
Epoch: 44/100
training r2: 0.9924869826615559, mean loss: 0.0088661702175159
Epoch: 45/100
training r2: 0.9942897541431738, mean loss: 0.006273552269703941
Epoch: 46/100
training r2: 0.9968140855136599, mean loss: 0.0035802944257739
Epoch: 47/100
training r2: 0.997001327959991, mean loss: 0.003321810443594586
Epoch: 48/100
training r2: 0.9975052142543458, mean loss: 0.0028207608411321416
Epoch: 49/100
training r2: 0.9981577880948567, mean loss: 0.001989818636502605
Epoch: 50/100
training r2: 0.998546231725961, mean loss: 0.0016351598242181353
Epoch: 51/100
training r2: 0.9986027636978555, mean loss: 0.0015020208647911204
Epoch: 52/100
training r2: 0.9987384508660706, mean loss: 0.0014546711390721612
Epoch: 53/100
training r2: 0.9986857161310916, mean loss: 0.0015857035414228449
Epoch: 54/100
training r2: 0.9987154541868536, mean loss: 0.001486757697421126
Epoch: 55/100
training r2: 0.9986375054512535, mean loss: 0.0015756650318508036
Epoch: 56/100
training r2: 0.9983302812352135, mean loss: 0.001992172579775797
Epoch: 57/100
training r2: 0.9978964350882717, mean loss: 0.0023003785172477365
Epoch: 58/100
training r2: 0.9980009710848087, mean loss: 0.0022060492519813124
Epoch: 59/100
training r2: 0.9975635621751086, mean loss: 0.0026615302267600782
Epoch: 60/100
training r2: 0.9975992688318768, mean loss: 0.002975316012452822
Epoch: 61/100
training r2: 0.9963099659802808, mean loss: 0.004268930060788989
Epoch: 62/100
training r2: 0.9960123018852073, mean loss: 0.004779360577231273
Epoch: 63/100
training r2: 0.9953000198690384, mean loss: 0.005165318536455743
Epoch: 64/100
training r2: 0.9941095220427116, mean loss: 0.006454882866819389
Epoch: 65/100
training r2: 0.9930570804636689, mean loss: 0.007594891852932051
Epoch: 66/100
training r2: 0.9928955292326065, mean loss: 0.008498512339428999
Epoch: 67/100
training r2: 0.9911527403046312, mean loss: 0.010306075535481796
Epoch: 68/100
training r2: 0.9931718973224487, mean loss: 0.008304626011522487
Epoch: 69/100
training r2: 0.9933063535071627, mean loss: 0.00726965151989134
Epoch: 70/100
training r2: 0.993171505320739, mean loss: 0.008076088663074188
Epoch: 71/100
training r2: 0.994059443074772, mean loss: 0.0072878144710557535
Epoch: 72/100
training r2: 0.9931785372577545, mean loss: 0.007667907964787446
Epoch: 73/100
training r2: 0.9930847791765954, mean loss: 0.006984286243095994
Epoch: 74/100
training r2: 0.9955165209328762, mean loss: 0.004958275567332748
Epoch: 75/100
training r2: 0.9954397187994897, mean loss: 0.005190604671952315
Epoch: 76/100
training r2: 0.9958678910723257, mean loss: 0.005277530166495126
Epoch: 77/100
training r2: 0.9953963619106224, mean loss: 0.005750581309257541
Epoch: 78/100
training r2: 0.9963880844229783, mean loss: 0.004177943723334465
Epoch: 79/100
training r2: 0.9970984115343485, mean loss: 0.0033548544597579166
Epoch: 80/100
training r2: 0.996957118488398, mean loss: 0.0037890574349148665
Epoch: 81/100
training r2: 0.9953908147342319, mean loss: 0.004737168623250909
Epoch: 82/100
training r2: 0.9961195713550257, mean loss: 0.004688302433351055
Epoch: 83/100
training r2: 0.9964537987265749, mean loss: 0.004451523665920831
Epoch: 84/100
training r2: 0.9965007195822354, mean loss: 0.004249753299518488
Epoch: 85/100
training r2: 0.9952766414896639, mean loss: 0.005342122851288877
Epoch: 86/100
training r2: 0.9965416337287639, mean loss: 0.004103144659893587
Epoch: 87/100
training r2: 0.9954450219242742, mean loss: 0.0051249204989289865
Epoch: 88/100
training r2: 0.9967367979858002, mean loss: 0.0037255308488965966
Epoch: 89/100
training r2: 0.9961328934047874, mean loss: 0.0044315601444395725
Epoch: 90/100
training r2: 0.996738920442225, mean loss: 0.003717487008543685
Epoch: 91/100
training r2: 0.9960501740763243, mean loss: 0.004249425859597977
Epoch: 92/100
training r2: 0.9949915325389183, mean loss: 0.005676474203937687
Epoch: 93/100
training r2: 0.9944822243730989, mean loss: 0.006022317189490423
Epoch: 94/100
training r2: 0.9944785713113447, mean loss: 0.0061915624246466905
Epoch: 95/100
training r2: 0.9902922907694482, mean loss: 0.009277411510993261
Epoch: 96/100
training r2: 0.9932751325981961, mean loss: 0.008374881159397773
Epoch: 97/100
training r2: 0.9907874390780724, mean loss: 0.01135821822390426
Epoch: 98/100
training r2: 0.9889161612958226, mean loss: 0.013372064306167886
Epoch: 99/100
training r2: 0.982071001035948, mean loss: 0.0186732045840472
Epoch: 100/100
training r2: 0.984889725503154, mean loss: 0.01782958245894406
Code:
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F  ## relu, tahn
import torch.utils.data as DataLoader  # helps create batches to train on
from scipy.stats import pearsonr
import numpy as np
import torch.utils.data as data_utils

torch.set_printoptions(precision=10)

# Hyperparameters
learning_rate = 0.001
batch_size = 64
num_epochs = 100

n = 1000
data_train = torch.randn(n, 1024, 43)
data_test = torch.randn(n, 1024, 43)
targets_train = torch.randn(n, 1)
device = 'cuda'

data_train1 = torch.Tensor(data_train)
targets_train1 = torch.Tensor(targets_train)
dataset_train = data_utils.TensorDataset(data_train1, targets_train1)
train_loader = DataLoader.DataLoader(dataset=dataset_train, batch_size=batch_size, shuffle=True)

class NN(nn.Module):
    def __init__(self):  # input_size = 43x1024
        super(NN, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(1024, 512, kernel_size=4),  # I tried different in and out here
            nn.ELU(),
            nn.BatchNorm1d(512),
            nn.Flatten(),
            nn.Linear(512*40, 512),
            nn.ReLU(),
            nn.Linear(512, 1)
        )

    def forward(self, x):
        return self.layers(x)

torch.manual_seed(100)

# Initialize network
model = NN().to(device)

# Loss and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)  # to check

# Training the model
metric_name = 'r2'
for epoch in range(num_epochs):
    score = []
    loss_all = []
    print(f"Epoch: {epoch+1}/{num_epochs}")
    model.train()
    for batch_idx, (data, targets) in enumerate(train_loader):
        data = data.to(device=device)
        targets = targets.to(device=device)
        optimizer.zero_grad()
        # forward
        predictions = model(data)
        loss = criterion(predictions, targets).to(device)
        # loss.requires_grad = True
        # backward
        loss_all.append(loss.item())
        loss.backward()
        # gradient descent or adam step
        optimizer.step()
        # computing r squared
        output = predictions.detach().cpu().numpy()
        target = targets.detach().cpu().numpy()
        output = np.squeeze(output)
        target = np.squeeze(target)
        score.append(pearsonr(target, output)[0]**2)
    total_score = sum(score)/len(score)
    print(f'training {metric_name}: {total_score}, mean loss: {sum(loss_all)/len(loss_all)}')
Can you verify whether the model can learn from random training data on your setup?
|
st30384
|
Oh my, I just found the line
torch.set_grad_enabled(False)
somewhere in my global settings, which actually caused the problem!
And yes, loss.requires_grad = True needs to be removed - good catch!
Thank you for your help! I appreciate it!
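For anyone hitting the same thing, a tiny sketch of how a global torch.set_grad_enabled(False) silently breaks training (my own minimal repro):
import torch

torch.set_grad_enabled(False)   # e.g. left over from an inference script
layer = torch.nn.Linear(3, 1)
loss = layer(torch.randn(2, 3)).sum()
print(loss.requires_grad)       # False -> loss.backward() would fail
torch.set_grad_enabled(True)    # restore the default before training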
|
st30385
|
class CasingDataset2(Dataset):
    def __init__(self, csv_file: str = "casing_model.csv"):
        """Construct an instance."""
        self.len = 10

    def __getitem__(
        self, index: int
    ) -> typing.Tuple[torch.Tensor, torch.Tensor]:
        """Get an item."""
        return torch.FloatTensor([1, 2, 3]), torch.LongTensor(index)

    def __len__(self) -> int:
        """Return data length."""
        return self.len
And I try run it with following arguments. It works
train_dataset2 = CasingDataset2()
train_loader2 = DataLoader(train_dataset2, batch_size=1, shuffle=True, num_workers=1, drop_last=False)
for idx, (data, target) in enumerate(train_loader2):
print(idx)
0
1
2
3
4
5
6
7
8
9
Then I try batch_size=2. It raises an error
train_dataset2 = CasingDataset2()
train_loader2 = DataLoader(train_dataset2, batch_size=2, shuffle=True, num_workers=1, drop_last=False)
for idx, (data, target) in enumerate(train_loader2):
print(idx)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-8986358bcacc> in <module>
----> 1 for idx, (data, target) in enumerate(train_loader2):
2 print(idx)
/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self)
515 if self._sampler_iter is None:
516 self._reset()
--> 517 data = self._next_data()
518 self._num_yielded += 1
519 if self._dataset_kind == _DatasetKind.Iterable and \
/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self)
1197 else:
1198 del self._task_info[idx]
-> 1199 return self._process_data(data)
1200
1201 def _try_put_index(self):
/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
1223 self._try_put_index()
1224 if isinstance(data, ExceptionWrapper):
-> 1225 data.reraise()
1226 return data
1227
/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/_utils.py in reraise(self)
427 # have message field
428 raise self.exc_type(message=msg)
--> 429 raise self.exc_type(msg)
430
431
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in default_collate
return [default_collate(samples) for samples in transposed]
File "/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 83, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/i/pyenv/versions/py-default/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [0] at entry 0 and [1] at entry 1
version: 1.8.1+cu102
Why can’t I use a different batch_size?
|
st30386
|
Solved by bsridatta in post #2
change torch.LongTensor(index) → torch.LongTensor([index])
the first creates a tensor of size index, the second creates a tensor with value index.
|
st30387
|
change torch.LongTensor(index) → torch.LongTensor([index])
the first creates a tensor of size index, the second creates a tensor with value index.
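A quick sketch of the difference (values chosen just for illustration):
import torch

index = 3
print(torch.LongTensor(index))    # uninitialized tensor of *size* 3 (arbitrary values)
print(torch.LongTensor([index]))  # tensor([3]), a tensor holding the *value* 3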
|
st30388
|
input1 = #sth
input2 = # sth
label1 = # sth
label2 = # sth
output1 = model(input1)
output2 = model(input2)
loss1 = criterion(output1, label1)
loss2 = criterion(output2, label2)
total = loss1+loss2
total_loss.backward()
optimizer.step()
I wonder will the backward() function consider the effect of input1?
|
st30389
|
Solved by bsridatta in post #2
Yes, it should (if total.backward()). Try to print and see if they are different?
Since the backward is on total i.e loss1+loss2, the computation graph would include both 1, 2 inputs.
You could also refer the GAN tutorial where something similar is done
|
st30390
|
Yes, it should (if total.backward()). Try to print and see if they are different?
Since the backward is on total i.e loss1+loss2, the computation graph would include both 1, 2 inputs.
You could also refer the GAN tutorial where something similar is done
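A minimal runnable sketch of that pattern (dummy model/criterion, and using total.backward() for the combined loss):
import torch

model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

input1, label1 = torch.randn(8, 4), torch.randn(8, 1)
input2, label2 = torch.randn(8, 4), torch.randn(8, 1)

optimizer.zero_grad()
loss1 = criterion(model(input1), label1)
loss2 = criterion(model(input2), label2)
total = loss1 + loss2
total.backward()   # gradients accumulate contributions from both inputs
optimizer.step()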
|
st30391
|
Hello everyone,
what is the term for an autoencoder that is trained not to reconstruct X, but to produce another output Y instead? I’ve heard the term somewhere, but can’t remember it.
For example, the autoencoder is trained on the MNIST dataset not to reconstruct the given digit as X_hat, but to create an image of another digit Y.
|
st30392
|
U-Net, conditional autoencoders? They are usually still called an autoencoder. For example, image colorization auto encoders.
|
st30393
|
Is there a function in dataset that is called on every epoch by the dataloader? I want to be able to reset some variables that i am using to maintain statistics.
|
st30394
|
Since DataLoader takes an instance of Dataset, I think it is not possible to programmatically manipulate the dataset from there. Recreating the Dataset should be the way to go.
To make it a bit more efficient, I would create an instance of the processed content that can be directly used by a new dataset that you create every epoch, and have your logic for manipulating the data in the __getitem__ method. This way there shouldn’t be much overhead.
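A possible sketch of that idea; the dataset class and its statistics below are hypothetical and only meant to show where the reset happens:
from torch.utils.data import Dataset, DataLoader

class StatsDataset(Dataset):
    """Hypothetical dataset that keeps per-epoch statistics."""
    def __init__(self, samples):
        self.samples = samples
        self.stats = {"seen": 0}  # per-epoch counters live here

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        self.stats["seen"] += 1
        return self.samples[idx]

samples = list(range(100))
for epoch in range(3):
    dataset = StatsDataset(samples)  # recreated every epoch, so the stats start fresh
    loader = DataLoader(dataset, batch_size=10)
    for batch in loader:
        pass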
|
st30395
|
I am trying to visualize the images in one batch of the data loader. I got this code snippet from Pytorch official site.
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    print(inp.shape)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    # plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
image, label = next(iter(train_loader))
print(image.shape)
# print(label)

# Make a grid from batch
out = torchvision.utils.make_grid(image)
imshow(out, title=[label[x] for x in label])
However, I am not able to understand inp.numpy().transpose((1,2,0)): what is transpose doing here? I know that it permutes the axes. However, the corresponding tensor shape is torch.Size([4, 3, 224, 224]). In that case (1,2,0) corresponds to 3,224,4. Then how is imshow correctly displaying the images?
Apart from that, the shape of the tensor image is 3,224,224, but when it is transformed to an ndarray, why does the shape change to (228, 906, 3)? Shouldn’t it become 224, 224, 3?
|
st30396
|
Hi Aleemsidra!
aleemsidra:
However, I am not able to understand inp.numpy().transpose((1,2,0)), what transpose is doing here.
…
However, the corresponding tensor shape is torch.Size([4, 3, 224, 224]).
You should get an error here. You need to pass four axes to numpy’s
transpose() to transpose a 4-d tensor.
Apart from that, the shape of the tensor image is 3,224,224. but when it is being transformed to ndarray why the shape is being changed to (228, 906, 3). Should it become 224, 224, 3.
I have no idea where your (228, 906, 3) is coming from. I get
(224, 224, 3), which is what you expect and want. (matplotlib’s
imshow() wants the three color channels as the last dimension,
rather than the first. That’s the purpose of the transpose().)
Here’s an illustration of these points:
>>> import torch
>>> torch.__version__
'1.7.1'
>>> inp = torch.randn (4, 3, 224, 224)
>>> inp.shape
torch.Size([4, 3, 224, 224])
>>> inp.numpy().transpose ((1, 2, 0))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: axes don't match array
>>> inp[0].shape
torch.Size([3, 224, 224])
>>> inp[0].numpy().transpose ((1, 2, 0)).shape
(224, 224, 3)
Best.
K. Frank
|
st30397
|
I realized that I am using the make_grid function, which changes the dimension from 4D to 3D. It also changes the dimensions from 224, 224, 3 to 228, 906, 3, which is still confusing for me: how does it pick this dimension?
|
st30398
|
The “grid” of images is created by flattening the images provided as a batch of image tensors (4D) into a single image tensor (3D).
The shapes are created by placing the images next to each other using the specified nrow and padding arguments as described in the docs 1.
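To make the shape arithmetic concrete, a small sketch (assuming a batch of four 3x224x224 images and make_grid's defaults nrow=8, padding=2; variable names are mine):
import torch, torchvision

batch = torch.randn(4, 3, 224, 224)
grid = torchvision.utils.make_grid(batch)  # nrow=8, padding=2 by default
print(grid.shape)
# torch.Size([3, 228, 906])
# height: 224 + 2*2 padding = 228
# width:  4*224 + 5*2 padding (four images side by side plus padding between/around) = 906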
|
st30399
|
I am trying to show the images along with the respective labels being loaded in the data loader batch. The labels are: 0, 1, 2, 3. However, the labels displayed on top of the images are different from the labels that I simply print using the print command. Shouldn’t both labels be the same?
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    plt.imshow(inp)
    if title is not None:
        plt.title(title)

# Get a batch of training data
image, label = next(iter(train_loader))
print(label)  # printing respective labels

# Make a grid from batch
out = torchvision.utils.make_grid(image)
imshow(out, title=[label[x] for x in label])
Below is the screenshot of printed labels and labels being displayed as titles on respective images.
[Screenshot: printed labels vs. labels displayed as image titles]
|
st30400
|
The different outputs are expected, since you are indexing label with itself:
label = torch.tensor([1, 3, 0, 1])
print(label)
> tensor([1, 3, 0, 1])
print([label[x] for x in label])
> [tensor(3), tensor(1), tensor(1), tensor(3)]
|
st30401
|
I did not get your point. Since the labels of the 4 images are tensor([1, 3, 0, 1]), how does indexing through label change the output? label[1] -> 1, label[2] -> 3, label[3] -> 0, label[4] -> 1. Shouldn’t this be the output? Can you please explain?
|
st30402
|
The list comprehension uses:
label = torch.tensor([1, 3, 0, 1])
# which equals to
label[0] = 1
label[1] = 3
label[2] = 0
label[3] = 1
# your list comprehension uses the label itself to index into it
[label[x] for x in label]
== [label[1], label[3], label[0], label[1]]
== [3, 1, 1, 3]
|
st30403
|
Hello everyone,
I’m trying to understand a CVAE implemented in a paper and it just does not make sense to me, that the prior for z in the ELBO depends on X. Isn’t the prior supposed to be an assumption before you have seen the data?
[image from the paper]
Unlike typical autoencoders, this one does not aim to reconstruct x with x_hat, but to produce a different output named y. So my hypothesis is that the idea is not to approximate p with q but actually the other way around, because during inference the labels won’t be available. So we will probably use the prior during inference. The KL term makes sure that p produces a similar z to q without knowing y, while q does know y. So the neural network of p captures the relationship between x and y and makes y obsolete as an input. If that makes any sense…
My assumption is
|
st30404
|
After installing pytorch1.0, I am trying convert a caffe2 model (in particular caffe2/python/models/seq2seq/translate.py) into ONNX.
I thought I would use the command line script convert-caffe2-to-onnx.
I get a failure saying "No module named 'onnx'" from line 13 of conversion.py.
I did not install ONNX explicitly.
|
st30405
|
If you need the whole ONNX library, you should install it via:
conda install -c conda-forge onnx
|
st30406
|
Hmm. Am I to understand when I install pytorch1.0 the onnx support is not complete enough to convert a caffe2 model to ONNX?
Also, I asked about this in the ONNX forum. They said this conda-forge version is not being maintained and asked to install from source.
Thanks @ptrblck for the advice.
Hmm. getting unsatisfiable conflict. ONNX requires python2.7 and I have python3.7 in the pytorch1.0 conda environment.
|
st30407
|
I didn’t know that the conda-forge version is not being maintained, as it’s recommended to install it using this command in the PyTorch ONNX docs 25.
Could you post the link to the thread in the ONNX forum?
I guess if you build ONNX from source it should also work with Python 3.7.
|
st30408
|
@houseroad Should the docs be changed to the pip install command?
It seems the conda-forge onnx version wasn’t updated in 7 months, but is still in the official ONNX install guide 12.
Let me know, what you think about this and I can fix it quickly.
|
st30409
|
The pip version seems to have problems too:
Assertion Error: type is ConstantFill
[image: error screenshot]
While searching for this type of error, as far as I understand, it is alluded to be fixed in the ‘facebook archived’ version.
github.com/onnx/tutorials
Issue: Caffe2 to ONNX tutorial assertion fails - re.match('GivenTensor.*Fill', op.type) 7
opened by MrGeva
on 2017-12-11
I followed the instructions on Caffe2OnnxExport.ipynb
on Ubuntu16.04, ran the following command line:
convert-caffe2-to-onnx /home/eran/Downloads/squeezenet_caffe2/predict_net.pb --caffe2-init-net /home/eran/Downloads/squeezenet_caffe2/exec_net.pb --value-info '{"data": [1, [1, 3, 224,...
|
st30410
|
Installing from source with the below command worked for me.
python setup.py develop
Notice the option develop instead of install
Here is the link where I figured it out.
github.com/onnx/onnx
Issue: Problem installing onnx from source 11
opened by elboyran
on 2018-08-17
After installing onnx from binaries and encountering problems (missing functions) when running the Python API notebooks and after an advice to...
|
st30411
|
Hi, I need to load images from different folders. For example, with batch_size=8 I need to load 8*3 images from 8 different folders (3 images from each folder), and all these images combined form one batch. How can I realize this?
I will be grateful for your help!
|
st30412
|
Solved by crcrpar in post #2
Hi
It would be enough to define your original torch.utils.data.DataSet subclass to handle 8 folders simultaneously. I write down one naive not complete example below.
Let the folder structure be
- train
- folder_1
- ...
- folder_8
In the above setting, you can list all the image file…
|
st30413
|
Hi
It would be enough to define your own torch.utils.data.Dataset subclass to handle 8 folders simultaneously. I write down one naive, incomplete example below.
Let the folder structure be
- train
- folder_1
- ...
- folder_8
In the above setting, you can list all the image files in each folder by os.listdir('train/folder_1').
Also you can override the torch.utils.data.Dataset class as below and pass your dataset instance to DataLoader, setting batch_size=3
import os
from torch.utils.data import Dataset

class ImageDataSet(Dataset):
    def __init__(self, root='train', image_loader=None, transform=None):
        self.root = root
        self.image_files = [os.listdir(os.path.join(self.root, 'folder_{}'.format(i))) for i in range(1, 9)]
        self.loader = image_loader
        self.transform = transform

    def __len__(self):
        # Here, we need to return the number of samples in this dataset.
        return sum([len(folder) for folder in self.image_files])

    def __getitem__(self, index):
        images = [self.loader(os.path.join(self.root, 'folder_{}'.format(i), self.image_files[i - 1][index]))
                  for i in range(1, 9)]
        if self.transform is not None:
            images = [self.transform(img) for img in images]
        return images
|
st30414
|
@Harry-675
I’m really sorry, it might not work (I’m not sure).
If the above doesn’t work, please try the below.
A more naive solution is preparing a Dataset and DataLoader for each folder. Then you loop over all the dataloaders as in Train simultaneously on two datasets, if you don’t care about the order of sampling in each folder.
So,
class ImageData(torch.utils.data.Dataset):
    def __init__(self, root='train/folder_1', loader=image_load_func, transform=None):
        self.root = root
        self.files = os.listdir(self.root)
        self.loader = loader
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        return self.transform(self.loader(os.path.join(self.root, self.files[index])))

loader_1 = DataLoader(ImageData('train/folder_1'), batch_size=3)
...
loader_8 = DataLoader(ImageData('train/folder_8'), batch_size=3)

for batch in zip(loader_1, ..., loader_8):
    batch = torch.cat(batch, dim=0)
|
st30415
|
If anyone is looking for a different method, torchvision has a utility to accomplish this task cleanly.
https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.ImageFolder 199
There’s a small caveat with this though, images in a particular batch are not guaranteed to come from different classes.
It assumes the images are arranged in the following manner:
root_dir/
    class_1/
        001.png
        002.png
        ...
    class_2/
        001.png
        002.png
        ...
    ...
from torchvision import datasets, transforms
from torch.utils import data
dataset = datasets.ImageFolder(root = root_dir,
transform = transforms.ToTensor())
loader = data.DataLoader(dataset, batch_size = 8, shuffle = True)
|
st30416
|
@vaasudev96 How would I use root for the custom dataset (ImageDataSet) defined by @crcrpar?
loader_1 = DataLoader(ImageData.ImageFolder(root='train/folder_1', batch_size=3))
|
st30417
|
Hey there!
Are there any tutorials on how to use Pyro to train a Neural Network for some toy datasets like moon or blobs data sets?
I haven’t found any so far.
Thanks!
|
st30418
|
I use a simple profiling code to profile my training process.
import cProfile, pstats
cProfile.run("main()", "{}.profile".format(__file__))
s = pstats.Stats("{}.profile".format(__file__))
s.strip_dirs()
s.sort_stats("time").print_stats(10)
and got something like this
[image: cProfile output]
When I did some reading on {method ‘acquire’ of ‘_thread.lock’ objects}, apparently this just shows what the parent process is calling and not the child process. So is it possible to do profiling on the data loader worker process?
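I haven't seen an official recipe for this, but one possible sketch is to start cProfile inside each worker via worker_init_fn and dump one profile file per worker (dataset stands in for your own dataset; note the dump only happens if the workers shut down cleanly, e.g. after the loader iterator is exhausted):
import atexit
import cProfile
from torch.utils.data import DataLoader

def profiling_worker_init(worker_id):
    # start a profiler in this worker process and dump it when the worker exits
    prof = cProfile.Profile()
    prof.enable()
    atexit.register(lambda: prof.dump_stats(f"worker_{worker_id}.profile"))

loader = DataLoader(dataset, num_workers=2, worker_init_fn=profiling_worker_init)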
|
st30419
|
Hello, did you find the solution to your question? I’m having the same problem about pytorch data loader worker process profiling like you.
|
st30420
|
I am trying to port code written in PyTorch 0.2 to PyTorch 1.7.
Somehow I managed to run the code in PyTorch 1.7, but the results are different from the original code.
I believe the loss computed in these two versions is different.
The link of the code is written in PyTorch 0.2:
github.com
cai-lw/KBGAN/blob/master/base_model.py 1
import torch
import torch.nn as nn
import torch.nn.functional as nnf
from config import config
from torch.autograd import Variable
from torch.optim import Adam
from metrics import mrr_mr_hitk
from data_utils import batch_by_size
import logging
class BaseModule(nn.Module):
def __init__(self):
super(BaseModule, self).__init__()
def score(self, src, rel, dst):
raise NotImplementedError
def dist(self, src, rel, dst):
raise NotImplementedError
This file has been truncated.
Can anyone suggest changes in gen_step() and dis_step() in the base_model.py file in order to transform it to 1.7?
Thanks
|
st30421
|
Hello guys
I’ve been wondering how it is possible to apply a super-resolution model to very large images (2000x2000).
I know and use PyTorch distributed parallel training, but was wondering if there is something similar for testing/inference.
The test code I’m trying to run is the following:
GitHub
Lornatang/ESRGAN-PyTorch 3
A simple implementation of esrgan, which uses the pytorch framework. - Lornatang/ESRGAN-PyTorch
If the image I try to upscale surpasses a certain resolution (600x600), it is not possible to run on it because of an out-of-memory issue.
In terms of graphics hardware, I work on a server with multiple NVIDIA Tesla GPUs on multiple nodes, so I can use as many GPUs as needed.
Thank you
|
st30422
|
Hi!
I want to parallelize a simple for loop computation that iterates over a list of pairs (stored as PyTorch tensor) to run over a GPU. The computation inside the loop doesn’t seem to be the bottleneck, time is consumed because of the huge input size.
What it does?
Just applies a function to each pair and updates its value. Then increments a matrix according to the received pair as the index.
All the iterations of the loop are independent of each other, hence can be parallelized, however, I’m not getting a way to do that in PyTorch so that efficiency can be improved with CUDA. Any kind of help would be appreciated (I’m open to vectorizing, multiprocessing, or change in input style).
Thank you
for i in range(0, batch_size):
    r_Pairs[i] = torch.floor(random.uniform(0.0, 1.0) * r_Pairs[i] * F) % F
    matrix[r_Pairs[i][0]][r_Pairs[i][1]] += 1
|
st30423
|
Line 2 is trivially convertible to vectorized form, just with torch RNG.
I don’t remember if line 3 is easily vectorizable - there is scatter_add_, but you may need to convert to a 1D view/indexes (i1*size1 + i2). Or do that scattering loop on the CPU; that will be faster.
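A possible vectorized sketch of both lines along those suggestions (assuming r_Pairs is a (batch_size, 2) tensor and matrix is FxF; this is my interpretation, not tested against the original loop):
# line 2: one uniform random number per row, using torch RNG instead of random.uniform
u = torch.rand(batch_size, 1)
r_Pairs = (torch.floor(u * r_Pairs * F) % F).long()

# line 3: scatter the counts; index_put_ with accumulate=True handles repeated pairs
matrix.index_put_((r_Pairs[:, 0], r_Pairs[:, 1]),
                  torch.ones(batch_size, dtype=matrix.dtype),
                  accumulate=True)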
|
st30424
|
I am getting the above error while executing the following function. My module runs very well for training and validation, but in the testing phase it gives me the above error. Any suggestions/solutions are welcome @ptrblck
def evaluate_test(self):
    # restore model args
    args = self.args
    # evaluation mode
    self.model.load_state_dict(torch.load(osp.join(self.args.save_path, 'max_acc.pth'))['params'])
    self.model.eval()
    record = np.zeros((2700, 2))  # loss and acc
    label = torch.arange(args.eval_way, dtype=torch.int16).repeat(args.eval_query)
    label = label.type(torch.LongTensor)
    if torch.cuda.is_available():
        label = label.cuda()
    print('best epoch {}, best val acc={:.4f} + {:.4f}'.format(
        self.trlog['max_acc_epoch'],
        self.trlog['max_acc'],
        self.trlog['max_acc_interval']))
    confusion_matrix = torch.zeros(args.eval_way, args.eval_way)
    with torch.no_grad():
        for i, batch in tqdm(enumerate(self.test_loader, 1)):
            if torch.cuda.is_available():
                data, real_label = [_.cuda() for _ in batch]
            else:
                data = batch[0]
            real_label = real_label.type(torch.LongTensor)[args.eval_way:]
            if torch.cuda.is_available():
                real_label = real_label.cuda()
            logits = self.model(data)
            loss = F.cross_entropy(logits, label)
            pred = torch.argmax(logits, dim=1)
            actual_pred = torch.where(pred.eq(label), real_label, real_label[label[pred]])
            # print('actual_label is:', real_label)
            # print('actual_pred is:', actual_pred)
            acc = count_acc(logits, label)
            record[i-1, 0] = loss.item()
            # print('record is:', record)
            record[i-1, 1] = acc
            real_label = batch[1]
            # print(real_label)
            for t, p in zip(real_label.view(-1), actual_pred.view(-1)):
                confusion_matrix[t.long(), p.long()] += 1
    cm = confusion_matrix.numpy()
    import pandas as pd
    cm_col = cm / cm.sum(axis=1)
    cm_row = cm / cm.sum(axis=0)
It is causing the error at this line in the above code:
record[i-1, 0] = loss.item()
|
st30425
|
Indexing is 0-based, so the maximum index of a length-2700 array is 2699. What happens if you do record = np.zeros(len(self.test_loader), 2)?
|
st30426
|
Thanks, but if I give 2699 it also gives me the same error, stating [Index 2699 is out of bounds for axis 0 with size 2699]. If I do record = np.zeros(len(self.test_loader), 2)) then it gives TypeError: len() takes exactly one argument (2 given) @eqy @ptrblck
|
st30427
|
Hello @eqy, I checked the type; it is <class 'torch.utils.data.dataloader.DataLoader'>.
|
st30428
|
[image: error screenshot]
Here is a snapshot of the error I am getting when I give record = np.zeros(len(self.test_loader, 2)) # loss and acc @eqy
|
st30429
|
I think this should be record = np.zeros(len(self.test_loader), 2) rather than record = np.zeros(len(self.test_loader, 2)).
|
st30430
|
[image: error screenshot]
Now I get this error with record = np.zeros(len(self.test_loader), 2)
|
st30431
|
It worked. Thanks. But when I was entering record = np.zeros((10000, 2)) it was working fine initially, and only later on did it give me the above error. I wonder why? What caused this error? In the above line, 10000 is the number of evaluation episodes.
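(A side note, not from the thread: np.zeros takes the shape as a single tuple, so the form that allocates a num_batches x 2 array is the one below; passing 2 as a second positional argument is interpreted as the dtype, which explains the earlier error.)
record = np.zeros((len(self.test_loader), 2))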
|
st30432
|
Hello All;
I already have a trained model that has several convolution blocks (conv + relu + pooling), which is saved and loaded.
Here it is:
[image: model summary]
Here’s what I’m trying to do:
I want to be able to reuse only a chunk of my model, say feeding input at Conv2d-5 and outputting at the last layer Linear-19, or feeding input at BatchNorm2d-11 and outputting at the last layer, still Linear-19.
The target is to be able to choose any intermediate layer, give it a tensor as input matching that layer’s input dimension, and get the output from the last layer.
I wanted to use nn.ModuleList() and append mymodel’s layers, but mymodel.state_dict().items() doesn’t print my pooling layers.
I’ve also tried mymodel._modules.keys(); it shows the pooling layer, but not in the order the layers are called in my neural network.
See:
[image: output of mymodel._modules.keys()]
Thank you very much for your help,
Habib
|
st30433
|
I haven’t tried this before, but I wonder if this simple approach would work:
add two members to your model definition self.start_layer, self.end_layer
build a mapping from layer indices to the layer member in your model e.g., self.layers = {1:self.conv1, 2:self.conv2,...}
edit the forward function to be something like
def forward(self, x):
    for i in range(self.start_layer, self.end_layer + 1):
        x = self.layers[i](x)
    return x
Then you just need to do something like model.start_layer = ? and model.end_layer=? to set up how you want to use your model each time.
|
st30434
|
Thank you Eqy for your reply.
Your solution would indeed have worked if I always had the code of the models.
But my purpose targets all saved models, which means I don’t have access to the code; I can only load and save, like we all do with the available pre-trained models (resnet18, vgg16, …).
My project is coding the TCAV technique, which requires me to feed input at any of the intermediate layers, for each of the received models.
I’ve found some code in a GitHub repo; they use the .keys() and .items() attributes of a pretrained Inception, but as said in my first post, .keys() and .children() don’t output the pooling layers, and they don’t give back the layers in the right order (see my first post):
class InceptionV3_cutted(torch.nn.Module):
    def __init__(self, inception_v3, bottleneck):
        super(InceptionV3_cutted, self).__init__()
        names = list(inception_v3._modules.keys())
        layers = list(inception_v3.children())
        self.layers = torch.nn.ModuleList()
        self.layers_names = []
        bottleneck_met = False
        for name, layer in zip(names, layers):
            if name == bottleneck:
                bottleneck_met = True
                continue  # because we already have the output of the bottleneck layer
            if not bottleneck_met:
                continue
            if name == 'AuxLogits':
                continue
            self.layers.append(layer)
            self.layers_names.append(name)
The repo => GitHub - myracheng/tcav: Quantitative Testing with Concept Activation Vectors in PyTorch 1
Thank you very much,
|
st30435
|
I’m confused by what you mean when you say you don’t have the code of the model, as all pretrained models in torchvision like the ones you named have code as well. Can you share how you’re loading and running the model?
|
st30436
|
Thank you Eqy for your reply;
Sorry, I wasn’t clear with my explanations. I’m loading my model with the following code (I have my code of course):
mymodel = MyModelClass()
mymodel.load_state_dict(torch.load(PATH))
mymodel.eval()
My target is to cut any loaded model without modifying the initial code, hence being able to loop over all the layers, respecting their order in the loaded network, and feed input at any intermediate layer chosen for the cut.
If you can just show me how to access all the layers of a loaded model, respecting their order in the network, I’ll be able to copy the weights, biases and all the parameters of the loaded model into the new cut model. Currently, the layers are not printed in order with .keys(), .named_parameters(), or .state_dict().items(), so I cannot access the layers in their implemented sequence.
Also, I’ve seen that the source code of torchsummary is able to access the layers in their implemented order. They’re using hooks. I’m trying to figure out how this works.
Tell me if I’m still not clear. (I’ve been using PyTorch for only 4 months, sorry for my poor level.)
Thank you very much
Habib
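For what it's worth, here is a rough sketch of the hook-based idea (the mechanism torchsummary relies on): register a forward hook on every leaf module of the loaded model so that one dummy forward pass records the modules in the order they are actually executed. This is my own sketch, not code from the TCAV repo:
import torch

def execution_order(model, example_input):
    # Run one forward pass and return the leaf modules in call order.
    order = []
    handles = []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            handles.append(module.register_forward_hook(
                lambda m, inp, out, name=name: order.append((name, m))))
    with torch.no_grad():
        model(example_input)
    for h in handles:
        h.remove()
    return order

# e.g. layers = execution_order(mymodel, torch.randn(1, 3, 224, 224))
# then build the cut model from layers[k:] for the chosen entry point
# caveat: operations done with torch.nn.functional inside forward() are not modules
# and will not show up here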
|
st30437
|
Hi there,
I am planning on using a Batch Norm layer between two other layers of my model. After training the model, I am not sure how to 'un'normalize the output when I want to get the predicted values.
If it were simply normalization of some values, I would just calculate and save the variance and mean, then use these values for 'un'normalizing the output of the model when I want to get the predictions.
In my view, in the case of batch normalization, since the normalization is done based on the batches and these batches are different at every epoch, the mean and variance values will be different at each epoch, so I don't see a way to save those values for unnormalizing the predicted values…
|
st30438
|
Solved by ptrblck in post #4
The model.train() and model.eval() calls will switch between the training mode (normalizing the input batch with its own stats and updating the running stats) and evaluation mode (normalizing the input batch/sample with the running stats). You don’t need to apply the running stats manually.
|
st30439
|
To get the predictions of the model (assuming you are using a validation or test dataset), you would not “unnormalize” the data, but would use the running stats instead by calling model.eval().
During training the internal stats (stored in .running_mean and .running_var in batchnorm layers) will be updated with a specified momentum and the current batch stats.
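A small toy sketch illustrating those running buffers (my own example):
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)
print(bn.running_mean, bn.running_var)  # start at 0 and 1

bn.train()
for _ in range(10):
    bn(torch.randn(64, 3) * 2 + 5)      # training batches update the buffers
print(bn.running_mean, bn.running_var)  # moved towards ~5 and ~4

bn.eval()
out = bn(torch.randn(1, 3) * 2 + 5)     # eval mode normalizes with the running stats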
|
st30440
|
@ptrblck Thank you for your reply.
I had one more question regarding the topic following your answer:
Let’s suppose that my forward method looks like this:
def forward(self, x, hs):
    out, hs = self.lstm(x, hs)
    out = self.batchnormalization(out)
    out = self.fullyconnected(out)
    return out, hs
Then in my training and validation code:
for e in range(epochs):
    training_loss = 0
    validation_loss = 0
    model.train()
    for x, y in training_data:
        train_h = model.init_hidden(batch_size)
        train_h = tuple([h.data for h in train_h])
        out, train_h = model(x, train_h)
        loss = criterion(out, y)        # <-- this line
        training_loss += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    training_losses.append(training_loss / len(training_data))

    # validation
    model.eval()
    with torch.no_grad():
        for x, y in validation_data:
            val_h = model.init_hidden(batch_size)
            val_h = tuple([h.data for h in val_h])
            out, val_h = model(x, val_h)
            loss = criterion(out, y)    # <-- this line
            validation_loss += loss.item()
    validation_losses.append(validation_loss / len(validation_data))
Then in my training and validation code, how should I use the running stats that you mentioned? As far as I understand, I can't just do loss = criterion(out, y), right? (Because 'out' was calculated using the BatchNorm.)
|
st30441
|
The model.train() and model.eval() calls will switch between the training mode (normalizing the input batch with its own stats and updating the running stats) and evaluation mode (normalizing the input batch/sample with the running stats). You don’t need to apply the running stats manually.
|
st30442
|
Hi! I am working on rock/scissors/paper classification in PyTorch.
For some reason, my validation accuracy is not increasing. The maximum validation accuracy I get is 38%…
Can anyone help me increase the validation accuracy?
My dataset info: the training dataset is the generalized 300x300 rock/scissors/paper dataset from PyTorch.
My validation dataset is data taken by some students (converted to 300x300).
Let me show my code first.
The data is converted to png first:
import os
from PIL import Image
import cv2

c = 0
lst_fist01 = []
for i in os.listdir('./Dataset3/validation/paper'):
    print(i)
    if i.endswith('.jpg'):
        img = Image.open(r'./Dataset3/validation/paper/{}'.format(i))
        img = img.resize((300, 300), Image.LANCZOS)
        img.save(r'./Dataset3/validation/paper/{}.png'.format(c))
        c += 1
        print(img)
        lst_fist01.append(img)
    else:
        print('it is png')
Custom data
import os
import numpy as np
import torch
import torch.nn as nn
import natsort
from skimage.transform import resize
from PIL import Image
from skimage.color import rgb2gray
import imageio

# Data Loader
class CustomDataset(torch.utils.data.Dataset):
    def __init__(self, data_dir, transform=None):  # fdir, pdir, sdir, transform=None
        # 0: Paper, 1: Rock, 2: Scissors
        self.paper_dir = os.path.join(data_dir, 'paper/')
        self.rock_dir = os.path.join(data_dir, 'rock/')
        self.scissors_dir = os.path.join(data_dir, 'scissors/')
        self.transform = transform

        lst_paper = os.listdir(self.paper_dir)
        lst_rock = os.listdir(self.rock_dir)
        lst_scissors = os.listdir(self.scissors_dir)
        lst_paper = [f for f in lst_paper if f.endswith('.png')]
        lst_rock = [f for f in lst_rock if f.endswith('.png')]
        lst_scissors = [f for f in lst_scissors if f.endswith('.png')]

        self.lst_dir = [self.paper_dir] * len(lst_paper) + [self.rock_dir] * len(lst_rock) + [self.scissors_dir] * len(lst_scissors)
        self.lst_prs = natsort.natsorted(lst_paper) + natsort.natsorted(lst_rock) + natsort.natsorted(lst_scissors)

    def __len__(self):
        return len(self.lst_prs)

    def __getitem__(self, index):
        self.img_dir = self.lst_dir[index]
        self.img_name = self.lst_prs[index]
        return [self.img_dir, self.img_name]

    def custom_collate_fn(self, data):
        inputImages = []
        outputVectors = []
        for sample in data:
            prs_img = imageio.imread(os.path.join(sample[0] + sample[1]))
            gray_img = rgb2gray(prs_img)
            if gray_img.ndim == 2:
                gray_img = gray_img[:, :, np.newaxis]
            inputImages.append(gray_img.reshape(300, 300, 1))
            # inputImages.append(resize(gray_img, (89, 100, 1)))

            # 0: Paper, 1: Rock, 2: Scissors
            dir_split = sample[0].split('/')
            if dir_split[-2] == 'paper':
                outputVectors.append(np.array(0))
            elif dir_split[-2] == 'rock':
                outputVectors.append(np.array(1))
            elif dir_split[-2] == 'scissors':
                outputVectors.append(np.array(2))

        data = {'input': inputImages, 'label': outputVectors}
        if self.transform:
            data = self.transform(data)
        return data

class ToTensor(object):
    def __call__(self, data):
        label, input = data['label'], data['input']
        input_tensor = torch.empty(len(input), 300, 300)
        label_tensor = torch.empty(len(input))
        for i in range(len(input)):
            input[i] = input[i].transpose((2, 0, 1)).astype(np.float32)
            input_tensor[i] = torch.from_numpy(input[i])
            label_tensor[i] = torch.from_numpy(label[i])
        input_tensor = torch.unsqueeze(input_tensor, 1)
        data = {'label': label_tensor.long(), 'input': input_tensor}
        return data
Training code
import os
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms, datasets
from copy import copy
import warnings
warnings.filterwarnings('ignore')
num_train = len(os.listdir("./Dataset3/train/paper")) + len(os.listdir("./Dataset3/train/rock")) + len(os.listdir("./Dataset3/train/scissors"))
num_val = len(os.listdir("./Dataset3/validation/paper")) + len(os.listdir("./Dataset3/validation/rock")) + len(os.listdir("./Dataset3/validation/scissors"))
transform = transforms.Compose([ToTensor()])
dataset_train = CustomDataset("./Dataset3/train/", transform=transform)
loader_train = DataLoader(dataset_train, batch_size = 64, \
shuffle=True, collate_fn=dataset_train.custom_collate_fn, num_workers=1)
dataset_val = CustomDataset("./Dataset3/validation/", transform=transform)
loader_val = DataLoader(dataset_val, batch_size=64, \
shuffle=True, collate_fn=dataset_val.custom_collate_fn, num_workers=1)
# print(len(dataset_train))
# print(len(dataset_val))
# print(len(loader_train))
# print(len(loader_val), loader_val, type(loader_val))
# print(type(dataset_val.custom_collate_fn), dataset_val.custom_collate_fn)
# Define Model
model = nn.Sequential(
    nn.Conv2d(1, 32, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(32, 64, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(64, 128, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(128, 256, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(256, 256, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(256, 128, 2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(128, 64, 2, padding=0),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=1),
    torch.nn.Flatten(),
    nn.Linear(1024, 64, bias=True),
    nn.Dropout(0.85),
    nn.Linear(64, 3, bias=True),
)
soft = nn.Softmax(dim=1)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("Current device:", device)
model.to(device)
# Define the loss
criterion = nn.CrossEntropyLoss().to(device)
# Define the optimizer
optim = torch.optim.Adam(model.parameters(), lr = 0.001)
best_epoch = 0
accuracy_save = np.array(0)
epochs = 20
for epoch in range(epochs):
    model.train()
    train_loss = []
    correct_train = 0
    correct_val = 0
    correct_batch = 0
    for batch, data in enumerate(loader_train, 1):
        label = data['label'].to(device)
        inputs = data['input'].to(device)
        output = model(inputs)
        label_pred = soft(output).argmax(1)
        optim.zero_grad()
        loss = criterion(output, label)
        loss.backward()
        optim.step()
        correct_train += (label == label_pred).float().sum()
        train_loss += [loss.item()]
    accuracy_train = correct_train / num_train

    correct_val = 0
    accuracy_tmp = np.array(0)
    with torch.no_grad():
        model.eval()
        val_loss = []
        for batch, data in enumerate(loader_val, 1):
            label_val = data['label'].to(device)
            input_val = data['input'].to(device)
            output_val = model(input_val)
            label_val_pred = soft(output_val).argmax(1)
            correct_val += (label_val == label_val_pred).float().sum()
            loss = criterion(output_val, label_val)
            val_loss += [loss.item()]
        accuracy_val = correct_val / num_val

    # Save the best model wrt val accuracy
    accuracy_tmp = accuracy_val.cpu().numpy()
    if accuracy_save < accuracy_tmp:
        best_epoch = epoch
        accuracy_save = accuracy_tmp.copy()
        torch.save(model.state_dict(), 'param.data')
        print(".......model updated (epoch = ", epoch+1, ")")
    print("epoch: %04d / %04d | train loss: %.5f | train accuracy: %.4f | validation loss: %.5f | validation accuracy: %.4f" %
          (epoch+1, epochs, np.mean(train_loss), accuracy_train, np.mean(val_loss), accuracy_val))

print("Model with the best validation accuracy is saved.")
print("Best epoch: ", best_epoch)
print("Best validation accuracy: ", accuracy_save)
print("Done.")
For this, my result is
Current device: cuda
…model updated (epoch = 1 )
epoch: 0001 / 0020 | train loss: 1.10044 | train accuracy: 0.3313 | validation loss: 1.09863 | validation accuracy: 0.3311
…model updated (epoch = 2 )
epoch: 0002 / 0020 | train loss: 1.09831 | train accuracy: 0.3433 | validation loss: 1.09812 | validation accuracy: 0.3412
…model updated (epoch = 3 )
epoch: 0003 / 0020 | train loss: 0.70037 | train accuracy: 0.6794 | validation loss: 1.72336 | validation accuracy: 0.3682
…model updated (epoch = 4 )
epoch: 0004 / 0020 | train loss: 0.13533 | train accuracy: 0.9544 | validation loss: 2.76044 | validation accuracy: 0.3868
epoch: 0005 / 0020 | train loss: 0.06367 | train accuracy: 0.9790 | validation loss: 5.63855 | validation accuracy: 0.3514
epoch: 0006 / 0020 | train loss: 0.01963 | train accuracy: 0.9933 | validation loss: 6.07453 | validation accuracy: 0.3784
epoch: 0007 / 0020 | train loss: 0.00843 | train accuracy: 0.9972 | validation loss: 5.76545 | validation accuracy: 0.3733
epoch: 0008 / 0020 | train loss: 0.00917 | train accuracy: 0.9968 | validation loss: 8.41605 | validation accuracy: 0.3547
epoch: 0009 / 0020 | train loss: 0.01139 | train accuracy: 0.9960 | validation loss: 11.67817 | validation accuracy: 0.3395
epoch: 0010 / 0020 | train loss: 0.07132 | train accuracy: 0.9774 | validation loss: 3.93959 | validation accuracy: 0.3598
…model updated (epoch = 11 )
epoch: 0011 / 0020 | train loss: 0.03753 | train accuracy: 0.9857 | validation loss: 3.51701 | validation accuracy: 0.3885
epoch: 0012 / 0020 | train loss: 0.01213 | train accuracy: 0.9948 | validation loss: 3.99175 | validation accuracy: 0.3818
epoch: 0013 / 0020 | train loss: 0.00558 | train accuracy: 0.9984 | validation loss: 4.74905 | validation accuracy: 0.3784
…model updated (epoch = 14 )
epoch: 0014 / 0020 | train loss: 0.00826 | train accuracy: 0.9964 | validation loss: 4.72692 | validation accuracy: 0.3936
epoch: 0015 / 0020 | train loss: 0.00351 | train accuracy: 0.9996 | validation loss: 5.66631 | validation accuracy: 0.3716
Can anyone give me some tips to increase validation accuracy?
Thank you a lot for reading.
|
st30443
|
Hi Dong!
I haven’t looked at your code, but here are a couple of comments:
PytorchBeginners:
MY dataset info - training dataset - generalized 300x300 rock/scissors/papers dataset from pytorch
You don’t say how large your training dataset is. If it’s rather small
it will be easy to overfit, and hard to train your network well enough
to perform well on your validation dataset.
If your training dataset is small, the best thing would be to train
on a larger dataset. If you can’t get a larger training dataset, you
might be able to get better results by augmenting your training
dataset by including things like flipped and shifted and rescaled
versions of your original images.
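For example, a rough sketch of such train-time augmentation with torchvision transforms (this assumes you load PIL images; your dict-based custom transform would need a small wrapper around it):
from torchvision import transforms
# A hypothetical augmentation pipeline for the training set only:
# flips, small shifts/rotations, and rescaling of the original images.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.ToTensor(),
])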
My validation dataset is the data taken by some students
If your validation dataset is rather different in character from your
training dataset – and here it might be because it has been collected
differently from your training dataset – even a well-trained network
might perform poorly on your validation dataset.
In an ideal world, you would train your network on a training dataset
and then have it perform inference well on rather different images. But
this isn’t always realistic.
Consider training a cat-dog classifier only on images of short-haired
cats and long-haired dogs. If you then try to validate it on a dataset
containing only long-haired cats and short-haired dogs, you shouldn’t
be surprised if it does poorly, maybe even systematically mistaking
cats for dogs and vice versa.
I would suggest that you randomly split your original training dataset
in to a smaller training dataset and a separate validation dataset.
You could also combine your original training and validation datasets
into one large dataset, and then randomly split that into a training
and a validation dataset.
Both of these approaches will ensure that your training and validation
datasets have the same character, avoiding the “short-haired cat,
long-haired dog” issue.
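For example, a minimal sketch of the second approach (combining and re-splitting):
import torch
from torch.utils.data import ConcatDataset, random_split
# Combine both original datasets, then re-split 80/20 at random
# (dataset_train / dataset_val refer to the datasets from your script).
full_dataset = ConcatDataset([dataset_train, dataset_val])
n_train = int(0.8 * len(full_dataset))
n_val = len(full_dataset) - n_train
new_train, new_val = random_split(full_dataset, [n_train, n_val])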
Best.
K. Frank
|
st30444
|
Thank you so much for your kind reply
My train data set is about 2,500 images (840 for each rock, scissors, papers)
My validation data is about 600 images (200 for each rock, scissors, papers)
So, do I need to augment only the “train data”, or both datasets? I am sorry to ask such a basic question, but I am a beginner at learning PyTorch.
Thanks!
|
st30445
|
my profile code is as follows
net = torchvision.models.resnext50_32x4d(pretrained=True).cuda()
net.train()
with torch.autograd.profiler.profile(use_cuda=True) as prof:
predict = net(input)
print(prof)
code and results can be found at: https://drive.google.com/open?id=1vyTkqBpQwUvCSnGjjJPdzIzLTEfJF8Sr 5
here are the snippet results for pytorch 1.4 and 1.6 :
...
is_leaf 0.00% 1.850us 0.00% 1.850us 1.850us 0.00% 1.024us 1.024us 1 []
is_leaf 0.00% 2.119us 0.00% 2.119us 2.119us 0.00% 2.048us 2.048us 1 []
max_pool2d_with_indices 0.05% 479.919us 0.05% 479.919us 479.919us 0.19% 2.061ms 2.061ms 1 []
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- -----------------------------------
Self CPU time total: 1.034s
CUDA time total: 1.097s
accuracy 0.687500
pytorch
torch.__version__ = 1.6.0.dev20200516
torch.version.cuda = 10.2
torch.backends.cudnn.version() = 7605
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = False
torch.cuda.device_count() = 1
torch.cuda.get_device_properties() = _CudaDeviceProperties(name='TITAN X (Pascal)', major=6, minor=1, total_memory=12192MB, multi_processor_count=28)
torch.cuda.memory_allocated() = 0 GB
torch.cuda.memory_reserved() = 8 GB
cudnn_convolution 0.12% 533.067us 0.12% 533.067us 533.067us 0.63% 4.750ms 4.750ms 1 []
add 0.00% 21.091us 0.00% 21.091us 21.091us 0.00% 9.215us 9.215us 1 []
batch_norm 0.07% 324.928us 0.07% 324.928us 324.928us 0.23% 1.723ms 1.723ms 1 []
_batch_norm_impl_index 0.07% 318.668us 0.07% 318.668us 318.668us 0.23% 1.721ms 1.721ms 1 []
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- -----------------------------------
Self CPU time total: 457.948ms
CUDA time total: 750.666ms
accuracy 0.687500
pytorch
torch.__version__ = 1.4.0
torch.version.cuda = 10.1
torch.backends.cudnn.version() = 7603
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = False
torch.cuda.device_count() = 1
torch.cuda.get_device_properties() = _CudaDeviceProperties(name='TITAN X (Pascal)', major=6, minor=1, total_memory=12192MB, multi_processor_count=28)
torch.cuda.memory_allocated() = 0 GB
torch.cuda.memory_reserved() = 8 GB
I also have similar observations for another machine using GTX 1080Ti.
Why is there a huge difference in the results?
|
st30446
|
Could you please post the slower layers here?
To isolate the issue further, I would also recommend to use the same CUDA and cudnn versions for the different PyTorch versions. Otherwise you are facing a lot of variables, which might affect the performance.
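It might also help to add a warm-up iteration and an explicit synchronization, so that one-time initialization is not part of the measured time — a minimal sketch along these lines, reusing net and input from your snippet:
import torch
# Warm-up passes so one-time CUDA/cudnn initialization is not part of the timing.
with torch.no_grad():
    for _ in range(3):
        net(input)
torch.cuda.synchronize()
with torch.autograd.profiler.profile(use_cuda=True) as prof:
    predict = net(input)
torch.cuda.synchronize()
print(prof.key_averages().table(sort_by="cuda_time_total"))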
|
st30447
|
Now I changed to the same CUDA and cudnn versions. The results are similar to before. The network is the standard torchvision resnet50 model. I also include the profiler chrome tracing.
here are the results:
using pytorch1.6
[profiler chrome tracing screenshot for PyTorch 1.6]
output_nr 0.00% 1.705us 0.00% 1.705us 1.705us 0.00% 2.048us 2.048us 1 []
is_leaf 0.00% 1.823us 0.00% 1.823us 1.823us 0.00% 1.023us 1.023us 1 []
is_leaf 0.00% 1.717us 0.00% 1.717us 1.717us 0.00% 1.024us 1.024us 1 []
max_pool2d_with_indices 0.05% 486.468us 0.05% 486.468us 486.468us 0.17% 1.756ms 1.756ms 1 []
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- -----------------------------------
Self CPU time total: 987.386ms
CUDA time total: 1.046s
pytorch
torch.__version__ = 1.6.0.dev20200516+cu101
torch.version.cuda = 10.1
torch.backends.cudnn.version() = 7603
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = False
torch.cuda.device_count() = 1
torch.cuda.get_device_properties() = _CudaDeviceProperties(name='TITAN X (Pascal)', major=6, minor=1, total_memory=12192MB, multi_processor_count=28)
torch.cuda.memory_allocated() = 0 GB
torch.cuda.memory_reserved() = 8 GB
using pytorch1.4
[profiler chrome tracing screenshot for PyTorch 1.4]
cudnn_convolution 0.16% 842.643us 0.16% 842.643us 842.643us 0.98% 8.491ms 8.491ms 1 []
add 0.00% 24.037us 0.00% 24.037us 24.037us 0.00% 9.215us 9.215us 1 []
batch_norm 0.06% 335.818us 0.06% 335.818us 335.818us 0.15% 1.270ms 1.270ms 1 []
_batch_norm_impl_index 0.06% 329.220us 0.06% 329.220us 329.220us 0.15% 1.268ms 1.268ms 1 []
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- -----------------------------------
Self CPU time total: 534.585ms
CUDA time total: 863.694ms
accuracy 0.687500
pytorch
torch.__version__ = 1.4.0
torch.version.cuda = 10.1
torch.backends.cudnn.version() = 7603
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = False
torch.cuda.device_count() = 1
torch.cuda.get_device_properties() = _CudaDeviceProperties(name='TITAN X (Pascal)', major=6, minor=1, total_memory=12192MB, multi_processor_count=28)
torch.cuda.memory_allocated() = 0 GB
torch.cuda.memory_reserved() = 8 GB
again, all code, results can be find at : https://drive.google.com/drive/u/0/folders/1vyTkqBpQwUvCSnGjjJPdzIzLTEfJF8Sr 2
|
st30448
|
Thanks for the additional run with the fixed libs!
We’ll triage this issue and check, where the difference is coming from.
@albanD do you have any quick idea, where the performance regression might be coming from?
|
st30449
|
Not really.
Note that with matching versions, the difference is not as big.
Not sure why the conv does not show up any more on the profiling…
|
st30450
|
@Hengck
I’ve encountered similar problem and discovered that it’s about contiguity of the input.
Try this:
prediction = net(input.contiguous())
You can read about breaking changes in release notes for 1.5.0 version (look for “contiguous”): https://github.com/pytorch/pytorch/releases/tag/v1.5.0 21
Script for reproducing slower runtime in Pytorch >= 1.5:
import numpy as np
import torch
import torchvision
net = torchvision.models.resnet50()
net.train()
net.cuda()
images_array = np.random.randn(4, 224, 224, 3).astype(np.float32)
images_array = np.rollaxis(images_array, 3, 1)
images_tensor = torch.from_numpy(images_array)
images_tensor = images_tensor.to(device="cuda")
print(images_tensor.is_contiguous()) # False
with torch.autograd.profiler.profile(use_cuda=True) as prof:
prediction = net(images_tensor) # slower
# prediction = net(images_tensor.contiguous()) # faster
print(prof)
|
st30451
|
Thanks @arnowaczynski for the accurate description of the case. Indeed you can end up with channels-last inputs on hardware that is not optimized for them.
The only thing I can add here is that images_tensor = images_tensor.to(device="cuda", memory_format=torch.contiguous_format) will work faster than images_tensor.contiguous()
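A minimal sketch of the two variants, applied to the tensors from the script above (the layout change is folded into the host-to-device copy in the first line, which, per the note above, is the faster option):
# Faster: make the tensor channels-first as part of the copy to the GPU.
images_gpu = images_tensor.to(device="cuda", memory_format=torch.contiguous_format)
# Same result, but with an extra .contiguous() kernel on the GPU afterwards.
images_gpu_alt = images_tensor.to(device="cuda").contiguous()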
|
st30452
|
Hi,
I also observed a slowdown, for me, it was a factor of 2.5 on CPU on an Intel Mac.
In my case, it was for a matrix multiplication-based application.
(Just as an additional input to whom it may concern.)
|
st30453
|
Dunno if autograd is the right category for this question. There’s a related thread 2 here but I didn’t see an official answer.
My current guess is that torch uses this paper for it: [1806.01851] Pathwise Derivatives Beyond the Reparameterization Trick, which is different from what tensorflow-probability uses, namely the implicit reparameterization gradient from DeepMind. I looked through the code base but couldn’t find a concrete reference, unlike the tensorflow-probability code, which explicitly mentions that it uses the implicit reparameterization gradient.
The reason I am asking is that I am trying to do stochastic-gradient-based training of LDA. The tensorflow-probability version trains nicely, but my pytorch implementation’s gradients tend to blow up and need special handling. One example of such training instability can be seen below, where there’s a significant kink near the end of training for the orange curve, and this happens consistently. This time it recovered, but it can blow up at other times.
[training curve plot showing the kink near the end of training for the orange curve]
The tfp code and the pytorch code I wrote are identical down to the initialization scheme, and the tfp code’s optimization curve follows almost exactly like the orange curve, but without the kinks.
Mentioning @fritzo @neerajprad you guys because based on my search on github you guys contribute to the pyro/torch.distributions a lot.
If we are not using the implicit reparametrization, I will probably request one on github.
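For reference, the pathwise gradient itself does flow through rsample for the distributions I care about — a minimal check with a Dirichlet; which derivative estimator is used under the hood is exactly what I could not confirm:
import torch
from torch.distributions import Dirichlet

concentration = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
dist = Dirichlet(concentration)
print(dist.has_rsample)       # True: reparameterized sampling is supported

sample = dist.rsample()       # pathwise sample, differentiable w.r.t. concentration
loss = sample.pow(2).sum()
loss.backward()
print(concentration.grad)     # gradients reach the concentration parameters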
|
st30454
|
[screenshot of the modified module code]
Hello, I have some problems changing pretrained models (in my case Yolov5).
Model is built by selections of modules and numbers of repetition.
I want to change modules like above, trying to mask specific spatial regions of input features per image inference.
However, I don’t know how to change all module variables named mask per infernece.
How can I check and replace mask values in all modules?
This is the logic flow I want:
> for module in all_Modules:
>     if mask in module variables:
>         mask = specific_values_I_want
|
st30455
|
Solved by ptrblck in post #2
You could iterate the modules of the parent nn.Module, check for the mask attribute via hasattr and assign e.g. a new nn.Parameter to it as seen here:
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.mask = None
class MyModel(nn.Module):
def _…
|
st30456
|
You could iterate the modules of the parent nn.Module, check for the mask attribute via hasattr and assign e.g. a new nn.Parameter to it as seen here:
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.mask = None
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.lin1 = nn.Linear(1, 1)
self.mod1 = MyModule()
self.lin2 = nn.Linear(1, 1)
self.mod2 = MyModule()
model = MyModel()
print(dict(model.named_parameters()))
> {'lin1.weight': Parameter containing:
tensor([[-0.3372]], requires_grad=True), 'lin1.bias': Parameter containing:
tensor([0.7264], requires_grad=True), 'lin2.weight': Parameter containing:
tensor([[0.0947]], requires_grad=True), 'lin2.bias': Parameter containing:
tensor([0.2308], requires_grad=True)}
for name, module in model.named_modules():
if hasattr(module, 'mask'):
print('{} has mask attribute'.format(name))
module.mask = nn.Parameter(torch.randn(1))
> mod1 has mask attribute
mod2 has mask attribute
print(dict(model.named_parameters()))
> {'lin1.weight': Parameter containing:
tensor([[-0.3372]], requires_grad=True), 'lin1.bias': Parameter containing:
tensor([0.7264], requires_grad=True), 'mod1.mask': Parameter containing:
tensor([-0.6128], requires_grad=True), 'lin2.weight': Parameter containing:
tensor([[0.0947]], requires_grad=True), 'lin2.bias': Parameter containing:
tensor([0.2308], requires_grad=True), 'mod2.mask': Parameter containing:
tensor([-0.1601], requires_grad=True)}
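If the mask values should additionally be replaced before each inference, you could reuse the same loop and overwrite them in place — a small follow-up sketch (the new values below are made up):
# Hypothetical per-inference update of the registered mask parameters.
new_masks = {'mod1': torch.ones(1), 'mod2': torch.zeros(1)}
with torch.no_grad():
    for name, module in model.named_modules():
        if hasattr(module, 'mask') and isinstance(module.mask, nn.Parameter):
            module.mask.copy_(new_masks[name])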
|
st30457
|
I’m trying to extract features and got this error from this line:
feats = model(samples).copy()
features = torch.zeros(len(data_loader.dataset), feats.shape[-1])
I read that I should convert it to numpy but couldn’t write it well.
The first error was
AttributeError: ‘list’ object has no attribute ‘clone’
so I changed it to copy but got the current error in the title of the post.
|
st30458
|
Based on the error messages it seems that feats is a list, while you are trying to use it as a tensor.
You could thus make sure that the forward method of your model returns a tensor or convert the list to a tensor using e.g. torch.stack.
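Something along these lines (a minimal sketch; it assumes all tensors in the list share the same shape):
import torch

feats_list = [torch.randn(4, 256), torch.randn(4, 256)]  # stand-in for the model output
feats = torch.stack(feats_list)                          # shape: (2, 4, 256)
print(feats.shape)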
|
st30459
|
Thanks, but do you mean that I got
AttributeError: ‘list’ object has no attribute ‘clone’
because feats is a list and I’m trying to use it as a tensor?
|
st30460
|
Yes, Python lists don’t have the clone() or shape operations, which are PyTorch tensor ops, so it seems you are assuming feats is a tensor, while it’s a list.
|
st30461
|
Excuse me, do you mean this function?
def forward(self, x):
x = self.forward_features(x)
if self.F4:
x = x[3:4]
return x
|
st30462
|
Yes, the forward method seems to return a list, so you would have to check why this is the case (check the type of x and try to isolate where it’s transformed to a list) and/or create a tensor, if needed.
|
st30463
|
I have a series of PyTorch trained models, such as “model.pth”, but I don’t know the input dimensions of the model.
For instance, in the following function: torch.onnx.export(model, args, f, export_params=True, verbose=False, training=False, input_names=None, output_names=None).
I don’t know the “args” of the function. How do I define it by just having the model file such as “model.pth”?
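For illustration, one partial workaround might be to inspect the checkpoint’s first weight tensor, which at least reveals the number of input channels/features — a sketch, assuming the file stores a state_dict; the 224x224 spatial size below is purely an assumption:
import torch

# If "model.pth" stores the full pickled model instead of a state_dict,
# call .state_dict() on the loaded object first.
state_dict = torch.load("model.pth", map_location="cpu")

# The first conv/linear weight reveals the expected input channels/features,
# but not the spatial size of the input.
first_key = next(iter(state_dict))
print(first_key, tuple(state_dict[first_key].shape))

# Hypothetical dummy input for torch.onnx.export: the 3 channels come from the
# weight shape, the 224x224 spatial size is assumed.
dummy_input = torch.randn(1, 3, 224, 224)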
|
st30464
|
Hi folks
Is anything in the transformer implementation specific to natural language modeling? Especially with regard to nn.Embedding.
|
st30465
|
Not really (see also: Vision Transformers [2010.11929] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale ).
|
st30466
|
Thanks. I didn’t phrase my question correctly. I mean whether the PyTorch implementation is specific to language modeling using transformers, or whether nothing limits using it for tabular data.
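Concretely, I’m thinking of something like this rough sketch, where each tabular feature is projected to the model dimension with a linear layer instead of nn.Embedding (whether this is sensible is exactly my question):
import torch
import torch.nn as nn

d_model, n_features, batch_size = 32, 10, 8

# Treat every tabular column as one "token": project its scalar value to d_model.
feature_proj = nn.Linear(1, d_model)
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = torch.randn(batch_size, n_features)          # a plain tabular batch
tokens = feature_proj(x.unsqueeze(-1))           # (batch, n_features, d_model)
tokens = tokens.transpose(0, 1)                  # (n_features, batch, d_model)
encoded = encoder(tokens)                        # contextualized feature tokens
print(encoded.shape)                             # torch.Size([10, 8, 32])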
|
st30467
|
I have a simple model:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(303, 128)
self.fc2 = nn.Linear(128, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
When I create the model on my CPU as such,
model = Net()
Both CPU and GPU memory usage remain unchanged. However, when I then move the model to the GPU,
model.cuda() # model.to(device) does the same
My CPU memory usage shoots up from 410MB to 1.95GB and my GPU memory usage goes from 0MB to 716MB.
My question is
Before I move the model to the GPU, why does creating the model not affect even my CPU memory? The parameters surely must go somewhere (unless they’re lazily created?)
Why does moving the model to the GPU suddenly consume both CPU and GPU memory? Shouldn’t all the weights go entirely to the GPU?
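A small sketch of how one could separate the parameter size from the allocator/context overhead (a rough check, not an explanation of the exact numbers above — the first CUDA call typically also initializes the CUDA context, which accounts for most of the jump):
import torch

def param_bytes(model):
    # Total size of all parameters in bytes (weights and biases only).
    return sum(p.numel() * p.element_size() for p in model.parameters())

model = Net()                                  # the model defined above
print(param_bytes(model) / 1024, "KB")         # roughly 150 KB for this architecture

print(torch.cuda.memory_allocated())           # 0 before the move
model.cuda()                                   # first CUDA call also creates the context
print(torch.cuda.memory_allocated())           # roughly the parameter size
print(torch.cuda.memory_reserved())            # allocator cache (>= allocated)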
|