instruction | input | output | source
---|---|---|---|
masked kernel in convolution in pytorch | Let's say:
x = torch.arange(16, dtype=torch.float).reshape(1, 1, 4, 4)
and a 2d convolution layer is:
layer = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=2)
layer.weight.data[:] = 1.
layer.bias.data[:] = 0.
Normally, passing x to layer will give:
>>layer(x)
tensor([[[[10., 18.],
[42., 50.]]]], grad_fn=<MkldnnConvolutionBackward>)
Considering having 4 mask filters, how can the kernel be masked at each step?
For example, the following picture indicates the 4 filters (white: True, black: False).
The output should be:
tensor([[[[5., 15.],
[30., 40.]]]], grad_fn=<MkldnnConvolutionBackward>)
P.S: all masks are obtained from missing pixels in the 2d input array, so the 4 masks above actually form a matrix with the same shape as the input.
| I found the solution.
Element-wise multiplication between the input and the mask before feeding it to the Conv2d layer is enough (masking the input is much easier than masking the kernel itself!):
mask = torch.tensor([[[1, 1, 1, 0]], [[1, 0, 1, 1]], [[1, 1, 0, 1]], [[0, 1, 1, 1]]], dtype=torch.float, requires_grad=False).reshape(1, 1, 4, 4)
>>layer(torch.mul(x, mask))
tensor([[[[5., 15.],
[30., 40.]]]], grad_fn=<MkldnnConvolutionBackward>)
P.S. Thanks to @Shai, I got the idea from the partial convolution presented in this paper. However, it does some extra manipulation on the output: it defines a mask ratio and, I guess, weights the final output based on it.
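For reference, here is a rough sketch of that mask-ratio re-weighting (the exact formulation in the paper differs in its details; the all-ones kernel and the variable names are just for illustration):
import torch.nn.functional as F

with torch.no_grad():
    kernel = torch.ones_like(layer.weight)                     # all-ones kernel, same size as the layer's
    valid = F.conv2d(mask, kernel, stride=2)                   # number of unmasked pixels per window
    total = F.conv2d(torch.ones_like(mask), kernel, stride=2)  # window size (here 4)
    ratio = total / valid.clamp(min=1e-8)                      # mask ratio per output location

out = layer(x * mask) * ratio  # re-weight each output by the fraction of valid pixels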
| https://stackoverflow.com/questions/63710417/ |
How can I use my own data to test this Convolutional Neural Network on PyTorch? | Recently I have been following this tutorial from sentdex on convolutional neural networks, and I have been trying to implement his code to test the trained neural network with my own images (in this case, I just pick random pictures from the dataset used in his program). My intention is to train the neural network, test it, and finally save it so I can later load it in a separate Python file to use the already trained NN on a single image.
The dataset he uses is "dogs vs cats from Microsoft". This is the code of the neural network program ("main.py").
import os
import cv2
import numpy as np
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

REBUILD_DATA = False # set to True to run once, then back to False unless you want to change something in your training data.

class DogsVSCats():
    IMG_SIZE = 100
    CATS = "PetImages/Cat"
    DOGS = "PetImages/Dog"
    TESTING = "PetImages/Testing"
    LABELS = {CATS: 0, DOGS: 1}
    training_data = []
    catcount = 0
    dogcount = 0

    def make_training_data(self):
        for label in self.LABELS:
            print(label)
            for f in tqdm(os.listdir(label)):
                if "jpg" in f:
                    try:
                        path = os.path.join(label, f)
                        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                        img = cv2.resize(img, (self.IMG_SIZE, self.IMG_SIZE))
                        self.training_data.append([np.array(img), np.eye(2)[self.LABELS[label]]]) # do something like print(np.eye(2)[1]), just makes one_hot
                        #print(np.eye(2)[self.LABELS[label]])

                        if label == self.CATS:
                            self.catcount += 1
                        elif label == self.DOGS:
                            self.dogcount += 1
                    except Exception as e:
                        pass
                        #print(label, f, str(e))

        np.random.shuffle(self.training_data)
        np.save("training_data.npy", self.training_data)
        print('Cats:', dogsvcats.catcount)
        print('Dogs:', dogsvcats.dogcount)

class Net(nn.Module):
    def __init__(self):
        super().__init__() # just run the init of parent class (nn.Module)
        self.conv1 = nn.Conv2d(1, 32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window
        self.conv2 = nn.Conv2d(32, 64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 kernel / window
        self.conv3 = nn.Conv2d(64, 128, 5)

        x = torch.randn(50, 50).view(-1, 1, 50, 50)
        self._to_linear = None
        self.convs(x)

        self.fc1 = nn.Linear(self._to_linear, 512) # flattening.
        self.fc2 = nn.Linear(512, 2) # 512 in, 2 out bc we're doing 2 classes (dog vs cat).

    def convs(self, x):
        # max pooling over 2x2
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))

        if self._to_linear is None:
            self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
        return x

    def forward(self, x):
        x = self.convs(x)
        x = x.view(-1, self._to_linear) # .view is reshape ... this flattens X before
        x = F.relu(self.fc1(x))
        x = self.fc2(x) # bc this is our output layer. No activation here.
        return F.softmax(x, dim=1)

net = Net()
print(net)

if REBUILD_DATA:
    dogsvcats = DogsVSCats()
    dogsvcats.make_training_data()

training_data = np.load("training_data.npy", allow_pickle=True)
print(len(training_data))

optimizer = optim.Adam(net.parameters(), lr=0.001)
loss_function = nn.MSELoss()

X = torch.Tensor([i[0] for i in training_data]).view(-1, 50, 50)
X = X/255.0
y = torch.Tensor([i[1] for i in training_data])

VAL_PCT = 0.1 # lets reserve 10% of our data for validation
val_size = int(len(X)*VAL_PCT)

train_X = X[:-val_size]
train_y = y[:-val_size]

test_X = X[-val_size:]
test_y = y[-val_size:]

BATCH_SIZE = 100
EPOCHS = 1

def train(net):
    for epoch in range(EPOCHS):
        for i in tqdm(range(0, len(train_X), BATCH_SIZE)): # from 0, to the len of x, stepping BATCH_SIZE at a time. [:50] ..for now just to dev
            #print(f"{i}:{i+BATCH_SIZE}")
            batch_X = train_X[i:i+BATCH_SIZE].view(-1, 1, 50, 50)
            batch_y = train_y[i:i+BATCH_SIZE]

            net.zero_grad()
            outputs = net(batch_X)
            loss = loss_function(outputs, batch_y)
            loss.backward()
            optimizer.step() # Does the update
        print(f"Epoch: {epoch}. Loss: {loss}")

def test(net):
    correct = 0
    total = 0
    with torch.no_grad():
        for i in tqdm(range(len(test_X))):
            real_class = torch.argmax(test_y[i])
            net_out = net(test_X[i].view(-1, 1, 50, 50))[0] # returns a list,
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
                correct += 1
            total += 1
    print("Accuracy: ", round(correct/total, 3))

train(net)
test(net)

PATH = './object_detection.pth'
torch.save(net.state_dict(), PATH)
After training the neural network, I want to load it in this next program and simply run images through the NN. However, every time I run this program the neural network is trained and tested again, which makes the process much longer and more annoying. I think that when I run this program and then input an image into the NN, the whole "main.py" is being run.
Please, can someone help me with this? It would be amazing, as I am using this as a base for my Bachelor's thesis. I would also like to adapt this code to run my own entire dataset through it; it would be incredible if someone could help me do this, as I am a newbie in PyTorch.
import cv2
import numpy as np
import torch
from main import Net, train, test

classes = ('cat', 'dog')
imsize = 50

net = Net()
net.load_state_dict(torch.load('./object_detection.pth'))

def image_loader(image_name):
    image = cv2.imread(image_name, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, (imsize, imsize))
    image = np.array(image)
    image = torch.Tensor(image)/255
    image = image.view(-1, 1, 50, 50)
    return image

test_image = image_loader("./PetImages/Cat/1021.jpg")

result = net(test_image)
_, predicted = torch.max(result, 1)
print(result)
print(classes[predicted[0]])
| The problem you are facing is not related to the NN itself, but to the importing part.
In the second code snippet, you import classes and functions from your first code snippet. At import time, all module-level statements in that file are executed as well, which is not what we want.
The simplest way to solve it is to wrap that code inside an if __name__ == "__main__": guard to avoid execution when the module is imported.
The result may look like this:
import os
import cv2
import numpy as np
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class DogsVSCats():
    IMG_SIZE = 100
    CATS = "PetImages/Cat"
    DOGS = "PetImages/Dog"
    TESTING = "PetImages/Testing"
    LABELS = {CATS: 0, DOGS: 1}
    training_data = []
    catcount = 0
    dogcount = 0

    def make_training_data(self):
        for label in self.LABELS:
            print(label)
            for f in tqdm(os.listdir(label)):
                if "jpg" in f:
                    try:
                        path = os.path.join(label, f)
                        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                        img = cv2.resize(img, (self.IMG_SIZE, self.IMG_SIZE))
                        self.training_data.append([np.array(img), np.eye(2)[self.LABELS[label]]]) # do something like print(np.eye(2)[1]), just makes one_hot
                        #print(np.eye(2)[self.LABELS[label]])

                        if label == self.CATS:
                            self.catcount += 1
                        elif label == self.DOGS:
                            self.dogcount += 1
                    except Exception as e:
                        pass
                        #print(label, f, str(e))

        np.random.shuffle(self.training_data)
        np.save("training_data.npy", self.training_data)
        print('Cats:', dogsvcats.catcount)
        print('Dogs:', dogsvcats.dogcount)

class Net(nn.Module):
    def __init__(self):
        super().__init__() # just run the init of parent class (nn.Module)
        self.conv1 = nn.Conv2d(1, 32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window
        self.conv2 = nn.Conv2d(32, 64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 kernel / window
        self.conv3 = nn.Conv2d(64, 128, 5)

        x = torch.randn(50, 50).view(-1, 1, 50, 50)
        self._to_linear = None
        self.convs(x)

        self.fc1 = nn.Linear(self._to_linear, 512) # flattening.
        self.fc2 = nn.Linear(512, 2) # 512 in, 2 out bc we're doing 2 classes (dog vs cat).

    def convs(self, x):
        # max pooling over 2x2
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))

        if self._to_linear is None:
            self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
        return x

    def forward(self, x):
        x = self.convs(x)
        x = x.view(-1, self._to_linear) # .view is reshape ... this flattens X before
        x = F.relu(self.fc1(x))
        x = self.fc2(x) # bc this is our output layer. No activation here.
        return F.softmax(x, dim=1)

def train(net):
    for epoch in range(EPOCHS):
        for i in tqdm(range(0, len(train_X), BATCH_SIZE)): # from 0, to the len of x, stepping BATCH_SIZE at a time. [:50] ..for now just to dev
            #print(f"{i}:{i+BATCH_SIZE}")
            batch_X = train_X[i:i+BATCH_SIZE].view(-1, 1, 50, 50)
            batch_y = train_y[i:i+BATCH_SIZE]

            net.zero_grad()
            outputs = net(batch_X)
            loss = loss_function(outputs, batch_y)
            loss.backward()
            optimizer.step() # Does the update
        print(f"Epoch: {epoch}. Loss: {loss}")

def test(net):
    correct = 0
    total = 0
    with torch.no_grad():
        for i in tqdm(range(len(test_X))):
            real_class = torch.argmax(test_y[i])
            net_out = net(test_X[i].view(-1, 1, 50, 50))[0] # returns a list,
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
                correct += 1
            total += 1
    print("Accuracy: ", round(correct/total, 3))

if __name__ == "__main__":
    REBUILD_DATA = False # set to True to run once, then back to False unless you want to change something in your training data.

    net = Net()
    print(net)

    if REBUILD_DATA:
        dogsvcats = DogsVSCats()
        dogsvcats.make_training_data()

    training_data = np.load("training_data.npy", allow_pickle=True)
    print(len(training_data))

    optimizer = optim.Adam(net.parameters(), lr=0.001)
    loss_function = nn.MSELoss()

    X = torch.Tensor([i[0] for i in training_data]).view(-1, 50, 50)
    X = X/255.0
    y = torch.Tensor([i[1] for i in training_data])

    VAL_PCT = 0.1 # lets reserve 10% of our data for validation
    val_size = int(len(X)*VAL_PCT)

    train_X = X[:-val_size]
    train_y = y[:-val_size]

    test_X = X[-val_size:]
    test_y = y[-val_size:]

    BATCH_SIZE = 100
    EPOCHS = 1

    train(net)
    test(net)

    PATH = './object_detection.pth'
    torch.save(net.state_dict(), PATH)
You can find more information in the official docs: import and main.
| https://stackoverflow.com/questions/63720578/ |
network values goes to 0 by linear layers | I designed a Graph Attention Network.
However, during the operations inside the layer, the feature values become equal to each other.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    ## in_features = out_features = 1024
    def __init__(self, in_features, out_features, dropout):
        super(GraphAttentionLayer, self).__init__()
        self.dropout = dropout
        self.in_features = in_features
        self.out_features = out_features

        self.W = nn.Parameter(torch.zeros(size=(in_features, out_features)))
        self.a1 = nn.Parameter(torch.zeros(size=(out_features, 1)))
        self.a2 = nn.Parameter(torch.zeros(size=(out_features, 1)))
        nn.init.xavier_normal_(self.W.data, gain=1.414)
        nn.init.xavier_normal_(self.a1.data, gain=1.414)
        nn.init.xavier_normal_(self.a2.data, gain=1.414)

        self.leakyrelu = nn.LeakyReLU()

    def forward(self, input, adj):
        h = torch.mm(input, self.W)
        a_input1 = torch.mm(h, self.a1)
        a_input2 = torch.mm(h, self.a2)
        a_input = torch.mm(a_input1, a_input2.transpose(1, 0))
        e = self.leakyrelu(a_input)

        zero_vec = torch.zeros_like(e)
        attention = torch.where(adj > 0, e, zero_vec) # most values are close to 0
        attention = F.softmax(attention, dim=1) # all values are 0.0014, which is 1/707 (707^2 is the dimension of attention)
        attention = F.dropout(attention, self.dropout)
        return attention
The dimension of 'attention' is (707 x 707), and I observed that the values of attention are near 0 before the softmax.
After the softmax, all values are 0.0014 which is 1/707.
I wonder how to keep the values normalized and prevent this situation.
Thanks
| Since you say this happens during training I would assume it is at the start. With random initialization you often get near identical values at the end of the network during the start of the training process.
When all values are more or less equal, the output of the softmax will be 1/num_elements for every element, so they sum up to 1 over the dimension you chose. So in your case you get 1/707 as all the values, which suggests to me that your weights are freshly initialized and the outputs are mostly random at this stage.
I would let it train for a while and observe if this changes.
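To see why you get exactly that number, note what softmax does to a vector of near-identical entries. A tiny sketch:
import torch
import torch.nn.functional as F

e = 1e-6 * torch.randn(707)     # near-identical values, as at initialization
print(F.softmax(e, dim=0)[:3])  # every entry is approximately 1/707 = 0.0014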
| https://stackoverflow.com/questions/63721972/ |
RuntimeError: Cost Function returns nan values in its 1th output | I am working on a temporal model to predict future events. Here is the link to my colab notebook. I am facing an issue trying to train the model: I am getting NaN values for the train and valid loss. The loss function is a joint loss consisting of the cross entropy loss and the squared loss. Link to the blog here.
I tried the following solutions, which didn't work:
Smaller learning rates: 0.01, 0.001, 0.0001
class cost_function():
    def __init__(self, yhat, y, L_2=0.001, logEps=1e-8):
        # logEps : log epsilon, very small positive value greater than 0.0
        # CE = - [ ln(y*)(y) + ln(1-y*)(1-y) ]
        self.yhat = yhat
        self.y = y
        self.logEps = logEps
        self.L_2 = L_2
        self.W_out = nn.Parameter(torch.randn(hiddenDimSize, numClass)*0.01)

    def cross_entropy(self):
        ce = -(self.y * torch.log(self.yhat + self.logEps) + (1. - self.y) * torch.log(1. - self.yhat + self.logEps))
        print("Inside CrossEntropy Loss fn : ", ce)
        return ce

    def prediction_loss(self):
        # return (torch.sum(torch.sum(self.cross_entropy(), dim=0), dim=1)).float() / lengths.float()
        tmp_tensor = torch.sum(self.cross_entropy(), dim=0)
        print("Inside PredictionLoss fn : Sum Dim 0", tmp_tensor)
        print("Inside PredictionLoss fn : Sum Dim 1", torch.sum(tmp_tensor, dim=1))
        print("Inside PredictionLoss fn : Final Result ", (torch.sum(tmp_tensor, dim=1)).float() / lengths.float())
        return (torch.sum(tmp_tensor, dim=1)).float() / lengths.float()

    def cost(self):
        print("Inside Cost fn :", torch.mean(self.prediction_loss()) + self.L_2 * (self.W_out ** 2).sum())
        return torch.mean(self.prediction_loss()) + self.L_2 * (self.W_out ** 2).sum() # regularize
build_EHRNN class - I have made a modification in the forward method params to resolve the "'h' is not defined" error.
torch.manual_seed(1)

class build_EHRNN(nn.Module):
    def __init__(self, inputDimSize=4894, hiddenDimSize=[200,200], batchSize=100, embSize=200, numClass=4894, dropout=0.5, logEps=1e-8):
        super(build_EHRNN, self).__init__()

        self.inputDimSize = inputDimSize
        self.hiddenDimSize = hiddenDimSize
        self.numClass = numClass
        self.embSize = embSize
        self.batchSize = batchSize
        self.dropout = nn.Dropout(p=0.5)
        self.logEps = logEps

        # Embedding inputs
        self.W_emb = nn.Parameter(torch.randn(self.inputDimSize, self.embSize).cuda())
        self.b_emb = nn.Parameter(torch.zeros(self.embSize).cuda())

        self.W_out = nn.Parameter(torch.randn(self.hiddenDimSize, self.numClass).cuda())
        self.b_out = nn.Parameter(torch.zeros(self.numClass).cuda())

        self.params = [self.W_emb, self.W_out,
                       self.b_emb, self.b_out]

    # def forward(self, x, y, h, lengths, mask):
    def forward(self, x, y, lengths, mask):
        self.emb = torch.tanh(torch.matmul(x, self.W_emb) + self.b_emb)
        input_values = self.emb
        self.outputs = [input_values]
        for i, hiddenSize in enumerate([self.hiddenDimSize, self.hiddenDimSize]):  # iterate over layers
            rnn = EHRNN(self.inputDimSize, hiddenSize, self.embSize, self.batchSize, self.numClass)  # calculate hidden states
            hidden_state = []
            h = self.init_hidden().cuda()
            for i, seq in enumerate(input_values):  # loop over sequences in each batch
                h = rnn(seq, h)
                hidden_state.append(h)
            hidden_state = self.dropout(torch.stack(hidden_state))  # apply dropout between layers
            input_values = hidden_state

        y_linear = torch.matmul(hidden_state, self.W_out) + self.b_out  # fully connected layer
        yhat = F.softmax(y_linear, dim=1)  # yhat
        yhat = yhat * mask[:, :, None]  # apply mask

        # Loss calculation
        cross_entropy = -(y * torch.log(yhat + self.logEps) + (1. - y) * torch.log(1. - yhat + self.logEps))
        last_step = -torch.mean(y[-1] * torch.log(yhat[-1] + self.logEps) + (1. - y[-1]) * torch.log(1. - yhat[-1] + self.logEps))
        prediction_loss = torch.sum(torch.sum(cross_entropy, dim=0), dim=1) / torch.cuda.FloatTensor(lengths)
        cost = torch.mean(prediction_loss) + 0.000001 * (self.W_out ** 2).sum()  # regularize
        return (yhat, hidden_state, cost)

    def init_hidden(self):
        return torch.zeros(self.batchSize, self.hiddenDimSize)  # initial state
model training
artificalData_seqs = np.array(pickle.load(open(os.path.join(GOOGLE_DRV_PATH, BASE_DIR, 'data.encodedDxs'), 'rb')))
train, test, valid = load_data(artificalData_seqs, artificalData_seqs)

batchSize = 50 # decreased from 100 to 50
n_batches = int(np.ceil(float(len(train[0])) / float(batchSize))) - 1
n_batches_valid = int(np.ceil(float(len(valid[0])) / float(batchSize))) - 1

model = build_EHRNN(inputDimSize=4894, hiddenDimSize=200, batchSize=50, embSize=200, numClass=4894, dropout=0.5, logEps=1e-8)
model = model.to(device)

import torch.nn.functional as F
import pdb

optimizer = torch.optim.Adadelta(model.parameters(), lr=0.001, rho=0.95)

epochs = 10
counter = 0
# with torch.autograd.detect_anomaly():
for e in range(epochs):
    for x, y in train_dl:
        x, y, mask, lengths = padding(x, y, inputDimSize, numClass)
        output, h = model(x, mask)
        loss = cost_function(output, y).cost()
        # pdb.set_trace()
        loss.backward()
        print("loss ", loss)
        nn.utils.clip_grad_norm_(model.parameters(), 5) # Constraining the weight matrix directly == regularization.
        optimizer.step()
        optimizer.zero_grad()

    with torch.no_grad():
        model.eval()
        val_loss = []
        for x_valid, y_valid in valid_dl:
            x_val, y_val, mask, lengths = padding(x_valid, y_valid, inputDimSize, numClass)
            outputs_val, hidden_val = model(x_val, mask)
            loss = cost_function(outputs_val, y_val).cost()
            val_loss.append(loss.item())
        model.train()

    print("Epoch: {}/{}...".format(e+1, epochs),
          "Step: {}...".format(counter),
          "Training Loss: {:.4f}...".format(loss.item()),
          "Val Loss: {:.4f}".format(torch.mean(torch.tensor(val_loss))))
Error (Start getting NaNs in the loss)
Inside PredictionLoss fn : Sum Dim 0 tensor([[0.1008, 0.1539, 0.1211, ..., 0.1533, 0.1218, 0.1418],
[0.0253, 0.0449, 0.0249, ..., 0.0439, 0.0134, 0.0332],
[0.0306, 0.0799, 0.0570, ..., 0.0790, 0.0484, 0.0678],
...,
[0.0253, 0.0450, 0.0249, ..., 0.0439, 0.0134, 0.0332],
[0.0253, 0.0450, 0.0249, ..., 0.0439, 0.0134, 0.0332],
[0.0038, 0.0106, 0.0106, ..., 0.0098, 0.0004, 0.0106]],
grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([ 372.4754, 133.2620, 219.1195, 37.5425, 141.3354, 37.5070,
229.2947, 0.0000, 379.1829, 217.3962, 80.1226, 37.5074,
138.4665, 82.1034, 89.7893, 81.8173, 92.8159, 141.8856,
95.9898, 216.0511, 133.2535, 385.0391, 369.4958, 244.9362,
37.5088, 37.5087, 141.6083, 0.0000, 95.3367, 37.5074,
735.0569, 378.0407, 37.5135, 40.7778, 82.0872, 225.9998,
216.6189, 379.0732, 81.4742, 144.4226, 93.3905, 214.0228,
37.5078, 224.0793, 88.3753, 41.2919, 140.4855, 37.5086,
226.6366, 148.7171, 137.9226, 13887.5811, 81.1428, 84.6804,
226.6779, 37.5065, 223.8841, 220.5979, 83.2484, 37.5080,
84.5247, 384.2115, 80.1173, 0.0000, 146.9714, 37.6982,
134.6618, 84.1838, 37.5421, 730.5516, 37.5085, 0.0000,
215.1523, 136.5673, 81.2887, 37.5080, 94.4181, 140.6268,
133.9295, 136.2485, 80.1173, 386.2103, 39.0282, 0.0000,
37.5055, 42.1506, 80.1662, 228.5819, 39.3403, 138.7672,
1768.6033, 143.5350, 40.2060, 147.7809, 37.5085, 380.9214,
750.6883, 141.0447, 136.9028, 37.5049],
grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([5., 3., 4., 1., 3., 1., 4., 0., 5., 4., 2., 1., 3., 2., 2., 2., 2., 3.,
2., 4., 3., 5., 5., 4., 1., 1., 3., 0., 2., 1., 6., 5., 1., 1., 2., 4.,
4., 5., 2., 3., 2., 4., 1., 4., 2., 1., 3., 1., 4., 3., 3., 9., 2., 2.,
4., 1., 4., 4., 2., 1., 2., 5., 2., 0., 3., 1., 3., 2., 1., 6., 1., 0.,
4., 3., 2., 1., 2., 3., 3., 3., 2., 5., 1., 0., 1., 1., 2., 4., 1., 3.,
7., 3., 1., 3., 1., 5., 6., 3., 3., 1.])
Inside PredictionLoss fn : Final Result tensor([ 74.4951, 44.4207, 54.7799, 37.5425, 47.1118, 37.5070,
57.3237, nan, 75.8366, 54.3491, 40.0613, 37.5074,
46.1555, 41.0517, 44.8946, 40.9086, 46.4080, 47.2952,
47.9949, 54.0128, 44.4178, 77.0078, 73.8992, 61.2340,
37.5088, 37.5087, 47.2028, nan, 47.6683, 37.5074,
122.5095, 75.6081, 37.5135, 40.7778, 41.0436, 56.5000,
54.1547, 75.8146, 40.7371, 48.1409, 46.6952, 53.5057,
37.5078, 56.0198, 44.1876, 41.2919, 46.8285, 37.5086,
56.6591, 49.5724, 45.9742, 1543.0646, 40.5714, 42.3402,
56.6695, 37.5065, 55.9710, 55.1495, 41.6242, 37.5080,
42.2623, 76.8423, 40.0586, nan, 48.9905, 37.6982,
44.8873, 42.0919, 37.5421, 121.7586, 37.5085, nan,
53.7881, 45.5224, 40.6443, 37.5080, 47.2090, 46.8756,
44.6432, 45.4162, 40.0587, 77.2421, 39.0282, nan,
37.5055, 42.1506, 40.0831, 57.1455, 39.3403, 46.2557,
252.6576, 47.8450, 40.2060, 49.2603, 37.5085, 76.1843,
125.1147, 47.0149, 45.6343, 37.5049], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([2., 1., 1., 2., 3., 5., 3., 3., 1., 4., 2., 2., 2., 3., 2., 2., 5., 2.,
4., 2., 1., 4., 2., 5., 3., 0., 4., 7., 6., 4., 4., 1., 7., 1., 3., 3.,
5., 3., 5., 5., 4., 2., 2., 4., 1., 5., 6., 2., 5., 5., 2., 1., 3., 1.,
4., 4., 3., 3., 0., 2., 2., 4., 2., 2., 1., 0., 3., 2., 3., 0., 1., 2.,
4., 4., 5., 1., 1., 3., 3., 2., 5., 0., 2., 6., 5., 5., 5., 1., 2., 4.,
0., 2., 6., 1., 2., 0., 1., 1., 2., 3.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([3., 2., 4., 4., 6., 1., 1., 4., 2., 2., 4., 2., 1., 1., 1., 1., 3., 1.,
2., 3., 4., 2., 2., 3., 2., 0., 1., 3., 4., 1., 2., 3., 4., 3., 3., 1.,
2., 3., 3., 2., 1., 4., 6., 3., 4., 2., 3., 0., 1., 1., 3., 7., 2., 3.,
4., 2., 3., 0., 3., 3., 2., 1., 1., 3., 6., 2., 2., 3., 2., 3., 3., 2.,
1., 2., 5., 3., 4., 2., 5., 3., 5., 5., 5., 3., 2., 1., 4., 3., 3., 3.,
1., 4., 3., 2., 5., 3., 6., 4., 3., 2.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([4., 6., 2., 2., 4., 5., 0., 3., 2., 3., 4., 3., 3., 3., 4., 3., 0., 1.,
2., 2., 4., 2., 4., 4., 1., 0., 3., 0., 5., 5., 1., 2., 3., 2., 2., 3.,
1., 2., 1., 2., 3., 1., 3., 1., 3., 0., 4., 2., 1., 6., 2., 2., 0., 1.,
4., 4., 2., 2., 0., 3., 5., 5., 5., 5., 3., 4., 4., 4., 5., 4., 2., 2.,
3., 1., 5., 6., 3., 4., 1., 3., 5., 1., 3., 2., 3., 6., 2., 4., 3., 2.,
3., 2., 4., 1., 2., 2., 1., 2., 3., 4.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([1., 5., 1., 5., 3., 0., 1., 4., 3., 3., 5., 5., 2., 0., 1., 2., 3., 2.,
2., 2., 2., 2., 2., 2., 2., 3., 4., 1., 4., 2., 1., 3., 4., 0., 4., 2.,
0., 4., 1., 1., 2., 3., 3., 5., 1., 3., 2., 3., 1., 1., 3., 6., 3., 0.,
2., 3., 2., 3., 3., 4., 1., 5., 2., 6., 2., 1., 5., 3., 2., 1., 3., 5.,
2., 0., 3., 0., 1., 3., 3., 2., 3., 1., 2., 4., 1., 2., 7., 2., 7., 2.,
2., 2., 1., 1., 1., 2., 4., 2., 2., 3.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([2., 2., 1., 4., 3., 2., 2., 4., 5., 2., 3., 1., 1., 2., 3., 1., 3., 1.,
3., 1., 6., 4., 2., 2., 0., 4., 1., 0., 2., 0., 1., 1., 1., 1., 2., 2.,
3., 1., 3., 2., 0., 1., 3., 2., 4., 1., 2., 1., 6., 2., 1., 2., 1., 2.,
2., 1., 1., 1., 4., 1., 0., 2., 2., 2., 3., 6., 5., 1., 1., 3., 3., 3.,
1., 4., 4., 2., 4., 2., 3., 7., 1., 1., 2., 3., 0., 0., 1., 4., 2., 3.,
1., 2., 3., 2., 7., 0., 2., 3., 4., 4.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([1., 2., 6., 7., 2., 1., 1., 3., 1., 2., 2., 4., 3., 3., 4., 3., 2., 2.,
5., 2., 1., 3., 2., 2., 2., 1., 4., 3., 2., 2., 1., 4., 2., 0., 2., 2.,
1., 3., 4., 4., 5., 1., 5., 6., 1., 3., 1., 3., 0., 1., 6., 1., 2., 0.,
3., 2., 2., 3., 4., 3., 3., 2., 3., 2., 2., 3., 4., 3., 2., 0., 3., 1.,
1., 3., 1., 4., 7., 2., 2., 1., 1., 1., 2., 3., 3., 2., 2., 3., 1., 3.,
3., 1., 1., 3., 1., 2., 7., 3., 1., 6.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([2., 2., 4., 2., 5., 1., 2., 4., 2., 2., 2., 5., 1., 2., 1., 4., 1., 2.,
4., 3., 4., 0., 2., 5., 2., 6., 2., 6., 3., 4., 3., 5., 7., 4., 3., 5.,
1., 3., 2., 3., 2., 2., 2., 3., 3., 3., 2., 3., 1., 4., 3., 3., 2., 1.,
6., 9., 2., 1., 6., 3., 1., 5., 1., 6., 2., 2., 6., 2., 4., 2., 4., 3.,
2., 2., 4., 1., 2., 2., 2., 1., 3., 1., 3., 2., 3., 3., 2., 2., 4., 3.,
3., 3., 5., 4., 3., 2., 4., 2., 1., 8.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : Sum Dim 1 tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<SumBackward1>)
Inside PredictionLoss fn : lengths tensor([1., 3., 2., 3., 3., 2., 7., 4., 3., 1., 1., 5., 1., 3., 4., 4., 1., 5.,
1., 3., 3., 1., 4., 3., 0., 4., 4., 2., 4., 1., 2., 4., 3., 2., 3., 2.,
2., 2., 2., 3., 3., 2., 5., 1., 2., 1., 2., 2., 3., 2., 2., 2., 3., 3.,
3., 5., 1., 4., 8., 4., 0., 2., 4., 2., 0., 1., 4., 4., 1., 5., 0., 1.,
3., 1., 2., 3., 3., 4., 3., 4., 2., 4., 2., 3., 4., 1., 1., 2., 4., 1.,
1., 0., 0., 0., 4., 3., 1., 3., 4., 3.])
Inside PredictionLoss fn : Final Result tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan], grad_fn=<DivBackward0>)
Inside PredictionLoss fn : Sum Dim 0 tensor([[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]], grad_fn=<SumBackward1>)
| The problem is the values in the lengths variable.
In your cost_function.prediction_loss you divide the cross entropy loss by the length of each sequence: (torch.sum(tmp_tensor,dim=1)).float()/ lengths.float().
However, if you look at the values of your lengths tensor:
Inside PredictionLoss fn : lengths tensor([5., 3., 4., 1., 3., 1., 4., 0., 5., 4., 2., 1., 3., 2., 2., 2., 2., 3.,
2., 4., 3., 5., 5., 4., 1., 1., 3., 0., 2., 1., 6., 5., 1., 1., 2., 4.,
4., 5., 2., 3., 2., 4., 1., 4., 2., 1., 3., 1., 4., 3., 3., 9., 2., 2.,
4., 1., 4., 4., 2., 1., 2., 5., 2., 0., 3., 1., 3., 2., 1., 6., 1., 0.,
4., 3., 2., 1., 2., 3., 3., 3., 2., 5., 1., 0., 1., 1., 2., 4., 1., 3.,
7., 3., 1., 3., 1., 5., 6., 3., 3., 1.])
You will notice that some of the entries are 0(!). The corresponding values in the loss function are also zero (no loss for zero-length sequence). When you divide zero by zero you get nan.
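A minimal sketch of one way to guard against this, assuming per_seq_loss holds the per-sequence summed loss (the "Sum Dim 1" tensor above); the names here are illustrative:
valid = lengths > 0
safe_lengths = lengths.clamp(min=1.0)   # avoid the 0/0 that produces nan
per_seq = per_seq_loss / safe_lengths
loss = per_seq[valid].mean()            # average only over non-empty sequences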
Some good coding practices:
If possible, use library functions instead of re-implementing things. These functions are usually tested and optimized, and are more numerically stable.
For instance, you can use torch.nn.CrossEntropyLoss, which combines both the cross entropy loss and the softmax in a numerically robust manner (see the sketch below).
The variable lengths used for the loss computation is not explicitly an argument of the loss function or a class member. You should make it an explicit argument.
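A minimal sketch of that torch.nn.CrossEntropyLoss usage (shapes are illustrative; it expects raw logits, not softmax outputs, and integer class targets):
criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 4894)           # [batch, numClass] raw scores, no softmax applied
targets = torch.randint(0, 4894, (8,))  # integer class indices
loss = criterion(logits, targets)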
| https://stackoverflow.com/questions/63726387/ |
In PyTorch calc Euclidean distance instead of matrix multiplication | Let's say we have 2 matrices:
mat = torch.randn([20, 7]) * 100
mat2 = torch.randn([7, 20]) * 100
n, m = mat.shape
The simplest usual matrix multiplication looks like this:
def mat_vec_dot_product(mat, vect):
    n, m = mat.shape

    res = torch.zeros([n])
    for i in range(n):
        for j in range(m):
            res[i] += mat[i][j] * vect[j]

    return res

res = torch.zeros([n, n])
for k in range(n):
    res[:, k] = mat_vec_dot_product(mat, mat2[:, k])
But what if I need to apply L2 norm instead of dot product? The code is next:
def mat_vec_l2_mult(mat, vect):
    n, m = mat.shape

    res = torch.zeros([n])
    for i in range(n):
        for j in range(m):
            res[i] += (mat[i][j] - vect[j]) ** 2

    res = res.sqrt()
    return res

for k in range(n):
    res[:, k] = mat_vec_l2_mult(mat, mat2[:, k])
Can we do this in an optimal way using Torch or any other library? The naive O(n^3) Python code runs really slowly.
| Use torch.cdist for the L2 norm (Euclidean distance):
res = torch.cdist(mat, mat2.permute(1, 0), p=2)
Here, permute is used to swap the dims of mat2 from (7, 20) to (20, 7).
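As a sanity check, reusing the naive function from the question, the vectorized result matches the loop version up to floating-point error:
res_naive = torch.zeros([n, n])
for k in range(n):
    res_naive[:, k] = mat_vec_l2_mult(mat, mat2[:, k])

res_cdist = torch.cdist(mat, mat2.permute(1, 0), p=2)
print(torch.allclose(res_cdist, res_naive, atol=1e-4))  # True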
| https://stackoverflow.com/questions/63727907/ |
Tried installing FastAi but I got "ERROR: No matching distribution found for torchvision>=0.7" | I tried installing Fast Ai using the command
pip install fastai
But then I got this error
ERROR: Could not find a version that satisfies the requirement torchvision>=0.7 (from fastai) (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.3.0, 0.4.1, 0.5.0)
ERROR: No matching distribution found for torchvision>=0.7 (from fastai)
What happened and how do I solve this? I tried installing some specific torch versions but none works.
| The error just means that you are missing a required library at the given version: torchvision>=0.7.
As stated in the instructions, if you are installing fastai using pip, make sure to install PyTorch manually at the required version first. You can see the instructions for installing PyTorch here.
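For example, something like the following should satisfy the requirement (the exact versions are illustrative; check the PyTorch site for the right command for your platform and Python version):
pip install torch==1.6.0 torchvision==0.7.0
pip install fastai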
| https://stackoverflow.com/questions/63730320/ |
How do I compute bootstrapped cross entropy loss in PyTorch? | I have read some papers that use something called "Bootstrapped Cross Entropy Loss" to train their segmentation networks. The idea is to take only the hardest k% (say 15%) of the pixels into account to improve learning performance, especially when easy pixels dominate.
Currently, I am using the standard cross entropy:
loss = F.binary_cross_entropy(mask, gt)
How do I convert this to the bootstrapped version efficiently in PyTorch?
| Often we would also add a "warm-up" period to the loss such that the network can learn to adapt to the easy regions first and transition to the harder regions.
This implementation starts from k=100 and continues for 20000 iterations, then linearly decays k to 15 over another 50000 iterations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BootstrappedCE(nn.Module):
    def __init__(self, start_warm=20000, end_warm=70000, top_p=0.15):
        super().__init__()
        self.start_warm = start_warm
        self.end_warm = end_warm
        self.top_p = top_p

    def forward(self, input, target, it):
        if it < self.start_warm:
            return F.cross_entropy(input, target), 1.0

        raw_loss = F.cross_entropy(input, target, reduction='none').view(-1)
        num_pixels = raw_loss.numel()

        if it > self.end_warm:
            this_p = self.top_p
        else:
            this_p = self.top_p + (1 - self.top_p) * ((self.end_warm - it) / (self.end_warm - self.start_warm))

        loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
        return loss.mean(), this_p
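A hypothetical usage sketch (the shapes and the it iteration counter are illustrative):
criterion = BootstrappedCE(start_warm=20000, end_warm=70000, top_p=0.15)
logits = torch.randn(4, 2, 64, 64)          # [batch, classes, H, W] raw scores
labels = torch.randint(0, 2, (4, 64, 64))   # [batch, H, W] class indices
loss, p_used = criterion(logits, labels, it=30000)  # it = current training iteration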
| https://stackoverflow.com/questions/63735255/ |
pytorch : How to stack 2 tensors | I'm trying to stack 2 tensors A.shape=(64,16,16) and B.shape=(64,16,16) into a tensor of shape C.shape=(1,128,16,16),
and none of the functions I've tried work:
torch.stack => C.shape=(2,64,16,16) and
torch.cat => C.shape=(128,16,16)
Can anyone help me?
| Concatenate first, then use unsqueeze to add a singleton dimension at the 0th position:
torch.cat([A, B]).unsqueeze(0)
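A quick check of the resulting shape:
import torch

A = torch.randn(64, 16, 16)
B = torch.randn(64, 16, 16)
C = torch.cat([A, B]).unsqueeze(0)
print(C.shape)  # torch.Size([1, 128, 16, 16])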
| https://stackoverflow.com/questions/63738632/ |
Getting variance values for random samples generated from a standard normal distribution using numpy | I have a function that gives me probability distributions for each class, in terms of a matrix corresponding to mean values and another matrix corresponding to variance values. For example, if I had four classes then I would have the following outputs:
y_means = [1,2,3,4]
y_variance = [0.01,0.02,0.03,0.04]
I need to do the following calculation to the mean values to continue with the rest of my program:
y_means = np.array(y_means)
y_means = np.reshape(y_means,(y_means.size,1))
A = np.random.randn(10,y_means.size)
y_means = np.matmul(A,y_means)
Here, I have used the numpy.random.randn function to generate random samples from a standard normal distribution, and then multiply this with the matrix with the mean value to obtain a new output matrix. The dimension of the output matrix would then be of the size (10 x 1).
I need to do a similar calculation such that my output_variances will also be a (10 x 1) matrix. But it is not meaningful to multiply the variances in the same way with random samples from a standard normal distribution, because this would result in negative values as well. This is undesirable because my ultimate aim would be to create a normal distribution with these mean values and their corresponding variance values using:
torch.distributions.normal.Normal(loc=y_means, scale=y_variance)
So my question is if there is any method by which I get a variance value for each random sample generated by numpy.random.randn? Because then the multplication of such a matrix would make more sense with output_variance.
Or if there is any other strategy for this that I might be unaware of, please let me know.
| The problem mentioned in the question required another matrix of the same dimension as A that corresponded to a variance measure for the random samples present in A.
Taking a row-wise or column-wise variance of the matrix denoted by A using numpy.var() didn't give a similar 10 x 4 matrix to multiply with y_variance.
I had solved the above problem by using the following approach:
First create a matrix with the same dimensions as A with zero entries, using the following line of code:
A_var = np.zeros_like(A)
then, using torch.distributions, create normal distributions with the values in A as the mean and zeroes as variance:
dist_A = torch.distributions.normal.Normal(loc=torch.Tensor(A), scale=torch.Tensor(A_var))
https://pytorch.org/docs/stable/distributions.html lists all the operations possible on Normal distributions in PyTorch. The sample() method can generate samples from a given distribution for any sample size. This property was exploited to first generate a sample tensor of size 10 x 10 x 4 and then calculate the variance along axis 0.
np.var(np.array(dist_A.sample((10,))), axis=0)
This results in a variance matrix of size 10 x 4, which can be used for calculations with y_variance.
| https://stackoverflow.com/questions/63740367/ |
How to Efficiently Find the Indices of Max Values in a Multidimensional Array of Matrices using Pytorch and/or Numpy | Background
It is common in machine learning to deal with data of a high dimensionality. For example, in a Convolutional Neural Network (CNN) the dimensions of each input image may be 256x256, and each image may have 3 color channels (Red, Green, and Blue). If we assume that the model takes in a batch of 16 images at a time, the dimensionality of the input going into our CNN is [16,3,256,256]. Each individual convolutional layer expects data in the form [batch_size, in_channels, in_y, in_x], and all of these quantities often change layer-to-layer (except batch_size). The term we use for the matrix made up of the [in_y, in_x] values is feature map, and this question is concerned with finding the maximum value, and its index, in every feature map at a given layer.
Why do I want to do this? I want to apply a mask to every feature map, and I want to apply that mask centered at the max value in each feature map, and to do that I need to know where each max value is located. This mask application is done during both training and testing of the model, so efficiency is vitally important to keep computational times down. There are many Pytorch and Numpy solutions for finding singleton max values and indices, and for finding the maximum values or indices along a single dimension, but no (that I could find) dedicated and efficient built-in functions for finding the indices of maximum values along 2 or more dimensions at a time. Yes, we can nest functions that operate on a single dimension, but these are some of the least efficient approaches.
What I've Tried
I've looked at this Stackoverflow question, but the author is dealing with a special-case 4D array which is trivially squeezed to a 3D array. The accepted answer is specialized for this case, and the answer pointing to TopK is misguided because it not only operates on a single dimension, but would necessitate that k=1 given the question asked, thus devolving to a regular torch.max call.
I've looked at this Stackoverflow question, but this question, and its answer, focus on looking through a single dimension.
I have looked at this Stackoverflow question, but I already know of the answer's approach as I independently formulated it in my own answer here (where I amended that the approach is very inefficient).
I have looked at this Stackoverflow question, but it does not satisfy the key part of this question, which is concerned with efficiency.
I have read many other Stackoverflow questions and answers, as well as the Numpy documentation, Pytorch documentation, and posts on the Pytorch forums.
I've tried implementing a LOT of varying approaches to this problem, enough that I have created this question so that I can answer it and give back to the community, and anyone who goes looking for a solution to this problem in the future.
Standard of Performance
If I am asking a question about efficiency I need to detail expectations clearly. I am trying to find a time-efficient solution (space is secondary) for the problem above without writing C code/extensions, and which is reasonably flexible (hyper specialized approaches aren't what I'm after). The approach must accept an [a,b,c,d] Torch tensor of datatype float32 or float64 as input, and output an array or tensor of the form [a,b,2] of datatype int32 or int64 (because we are using the output as indices).
Solutions should be benchmarked against the following typical solution:
max_indices = torch.stack([torch.stack([(x[k][j]==torch.max(x[k][j])).nonzero()[0] for j in range(x.size()[1])]) for k in range(x.size()[0])])
| The Approach
We are going to take advantage of the Numpy community and libraries, as well as the fact that Pytorch tensors and Numpy arrays can be converted to/from one another without copying or moving the underlying arrays in memory (so conversions are low cost). From the Pytorch documentation:
Converting a torch Tensor to a Numpy array and vice versa is a breeze. The torch Tensor and Numpy array will share their underlying memory locations, and changing one will change the other.
Solution One
We are first going to use the Numba library to write a function that will be just-in-time (JIT) compiled upon its first usage, meaning we can get C speeds without having to write C code ourselves. Of course, there are caveats to what can get JIT-ed, and one of those caveats is that we work with Numpy functions. But this isn't too bad because, remember, converting from our torch tensor to Numpy is low cost. The function we create is:
from numba import njit, prange
import numpy as np

@njit(cache=True)
def indexFunc(array, item):
    for idx, val in np.ndenumerate(array):
        if val == item:
            return idx
This function is from another Stackoverflow answer located here (this was the answer which introduced me to Numba). The function takes an N-dimensional Numpy array and looks for the first occurrence of a given item. It immediately returns the index of the found item on a successful match. The @njit decorator is short for @jit(nopython=True), and tells the compiler that we want it to compile the function using no Python objects, and to throw an error if it is not able to do so (Numba is fastest when no Python objects are used, and speed is what we are after).
With this speedy function backing us, we can get the indices of the max values in a tensor as follows:
import numpy as np
import torch

x = x.numpy()
maxVals = np.amax(x, axis=(2,3))

max_indices = np.zeros((x.shape[0], x.shape[1], 2), dtype=np.int64)
for index in np.ndindex(x.shape[0], x.shape[1]):
    max_indices[index] = np.asarray(indexFunc(x[index], maxVals[index]), dtype=np.int64)

max_indices = torch.from_numpy(max_indices)
We use np.amax because it can accept a tuple for its axis argument, allowing it to return the max values of each 2D feature map in the 4D input. We initialize max_indices with np.zeros ahead of time because appending to numpy arrays is expensive, so we allocate the space we need ahead of time. This approach is much faster than the Typical Solution in the question (by an order of magnitude), but it also uses a for loop outside the JIT-ed function, so we can improve...
Solution Two
We will use the following solution:
@njit(cache=True)
def indexFunc(array, item):
    for idx, val in np.ndenumerate(array):
        if val == item:
            return idx
    raise RuntimeError

@njit(cache=True, parallel=True)
def indexFunc2(x, maxVals):
    max_indices = np.zeros((x.shape[0], x.shape[1], 2), dtype=np.int64)
    for i in prange(x.shape[0]):
        for j in prange(x.shape[1]):
            max_indices[i, j] = np.asarray(indexFunc(x[i, j], maxVals[i, j]), dtype=np.int64)
    return max_indices

x = x.numpy()
maxVals = np.amax(x, axis=(2,3))
max_indices = torch.from_numpy(indexFunc2(x, maxVals))
Instead of iterating through our feature maps one at a time with a for loop, we can take advantage of parallelization using Numba's prange function (which behaves exactly like range but tells the compiler we want the loop to be parallelized) and the parallel=True decorator argument. Numba also parallelizes the np.zeros function. Because our function is compiled just-in-time and uses no Python objects, Numba can take advantage of all the threads available in our system! It is worth noting that there is now a raise RuntimeError in the indexFunc. We need to include this, otherwise the Numba compiler will try to infer the return type of the function and infer that it will either be an array or None. This doesn't jive with our usage in indexFunc2, so the compiler would throw an error. Of course, from our setup we know that indexFunc will always return an array, so we can simply raise an error in the other logical branch.
This approach is functionally identical to Solution One, but changes the iteration using nd.index into two for loops using prange. This approach is about 4x faster than Solution One.
Solution Three
Solution Two is fast, but it is still finding the max values using regular Python. Can we speed this up using a more comprehensive JIT-ed function?
@njit(cache=True)
def indexFunc(array, item):
    for idx, val in np.ndenumerate(array):
        if val == item:
            return idx
    raise RuntimeError

@njit(cache=True, parallel=True)
def indexFunc3(x):
    maxVals = np.zeros((x.shape[0], x.shape[1]), dtype=np.float32)
    for i in prange(x.shape[0]):
        for j in prange(x.shape[1]):
            maxVals[i][j] = np.max(x[i][j])
    max_indices = np.zeros((x.shape[0], x.shape[1], 2), dtype=np.int64)
    for i in prange(x.shape[0]):
        for j in prange(x.shape[1]):
            max_indices[i, j] = np.asarray(indexFunc(x[i, j], maxVals[i, j]), dtype=np.int64)
    return max_indices

max_indices = torch.from_numpy(indexFunc3(x))
It might look like there is a lot more going on in this solution, but the only change is that instead of calculating the maximum values of each feature map using np.amax, we have now parallelized the operation. This approach is marginally faster than Solution Two.
Solution Four
This solution is the best I've been able to come up with:
@njit(cache=True, parallel=True)
def indexFunc4(x):
    max_indices = np.zeros((x.shape[0], x.shape[1], 2), dtype=np.int64)
    for i in prange(x.shape[0]):
        for j in prange(x.shape[1]):
            maxTemp = np.argmax(x[i][j])
            max_indices[i][j] = [maxTemp // x.shape[3], maxTemp % x.shape[3]]
    return max_indices

max_indices = torch.from_numpy(indexFunc4(x))
This approach is more condensed and also the fastest at 33% faster than Solution Three and 50x faster than the Typical Solution. We use np.argmax to get the index of the max value of each feature map, but np.argmax only returns the index as if each feature map were flattened. That is, we get a single integer telling us which number the element is in our feature map, not the indices we need to be able to access that element. The math [maxTemp // x.shape[3], maxTemp % x.shape[3]] is to turn that single int into the [row, column] pair that we need.
Benchmarking
All approaches were benchmarked together against a random input of shape [32,d,64,64], where d was incremented from 5 to 245. For each d, 15 samples were gathered and the times were averaged. An equality test ensured that all solutions provided identical values. An example of the benchmark output is:
A plot of the benchmarking times as d increased is (leaving out the Typical Solution so the graph isn't squashed):
Woah! What is going on at the start with those spikes?
Solution Five
Numba allows us to produce Just-In-Time compiled functions, but it doesn't compile them until the first time we use them; It then caches the result for when we call the function again. This means the very first time we call our JIT-ed functions we get a spike in compute time as the function is compiled. Luckily, there is a way around this- if we specify ahead of time what our function's return type and argument types will be, the function will be eagerly compiled instead of compiled just-in-time. Applying this knowledge to Solution Four we get:
@njit('i8[:,:,:](f4[:,:,:,:])',cache=True, parallel=True)
def indexFunc4(x):
max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64)
for i in prange(x.shape[0]):
for j in prange(x.shape[1]):
maxTemp = np.argmax(x[i][j])
max_indices[i][j] = [maxTemp // x.shape[2], maxTemp % x.shape[2]]
return max_indices
max_indices6 = torch.from_numpy(indexFunc4(x))
And if we restart our kernel and rerun our benchmark, we can look at the first result where d==5 and the second result where d==10 and note that all of the JIT-ed solutions were slower when d==5 because they had to be compiled, except for Solution Four, because we explicitly provided the function signature ahead of time:
There we go! That's the best solution I have so far for this problem.
EDIT #1
Solution Six
An improved solution has been developed which is 33% faster than the previously posted best solution. This solution only works if the input array is C-contiguous, but this isn't a big restriction since numpy arrays or torch tensors will be contiguous unless they are reshaped, and both have functions to make the array/tensor contiguous if needed.
This solution is the same as the previous best, but the function decorator which specifies the input and return types is changed from
@njit('i8[:,:,:](f4[:,:,:,:])',cache=True, parallel=True)
to
@njit('i8[:,:,::1](f4[:,:,:,::1])',cache=True, parallel=True)
The only difference is that the last : in each array typing becomes ::1, which signals to the numba njit compiler that the input arrays are C-contiguous, allowing it to better optimize.
The full solution six is then:
@njit('i8[:,:,::1](f4[:,:,:,::1])',cache=True, parallel=True)
def indexFunc5(x):
max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64)
for i in prange(x.shape[0]):
for j in prange(x.shape[1]):
maxTemp = np.argmax(x[i][j])
max_indices[i][j] = [maxTemp // x.shape[2], maxTemp % x.shape[2]]
return max_indices
max_indices7 = torch.from_numpy(indexFunc5(x))
The benchmark including this new solution confirms the speedup:
| https://stackoverflow.com/questions/63749267/ |
Convolution - Deconvolution for even and odd size | I have two different size tensors to put in the network.
C = nn.Conv1d(1, 1, kernel_size=1, stride=2)
TC = nn.ConvTranspose1d(1, 1, kernel_size=1, stride=2)
a = torch.rand(1, 1, 100)
b = torch.rand(1, 1, 101)
a_out, b_out = TC(C(a)), TC(C(b))
The results are
a_out = torch.size([1, 1, 99]) # What I want is [1, 1, 100]
b_out = torch.size([1, 1, 101])
Is there any method to handle this problem?
I need your help.
Thanks
| This is expected behaviour as per the documentation. Maybe padding can be used when an even input length is detected, to get the same length as the input.
Something like this
class PadEven(nn.Module):
def __init__(self, conv, deconv, pad_value=0, padding=(0, 1)):
super().__init__()
self.conv = conv
self.deconv = deconv
self.pad = nn.ConstantPad1d(padding=padding, value=pad_value)
def forward(self, x):
nd = x.size(-1)
x = self.deconv(self.conv(x))
if nd % 2 == 0:
x = self.pad(x)
return x
C = nn.Conv1d(1, 1, kernel_size=1, stride=2)
TC = nn.ConvTranspose1d(1, 1, kernel_size=1, stride=2)
P = PadEven(C, TC)
a = torch.rand(1, 1, 100)
b = torch.rand(1, 1, 101)
a_out, b_out = P(a), P(b)
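A quick sanity check of the expected shapes (my addition, assuming the PadEven module above):
print(a_out.shape, b_out.shape)  # torch.Size([1, 1, 100]) torch.Size([1, 1, 101])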
| https://stackoverflow.com/questions/63750356/ |
Using GPU inside docker container - CUDA Version: N/A and torch.cuda.is_available returns False | I'm trying to use GPU from inside my docker container. I'm using docker with version 19.03 on Ubuntu 18.04.
Outside the docker container if I run nvidia-smi I get the below output.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 30C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
If I run the same thing inside the container created from the nvidia/cuda docker image, I get the same output as above and everything is running smoothly. torch.cuda.is_available() returns True.
But If I run the same nvidia-smi command inside any other docker container, it gives the following output where you can see that the CUDA Version is coming as N/A. Inside the containers torch.cuda.is_available() also returns False.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: N/A |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 30C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I have installed nvidia-container-toolkit using the following commands.
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu18.04/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install nvidia-container-toolkit
sudo systemctl restart docker
I started my containers using the following commands
sudo docker run --rm --gpus all nvidia/cuda nvidia-smi
sudo docker run -it --rm --gpus all ubuntu nvidia-smi
| docker run --rm --gpus all nvidia/cuda nvidia-smi should NOT return CUDA Version: N/A if everything (aka nvidia driver, CUDA toolkit, and nvidia-container-toolkit) is installed correctly on the host machine.
Given that docker run --rm --gpus all nvidia/cuda nvidia-smi returns correctly, your host setup is fine. I also had the CUDA Version: N/A problem inside of a container, which I had luck in solving:
Please see my answer https://stackoverflow.com/a/64422438/2202107 (obviously you need to adjust and install the matching/correct versions of everything)
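One commonly reported fix (my assumption, not taken from the linked answer): generic images such as ubuntu only get the utility driver capability mounted by default, so nvidia-smi works but the CUDA compute libraries are missing. Requesting the compute capability explicitly often restores CUDA inside the container:
sudo docker run -it --rm --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility ubuntu nvidia-smi
NVIDIA_DRIVER_CAPABILITIES is a documented nvidia-container-toolkit environment variable; the nvidia/cuda images set it in their Dockerfiles, which is why they work out of the box.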
| https://stackoverflow.com/questions/63751883/ |
How can I apply a transformation to a torch tensor | I have a Torch Tensor z and I would like to apply a transformation matrix mat to z and have the output be exactly the same size as z. Here is the code I am running:
def trans(z):
print(z)
mat = transforms.Compose([transforms.ToPILImage(),transforms.RandomRotation(90),transforms.ToTensor()])
z = Variable(mat(z.cpu()).cuda())
z = nnf.interpolate(z, size=(28, 28), mode='linear', align_corners=False)
return z
z = trans(z)
However, I get this error:
RuntimeError Traceback (most recent call last)
<ipython-input-12-e2fc36889ba5> in <module>()
3 inputs,targs=next(iter(tst_loader))
4 recon, mean, var = vae.predict(model, inputs[img_idx])
----> 5 out = vae.generate(model, mean, var)
4 frames
/content/vae.py in generate(model, mean, var)
90 z = trans(z)
91 z = Variable(z.cpu().cuda())
---> 92 out = model.decode(z)
93 return out.data.cpu()
94
/content/vae.py in decode(self, z)
56
57 def decode(self, z):
---> 58 out = self.z_develop(z)
59 out = out.view(z.size(0), 64, self.z_dim, self.z_dim)
60 out = self.decoder(out)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input)
89
90 def forward(self, input: Tensor) -> Tensor:
---> 91 return F.linear(input, self.weight, self.bias)
92
93 def extra_repr(self) -> str:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1674 ret = torch.addmm(bias, input, weight.t())
1675 else:
-> 1676 output = input.matmul(weight.t())
1677 if bias is not None:
1678 output += bias
RuntimeError: mat1 dim 1 must match mat2 dim 0
How can I successfully apply this rotation transform mat and not get any errors doing so?
Thanks,
Vinny
| The problem is that interpolate expects a batch dimension, and it looks like your data does not have one, based on the error message and the successful application of transforms. Since your input is spatial (based on the size=(28, 28)), you can fix that by adding the batch dimension and changing the mode, since linear is not implemented for spatial input:
z = nnf.interpolate(z.unsqueeze(0), size=(28, 28), mode='bilinear', align_corners=False)
If you want z to still have a shape like (C, H, W), then:
z = nnf.interpolate(z.unsqueeze(0), size=(28, 28), mode='bilinear', align_corners=False).squeeze(0)
| https://stackoverflow.com/questions/63756773/ |
Alternatively train multi task learning model in pytorch - weight updating | I want to build a multi task learning model on two related datasets with different inputs and targets. The two tasks are sharing lower-level layers but with different header layers, a minimal example:
class MultiMLP(nn.Module):
"""
A simple dense network for MTL on hard parameter sharing.
"""
def __init__(self):
super().__init__()
self.hidden = nn.Linear(100, 200)
self.out_task0= nn.Linear(200, 1)
        self.out_task1 = nn.Linear(200, 1)
def forward(self, x):
x = self.hidden(x)
x = F.relu(x)
y_task0 = self.out_task0(x)
y_task1 = self.out_task1(x)
return [y_task0, y_task1]
The dataloader is constructed so that batches are alternately generated from the two datasets, i.e. batch 0, 2, 4, ... from task 0 and batch 1, 3, 5, ... from task 1. I wanted to train the network in this way: only update the weights of the hidden layer and out_task0 for batches from task 0, and update only hidden and out_task1 for task 1.
I then alternately switch requires_grad for the corresponding tasks during training, as follows. But I observed that all weights are updated on every iteration.
...
criterion = MSELoss()
for i, data in enumerate(combined_loader):
x, y = data[0], data[1]
optimizer.zero_grad()
# controller is 0 for task0, 1 for task1
# altenate the header layer
controller = i % 2
task0_mode = True if controller == 0 else False
for name, param in model.named_parameters():
if name in ['out_task0.weight', 'out_task0.bias']:
param.requires_grad = task0_mode
elif name in ['out_task1.weight', 'out_task1.bias']:
param.requires_grad = not task0_mode
outputs = model(x)[controller]
loss = criterion(outputs, y)
loss.backward()
optimizer.step()
# Monitor the parameter updates
for name, p in model.named_parameters():
if name in ['out_task0.weight', 'out_task1.weight']:
print(f"Controller: {controller}")
print(name, p)
Did I miss anything in the training procedure? Or will the overall setup not work?
| Disclaimer: the question was answered on the PyTorch Forum; I put things together here in case someone runs into the same problem. The credit goes to ptrblck.
The problem could arise from any variant of stochastic gradient descent (SGD) which utilizes gradients from previous steps, for instance SGD with momentum (SGD-M), Nesterov accelerated gradient (NAG), Adagrad, RMSprop, Adam, and so on. Zeroing the gradient at step t does not affect the terms relying on historical gradients. Thus the weights are still updated with the setting in the posted question.
One can see that from the following code example.
model = nn.Linear(1, 1, bias=False)
#optimizer = torch.optim.SGD(model.parameters(), lr=1., momentum=0.) # same results for w1 and w2
optimizer = torch.optim.SGD(model.parameters(), lr=1., momentum=0.5) # w2 gets updated
#optimizer = torch.optim.Adam(model.parameters(), lr=1.) # w2 gets updated
w0 = model.weight.clone()
out = model(torch.randn(1, 1))
out.mean().backward()
optimizer.step()
w1 = model.weight.clone()
optimizer.zero_grad()
print(model.weight.grad)
optimizer.step()
w2 = model.weight.clone()
print(w1 - w0)
print(w2 - w1)
With the native SGD optimizer, w2 and w1 are the same. But it is not the case for SGD-M and Adam.
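As a sketch of one possible workaround (my suggestion, not part of the quoted forum answer): keep a separate optimizer per task, so that momentum/Adam state built from one task's gradients is never applied to the other task's head.

import torch.optim as optim

# one optimizer per task: the shared trunk plus that task's own head
optimizers = [
    optim.Adam(list(model.hidden.parameters()) + list(head.parameters()), lr=1e-3)
    for head in (model.out_task0, model.out_task1)
]

for i, (x, y) in enumerate(combined_loader):
    controller = i % 2
    opt = optimizers[controller]
    opt.zero_grad()
    loss = criterion(model(x)[controller], y)  # only this task's head is in the graph
    loss.backward()
    opt.step()  # updates the trunk and the active head, with task-local state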
| https://stackoverflow.com/questions/63763164/ |
Parameters of my network on PyTorch are not updated | I want to make an auto calibration system using PyTorch.
I am trying to treat a homogeneous transform matrix as the weights of a neural network.
I wrote code referring to PyTorch tutorials, but my custom parameters are not updated after the backward method is called.
When I print the 'grad' attribute of each parameter, it is None.
My code is below. Is there anything wrong?
Please give me any advice. Thank you.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.params = nn.Parameter(torch.rand(6))
self.rx, self.ry, self.rz = self.params[0], self.params[1], self.params[2]
self.tx, self.ty, self.tz = self.params[3], self.params[4], self.params[5]
def forward(self, x):
tr_mat = torch.tensor([[1, 0, 0, self.params[3]],
[0, 1, 0, self.params[4]],
[0, 0, 1, self.params[5]],
[0, 0, 0, 1]], requires_grad=True)
rz_mat = torch.tensor([[torch.cos(self.params[2]), -torch.sin(self.params[2]), 0, 0],
[torch.sin(self.params[2]), torch.cos(self.params[2]), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]], requires_grad=True)
ry_mat = torch.tensor([[torch.cos(self.params[1]), 0, torch.sin(self.params[1]), 0],
[0, 1, 0, 0],
[-torch.sin(self.params[1]), 0, torch.cos(self.params[1]), 0],
[0, 0, 0, 1]], requires_grad=True)
rx_mat = torch.tensor([[1, 0, 0, 0],
[0, torch.cos(self.params[0]), -torch.sin(self.params[0]), 0],
[0, torch.sin(self.params[0]), torch.cos(self.params[0]), 0],
[0, 0, 0, 1]], requires_grad=True)
tf1 = torch.matmul(tr_mat, rz_mat)
tf2 = torch.matmul(tf1, ry_mat)
tf3 = torch.matmul(tf2, rx_mat)
tr_local = torch.tensor([[1, 0, 0, x[0]],
[0, 1, 0, x[1]],
[0, 0, 1, x[2]],
[0, 0, 0, 1]])
tf_output = torch.matmul(tf3, tr_local)
output = tf_output[:3, 3]
return output
def get_loss(self, output):
pass
model = Net()
input_ex = np.array([[-0.01, 0.05, 0.92],
[-0.06, 0.03, 0.94]])
output_ex = np.array([[-0.3, 0.4, 0.09],
[-0.5, 0.2, 0.07]])
print(list(model.parameters()))
optimizer = optim.Adam(model.parameters(), 0.001)
criterion = nn.MSELoss()
for input_np, label_np in zip(input_ex, output_ex):
input_tensor = torch.from_numpy(input_np).float()
label_tensor = torch.from_numpy(label_np).float()
output = model(input_tensor)
optimizer.zero_grad()
loss = criterion(output, label_tensor)
loss.backward()
optimizer.step()
print(list(model.parameters()))
| What happens
Your problem is related to PyTorch's implicit conversion of torch.tensor to float. Let's say you have this:
tr_mat = torch.tensor(
[
[1, 0, 0, self.params[3]],
[0, 1, 0, self.params[4]],
[0, 0, 1, self.params[5]],
[0, 0, 0, 1],
],
requires_grad=True,
)
torch.tensor can only be constructed from a list holding Python-like values; it cannot have torch.tensor objects inside it. What happens under the hood is that each element of self.params which can be converted to float is converted (in this case all of them can be, e.g. self.params[3], self.params[4], self.params[5]).
When a tensor's value is cast to float, its value is copied into the Python counterpart, hence it is not part of the computational graph anymore; it's a new pure Python value (which obviously cannot be backpropagated through).
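A minimal demonstration of this silent copy (my addition):

import torch

p = torch.nn.Parameter(torch.rand(3))
t = torch.tensor([p[0], 1.0, 0.0], requires_grad=True)  # p[0] is copied as a plain float
t.sum().backward()
print(p.grad)  # None - the graph link to p was lost during construction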
Solution
What you can do is choose elements of your self.params and insert them into eye matrices so the gradient flows. You can see a rewrite of your forward method taking this into account:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.params = nn.Parameter(torch.randn(6))
def forward(self, x):
        sinus = torch.sin(self.params)
        cosinus = torch.cos(self.params)
tr_mat = torch.eye(4)
tr_mat[:-1, -1] = self.params[3:]
rz_mat = torch.eye(4)
rz_mat[0, 0] = cosinus[2]
rz_mat[0, 1] = -sinus[2]
rz_mat[1, 0] = sinus[2]
rz_mat[1, 1] = cosinus[2]
ry_mat = torch.eye(4)
ry_mat[0, 0] = cosinus[1]
ry_mat[0, 2] = sinus[1]
ry_mat[2, 0] = -sinus[1]
ry_mat[2, 2] = cosinus[1]
rx_mat = torch.eye(4)
rx_mat[1, 1] = cosinus[0]
rx_mat[1, 2] = -sinus[0]
rx_mat[2, 1] = sinus[0]
rx_mat[2, 2] = cosinus[0]
tf1 = torch.matmul(tr_mat, rz_mat)
tf2 = torch.matmul(tf1, ry_mat)
tf3 = torch.matmul(tf2, rx_mat)
tr_local = torch.tensor(
[[1, 0, 0, x[0]], [0, 1, 0, x[1]], [0, 0, 1, x[2]], [0, 0, 0, 1]],
)
tf_output = torch.matmul(tf3, tr_local)
output = tf_output[:3, 3]
return output
(you may want to double check this rewrite but the idea holds).
Also notice tr_local can be done "your way" as we don't need any values to keep gradient.
requires_grad
You can see requires_grad wasn't used anywhere in the code. It's because what requires gradient is not the whole eye matrix (we will not optimize 0 and 1), but parameters which are inserted into it. Usually you don't need requires_grad at all in your neural network code because:
input tensors are not optimized (usually, those could be when you are doing adversarial attacks or such)
nn.Parameter requires gradient by default (unless frozen)
layers and other neural network specific stuff requires gradient by default (unless frozen)
values which don't need gradient (input tensors) going through layers which do require it (or parameters, or whatever) can still be backpropagated through
| https://stackoverflow.com/questions/63772506/ |
Compare the efficiency of the data loading methods in deep learning | I need to load a time-series dataset to train a network. The dataset was split into many chunks train_x_0.npy, train_x_1.npy, ..., train_x_40.npy (41 chunks) because of memory issue when I extract these .npy files from the raw data. However, their sizes are too large (around 1000 GB) that I couldn't load everything into the RAM. I have been considering two ways to solve this problem.
Loading the data chunks using np.load() with argument mmap_mode='r+'. The memory-mapped chunks are stored in a Python list self.data. In the __getitem__(self, idx) method of Pytorch Dataset class, I convert idx to chunk_idx and sample_idx, then get the sample by self.data[chunk_idx][sample_idx].
Extract .npy files again from raw data, and save the data sample-by-sample, i.e. one .npy file is now one sample, not a data chunk. In the __getitem__(self, idx) method, I will get one sample by loading it using np.load(sample_path).
Assuming the Pytorch DataLoader will be used to iterate through all samples, then which method will be faster?
If you have another suggestion to extract the raw data or to load the .npy files, please share your opinion.
| Both suggested approaches will be limited by your filesystem's IO, since each sample will be loaded from disk on demand (memory mapping does not speed up the actual loading once a given sample is requested).
Especially when you are planning to train for many epochs, you can achieve a strong speedup by loading your original chunks train_x_0.npy, train_x_1.npy, etc. one (or as many as you can hold in RAM) at a time and training multiple epochs on this chunk before switching to the next.
For this, you would need control over the sample indices requested by the dataloader. For that you could define a sampler which is passed the sample indices available in the respective cached data chunk. In pseudocode, your training loop could look something like this when caching one chunk at a time:
from yourproject import Dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
dataset = Dataset(train_data_path, ...)
for chunk_idx in range(num_chunks):
dataset.cache_chunk(chunk_idx)
chunk_sample_inds = dataset.get_chunk_sample_inds(chunk_idx)
chunk_sampler = SubsetRandomSampler(chunk_sample_inds)
chunk_loader = DataLoader(dataset=dataset, sampler=chunk_sampler)
for chunk_epoch in range(num_chunk_epoch):
for sample_idx, sample in enumerate(chunk_loader):
output = model(sample)
Hereby, your Dataset class needs to take care of
caching (loading to RAM) a specified chunk, given a chunk idx (indicated by the cache_chunk method)
returning a list of valid sample indices for a given chunk idx (indicated by the get_chunk_sample_inds method); a minimal sketch of such a Dataset follows below
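A minimal sketch of such a Dataset (my addition; the chunk file layout is an assumption based on the question):

import numpy as np
import torch
from torch.utils.data import Dataset

class ChunkedDataset(Dataset):
    def __init__(self, chunk_paths):
        self.chunk_paths = chunk_paths
        # read the sample count of each chunk from the .npy headers, without loading data
        self.chunk_sizes = [np.load(p, mmap_mode="r").shape[0] for p in chunk_paths]
        self.offsets = np.cumsum([0] + self.chunk_sizes)
        self.cached_idx, self.cache = None, None

    def __len__(self):
        return int(self.offsets[-1])

    def cache_chunk(self, chunk_idx):
        self.cache = np.load(self.chunk_paths[chunk_idx])  # full load into RAM
        self.cached_idx = chunk_idx

    def get_chunk_sample_inds(self, chunk_idx):
        return list(range(self.offsets[chunk_idx], self.offsets[chunk_idx + 1]))

    def __getitem__(self, idx):
        chunk_idx = int(np.searchsorted(self.offsets, idx, side="right")) - 1
        assert chunk_idx == self.cached_idx, "requested sample is outside the cached chunk"
        return torch.from_numpy(self.cache[idx - self.offsets[chunk_idx]])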
If you use a fast GPU (which is often limited by shuffling data back and forth between RAM and VRAM, even for RAM-cached data), you can expect several orders of magnitude speed up using this approach (as opposed to attempting to fill the VRAM from HDD for each sample).
| https://stackoverflow.com/questions/63776219/ |
Confusion about model.train() | I am a beginner in pytorch. I saw on github that some deep learning models have model.train(), and some don’t, but they can run normally. I want to know if model.train() is necessary? what's the effect?
| Train mode or eval mode only matters when you have modules that behave asymmetrically in training and testing (e.g. BatchNorm, Dropout). I would like to emphasize that it does not affect gradient accumulation at all. Even with asymmetrical modules, one can perfectly well train a model in eval mode. Some people do this in order to save memory when training with a pretrained ImageNet model.
If you don't have any asymmetrical modules, it does not matter at all.
By default, all modules start with training=True.
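A minimal illustration of such an asymmetric module (my addition):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)

drop.train()
print(drop(x))  # some entries zeroed, survivors scaled by 1/(1-p) = 2

drop.eval()
print(drop(x))  # identity: tensor([[1., 1., 1., 1.]])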
| https://stackoverflow.com/questions/63777213/ |
How exactly does torch / np einsum work internally | This is a query regarding the internal working of torch.einsum in the GPU. I know how to use einsum. Does it perform all possible matrix multiplications, and just pick out the relevant ones, or does it perform only the required computation?
For example, consider two tensors a and b, each of shape (N,P), where I wish to find the dot product of each corresponding pair of rows, each row of shape (1,P).
Using einsum, the code is:
torch.einsum('ij,ij->i',a,b)
Without using einsum, another way to obtain the output is :
torch.diag(a @ b.t())
Now, the second code is supposed to perform significantly more computations than the first one (e.g. if N = 2000, it performs 2000 times more computation). However, when I try to time the two operations, they take roughly the same amount of time to complete, which begs the question: does einsum perform all combinations (like the second code) and pick out the relevant values?
Sample Code to test:
import time
import torch
for i in range(100):
a = torch.rand(50000, 256).cuda()
b = torch.rand(50000, 256).cuda()
t1 = time.time()
val = torch.diag(a @ b.t())
t2 = time.time()
val2 = torch.einsum('ij,ij->i',a,b)
t3 = time.time()
print(t2-t1,t3-t2, torch.allclose(val,val2))
| It probably has to do with the fact that the GPU can parallelize the computation of a @ b.t(). This means that the GPU doesn't actually have to wait for each row-column multiplication to finish before computing the next multiplication.
If you check on CPU then you see that torch.diag(a @ b.t()) is significantly slower than torch.einsum('ij,ij->i',a,b) for large a and b.
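A quick way to see this on CPU (my addition):

import timeit
import torch

a = torch.rand(2000, 256)
b = torch.rand(2000, 256)

print(timeit.timeit(lambda: torch.diag(a @ b.t()), number=10))           # full N x N matmul
print(timeit.timeit(lambda: torch.einsum('ij,ij->i', a, b), number=10))  # only N dot products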
| https://stackoverflow.com/questions/63778509/ |
Adding Xavier initialization in PyTorch | I want to add Xavier initialization to the first layer of my neural network, but I am getting an error in this class:
class DemoNN(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2,2),
torch.nn.init.xavier_uniform((nn.Linear(2,2)).weights),
nn.Sigmoid(),
nn.Linear(2,2),
nn.Sigmoid(),
nn.Linear(2,4),
nn.Softmax()
)
def forward(self, X):
self.net(X)
| You seem to try and initialize the second linear layer within the constructor of an nn.Sequential object.
What you need to do is to first construct self.net and only then initialize the second linear layer as you wish.
Here is how you should do it:
import torch
import torch.nn as nn
class DemoNN(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2,2),
nn.Linear(2,2),
nn.Sigmoid(),
nn.Linear(2,2),
nn.Sigmoid(),
nn.Linear(2,4),
nn.Softmax()
)
torch.nn.init.xavier_uniform_(self.net[1].weight)
def forward(self, X):
        return self.net(X)
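If you want the same initialization on every linear layer rather than just the second one, a short sketch (my addition, reusing the class above):

model = DemoNN()
for layer in model.net:
    if isinstance(layer, nn.Linear):
        nn.init.xavier_uniform_(layer.weight)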
| https://stackoverflow.com/questions/63779798/ |
AssertionError: Torch not compiled with CUDA enabled (problem in torch vision) | so I am trying to run my object detection program and I keep getting the following error message:
AssertionError: Torch not compiled with CUDA enabled.
I don't understand why this happens, as I have a 2017 MacBook Pro with an AMD GPU, so I have no CUDA-enabled GPU.
I added this statement in my code to make sure the device is set to 'cpu', however, it looks as if the program keeps trying to run it through a GPU even though it does not exist.
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
This is the place where the error happens (4th line):
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
print("Hey")
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
print("Hey")
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
It would be really great, if anyone could help me with this issue!
Thanks everyone in advance!
PS: I already tried updating the Pytorch version, but still same problem.
Here is my full code:
import os
import pandas as pd
import torch
import torch.utils.data
import torchvision
from PIL import Image
import utils
from engine import train_one_epoch, evaluate
import transforms as T
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
def parse_one_annot(path_to_data_file, filename):
data = pd.read_csv(path_to_data_file)
boxes_array = data[data["filename"] == filename][["xmin", "ymin", "xmax", "ymax"]].values
return boxes_array
class RaccoonDataset(torch.utils.data.Dataset):
def __init__(self, root, data_file, transforms=None):
self.root = root
self.transforms = transforms
self.imgs = sorted(os.listdir(os.path.join(root, "images")))
self.path_to_data_file = data_file
def __getitem__(self, idx):
# load images and bounding boxes
img_path = os.path.join(self.root, "images", self.imgs[idx])
img = Image.open(img_path).convert("RGB")
box_list = parse_one_annot(self.path_to_data_file,
self.imgs[idx])
boxes = torch.as_tensor(box_list, dtype=torch.float32)
num_objs = len(box_list)
# there is only one class
labels = torch.ones((num_objs,), dtype=torch.int64)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
dataset = RaccoonDataset(root="./raccoon_dataset", data_file="./raccoon_dataset/data/raccoon_labels.csv")
dataset.__getitem__(0)
def get_model(num_classes):
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
return model
def get_transform(train):
transforms = []
# converts the image, a PIL image, into a PyTorch Tensor
transforms.append(T.ToTensor())
if train:
# during training, randomly flip the training images
# and ground-truth for data augmentation
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
def main():
dataset = RaccoonDataset(root="./raccoon_dataset",
data_file="raccoon_dataset/data/raccoon_labels.csv",
transforms=get_transform(train=True))
dataset_test = RaccoonDataset(root="./raccoon_dataset",
data_file="raccoon_dataset/data/raccoon_labels.csv",
transforms=get_transform(train=False))
torch.manual_seed(1)
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-40])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-40:])
# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1, shuffle=False, num_workers=4,
collate_fn=utils.collate_fn)
print("We have: {} examples, {} are training and {} testing".format(len(indices), len(dataset), len(dataset_test)))
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
num_classes = 2
model = get_model(num_classes)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler which decreases the learning rate by
# 10x every 3 epochs
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
# let's train it for 10 epochs
num_epochs = 10
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
os.mkdir("pytorch object detection/raccoon/")
torch.save(model.state_dict(), "pytorch object detection/raccoon/model")
if __name__ == '__main__':
main()
| Turns out I had to reinstall torch and torchvision to make everything work.
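For reference, a sketch of the reinstall (the exact commands depend on your environment; on macOS the default pip wheels are CPU-only, so no CUDA build is involved):

pip uninstall torch torchvision
pip install torch torchvision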
| https://stackoverflow.com/questions/63781490/ |
Can I speed up inference in PyTorch using autocast (automatic mixed precision)? | The docs (see also this) for autocast in PyTorch only discuss training. Does it speed things up if I also use autocast for inference?
| Yes it could (may not in some cases though).
You are processing data with lower precision (e.g. float16 vs float32).
Your program has to read and process less data in this case.
This might help with cache locality and hardware-specific acceleration (e.g. tensor cores when using CUDA).
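A minimal inference sketch (my addition, using the PyTorch 1.6+ API):

import torch

model = torch.nn.Linear(128, 10).cuda().eval()
x = torch.randn(32, 128, device="cuda")

with torch.no_grad(), torch.cuda.amp.autocast():
    out = model(x)  # eligible ops (e.g. matmuls) run in float16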
| https://stackoverflow.com/questions/63782688/ |
PyTorch torch.no_grad() versus requires_grad=False | I'm following a PyTorch tutorial which uses the BERT NLP model (feature extractor) from the Huggingface Transformers library. There are two pieces of interrelated code for gradient updates that I don't understand.
(1) torch.no_grad()
The tutorial has a class where the forward() function creates a torch.no_grad() block around a call to the BERT feature extractor, like this:
bert = BertModel.from_pretrained('bert-base-uncased')
class BERTGRUSentiment(nn.Module):
def __init__(self, bert):
super().__init__()
self.bert = bert
def forward(self, text):
with torch.no_grad():
embedded = self.bert(text)[0]
(2) param.requires_grad = False
There is another portion in the same tutorial where the BERT parameters are frozen.
for name, param in model.named_parameters():
if name.startswith('bert'):
param.requires_grad = False
When would I need (1) and/or (2)?
If I want to train with a frozen BERT, would I need to enable both?
If I want to train to let BERT be updated, would I need to disable both?
Additionally, I ran all four combinations and found:
with torch.no_grad requires_grad = False Parameters Ran
------------------ --------------------- ---------- ---
a. Yes Yes 3M Successfully
b. Yes No 112M Successfully
c. No Yes 3M Successfully
d. No No 112M CUDA out of memory
Can someone please explain what's going on? Why am I getting CUDA out of memory for (d) but not (b)? Both have 112M learnable parameters.
| This is an older discussion, which has changed slightly over the years (mainly due to the purpose of with torch.no_grad() as a pattern). An excellent answer that kind of answers your question as well can be found on Stack Overflow already.
However, since the original question is vastly different, I'll refrain from marking as duplicate, especially due to the second part about the memory.
An initial explanation of no_grad is given here:
with torch.no_grad() is a context manager and is used to prevent calculating gradients [...].
requires_grad on the other hand is used
to freeze part of your model and train the rest [...].
Source again the SO post.
Essentially, with requires_grad you are just disabling parts of a network, whereas no_grad will not store any gradients at all, since you're likely using it for inference and not training.
To analyze the behavior of your combinations of parameters, let us investigate what is happening:
a) and b) do not store any gradients at all, which means that you have vastly more memory available to you, no matter the number of parameters, since you're not retaining them for a potential backward pass.
c) has to store the forward pass for later backpropagation, however, only a limited number of parameter (3 million) are stored, which makes this still manageable.
d), however, needs to store the forward pass for all 112 million parameters, which causes you to run out of memory.
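A small demonstration of the two mechanisms (my addition):

import torch

w = torch.ones(1, requires_grad=True)
x = torch.ones(1)

# requires_grad=False freezes one specific tensor
w.requires_grad_(False)
y = (w * x).sum()
print(y.requires_grad)  # False - nothing in this graph needs gradients

# no_grad() disables graph recording for everything inside the block
w.requires_grad_(True)
with torch.no_grad():
    y = (w * x).sum()
print(y.requires_grad)  # False - no graph was stored, which saves memory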
| https://stackoverflow.com/questions/63785319/ |
PyTorch C++ extension: How to index tensor and update it? | I'm creating a PyTorch C++ extension and after much research I can't figure out how to index a tensor and update its values. I found out how to iterate over a tensor's entries using the data_ptr() method, but that's not applicable to my use case.
Given is a matrix M, a list of lists (blocks) of index pairs P and a function f: dtype(M)^2 -> dtype(M)^2 that takes two values and spits out two new values.
I'm trying to implement the following pseudo code:
for each block B in P:
for each row R in M:
for each index-pair (i,j) in B:
M[R,i], M[R,j] = f(M[R,i], M[R,j])
After all, this code is going to run on the GPU using CUDA, but since I don't have any experience with that, I wanted to first write a pure C++ program and then convert it.
Can anyone suggest how to do this or how to convert the algorithm to do something equivalent?
| What I wanted to do can be done using the tensor.accessor<scalar_dtype, num_dimensions>() method. If executing on the GPU, instead use scalars.packed_accessor64<scalar_dtype, num_dimensions, torch::RestrictPtrTraits>() or scalars.packed_accessor32<scalar_dtype, num_dimensions, torch::RestrictPtrTraits>() (depending on the size of your tensor).
auto matrix = torch::rand({10, 8});
auto num_rows = matrix.size(0);
auto a = matrix.accessor<float, 2>();
for (auto i = 0; i < num_rows; ++i) {
  auto x = a[i][some_index];       // read element (i, some_index)
  auto new_x = some_function(x);   // compute the updated value
  a[i][some_index] = new_x;        // write it back in place
}
| https://stackoverflow.com/questions/63786278/ |
The results of tf.pow and torch.pow are different | I am trying to convert PyTorch scripts to Tensorflow.
This is the situation.
>>> a=np.random.rand(500,500)
>>> b=tf.pow(tf.constant(a, tf.float32),2)
>>> c=torch.pow(torch.tensor(a, dtype=torch.float32),2)
>>> np.sum(b.numpy() - c.numpy())
3.2455164e-06
I guess this difference is caused by the floating-point representations on the two platforms (I am not sure).
Questions
Do I need to make them exactly the same with each other?
If so how do solve this problem?
| As the comments already say, this is a common issue.
float32 only provides about 6–7 significant digits of precision, which is what you see here: the differences are at the level of float32 rounding error. So, as the comments already mentioned, this is usual behaviour; if you need more precision you might want to use float64.
There was already a GitHub issue regarding this "problem", which was explained by albanD, from whom I took this information.
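Repeating the experiment in double precision shows the gap essentially disappear (my addition):

import numpy as np
import tensorflow as tf
import torch

a = np.random.rand(500, 500)
b = tf.pow(tf.constant(a, tf.float64), 2)
c = torch.pow(torch.tensor(a, dtype=torch.float64), 2)
print(np.abs(b.numpy() - c.numpy()).max())  # ~0.0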
| https://stackoverflow.com/questions/63786689/ |
In the PyTorch C++ extension, how can I access a single element in a tensor and convert it to a standard c++ datatype? | I am writing a c++ extension for pytorch, in which I need to access the elements of a tensor by index, and I also need to convert the element to a standard c++ type. Here is a short example. Suppose I have a 2d tensor a and I need to access a[i][j] and convert it to float.
#include <torch/extension.h>
float get(torch::Tensor a, int i, int j) {
return a[i][j];
}
The above is put into a file called tensortest.cpp. In another file setup.py I write
from setuptools import setup, Extension
from torch.utils import cpp_extension
setup(name='tensortest',
ext_modules=[cpp_extension.CppExtension('tensortest_cpp', ['tensortest.cpp'])],
cmdclass={'build_ext': cpp_extension.BuildExtension})
When I run python setup.py install the compiler reports the following error
running install
running bdist_egg
running egg_info
creating tensortest.egg-info
writing tensortest.egg-info/PKG-INFO
writing dependency_links to tensortest.egg-info/dependency_links.txt
writing top-level names to tensortest.egg-info/top_level.txt
writing manifest file 'tensortest.egg-info/SOURCES.txt'
/home/trisst/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py:335: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'tensortest.egg-info/SOURCES.txt'
writing manifest file 'tensortest.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'tensortest_cpp' extension
creating build
creating build/temp.linux-x86_64-3.8
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/user/.local/lib/python3.8/site-packages/torch/include -I/home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/.local/lib/python3.8/site-packages/torch/include/TH -I/home/user/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c tensortest.cpp -o build/temp.linux-x86_64-3.8/tensortest.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=tensortest_cpp -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /home/user/.local/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149,
from /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/user/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
from /home/user/.local/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
from tensortest.cpp:1:
/home/user/.local/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
84 | #pragma omp parallel for if ((end - begin) >= grain_size)
|
tensortest.cpp: In function ‘float get(at::Tensor, int, int)’:
tensortest.cpp:4:15: error: cannot convert ‘at::Tensor’ to ‘float’ in return
4 | return a[i][j];
| ^
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
What can I do?
| Edited
#include <torch/extension.h>
float get(torch::Tensor a, int i, int j)
{
return a[i][j].item<float>();
}
| https://stackoverflow.com/questions/63786750/ |
Why the CUDA memory is not released with torch.cuda.empty_cache() | On my Windows 10 machine, if I directly create a GPU tensor, I can successfully release its memory.
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
del a
torch.cuda.empty_cache()
But if I create a normal tensor and convert it to GPU tensor, I can no longer release its memory.
import torch
a = torch.zeros(300000000, dtype=torch.int8)
a.cuda()
del a
torch.cuda.empty_cache()
Why is this happening?
| At least in Ubuntu, your script does not release memory when it is run in the interactive shell, and works as expected when run as a script. Note that a.cuda() is not in-place: it returns a new GPU tensor, and in the interactive shell that unassigned return value is kept alive by the shell (bound to _), so the memory cannot be freed. The following will work in both the interactive shell and as a script.
import torch
a = torch.zeros(300000000, dtype=torch.int8)
a = a.cuda()
del a
torch.cuda.empty_cache()
| https://stackoverflow.com/questions/63787404/ |
PyTorch does not save the loaded pre-trained model weights and the parts using them in the final model | I am currently working with a model pre-trained on CIFAR-10, applied to my own data. I have removed the model's final fc layer and appended my own fc layer and softmax. There are seven networks, each identical to the pre-trained part, combined via the appended fc layer. The following is the pre-trained network code:
class Bottleneck(nn.Module):
def __init__(self, inplanes, expansion=4, growthRate=12, dropRate=0):
super(Bottleneck, self).__init__()
planes = expansion * growthRate
self.bn1 = nn.BatchNorm2d(inplanes)
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, growthRate, kernel_size=3,
padding=1, bias=False)
self.relu = nn.ReLU(inplace=True)
self.dropRate = dropRate
def forward(self, x):
out = self.bn1(x)
out = self.relu(out)
out = self.conv1(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv2(out)
if self.dropRate > 0:
out = F.dropout(out, p=self.dropRate, training=self.training)
out = torch.cat((x, out), 1)
return out
class BasicBlock(nn.Module):
def __init__(self, inplanes, expansion=1, growthRate=12, dropRate=0):
super(BasicBlock, self).__init__()
planes = expansion * growthRate
self.bn1 = nn.BatchNorm2d(inplanes)
self.conv1 = nn.Conv2d(inplanes, growthRate, kernel_size=3,
padding=1, bias=False)
self.relu = nn.ReLU(inplace=True)
self.dropRate = dropRate
def forward(self, x):
out = self.bn1(x)
out = self.relu(out)
out = self.conv1(out)
if self.dropRate > 0:
out = F.dropout(out, p=self.dropRate, training=self.training)
out = torch.cat((x, out), 1)
return out
class Transition(nn.Module):
def __init__(self, inplanes, outplanes):
super(Transition, self).__init__()
self.bn1 = nn.BatchNorm2d(inplanes)
self.conv1 = nn.Conv2d(inplanes, outplanes, kernel_size=1,
bias=False)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
out = self.bn1(x)
out = self.relu(out)
out = self.conv1(out)
out = F.avg_pool2d(out, 2)
return out
class DenseNet(nn.Module):
def __init__(self, depth = 22, block = Bottleneck,
dropRate = 0, num_classes = 10, growthRate = 12, compressionRate = 2):
super(DenseNet, self).__init__()
assert (depth - 4) % 3 == 0, 'depth should be 3n+4'
n = (depth - 4) / 3 if block == BasicBlock else (depth - 4) // 6
self.growthRate = growthRate
self.dropRate = dropRate
# self.inplanes is a global variable used across multiple
# helper functions
self.inplanes = growthRate * 2
self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size = 3, padding = 1,
bias = False)
self.dense1 = self._make_denseblock(block, n)
self.trans1 = self._make_transition(compressionRate)
self.dense2 = self._make_denseblock(block, n)
self.trans2 = self._make_transition(compressionRate)
self.dense3 = self._make_denseblock(block, n)
self.bn = nn.BatchNorm2d(self.inplanes)
self.relu = nn.ReLU(inplace=True)
self.avgpool = nn.AvgPool2d(8)
#self.fc = nn.Linear(self.inplanes, num_classes)
# Weight initialization
# for m in self.modules():
# if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
# elif isinstance(m, nn.BatchNorm2d):
# m.weight.data.fill_(1)
# m.bias.data.zero_()
def _make_denseblock(self, block, blocks):
layers = []
for i in range(blocks):
# Currently we fix the expansion ratio as the default value
layers.append(block(self.inplanes, growthRate = self.growthRate, dropRate=self.dropRate))
self.inplanes += self.growthRate
return nn.Sequential(*layers)
def _make_transition(self, compressionRate):
inplanes = self.inplanes
outplanes = int(math.floor(self.inplanes // compressionRate))
self.inplanes = outplanes
return Transition(inplanes, outplanes)
def forward(self, x):
x = self.conv1(x)
x = self.trans1(self.dense1(x))
x = self.trans2(self.dense2(x))
x = self.dense3(x)
x = self.bn(x)
x = self.relu(x)
x = self.avgpool(x)
#x = x.view(x.size(0), -1)
#x = self.fc(x)
return x
def getParams(self, paramName):
if paramName == 'inplanes':
return self.inplanes
elif paramName == 'growthRate':
return self.growthRate
elif paramName == 'dropRate':
return self.dropRate
def densenet(**kwargs):
"""
Constructs a DenseNet model.
"""
return DenseNet(**kwargs)
and the next following is my code:
class Network(nn.Module):
def __init__(self,pretrained_dict, num_classes = 6, num_channels = 7,
expansion = 4, depth = 100, growthRate = 12, dropRate = 0):
super(Network, self).__init__()
self.num_channels = num_channels
# creating 7 channels networks
self.channels_dnsnets = []
for ch in range(self.num_channels):
# print(ch)
d = densenet(depth = depth)
d_dict = d.state_dict()
# 1. filter out unnecessary keys
pretrained_dict2 = {k[7:]: v for k, v in pretrained_dict.items() if k[7:] in d_dict}
# print('d_dict_keys :')
# print(d_dict.keys())
# print('*'*50)
# print('pretrained_dict2.keys:')
# print(pretrained_dict2.keys())
# print('*'*50)
# 2. overwrite entries in the existing state dict
d_dict.update(pretrained_dict2)
# 3. load the new state dict
d.load_state_dict(pretrained_dict2)
# freeze the layers of densenet
for param in d.parameters():
param.requires_grad = False
self.channels_dnsnets.append(d)
self.inplanes = self.channels_dnsnets[0].getParams(paramName = 'inplanes')
self.fc = nn.Linear(self.inplanes * self.num_channels, num_classes)
self.softmax = nn.Softmax(dim = 1)
def forward(self, x):
batch_size, channels, ht, wd, in_channels = x.shape
x = np.reshape(x,(batch_size,channels,in_channels,ht,wd))
out = []
for num in range(self.num_channels):
            temp_out = self.channels_dnsnets[num](x[:,num,:])
temp_out = temp_out.view(temp_out.size(0),-1)
# print(temp_out.shape)
# print('*' * 50)
out.append(temp_out)
out = torch.stack(out,dim = 1)
# print(out.shape)
out = out.view(out.size(0),-1)
out = self.fc(out)
out = self.softmax(out)
return out
I am setting the optimizer as :
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr = lr,
betas = (0.9, 0.999), eps = 1e-08, weight_decay = wd, amsgrad = False)
However, whenever I save the model, the list of densenets and their weights is not saved; only the fc layer and softmax layer weights are saved. Is there anything problematic with the code? I am new to PyTorch.
| The problem is that self.channels_dnsnets is just a list and will not be part of the state_dict. Only self.fc and self.softmax will be registered into the Module. The simplest change would be to define it like this:
self.channels_dnsnets = nn.ModuleList()
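nn.ModuleList supports append just like a plain list, so the rest of the constructor can stay unchanged. A quick sanity check (my addition):

model = Network(pretrained_dict)
assert any(k.startswith("channels_dnsnets") for k in model.state_dict())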
| https://stackoverflow.com/questions/63793021/ |
Pytorch Softmax giving nans and negative values as output | I am using softmax at the end of my model.
However, after some training, softmax is giving negative probabilities. In some situations I have encountered NaNs as probabilities as well.
One solution I found while searching is to use a normalized softmax; however, I cannot find any PyTorch implementation of this.
Can someone please let me know if there is a normalized softmax available, or how to achieve this so that forward and backward propagation are smooth?
Please note that I am already using torch.nn.utils.clip_grad_norm_(model.parameters(), 40) to avoid exploding gradients
I am using pytorch 1.6.0
| Softmax will always return positive results that sum to 1 along the chosen dimension:
import torch
import torch.nn as nn

m = nn.Softmax(dim=1)
input = torch.randn(2, 3)
print(input)
output = m(input)
output
Out:
tensor([[ 0.0983, 0.4150, -1.1342],
[ 0.3411, 0.5553, 0.0182]])
tensor([[0.3754, 0.5152, 0.1094],
[0.3375, 0.4181, 0.2444]])
You are tracking the rows.
Note how for
0.0983, 0.4150, -1.1342
You will get
0.3754, 0.5152, 0.1094
Saying that 0.4150 is the biggest value.
The hard max (as we know this is max()) will just return the maximum value.
So negative results from softmax are simply not possible; if you see them, you have likely hit some implementation failure elsewhere.
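As for the NaNs: PyTorch's softmax is numerically stabilized and cannot produce NaNs from finite inputs, so they usually indicate non-finite logits coming from earlier layers. A cheap check before the softmax (my addition; logits is whatever tensor you feed it):

assert torch.isfinite(logits).all(), "non-finite logits before softmax"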
| https://stackoverflow.com/questions/63810637/ |
How to get columns from 2D tensor list in Pytorch | I have one 2D list that contains tensors like:
[
[tensor([-0.0705, 1.2019]), tensor([[0]]), tensor([-1.3865], dtype=torch.float64), tensor([-0.0744, 1.1880]), tensor([False])],
[tensor([-0.0187, 1.3574]), tensor([[2]]), tensor([0.3373], dtype=torch.float64), tensor([-0.0221, 1.3473]), tensor([False])],
[....] ]
The outer list contains 64 little lists. Each little list contains 5 different tensor elements.
And I want to get the first element of each inner list, like tensor([-0.0705, 1.2019]) and tensor([-0.0187, 1.3574]), and create a 64x2 tensor to feed my neural net.
How can I do this in the fastest way?
Thanks
| Use a list comprehension:
[item[0] for item in your_list]
Example:
import torch
from torch import tensor

li = [[tensor([-0.0705, 1.2019]), tensor([[0]]), tensor([-1.3865], dtype=torch.float64), tensor([-0.0744, 1.1880]), tensor([False])],
[tensor([-0.0187, 1.3574]), tensor([[2]]), tensor([0.3373], dtype=torch.float64), tensor([-0.0221, 1.3473]), tensor([False])]]
[item[0] for item in li]
[tensor([-0.0705, 1.2019]), tensor([-0.0187, 1.3574])]
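To get the 64x2 tensor for the network, stack the results (my addition):

batch = torch.stack([item[0] for item in your_list])  # shape: (64, 2)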
| https://stackoverflow.com/questions/63810741/ |
Getting Unknown type annotation error when JIT saving pytorch model | When JIT saving “model.pt” of a complex pytorch model with many custom classes, I am encountering the error that pytorch doesn’t know the type annotation of one of those custom classes. In other words, the following code (drastically summarized from original) fails on the seventh line:
import torch
from gan import Generator
from gan.blocks import SpadeBlock
generator = Generator()
generator.load_weights("path/to/weigts")
jitted = torch.jit.script(generator)
torch.jit.save(jitted, "model.pt")
Error:
Traceback (most recent call last):
File "pth2onnx.py", line 72, in <module>
to_torch_jit(generator)
File "pth2onnx.py", line 24, in to_torch_jit
jitted = torch.jit.script(generator)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/__init__.py", line 1516, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 310, in create_script_module
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 138, in infer_concrete_type_builder
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 138, in infer_concrete_type_builder
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 126, in infer_concrete_type_builder
attr_type = infer_type(name, item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 99, in infer_type
attr_type = torch.jit.annotations.ann_to_type(class_annotations[name], _jit_internal.fake_range())
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/annotations.py", line 303, in ann_to_type
raise ValueError("Unknown type annotation: '{}'".format(ann))
ValueError: Unknown type annotation: '<class 'gan.blocks.SpadeBlock'>'
The type it complains about is indeed a class we ourselves have programmed and used in the loaded Generator. I would appreciate pointers on what could cause this or how to investigate this!
I tried the following:
explicitly importing SpadeBlock in the script that calls torch.jit.script
ensured it inherits from nn.Module (as does Generator)
ensured the gan package is installed, using pip install --user -e
Any ideas? Thanks in advance!
| The issue turned out to be that I was using class variable names which got mangled. Example:
class Generator(nn.Module):
__main: nn.Module
The two leading underscores are the cause. Changing them to a single underscore or no underscore resolves the issue.
class Generator(nn.Module):
main: nn.Module
| https://stackoverflow.com/questions/63813893/ |
PyTorch: Dropout (?) causes different model convergence for training+validation vs. training-only | We are facing a very strange issue. We tested the exact same model in two different “execution” settings. In the first case, given a certain number of epochs, we train using mini-batches for one epoch, and thereafter we test on the validation set following the same criteria. Then we go on to the next epoch. Clearly, before each training epoch we use model.train(), and before validation we turn on model.eval().
Then we take the exact same model (same init, same dataset, same epochs, etc.) and we just train it without validation after each epoch.
Looking only at performance on the training set, we observed that, even though we fixed all seeds, the two training procedures evolve differently and produce quite different metric results (losses, accuracy, and so on). Specifically, the training-only procedure performs worse.
We also observe the following things:
It is not a reproducibility issue, because multiple executions of the same procedure produce exactly the same results (and this is intended);
Removing the dropout, it appears that the problem vanishes;
The Batchnorm1d layer, which also behaves differently between training and evaluation, seems to work properly;
The issue still happens if we move training from TPUs to CPUs.
We are working with and have tried PyTorch 1.6, PyTorch nightly, and XLA 1.6.
We lost a full day trying to tackle this issue (and no, we cannot avoid using dropout). Does anyone have any idea how to solve this?
Thank you very much!
P.S. Here is the code employed for training (on CPU).
def sigmoid(x):
return 1 / (1 + torch.exp(-x))
def _run(model, EPOCHS, training_data_in, validation_data_in=None):
def train_fn(train_dataloader, model, optimizer, criterion):
running_loss = 0.
running_accuracy = 0.
running_tp = 0.
running_tn = 0.
running_fp = 0.
running_fn = 0.
model.train()
for batch_idx, (ecg, spo2, labels) in enumerate(train_dataloader, 1):
optimizer.zero_grad()
outputs = model(ecg)
loss = criterion(outputs, labels)
loss.backward() # calculate the gradients
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step() # update the network weights
running_loss += loss.item()
predicted = torch.round(sigmoid(outputs.data)) # here determining the sigmoid, not included in the model
running_accuracy += (predicted == labels).sum().item() / labels.size(0)
fp = ((predicted - labels) == 1.).sum().item()
fn = ((predicted - labels) == -1.).sum().item()
tp = ((predicted + labels) == 2.).sum().item()
tn = ((predicted + labels) == 0.).sum().item()
running_tp += tp
running_fp += fp
running_tn += tn
running_fn += fn
retval = {'loss':running_loss / batch_idx,
'accuracy':running_accuracy / batch_idx,
'tp':running_tp,
'tn':running_tn,
'fp':running_fp,
'fn':running_fn
}
return retval
def valid_fn(valid_dataloader, model, criterion):
running_loss = 0.
running_accuracy = 0.
running_tp = 0.
running_tn = 0.
running_fp = 0.
running_fn = 0.
model.eval()
for batch_idx, (ecg, spo2, labels) in enumerate(valid_dataloader, 1):
outputs = model(ecg)
loss = criterion(outputs, labels)
running_loss += loss.item()
predicted = torch.round(sigmoid(outputs.data)) # here determining the sigmoid, not included in the model
running_accuracy += (predicted == labels).sum().item() / labels.size(0)
fp = ((predicted - labels) == 1.).sum().item()
fn = ((predicted - labels) == -1.).sum().item()
tp = ((predicted + labels) == 2.).sum().item()
tn = ((predicted + labels) == 0.).sum().item()
running_tp += tp
running_fp += fp
running_tn += tn
running_fn += fn
retval = {'loss':running_loss / batch_idx,
'accuracy':running_accuracy / batch_idx,
'tp':running_tp,
'tn':running_tn,
'fp':running_fp,
'fn':running_fn
}
return retval
# Defining data loaders
train_dataloader = torch.utils.data.DataLoader(training_data_in, batch_size=BATCH_SIZE, shuffle=True, num_workers=1)
if validation_data_in != None:
validation_dataloader = torch.utils.data.DataLoader(validation_data_in, batch_size=BATCH_SIZE, shuffle=False, num_workers=1)
# Defining the loss function
criterion = nn.BCEWithLogitsLoss()
# Defining the optimizer
import torch.optim as optim
optimizer = optim.AdamW(model.parameters(), lr=3e-4, amsgrad=False, eps=1e-07)
# Training code
metrics_history = {"loss":[], "accuracy":[], "precision":[], "recall":[], "f1":[], "specificity":[], "accuracy_bis":[], "tp":[], "tn":[], "fp":[], "fn":[],
"val_loss":[], "val_accuracy":[], "val_precision":[], "val_recall":[], "val_f1":[], "val_specificity":[], "val_accuracy_bis":[], "val_tp":[], "val_tn":[], "val_fp":[], "val_fn":[],}
train_begin = time.time()
for epoch in range(EPOCHS):
start = time.time()
print("EPOCH:", epoch+1)
train_metrics = train_fn(train_dataloader=train_dataloader,
model=model,
optimizer=optimizer,
criterion=criterion)
metrics_history["loss"].append(train_metrics["loss"])
metrics_history["accuracy"].append(train_metrics["accuracy"])
metrics_history["tp"].append(train_metrics["tp"])
metrics_history["tn"].append(train_metrics["tn"])
metrics_history["fp"].append(train_metrics["fp"])
metrics_history["fn"].append(train_metrics["fn"])
precision = train_metrics["tp"] / (train_metrics["tp"] + train_metrics["fp"]) if train_metrics["tp"] > 0 else 0
recall = train_metrics["tp"] / (train_metrics["tp"] + train_metrics["fn"]) if train_metrics["tp"] > 0 else 0
specificity = train_metrics["tn"] / (train_metrics["tn"] + train_metrics["fp"]) if train_metrics["tn"] > 0 else 0
f1 = 2*precision*recall / (precision + recall) if precision*recall > 0 else 0
metrics_history["precision"].append(precision)
metrics_history["recall"].append(recall)
metrics_history["f1"].append(f1)
metrics_history["specificity"].append(specificity)
if validation_data_in != None:
# Calculate the metrics on the validation data, in the same way as done for training
with torch.no_grad(): # don't keep track of the info necessary to calculate the gradients
val_metrics = valid_fn(valid_dataloader=validation_dataloader,
model=model,
criterion=criterion)
metrics_history["val_loss"].append(val_metrics["loss"])
metrics_history["val_accuracy"].append(val_metrics["accuracy"])
metrics_history["val_tp"].append(val_metrics["tp"])
metrics_history["val_tn"].append(val_metrics["tn"])
metrics_history["val_fp"].append(val_metrics["fp"])
metrics_history["val_fn"].append(val_metrics["fn"])
val_precision = val_metrics["tp"] / (val_metrics["tp"] + val_metrics["fp"]) if val_metrics["tp"] > 0 else 0
val_recall = val_metrics["tp"] / (val_metrics["tp"] + val_metrics["fn"]) if val_metrics["tp"] > 0 else 0
val_specificity = val_metrics["tn"] / (val_metrics["tn"] + val_metrics["fp"]) if val_metrics["tn"] > 0 else 0
val_f1 = 2*val_precision*val_recall / (val_precision + val_recall) if val_precision*val_recall > 0 else 0
metrics_history["val_precision"].append(val_precision)
metrics_history["val_recall"].append(val_recall)
metrics_history["val_f1"].append(val_f1)
metrics_history["val_specificity"].append(val_specificity)
print(" > Training/validation loss:", round(train_metrics['loss'], 4), round(val_metrics['loss'], 4))
print(" > Training/validation accuracy:", round(train_metrics['accuracy'], 4), round(val_metrics['accuracy'], 4))
print(" > Training/validation precision:", round(precision, 4), round(val_precision, 4))
print(" > Training/validation recall:", round(recall, 4), round(val_recall, 4))
print(" > Training/validation f1:", round(f1, 4), round(val_f1, 4))
print(" > Training/validation specificity:", round(specificity, 4), round(val_specificity, 4))
else:
print(" > Training loss:", round(train_metrics['loss'], 4))
print(" > Training accuracy:", round(train_metrics['accuracy'], 4))
print(" > Training precision:", round(precision, 4))
print(" > Training recall:", round(recall, 4))
print(" > Training f1:", round(f1, 4))
print(" > Training specificity:", round(specificity, 4))
print("Completed in:", round(time.time() - start, 1), "seconds \n")
print("Training completed in:", round((time.time()- train_begin)/60, 1), "minutes")
# Save the model weights
torch.save(model.state_dict(), './nnet_model.pt')
# Save the metrics history
torch.save(metrics_history, 'training_history')
And here is the function that initializes the model and sets the seeds, called before each execution of the "_run" code:
def reinit_model():
torch.manual_seed(42)
np.random.seed(42)
random.seed(42)
net = Net() # the model
return net
| Ok, I found the issue.
The problem comes from the fact that, apparently, running the evaluation changes some random seed states, and this affects the training phase.
The solution is thus as follows:
at the beginning of function "_run()", set all seeds states to the desired value, e.g., 42. Then, save those seeds to disk.
at the beginning of function "train_fn()", read the seeds states from disk, and set them
at the end of function "train_fn()", save the seeds states to disk
For instance, running on TPU with XLA, the following instructions have to be used:
at the beginning of function "_run()": xm.set_rng_state(42), xm.save(xm.get_rng_state(), 'xm_seed')
at the beginning of function "train_fn()": xm.set_rng_state(torch.load('xm_seed'), device=device) (you can also print the seed here for verification purposes with xm.master_print(xm.get_rng_state()))
at the end of function "train_fn()": xm.save(xm.get_rng_state(), 'xm_seed')
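For reference, here is a minimal sketch of the same save/restore pattern with plain (CPU) PyTorch; the file name 'rng_state.pt' is just an illustrative choice:

import torch

# at the beginning of _run(): fix the seed and persist the RNG state
torch.manual_seed(42)
torch.save(torch.get_rng_state(), 'rng_state.pt')

def train_fn(train_dataloader, model, optimizer, criterion):
    # restore the RNG state so a preceding evaluation cannot perturb training
    torch.set_rng_state(torch.load('rng_state.pt'))
    # ... training loop as before ...
    # persist the state again for the next epoch
    torch.save(torch.get_rng_state(), 'rng_state.pt')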
| https://stackoverflow.com/questions/63815020/ |
Why Pytorch autograd need another vector to backward instead of computing Jacobian? | To perform backward in Pytorch, we can use an optional parameter y.backward(v) to compute the Jacobian matrix multiplied by v:
x = torch.randn(3, requires_grad=True)
y = x * 2
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
I think it costs about the same to compute the full Jacobian matrix, because each node in the AD graph needed for the Jacobian is computed anyway. So why doesn't PyTorch just give us the Jacobian matrix?
| When you call backward(), PyTorch updates the grad of each learnable parameter with the gradient of some loss function L w.r.t. that parameter. It has been designed with Gradient Descent [GD] (and its variants) in mind. Once the gradient has been computed you can update each parameter with x = x - learning_rate * x.grad. Indeed, in the background the Jacobian has to be computed, but it is not (generally) what one needs when applying GD optimization. The vector [0.1, 1.0, 0.0001] lets you reduce the output to a scalar so that x.grad will be a vector (and not a matrix, in case you do not reduce), and hence GD is well defined. You could, however, obtain the Jacobian using backward with one-hot vectors. For example, in this case:
x = torch.randn(3, requires_grad=True)
y = x * 2
J = torch.zeros(x.shape[0],x.shape[0])
for i in range(x.shape[0]):
v = torch.tensor([1 if j==i else 0 for j in range(x.shape[0])], dtype=torch.float)
y.backward(v, retain_graph=True)
J[:,i] = x.grad
x.grad.zero_()
print(J)
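Note that newer PyTorch versions (1.5+) also ship a convenience function for exactly this, which avoids the explicit one-hot loop:

import torch

x = torch.randn(3)
J = torch.autograd.functional.jacobian(lambda t: t * 2, x)
print(J)  # 2 * identity matrix for this particular function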
| https://stackoverflow.com/questions/63816321/ |
How to make a 3-d like visualization of a CNN model developed using pytorch? | Currently, I have a CNN model that I developed in Pytorch. I have used hiddenlayer package to create an image like shown in image 1. But, I want to create an image of the model that should look like something in image 2. Is there any package that I can use to achieve this?
image 1:
image 2:
| I personally use figma.com and "draw" it myself, but if you want to create it automatically you should check out this GitHub repository; you might find a nice tool there.
| https://stackoverflow.com/questions/63818907/ |
| Converting Tensorflow code to Pytorch - performance metrics very different | I have converted TensorFlow code for time-series analysis to PyTorch, and the performance difference is very large; in fact, the PyTorch model cannot account for seasonality at all. It feels like I must be missing something important.
Please help me find where the PyTorch code is lacking, such that the learning is not up to par. I noticed that the loss value jumps sharply when it encounters the season change, and the model is not learning it. With the same layers, nodes, and everything else, I imagined the performance would be close.
# tensorflow code
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(100, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))
model.fit(dataset,epochs=100,verbose=0)
forecast = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# pytorch code
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
class tsdataset(Dataset):
def __init__(self, series, window_size):
self.series = series
self.window_size = window_size
self.dataset, self.labels = self.preprocess()
def preprocess(self):
series = self.series
final, labels = [], []
for i in range(len(series)-self.window_size):
final.append(np.array(series[i:i+window_size]))
labels.append(np.array(series[i+window_size]))
return torch.from_numpy(np.array(final)), torch.from_numpy(np.array(labels))
def __getitem__(self,index):
# print(self.dataset[index], self.labels[index], index)
return self.dataset[index], self.labels[index]
def __len__(self):
return len(self.dataset)
train_dataset = tsdataset(x_train, window_size)
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
class tspredictor(nn.Module):
def __init__(self, window_size, out1, out2, out3):
super(tspredictor, self).__init__()
self.l1 = nn.Linear(window_size, out1)
self.l2 = nn.Linear(out1, out2)
self.l3 = nn.Linear(out2, out3)
def forward(self,seq):
l1 = F.relu(self.l1(seq))
l2 = F.relu(self.l2(l1))
l3 = self.l3(l2)
return l3
model = tspredictor(20, 100,10,1)
loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
for epoch in range(100):
for t,l in train_dataloader:
model.zero_grad()
tag_scores = model(t)
loss = loss_function(tag_scores, l)
loss.backward()
optimizer.step()
# print("Epoch is {}, loss is {}".format(epoch, loss.data))
forecast = []
for time in range(len(series) - window_size):
prediction = model(torch.from_numpy(series[time:time + window_size][np.newaxis]))
forecast.append(prediction)
forecast = forecast[split_time-window_size:]
results = np.array(forecast)
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
To generate data, you can use:
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
| There was a broadcasting issue in the loss function. Changing the loss to the one below fixes it:
loss = loss_function(tag_scores, l.view(-1,1))
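For context (this is the standard broadcasting pitfall, worth verifying in your own run): tag_scores has shape (batch, 1) while l has shape (batch,), so MSELoss broadcasts them to a (batch, batch) matrix and averages the wrong quantity. A quick demonstration:

import torch

pred = torch.randn(4, 1)    # model output, shape (batch, 1)
target = torch.randn(4)     # labels, shape (batch,)
loss_fn = torch.nn.MSELoss()
print(loss_fn(pred, target))              # broadcasts to (4, 4) and emits a warning
print(loss_fn(pred, target.view(-1, 1)))  # correct elementwise loss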
| https://stackoverflow.com/questions/63819602/ |
PyTorch RNN is more efficient with `batch_first=False`? | In machine translation, we always need to slice out the first timestep (the SOS token) in the annotation and prediction.
When using batch_first=False, slicing out the first timestep still keeps the tensor contiguous.
import torch
batch_size = 128
seq_len = 12
embedding = 50
# Making a dummy output that is `batch_first=False`
batch_not_first = torch.randn((seq_len,batch_size,embedding))
batch_not_first = batch_not_first[1:].view(-1, embedding) # slicing out the first time step
However, if we use batch_first=True, after slicing, the tensor is no longer contiguous. We need to make it contiguous before we can do different operations such as view.
batch_first = torch.randn((batch_size,seq_len,embedding))
batch_first[:,1:].view(-1, embedding) # slicing out the first time step
output>>>
"""
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-8-a9bd590a1679> in <module>
----> 1 batch_first[:,1:].view(-1, embedding) # slicing out the first time step
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
"""
Does that mean batch_first=False is better, at least in the context of machine translation, since it saves us the contiguous() step? Are there any cases where batch_first=True works better?
| Performance
There doesn't seem to be a considerable difference between batch_first=True and batch_first=False. Please see the script below:
import time
import torch
def time_measure(batch_first: bool):
torch.cuda.synchronize()
layer = torch.nn.RNN(10, 20, batch_first=batch_first).cuda()
if batch_first:
inputs = torch.randn(100000, 7, 10).cuda()
else:
inputs = torch.randn(7, 100000, 10).cuda()
start = time.perf_counter()
for chunk in torch.chunk(inputs, 100000 // 64, dim=0 if batch_first else 1):
_, last = layer(chunk)
return time.perf_counter() - start
print(f"Time taken for batch_first=False: {time_measure(False)}")
print(f"Time taken for batch_first=True: {time_measure(True)}")
On my device (GTX 1050 Ti), PyTorch 1.6.0 and CUDA 11.0 here are the results:
Time taken for batch_first=False: 0.3275816479999776
Time taken for batch_first=True: 0.3159054920001836
(and it varies either way so nothing conclusive).
Code readability
batch_first=True is simpler when you want to use other PyTorch layers which require batch as 0th dimension (which is the case for almost all torch.nn layers like torch.nn.Linear).
In this case you would have to permute returned tensor anyway if batch_first=False was specified.
Machine translation
It should be better as the tensor is contiguous all the time and no copy of data has to be done. It also looks cleaner to slice using [1:] instead of [:,1:].
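A quick toy check of that contiguity difference after slicing (dimensions taken from the question):

import torch

seq_len, batch_size, embedding = 12, 128, 50
t = torch.randn(seq_len, batch_size, embedding)  # batch_first=False layout
print(t[1:].is_contiguous())      # True  -> .view() works directly
t = torch.randn(batch_size, seq_len, embedding)  # batch_first=True layout
print(t[:, 1:].is_contiguous())   # False -> needs .contiguous() or .reshape()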
| https://stackoverflow.com/questions/63822152/ |
| Why do we need the custom dataset class and use of _getitem_ method in NLP, BERT fine tuning etc | I am a newbie in NLP and have been studying the usage of BERT for NLP tasks. In many notebooks, I see that a custom dataset class is defined with a __getitem__ method (along with __len__).
Tweetdataset class in this notebook - https://www.kaggle.com/abhishek/roberta-inference-5-folds
and text_Dataset class in this notebook - https://engineering.wootric.com/when-bert-meets-pytorch
Can someone please explain the reason for, and the need for, defining the custom dataset class and the getitem (and len) methods? Thank you.
| It is a recommended abstraction in PyTorch to define datasets by inheriting from torch.utils.data.Dataset. Those objects define how many elements there are (__len__ method) and how to get a single item via a specified index (__getitem__(index)).
Its source code:
class Dataset(object):
def __getitem__(self, index):
raise NotImplementedError
def __add__(self, other):
return ConcatDataset([self, other])
So it's basically a thin wrapper which adds the possibility to concatenate two Dataset objects. For readability and API compatibility you should inherit from it (unlike the one provided in the kaggle notebook).
You can read more about PyTorch's data functionality here
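For instance, a minimal text-classification dataset could look like the sketch below; the tokenizer (e.g., a HuggingFace BERT tokenizer) and the raw texts/labels are assumed to come from your own pipeline:

import torch
from torch.utils.data import Dataset

class TextDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.texts = texts          # list of raw strings
        self.labels = labels        # list of integer class ids
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        # tells the DataLoader how many samples exist
        return len(self.texts)

    def __getitem__(self, index):
        # converts one raw sample into model-ready tensors
        enc = self.tokenizer(self.texts[index], truncation=True,
                             padding='max_length', max_length=self.max_len)
        return (torch.tensor(enc['input_ids']),
                torch.tensor(enc['attention_mask']),
                torch.tensor(self.labels[index]))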
| https://stackoverflow.com/questions/63822730/ |
| AttributeError : 'tuple' has no attribute 'to' | I am writing this Image Classifier and I have defined the loaders, but I am getting this error and I have no clue about it.
I have defined the train loader; for a better explanation, I tried this
for ina,lab in train_loader:
print(type(ina))
print(type(lab))
and I got
<class 'torch.Tensor'>
<class 'tuple'>
Now, for training the model, I did
def train_model(model,optimizer,n_epochs,criterion):
start_time = time.time()
for epoch in range(1,n_epochs-1):
epoch_time = time.time()
epoch_loss = 0
correct = 0
total = 0
print( "Epoch {}/{}".format(epoch,n_epochs))
model.train()
for inputs,labels in train_loader:
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
output = model(inputs)
loss = criterion(output,labels)
loss.backward()
optimizer.step()
epoch_loss +=loss.item()
_,pred =torch.max(output,1)
correct += (pred.cpu()==labels.cpu()).sum().item()
total +=labels.shape[0]
acc = correct/total
and I got the error:
Epoch 1/15
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-36-fea243b3636a> in <module>
----> 1 train_model(model=arch, optimizer=optim, n_epochs=15, criterion=criterion)
<ipython-input-34-b53149a4bac0> in train_model(model, optimizer, n_epochs, criterion)
12 for inputs,labels in train_loader:
13 inputs = inputs.to(device)
---> 14 labels = labels.to(device)
15 optimizer.zero_grad()
16 output = model(inputs)
AttributeError: 'tuple' object has no attribute 'to'
If you want anything more, please tell me!
Thanks
Edit: The label looks like this.
This is an image classification task between bees and wasps. The dataset also contains other insects and non-insects.
('wasp', 'wasp', 'insect', 'insect', 'wasp', 'insect', 'insect', 'wasp', 'wasp', 'bee', 'insect', 'insect', 'other', 'bee', 'other', 'wasp', 'other', 'wasp', 'bee', 'bee', 'wasp', 'wasp', 'wasp', 'wasp', 'bee', 'wasp', 'wasp', 'other', 'bee', 'wasp', 'bee', 'bee')
('wasp', 'wasp', 'insect', 'bee', 'other', 'wasp', 'insect', 'wasp', 'insect', 'insect', 'insect', 'wasp', 'wasp', 'insect', 'wasp', 'wasp', 'wasp', 'bee', 'wasp', 'wasp', 'insect', 'insect', 'wasp', 'wasp', 'bee', 'wasp', 'insect', 'bee', 'bee', 'insect', 'insect', 'other')
| It literally means that the tuple class in Python doesn't have a method called to. Since you're trying to put your labels onto your device, just do labels = torch.tensor(labels).to(device).
If you don't want to do this, you can change the way the DataLoader works by making it return your labels as a PyTorch tensor rather than a tuple.
Edit
Since the labels seem to be strings, I would convert them to one-hot encoded vectors first:
>>> import torch
>>> labels_unique = set(labels)
>>> keys = {key: value for key, value in zip(labels_unique, range(len(labels_unique)))}
>>> labels_onehot = torch.zeros(size=(len(labels), len(keys)))
>>> for idx, label in enumerate(labels):
...     labels_onehot[idx][keys[label]] = 1
...
>>> labels_onehot = labels_onehot.to(device)
I'm shooting a bit in the dark here because I don't know the details exactly, but yeah strings won't work with tensors.
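As an aside (a common alternative, not part of the original fix): if you end up using nn.CrossEntropyLoss, it expects integer class indices rather than one-hot vectors, so mapping the strings straight to indices is usually enough:

>>> keys = {'bee': 0, 'wasp': 1, 'insect': 2, 'other': 3}  # any fixed mapping
>>> labels = torch.tensor([keys[l] for l in labels]).to(device)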
| https://stackoverflow.com/questions/63825841/ |
torch.nn.functional vs torch.nn - Pytorch | While adding loss in Pytorch, I have the same function in torch.nn.Functional as well as in torch.nn. what is the difference ?
torch.nn.CrossEntropyLoss() and torch.nn.functional.cross_entropy
| Quoting the same text from the PyTorch discussion forum, where @Alban D has given an answer to a similar question.
F.cross entropy vs torch.nn.Cross_Entropy_Loss
There isn’t much difference for losses.
The main difference between the nn.functional.xxx and the nn.Xxx is that one has a state and one does not.
This means that for a linear layer for example, if you use the functional version, you will need to handle the weights yourself (including passing them to the optimizer or moving them to the gpu) while the nn.Xxx version will do all of that for you with .parameters() or .to(device).
For loss functions, as no parameters are needed (in general), you won’t find much difference. Except, for example, if you use cross entropy with some weighting between your classes: using the nn.CrossEntropyLoss() module, you give your weights only once while creating the module and then use it. If you were using the functional version, you would need to pass the weights every single time you use it.
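A short illustration of that last point (the class weights here are just placeholder values):

import torch
import torch.nn as nn
import torch.nn.functional as F

w = torch.tensor([1.0, 2.0, 0.5])   # per-class weights
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 1])

# stateful module: weights are given once, at construction
criterion = nn.CrossEntropyLoss(weight=w)
loss1 = criterion(logits, targets)

# functional version: weights must be passed on every call
loss2 = F.cross_entropy(logits, targets, weight=w)
assert torch.allclose(loss1, loss2)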
| https://stackoverflow.com/questions/63826328/ |
what is the pytorch equivalent of a tensorflow linear regression? | I am learning pytorch, that to do a basic linear regression on this data created this way here:
from sklearn.datasets import make_regression
x, y = make_regression(n_samples=100, n_features=1, noise=15, random_state=42)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x, y)
I know that using tensorflow this code can solve:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=1, activation='linear', input_shape=(x.shape[1], )))
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.05), loss='mse')
hist = model.fit(x, y, epochs=15, verbose=0)
but I need to know what the PyTorch equivalent would look like. What I tried to do was this:
# Model Class
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.linear = nn.Linear(1,1)
def forward(self, x):
x = self.linear(x)
return x
def predict(self, x):
return self.forward(x)
model = Net()
loss_fn = F.mse_loss
opt = torch.optim.SGD(model.parameters(), lr=0.05)
# Function to train
def fit(num_epochs, model, loss_fn, opt, train_dl):
# Repeat for given number of epochs
for epoch in range(num_epochs):
# Train with batches of data
for xb, yb in train_dl:
# 1. Generate predictions
pred = model(xb)
# 2. Calculate Loss
loss = loss_fn(pred, yb)
# 3. Compute gradients
loss.backward()
# 4. Update parameters using gradients
opt.step()
# 5. Reset the gradients to zero
opt.zero_grad()
# Print the progress
if (epoch+1) % 10 == 0:
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
# Training
fit(200, model, loss_fn, opt, data_loader)
But the model doesn't learn anything; I don't know what else to try.
The input/output dimensions are (1/1).
| Dataset
First of all, you should define torch.utils.data.Dataset
import torch
from sklearn.datasets import make_regression
class RegressionDataset(torch.utils.data.Dataset):
def __init__(self):
data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
self.x = torch.from_numpy(data[0]).float()
self.y = torch.from_numpy(data[1]).float()
def __len__(self):
return len(self.x)
def __getitem__(self, index):
return self.x[index], self.y[index]
It converts numpy data to PyTorch's tensor inside __init__ and converts data to float (numpy has double by default while PyTorch's default is float in order to use less memory).
Apart from that it will simply return tuple of features and respective regression targets.
Fit
Almost there, but you have to flatten output from the model (described below). torch.nn.Linear will return tensors of shape (batch, 1) while your targets are of shape (batch,). flatten() will remove unnecessary 1 dimension.
# 2. Calculate Loss
loss = criterion(pred.flatten(), yb)
Model
That is all you need actually:
model = torch.nn.Linear(1, 1)
Any layer can be called directly, no need for forward and inheritance for simple models.
Calling
The rest is almost okay, you just have to create torch.utils.data.DataLoader and pass instance of our dataset. What DataLoader does is it issues __getitem__ of dataset multiple times and creates a batch of specified size (there is some other funny business, but that's the idea):
dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)
fit(5000, model, criterion, optimizer, dataloader)
Also notice I've used torch.nn.MSELoss(), as we are passing object it looks better than function in this case.
Whole code
To make it easier:
import torch
from sklearn.datasets import make_regression
class RegressionDataset(torch.utils.data.Dataset):
def __init__(self):
data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
self.x = torch.from_numpy(data[0]).float()
self.y = torch.from_numpy(data[1]).float()
def __len__(self):
return len(self.x)
def __getitem__(self, index):
return self.x[index], self.y[index]
# Function to train
def fit(num_epochs, model, criterion, optimizer, train_dl):
# Repeat for given number of epochs
for epoch in range(num_epochs):
# Train with batches of data
for xb, yb in train_dl:
# 1. Generate predictions
pred = model(xb)
# 2. Calculate Loss
loss = criterion(pred.flatten(), yb)
# 3. Compute gradients
loss.backward()
# 4. Update parameters using gradients
optimizer.step()
# 5. Reset the gradients to zero
optimizer.zero_grad()
# Print the progress
if (epoch + 1) % 10 == 0:
print(
"Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, num_epochs, loss.item())
)
dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)
fit(5000, model, criterion, optimizer, dataloader)
You should get around 0.053 loss or so; vary the noise or other params for a harder/easier regression task.
| https://stackoverflow.com/questions/63830441/ |
Low Validation Score on Pretrained Alexnet from Pytorch models for ImageNet 2012 dataset | I am using pre-trained AlexNet network to validate some prior work.
The code is as follows:
import os
import torch
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
model = torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True)
model.eval()
batchsize = 50000
workers = 1
dataset_path = 'data/imagenet_2012/'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
val_data = datasets.ImageFolder(root=os.path.join(dataset_path, 'val'), transform=transforms.Compose( [transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize,]))
val_loader = torch.utils.data.DataLoader(val_data, batch_size=batchsize, num_workers=workers)
batch = next(iter(val_loader))
images, labels = batch
with torch.no_grad():
output = model(images)
for i in output:
out_soft = torch.nn.functional.softmax(i, dim=0)
print(int(torch.argmax(out_soft)))
When I execute this and compare with ILSVRC2012_validation_ground_truth.txt, I get a top-1 accuracy of only 5%.
What am I doing wrong here?
Thank you.
| So, Pytorch/Caffe have their own "ground truth" files, which can be obtained from here:
https://gist.github.com/ksimonyan/fd8800eeb36e276cd6f9#note
I manually tested the images in the validation folder of the Imagenet dataset against the val.txt file in the tar file provided at the link above to verify the order.
Update:
New validation accuracy based on the groundtruth in the zip file in the link:
Top_1 = 56.522%
Top_5 = 79.066%
| https://stackoverflow.com/questions/63835782/ |
Tensorboard resume training plot | I ran a reinforcement learning training script which used Pytorch and logged data to tensorboardX and saved checkpoints. Now I want to continue training. How do I tell tensorboardX to continue from where I left off? Thank you!
| I figured out how to continue the training plot. While creating the SummaryWriter, we need to provide the same log_dir that we used while training the first time.
from tensorboardX import SummaryWriter
writer = SummaryWriter('log_dir')
Then, inside the training loop, step needs to start from where it left off (not from 0):
writer.add_scalar('average reward',rewards.mean(),step)
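A minimal sketch of the whole resume flow, assuming you also stored the step counter in your checkpoint (the key name 'step' and the helper train_one_step are hypothetical placeholders):

import torch
from tensorboardX import SummaryWriter

ckpt = torch.load('checkpoint.pt')     # hypothetical checkpoint file
step = ckpt['step']                    # last global step of the previous run
writer = SummaryWriter('log_dir')      # same log_dir as the first run

for step in range(step, step + 1000):
    reward = train_one_step()          # placeholder for your RL update
    writer.add_scalar('average reward', reward, step)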
| https://stackoverflow.com/questions/63838417/ |
Anaconda reading wrong CUDA version | I have a conda environment with PyTorch and Tensorflow, which both require CUDA 9.0 (~cudatoolkit 9.0 from conda). After installing pytorch with torchvision and the cudatoolkit (like they provided on their website) I wanted to install Tensorflow, the problem here is that I get this error:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- tensorflow==1.12.0 -> python[version='2.7.*|3.6.*']
- tensorflow==1.12.0 -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0']
Your python: python=3.5
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__cuda==10.2=0
- feature:|@/linux-64::__cuda==10.2=0
Your installed version is: 10.2
If I run nvcc or nvidia-smi on my host or the activated conda environment, I get that I have installed CUDA 10.2, even though conda list shows me that cudatoolkit 9.0 is installed. Any solution to this?
EDIT:
When running this code sample:
# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()
#Additional Info when using cuda
if device.type == 'cuda':
print(torch.cuda.get_device_name(0))
print('Memory Usage:')
print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB')
print('Cached: ', round(torch.cuda.memory_cached(0)/1024**3,1), 'GB')
print(torch.version.cuda)
I get this output:
GeForce GTX 1050
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB
9.0.176
So PyTorch does get the correct CUDA version; I just can't get tensorflow-gpu installed.
|
If I run nvcc or nvidia-smi on my host or the activated conda environment, I get that I have installed CUDA 10.2, even though conda list shows me that cudatoolkit 9.0 is installed. Any solution to this?
cudatoolkit doesn't ship with a compiler (nvcc), so when you run nvcc you launch the compiler from the system-wide installation. That's why it prints 10.2 instead of 9.0, while PyTorch sees the local cudatoolkit.
anaconda / packages / cudatoolkit :
This CUDA Toolkit includes GPU-accelerated libraries, and the CUDA runtime for the Conda ecosystem. For the full CUDA Toolkit with a compiler and development tools visit https://developer.nvidia.com/cuda-downloads
From your comment above I understood that you are using python=3.5.6. So, first of all you should search for available tensorflow py35 builds using:
conda search tensorflow | grep py35
I have the following output:
tensorflow 1.9.0 eigen_py35h8c89287_1 pkgs/main
tensorflow 1.9.0 gpu_py35h42d5ad8_1 pkgs/main
tensorflow 1.9.0 gpu_py35h60c0932_1 pkgs/main
tensorflow 1.9.0 gpu_py35hb39db67_1 pkgs/main
tensorflow 1.9.0 mkl_py35h5be851a_1 pkgs/main
tensorflow 1.10.0 eigen_py35h5ed898b_0 pkgs/main
tensorflow 1.10.0 gpu_py35h566a776_0 pkgs/main
tensorflow 1.10.0 gpu_py35ha6119f3_0 pkgs/main
tensorflow 1.10.0 gpu_py35hd9c640d_0 pkgs/main
tensorflow 1.10.0 mkl_py35heddcb22_0 pkgs/main
As you can see there is no tensorflow 1.12.0 builds for py35, and that's why you are getting that error. You can try to inspect other conda channels, for example, conda-forge:
conda search tensorflow -c conda-forge | grep py35
But that wasn't helpful:
tensorflow 0.9.0 py35_0 conda-forge
tensorflow 0.10.0 py35_0 conda-forge
tensorflow 0.11.0rc0 py35_0 conda-forge
tensorflow 0.11.0rc2 py35_0 conda-forge
tensorflow 0.11.0 py35_0 conda-forge
tensorflow 0.12.1 py35_0 conda-forge
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
tensorflow 1.1.0 py35_0 conda-forge
tensorflow 1.2.0 py35_0 conda-forge
tensorflow 1.2.1 py35_0 conda-forge
tensorflow 1.3.0 py35_0 conda-forge
tensorflow 1.4.0 py35_0 conda-forge
tensorflow 1.5.0 py35_0 conda-forge
tensorflow 1.5.1 py35_0 conda-forge
tensorflow 1.6.0 py35_0 conda-forge
tensorflow 1.8.0 py35_0 conda-forge
tensorflow 1.8.0 py35_1 conda-forge
tensorflow 1.9.0 eigen_py35h8c89287_1 pkgs/main
tensorflow 1.9.0 gpu_py35h42d5ad8_1 pkgs/main
tensorflow 1.9.0 gpu_py35h60c0932_1 pkgs/main
tensorflow 1.9.0 gpu_py35hb39db67_1 pkgs/main
tensorflow 1.9.0 mkl_py35h5be851a_1 pkgs/main
tensorflow 1.9.0 py35_0 conda-forge
tensorflow 1.10.0 eigen_py35h5ed898b_0 pkgs/main
tensorflow 1.10.0 gpu_py35h566a776_0 pkgs/main
tensorflow 1.10.0 gpu_py35ha6119f3_0 pkgs/main
tensorflow 1.10.0 gpu_py35hd9c640d_0 pkgs/main
tensorflow 1.10.0 mkl_py35heddcb22_0 pkgs/main
tensorflow 1.10.0 py35_0 conda-forge
So, the possible solutions are:
Install one of the older available tensorflow 1.10.0 gpu_py35 builds.
Switch to python 3.6.
conda search tensorflow | grep py36
...
tensorflow 1.11.0 gpu_py36h4459f94_0 pkgs/main
tensorflow 1.11.0 gpu_py36h9c9050a_0 pkgs/main
...
tensorflow 1.12.0 gpu_py36he68c306_0 pkgs/main
tensorflow 1.12.0 gpu_py36he74679b_0 pkgs/main
...
Note that versions >=1.13.1 don't support CUDA 9.
Use pip install inside conda env to install missing tensorflow build, because pip hosts more build combinations: Tested build configurations
Here are some best practices from Anaconda on how to use pip with conda: Using Pip in a Conda Environment
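For example, inside the activated environment (tensorflow-gpu 1.12.0 is one of the builds targeting CUDA 9.0):

pip install tensorflow-gpu==1.12.0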
The last option is to build your own missing conda package with conda-build
| https://stackoverflow.com/questions/63842961/ |
Cannot get the value of hidden weights of RNN | I declare my RNN as
self.rnn = torch.nn.RNN(input_size=encoding_dim, hidden_size=1, num_layers=1, nonlinearity='relu')
Later
self.rnn.all_weights
# [[Parameter containing:
tensor([[-0.8099, -0.9543, 0.1117, 0.6221, 0.5034, -0.6766, -0.3360, -0.1700,
-0.9361, -0.3428]], requires_grad=True), Parameter containing:
tensor([[-0.1929]], requires_grad=True), Parameter containing:
tensor([0.7881], requires_grad=True), Parameter containing:
tensor([0.4320], requires_grad=True)]]
self.rnn.all_weights[0][0][0].values
# {RuntimeError}Could not run 'aten::values' with arguments from the 'CPU' backend. 'aten::values' is only available for these backends: [SparseCPU, Autograd, Profiler, Tracer].
Clearly I see the value of the weights, but cannot access to it. Documentation says I need to specify requires_grad=True, but that does not work.
Is there a more elegant and usable way than self.rnn.all_weights[0][0][0]?
| Use torch.nn.Module.named_parameters or torch.nn.Module.parameters.
>>> import torch.nn as nn
>>> encoding_dim = 10  # the input size used in the question
>>> model = nn.RNN(input_size=encoding_dim, hidden_size=1, num_layers=1, nonlinearity='relu')
>>> weights = []
>>> for name, parameter in model.named_parameters():
... weights.append({name: parameter[0]})
...
>>> just_weights = []
>>> for parameter in model.parameters():
... just_weights.append(parameter[0])
...
| https://stackoverflow.com/questions/63845991/ |
How can I get a view of input as a complex tensor? RuntimeError: Tensor must have a last dimension with stride 1 | I have a tensor with 64 elements in pytorch and I want to convert it to a complex tensor with 32 elements. Order is important for me and everything should be in PyTorch so I can use it in my customized loss function:
The first half of my primary tensor (W) contains my real numbers and the second half my imaginary ones. So my final tensor should look like:
W_final = tensor(W[0]+jW[32], W[1]+jW[33], W[2]+jW[34], W[3]+jW[35], ... , W[31]+jW[63])
I tried this approach:
import torch
W_1 = torch.reshape(W,(2,32)) # reshape W with shape (64) to W_1 with shape (2,32)
W_2 = torch.transpose(W_1,0,1) #transpose W_1 to W_2 with shape (32,2), so I can use view_as_complex
W_final = torch.view_as_complex(W_2)
The problem is that with transpose, the stride also changes and I get this error:
RuntimeError: Tensor must have a last dimension with stride 1
Do you know how I can deal with the stride? Or is there a way to reshape with a different element order, the same as in numpy?
Or any other way to convert to complex?
| It has to do with the non-contiguous memory layout of W_2 after you permute it.
To handle this error you should call .contiguous() on W_2.
From Pytorch Docs:
" Strides are a list of integers: the k-th stride represents the jump in the memory necessary to go from one element to the next one in the k-th dimension of the Tensor. This concept makes it possible to perform many tensor operations efficiently."
Once you call contiguous all dimensions of returned tensor will have stride 1.
Here is a working sample code:
import torch
W = torch.randn(64)
W_2 = W.view(-1,32).permute(1,0).contiguous()
W_final = torch.view_as_complex(W_2)
First call view to reshape the tensor to shape (2,32), then permute the dimensions to transpose the result, and finally call contiguous.
| https://stackoverflow.com/questions/63852258/ |
| Is max operation differentiable in Pytorch? | I am using PyTorch to train some neural networks. The part I am confused about is:
prediction = myNetwork(img_batch)
max_act = prediction.max(1)[0].sum()
loss = softcrossentropy_loss - alpha * max_act
In the above codes, "prediction" is the output tensor of "myNetwork".
I hope to maximize the largest output of "prediction" over a batch.
For example:
[[-1.2, 2.0, 5.0, 0.1, -1.5], [9.6, -1.1, 0.7, 4.3, 3.3]]
For the first prediction vector, the 3rd element is the largest, while for the second vector, the 1st element is the largest. And I want to maximize "5.0+9.6", although we cannot know in advance which index will hold the largest output for new input data.
In fact, my training seems to be successful, because the "max_act" part really did increase, which is the desired behavior for me. However, I heard some discussion about whether the max() operation is differentiable or not:
Some say that, mathematically, max() is not differentiable.
Some say that max() is just an identity function that selects the largest element, and this largest element is differentiable.
So I am confused now, and I am worried that my idea of maximizing "max_act" was wrong from the beginning.
Could someone provide some guidance on whether the max() operation is differentiable in PyTorch?
| max is differentiable with respect to the values, not the indices. It is perfectly valid in your application.
From the gradient point of view, d(max_value)/d(v) is 1 if max_value==v and 0 otherwise. You can consider it as a selector.
d(max_index)/d(v) is not really meaningful as it is a discontinuous function, with only 0 and undefined as possible gradients.
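A quick way to see the "selector" behavior in practice (the gradient is 1 only at the argmax position):

import torch

x = torch.tensor([1.0, 3.0, 2.0], requires_grad=True)
x.max().backward()
print(x.grad)  # tensor([0., 1., 0.])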
| https://stackoverflow.com/questions/63855608/ |
| matrix multiplication for complex numbers in PyTorch | I am trying to multiply two complex matrices in PyTorch, and it seems the torch.matmul function has not yet been added to the PyTorch library for complex numbers.
Do you have any recommendation or is there another method to multiply complex matrices in PyTorch?
| Currently torch.matmul is not supported for complex tensors such as ComplexFloatTensor but you could do something as compact as the following code:
def matmul_complex(t1, t2):
    # (a + ib)(c + id) = (ac - bd) + i(ad + bc), computed with real matmuls
    real = t1.real @ t2.real - t1.imag @ t2.imag
    imag = t1.real @ t2.imag + t1.imag @ t2.real
    return torch.view_as_complex(torch.stack((real, imag), dim=2))
When possible avoid using for loops as these will result in much slower implementations.
Vectorization is achieved by using built-in methods as demonstrated in the code I have attached.
For example, your code takes roughly 6.1s on CPU while the vectorized version takes only 101ms (~60 times faster) for 2 random complex matrices with dimensions 1000 X 1000.
Update:
Since PyTorch 1.7.0 (as @EduardoReis mentioned) you can do matrix multiplication between complex matrices similarly to real-valued matrices as follows:
t1 @ t2
(for t1, t2 complex matrices).
| https://stackoverflow.com/questions/63855692/ |
| Pytorch - Should 'CenterCrop' be used to test set? Does this count as cheating? | I'm learning image classification with PyTorch. I found that some papers' code uses CenterCrop on both the train set and the test set, e.g., Resize to a larger size, then apply CenterCrop to obtain a smaller size. The smaller size is a common size in this research direction.
In my experience, I found that applying CenterCrop gives a significant improvement (e.g., 1% or 2%) on the test set, compared to not using CenterCrop on the test set.
Because it is used in top conference papers, this confused me. So, does applying CenterCrop to the test set count as cheating? In addition, should I use any data augmentation on the test set other than Resize and Normalization?
Thank you for your answer.
| That is not cheating. You can apply any augmentation as long as the label is not used.
In image classification, sometimes people use a FiveCrop+Reflection technique, which is to take five crops (Center, TopLeft, TopRight, BottomLeft, BottomRight) and their reflections as augmentations. They would then predict class probabilities for each crop and average the results, typically giving some performance boost with 10X running time.
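Here is a minimal sketch of that crop-and-average trick with torchvision (model and image are assumed to exist elsewhere; TenCrop = FiveCrop plus the horizontal flips):

import torch
import torchvision.transforms as transforms

tencrop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),  # returns a tuple of 10 crops
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(c) for c in crops])),  # (10, C, H, W)
])

crops = tencrop(image)                  # image: a PIL image
logits = model(crops)                   # (10, num_classes)
probs = logits.softmax(dim=1).mean(0)   # average over the 10 crops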
In segmentation, people also use similar test-time augmentation "multi-scale testing" which is to resize the input image to different scales before feeding it to the network. The predictions are also averaged.
If you do use this kind of augmentation, do report it when you compare with other methods, for a fair comparison.
| https://stackoverflow.com/questions/63856270/ |
_th_addr_out not supported on CPUType for ComplexFloat | I am trying to use a customized loss function for my NN. I've implemented all operations in torch and I have complex numbers among my data.
I get the error while training a NN:
RuntimeError: _th_addr_out not supported on CPUType for ComplexFloat
Do you know any possible solution to deal with it?
| Well, it seems Complex Autograd in PyTorch is currently in a prototype state, and the backward functionality for some functions is not included.
For example: torch.sign, which is used in the backward computation of torch.abs, is not defined for complex tensors; the same goes for torch.mv. So I debugged my code line by line to find the functions which are not included, and replaced each of them with a customized function :)
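For illustration, a minimal sketch of one such replacement for the absolute value (not the exact function from my code): it builds |z| from real-valued ops only, which do have backward support:

import torch

def complex_abs(z):
    # |z| via real ops; the small eps avoids a NaN gradient at exactly 0
    return torch.sqrt(z.real ** 2 + z.imag ** 2 + 1e-12)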
Hope for a lot more functions to be included in the next release of PyTorch.
| https://stackoverflow.com/questions/63857358/ |
How do I mutate the input using gradient descent in PyTorch? | I'm new to PyTorch. I learned it uses autograd to automatically calculate the gradients for the gradient descent function.
Instead of adjusting the weights, I would like to mutate the input to achieve a desired output, using gradient descent. So, instead of the weights of neurons changing, I want to keep all of the weights the same and just change the input to minimize the loss.
For example. The network is a trained image classifier with the numbers 0-9. I input random noise, and I want to morph it so that the network considers it a 3 with 60% confidence. I would like to utilize gradient descent to adjust the values of the input (originally noise) until the network considers the input to be a 3, with 60% confidence.
Is there a way to do this?
| I assume you know how to do regular training with gradient descent. You only need to change the parameters to be optimized by the optimizer. Something like
# ... Setup your network, load the input
# ...
# Set proper requires_grad -> We train the input, not the parameters
input.requires_grad = True
for p in network.parameters():
p.requires_grad = False
# Setup the optimizer
# Previously we should have SomeOptimizer(net.parameters())
optim = SomeOptimizer([input])
output_that_you_want = ...
actual_output = net(input)
some_loss = SomeLossFunction(output_that_you_want, actual_output)
# ...
# Back-prop and optim.step() as usual
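And a more concrete sketch tied to the digit example from the question (the 28x28 input shape is an assumption; net is your pretrained 10-class classifier):

import torch
import torch.nn.functional as F

noise = torch.randn(1, 1, 28, 28, requires_grad=True)  # the "input" we train
for p in net.parameters():
    p.requires_grad = False

optim = torch.optim.Adam([noise], lr=0.05)
for _ in range(200):
    optim.zero_grad()
    probs = F.softmax(net(noise), dim=1)
    loss = (probs[0, 3] - 0.6) ** 2   # push confidence for class 3 toward 60%
    loss.backward()
    optim.step()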
| https://stackoverflow.com/questions/63864199/ |
Pytorch NLP sequence length of target in Transformer | I'm trying to understand the code of Transformer (https://github.com/SamLynnEvans/Transformer).
Looking at the train_model function in the "train" script, I wonder why we need to use a different sequence length for trg_input than for trg:
trg_input = trg[:, :-1]
In this case, the sequence length of trg_input is "seq_len(trg) - 1".
It means that trg is like:
<sos> tok1 tok2 tokn <eos>
and trg_input is like:
<sos> tok1 tok2 tokn (no eos token)
Please let me know the reason.
Thank you.
The related code is like below:
for i, batch in enumerate(opt.train):
src = batch.src.transpose(0, 1).to('cuda')
trg = batch.trg.transpose(0, 1).to('cuda')
trg_input = trg[:, :-1]
src_mask, trg_mask = create_masks(src, trg_input, opt)
preds = model(src, trg_input, src_mask, trg_mask)
ys = trg[:, 1:].contiguous().view(-1)
opt.optimizer.zero_grad()
loss = F.cross_entropy(preds.view(-1, preds.size(-1)), ys, ignore_index=opt.trg_pad)
loss.backward()
opt.optimizer.step()
def create_masks(src, trg, opt):
src_mask = (src != opt.src_pad).unsqueeze(-2)
if trg is not None:
trg_mask = (trg != opt.trg_pad).unsqueeze(-2)
size = trg.size(1) # get seq_len for matrix
np_mask = nopeak_mask(size, opt)
if trg.is_cuda:
np_mask.cuda()
trg_mask = trg_mask & np_mask
else:
trg_mask = None
return src_mask, trg_mask
| That's because the entire aim is to generate the next token based on the tokens we've seen so far. Take a look at the input into the model when we get our predictions. We're not just feeding the source sequence, but also the target sequence up until our current step. The model inside Models.py looks like:
class Transformer(nn.Module):
def __init__(self, src_vocab, trg_vocab, d_model, N, heads, dropout):
super().__init__()
self.encoder = Encoder(src_vocab, d_model, N, heads, dropout)
self.decoder = Decoder(trg_vocab, d_model, N, heads, dropout)
self.out = nn.Linear(d_model, trg_vocab)
def forward(self, src, trg, src_mask, trg_mask):
e_outputs = self.encoder(src, src_mask)
#print("DECODER")
d_output = self.decoder(trg, e_outputs, src_mask, trg_mask)
output = self.out(d_output)
return output
So you can see that the forward method receives src and trg, which are each fed into the encoder and decoder. This is a bit easier to grasp if you take a look at the model architecture from the original paper:
The "Outputs (shifted right)" corresponds to trg[:, :-1] in the code.
| https://stackoverflow.com/questions/63867124/ |
| Training Loss and Accuracy both decreasing for my transformer model for Time Series Prediction | I am using a transformer model for predicting the forex market. I transformed the open-price data by calculating the difference between each 30-minute interval and converting the differences into tokens. The tokens are obtained by applying log base 1.5 to the difference. I obtained 28 token types over 6 years of data; tokens 14-27 represent a bull market and tokens 0-13 represent a bear market.
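Roughly, the tokenization works like the following sketch (simplified; the exact bin offsets in my real pipeline may differ):

import math

def tokenize_diff(diff, half=14):
    # signed log base-1.5 magnitude bin: tokens 14-27 bull, 0-13 bear
    if diff == 0:
        return half  # treat "no change" as the mildest bull token
    b = min(max(int(math.log(abs(diff), 1.5)), 0), half - 1)
    return half + b if diff > 0 else half - 1 - b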
I created a transformer model in PyTorch and applied the data.
import torch
import math
import numpy as np
import copy
from torch import nn
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
import ast
from numpy import load
import torch.nn as nn
import random
import time
import matplotlib.pyplot as plt
class Embedder(nn.Module):
def __init__(self, vocab_size, d_model):
super().__init__()
# print(vocab_size,d_model)
self.embed = nn.Embedding(vocab_size+1, d_model,padding_idx=0)
def forward(self, x):
# print(x.shape)
# print("Embed",self.embed(x).shape)
return self.embed(x)
class PositionalEncoder(nn.Module):
def __init__(self, d_model, max_seq_len = 500):
super().__init__()
self.d_model = d_model
# create constant 'pe' matrix with values dependant on
# pos and i
pe = torch.zeros(max_seq_len, d_model)
for pos in range(max_seq_len):
for i in range(0, d_model, 2):
pe[pos, i] = \
math.sin(pos / (10000 ** ((2 * i)/d_model)))
pe[pos, i + 1] = \
math.cos(pos / (10000 ** ((2 * (i + 1))/d_model)))
pe = pe.unsqueeze(0)
self.register_buffer('pe', pe)
def forward(self, x):
x = x * math.sqrt(self.d_model)
seq_len = x.size(1)
x = x + torch.autograd.Variable(self.pe[:,:seq_len],requires_grad=False)
return x
def attention(q, k, v, d_k, mask=None, dropout=None):
scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
if mask is not None:
mask = mask.unsqueeze(1)
scores = scores.masked_fill(mask == 0, -1e9)
scores = torch.nn.functional.softmax(scores, dim=-1)
if dropout is not None:
scores = dropout(scores)
output = torch.matmul(scores, v)
return output
class MultiHeadAttention(nn.Module):
def __init__(self, heads, d_model, dropout = 0.1):
super().__init__()
self.d_model = d_model
self.d_k = d_model // heads
self.h = heads
self.q_linear = nn.Linear(d_model, d_model)
self.v_linear = nn.Linear(d_model, d_model)
self.k_linear = nn.Linear(d_model, d_model)
self.dropout = nn.Dropout(dropout)
self.out = nn.Linear(d_model, d_model)
def forward(self, q, k, v, mask=None):
bs = q.size(0)
# perform linear operation and split into h heads
k = self.k_linear(k).view(bs, -1, self.h, self.d_k)
q = self.q_linear(q).view(bs, -1, self.h, self.d_k)
v = self.v_linear(v).view(bs, -1, self.h, self.d_k)
# transpose to get dimensions bs * h * sl * d_model
k = k.transpose(1,2)
q = q.transpose(1,2)
v = v.transpose(1,2)
# calculate attention using function we will define next
scores = attention(q, k, v, self.d_k, mask, self.dropout)
# concatenate heads and put through final linear layer
concat = scores.transpose(1,2).contiguous()\
.view(bs, -1, self.d_model)
output = self.out(concat)
return output
class FeedForward(nn.Module):
def __init__(self, d_model, d_ff=512, dropout = 0.1):
super().__init__()
# We set d_ff as a default to 2048
self.linear_1 = nn.Linear(d_model, d_ff)
self.dropout = nn.Dropout(dropout)
self.linear_2 = nn.Linear(d_ff, d_model)
def forward(self, x):
x = self.dropout(torch.nn.functional.relu(self.linear_1(x)))
x = self.linear_2(x)
return x
class Norm(nn.Module):
def __init__(self, d_model, eps = 1e-6):
super().__init__()
self.size = d_model
# create two learnable parameters to calibrate normalisation
self.alpha = nn.Parameter(torch.ones(self.size))
self.bias = nn.Parameter(torch.zeros(self.size))
self.eps = eps
def forward(self, x):
norm = self.alpha * (x - x.mean(dim=-1, keepdim=True)) \
/ (x.std(dim=-1, keepdim=True) + self.eps) + self.bias
return norm
class EncoderLayer(nn.Module):
def __init__(self, d_model, heads, dropout = 0.1):
super().__init__()
self.norm_1 = Norm(d_model)
self.norm_2 = Norm(d_model)
self.attn = MultiHeadAttention(heads, d_model)
self.ff = FeedForward(d_model)
self.dropout_1 = nn.Dropout(dropout)
self.dropout_2 = nn.Dropout(dropout)
def forward(self, x, mask):
x2 = self.norm_1(x)
x = x + self.dropout_1(self.attn(x2,x2,x2,mask))
x2 = self.norm_2(x)
x = x + self.dropout_2(self.ff(x2))
return x
class DecoderLayer(nn.Module):
def __init__(self, d_model, heads, dropout=0.1):
super().__init__()
self.norm_1 = Norm(d_model)
self.norm_2 = Norm(d_model)
self.norm_3 = Norm(d_model)
self.dropout_1 = nn.Dropout(dropout)
self.dropout_2 = nn.Dropout(dropout)
self.dropout_3 = nn.Dropout(dropout)
self.attn_1 = MultiHeadAttention(heads, d_model)
self.attn_2 = MultiHeadAttention(heads, d_model)
self.ff = FeedForward(d_model).cuda()
# self.ff = FeedForward(d_model)
def forward(self, x, e_outputs, src_mask, trg_mask):
x2 = self.norm_1(x)
x = x + self.dropout_1(self.attn_1(x2, x2, x2, trg_mask))
x2 = self.norm_2(x)
x = x + self.dropout_2(self.attn_2(x2, e_outputs, e_outputs,
src_mask))
x2 = self.norm_3(x)
x = x + self.dropout_3(self.ff(x2))
return x
# We can then build a convenient cloning function that can generate multiple layers:
def get_clones(module, N):
return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
class Encoder(nn.Module):
def __init__(self, vocab_size, d_model, N, heads):
super().__init__()
self.N = N
self.embed = Embedder(vocab_size, d_model)
self.pe = PositionalEncoder(d_model)
self.layers = get_clones(EncoderLayer(d_model, heads), N)
self.norm = Norm(d_model)
def forward(self, src, mask):
x = self.embed(src)
x = self.pe(x)
for i in range(self.N):
x = self.layers[i](x, mask)
return self.norm(x)
class Decoder(nn.Module):
def __init__(self, vocab_size, d_model, N, heads):
super().__init__()
self.N = N
self.embed = Embedder(vocab_size, d_model)
self.pe = PositionalEncoder(d_model)
self.layers = get_clones(DecoderLayer(d_model, heads), N)
self.norm = Norm(d_model)
def forward(self, trg, e_outputs, src_mask, trg_mask):
x = self.embed(trg)
x = self.pe(x)
for i in range(self.N):
x = self.layers[i](x, e_outputs, src_mask, trg_mask)
return self.norm(x)
class Transformer(nn.Module):
def __init__(self, src_vocab, trg_vocab, d_model, N, heads):
super().__init__()
self.encoder = Encoder(src_vocab, d_model, N, heads)
self.decoder = Decoder(trg_vocab, d_model, N, heads)
self.out = nn.Linear(d_model, trg_vocab)
def forward(self, src, trg, src_mask, trg_mask):
e_outputs = self.encoder(src, src_mask)
d_output = self.decoder(trg, e_outputs, src_mask, trg_mask)
output = self.out(d_output)
return output
def batchify(data, bsz):
nbatch = data.size(0) // bsz
data = data.narrow(0, 0, nbatch * bsz)
data = data.view(bsz, -1).t().contiguous()
return data
bptt = 128
class CustomDataLoader:
def __init__(self,source):
print("Source",source.shape)
self.batches = list(range(0, source.size(0) - 2*bptt))
# random.shuffle(self.batches)
# print(self.batches)
self.data = source
self.sample = random.sample(self.batches,120)
def batchcount(self):
return len(self.batches)
def shuffle_batches(self):
random.shuffle(self.batches)
def get_batch_from_batches(self,i):
if i==0:
random.shuffle(self.batches)
ind = self.batches[i]
seq_len = min(bptt,len(self.data)-1-ind)
src = self.data[ind:ind+seq_len]
tar = self.data[ind+seq_len-3:ind+seq_len-3+seq_len+1]
return src,tar
def get_batch(self,i):
# print(i,len(self.batches))
ind = self.sample[i]
seq_len = min(bptt,len(self.data)-1-ind)
src = self.data[ind:ind+seq_len]
tar = self.data[ind+seq_len-3:ind+seq_len-3+seq_len+1]
# tar = tar.view(-1)
if(i==len(self.sample)-1):
random.sample(self.batches,60)
# print("Data shuffled",self.batches[:10])
return src,tar
def get_batch(source, i):
seq_len = min(bptt, len(source) - 1 - i)
data = source[i:i+seq_len]
target = source[i+seq_len-3:i+seq_len-3+seq_len]
return data, target
def plot_multiple(data,legend):
fig,ax = plt.subplots()
for line in data:
plt.plot(list(range(len(line))),line)
plt.legend(legend)
plt.show()
def plot_subplots(data,legends,name):
names = ['Accuracy', 'Loss']
plt.figure(figsize=(10, 5))
for i in range(len(data)):
plt.subplot(121+i)
plt.plot(list(range(0,len(data[i])*3,3)),data[i])
plt.title(legends[i])
plt.xlabel("Epochs")
plt.savefig(name)
def evaluate(eval_model, data_source):
eval_model.eval() # Turn on the evaluation mode
total_loss = 0.
ntokens = 28
count = 0
with torch.no_grad():
cum_loss = 0
acc_count = 0
accs = 0
print(data_source.shape)
for batch, i in enumerate(range(0, data_source.size(0) - bptt*2, bptt)):
data, targets = get_batch(data_source, i)
# data,targets = dataLoader.get_batch(i)
data = data.transpose(0,1).contiguous()
targets= targets.transpose(0,1).contiguous()
trg_input = targets[:,:-1]
trg_output = targets[:,1:].contiguous().view(-1)
src_mask , trg_mask = create_masks(data,trg_input)
output = model(data,trg_input,src_mask,trg_mask)
output = output.view(-1,output.size(-1))
loss = torch.nn.functional.cross_entropy(output,trg_output-1)
accs += ((torch.argmax(output,dim=1)==trg_output).sum().item()/output.size(0))
# accs += ((torch.argmax(output,dim=1)==targets).sum().item()/output.size(0))
cum_loss += loss
count+=1
# print(epoch,"Loss: ",(cum_loss/count),"Accuracy ",accs/count)
return cum_loss/ (count), accs/count
def nopeak_mask(size,cuda_enabled):
np_mask = np.triu(np.ones((1, size, size)),
k=1).astype('uint8')
np_mask = torch.autograd.Variable(torch.from_numpy(np_mask) == 0)
if cuda_enabled:
np_mask = np_mask.cuda()
return np_mask
def create_masks(src, trg):
src_mask = (src != 0).unsqueeze(-2)
if trg is not None:
trg_mask = (trg != 0).unsqueeze(-2)
size = trg.size(1) # get seq_len for matrix
# print("Sequence lenght in mask ",size)
np_mask = nopeak_mask(size,True)
# print(np_mask.shape,trg_mask.shape)
if trg.is_cuda:
np_mask.cuda()
trg_mask = trg_mask & np_mask
else:
trg_mask = None
return src_mask, trg_mask
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# add extra dimensions to add the padding
# to the attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
if __name__ == '__main__':
data = []
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
procsd_data = load("Eavg_open.npy")
print(set(procsd_data[:,0]))
train_data =torch.tensor(procsd_data)[:30000*2]
print(train_data.shape)
val_data = torch.tensor(procsd_data)[30000*2:35000*2]
test_data = torch.tensor(procsd_data)[35000*2:]
train_data = train_data.to(dev)
val_data = val_data.to(dev)
test_data = test_data.to(dev)
# train_data = train_data.transpose(1,0).contiguous()
# val_data = val_data.transpose(1,0).contiguous()
batch_size = 32
ntokens = 28
train_data = batchify(train_data,batch_size)
# print(train_data.shape)
val_data = batchify(val_data,batch_size)
test_data = batchify(test_data,batch_size)
# model = Transformer(n_blocks=3,d_model=256,n_heads=8,d_ff=256,dropout=0.5)
model = Transformer(28,28,64,3,4)
# model = torch.load("modela")
for p in model.parameters():
if p.dim() > 1:
nn.init.xavier_uniform_(p)
model.to(dev)
criterion = nn.CrossEntropyLoss()
lr = 0.00001 # learning rate
optim = torch.optim.Adam(model.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)
#########training starts###########
accuracies = []
lossies = []
val_loss = []
val_accuracy = []
dataLoader = CustomDataLoader(train_data)
_onehot = torch.eye(29)
for epoch in range(500):
count = 0
cum_loss = 0
acc_count = 0
accs = 0
s = time.time()
# for i in range(len(range(0, train_data.size(0) - bptt))):
model.train()
# dataLoader.shuffle_batches()
for i in range(300):
# data, targets = get_batch(train_data, i)
# d = time.time()
hh = time.time()
data,targets = dataLoader.get_batch_from_batches(i)
data = data.transpose(0,1).contiguous()
targets= targets.transpose(0,1).contiguous()
# print(data.shape,targets.shape)
trg_input = targets[:,:-1]
trg_output = targets[:,1:].contiguous().view(-1)
# print(data.shape,trg_input.shape)
src_mask , trg_mask = create_masks(data,trg_input)
# print("Source Mask",src_mask)
# print("Target Mask",trg_mask)
output = model(data,trg_input,src_mask,trg_mask)
# output = output.view(-1,28)
output = output.view(-1,output.size(-1))
loss = torch.nn.functional.cross_entropy(output,trg_output-1)
accuracy = ((torch.argmax(output,dim=1)==trg_output).sum().item()/output.size(0))
accs += accuracy
cum_loss += loss.item();
loss.backward()
optim.step()
model.zero_grad()
optim.zero_grad()
print(i," Batch Loss", loss.item()," Batch Accuracy ",accuracy," Time taken ",time.time()-hh)
count+=1
data,targets = None,None
print(epoch,"Loss: ",(cum_loss/count),"Accuracy ",accs/count," Time Taken: ",time.time()-s)
if(epoch%3==0):
lossies.append(cum_loss/count)
accuracies.append(accs/count)
legend = ["accuracy","Loss"]
plot_subplots([accuracies,lossies],legend,"A&L_v1")
print("Valdata",val_data.shape)
eval_loss,eval_acc = evaluate(model,val_data)
val_accuracy.append(eval_acc)
val_loss.append(eval_loss)
plot_subplots([val_accuracy,val_loss],legend,"Val A&L_v1")
print(epoch,"Loss: ",(cum_loss/count),"Accuracy ",accs/count," Valid_loss: ",eval_loss," Valid_accuracy: ",eval_acc)
if len(val_loss)>0 and eval_loss < val_loss[-1]:
val_loss.append(eval_loss)
torch.save(model,"evalModel")
else:
val_loss.append(eval_loss)
torch.save(model,"evalModel")
if(epoch%5==0):
torch.save(model,"modela")
I got the following loss and accuracy while training:
What is causing this behaviour?
Am I wrong in my tokenization method?
Is it necessary to add any time embedding to the data?
| Actually I made a small mistake in calculating accuracy.
accuracy = ((torch.argmax(output,dim=1)==trg_output).sum().item()/output.size(0))
Here trg_output has tokens numbered from 1 to n, but argmax over the output returns indices in the range 0 to n-1. This off-by-one mismatch is what causes the problem.
So I modified the above line to
accuracy = ((torch.argmax(output,dim=1)==(trg_output-1)).sum().item()/output.size(0))
The same fix should be applied in the evaluation function as well.
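A quick illustrative check of the off-by-one (a hypothetical 3-class example, not taken from the original model):
>>> output = torch.tensor([[0.1, 2.0, 0.3]]) # argmax indices run from 0 to n-1
>>> trg_output = torch.tensor([2]) # token ids run from 1 to n
>>> (torch.argmax(output, dim=1) == trg_output).sum().item()
0
>>> (torch.argmax(output, dim=1) == (trg_output - 1)).sum().item()
1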
| https://stackoverflow.com/questions/63869529/ |
Decoder always predicts the same token | I have the following decoder for machine translation that after a few steps only predicts the EOS token. Overfitting on a dummy, tiny dataset is impossible because of this so it seems that there is a big error in the code.
Decoder(
(embedding): Embeddings(
(word_embeddings): Embedding(30002, 768, padding_idx=3)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
(ffn1): FFN(
(dense): Linear(in_features=768, out_features=512, bias=False)
(layernorm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
(activation): GELU()
)
(rnn): GRU(512, 512, batch_first=True, bidirectional=True)
(ffn2): FFN(
(dense): Linear(in_features=1024, out_features=512, bias=False)
(layernorm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
(activation): GELU()
)
(selector): Sequential(
(0): Linear(in_features=512, out_features=30002, bias=True)
(1): LogSoftmax(dim=-1)
)
)
The forward is relatively straightforward (see what I did there?): pass the input_ids to the embedding and a FFN, then use that representation in the RNN with the given sembedding as initial hidden state. Pass the output through another FFN and do softmax. Return logits and last hidden states of the RNN. In the next step, use those hidden states as the new hidden states, and the highest predicted token as the new input.
def forward(self, input_ids, sembedding):
embedded = self.embedding(input_ids)
output = self.ffn1(embedded)
output, hidden = self.rnn(output, sembedding)
output = self.ffn2(output)
logits = self.selector(output)
return logits, hidden
sembedding is the initial hidden_state for the RNN. This is similar to an encoder-decoder architecture, except that here we do not train the encoder, but we do have access to pretrained encoder representations.
In my training loop I start off each batch with an SOS token and feed each top predicted token into the next step until target_len is reached. I also switch randomly between teacher-forced and free-running training.
def step(self, batch, teacher_forcing_ratio=0.5):
batch_size, target_len = batch["input_ids"].size()[:2]
# Init first decoder input with SOS (BOS) token
decoder_input = torch.tensor([[self.tokenizer.bos_token_id]] * batch_size).to(self.device)
batch["input_ids"] = batch["input_ids"].to(self.device)
# Init first decoder hidden_state: one zero'd second embedding in case the RNN is bidirectional
decoder_hidden = torch.stack((batch["sembedding"],
torch.zeros(*batch["sembedding"].size()))
).to(self.device) if self.model.num_directions == 2 \
else batch["sembedding"].unsqueeze(0).to(self.device)
loss = torch.tensor([0.]).to(self.device)
use_teacher_forcing = random.random() < teacher_forcing_ratio
# contains tuples of predicted and correct words
tokens = []
for i in range(target_len):
# overwrite previous decoder_hidden
output, decoder_hidden = self.model(decoder_input, decoder_hidden)
batch_correct_ids = batch["input_ids"][:, i]
# NLLLoss compute loss between predicted classes (bs x classes) and correct classes for _this word_
# set to ignore the padding index
loss += self.criterion(output[:, 0, :], batch_correct_ids)
batch_predicted_ids = output.topk(1).indices.squeeze(1).detach()
# if using teacher forcing: use the current correct word for the next prediction
# otherwise: use the current prediction for the next prediction
decoder_input = batch_correct_ids.unsqueeze(1) if use_teacher_forcing else batch_predicted_ids
return loss, loss.item() / target_len
I also clip the gradients after each step:
clip_grad_norm_(self.model.parameters(), 1.0)
At first, subsequent predictions are relatively identical, but after a few iterations there's a bit more variation. Relatively quickly, though, ALL predictions turn into other words (but always the same ones), eventually turning into EOS tokens (edit: after changing the activation to ReLU, another token is always predicted; it seems like a random token that always gets repeated). Note that this already happens after 80 steps (batch_size 128).
I found that the returned hidden state of the RNN contains a lot of zeros. I am not sure if that is the problem but it seems like it could be related.
tensor([[[ 3.9874e-02, -6.7757e-06, 2.6094e-04, ..., -1.2708e-17,
4.1839e-02, 7.8125e-03],
[ -7.8125e-03, -2.5341e-02, 7.8125e-03, ..., -7.8125e-03,
-7.8125e-03, -7.8125e-03],
[ -0.0000e+00, -1.0610e-314, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00,
-0.0000e+00, 1.0610e-314]]], device='cuda:0', dtype=torch.float64,
grad_fn=<CudnnRnnBackward>)
I have no idea what might be going wrong, although I suspect that the issue is with my step rather than with the model. I have already tried playing with the learning rate, disabling some layers (LayerNorm, dropout, ffn2), using pretrained embeddings and freezing or unfreezing them, disabling teacher forcing, and using a bidirectional vs. a unidirectional GRU. The end result is always the same.
If you have any pointers, that would be very helpful. I have googled many things concerning neural networks always predicting the same item and I have tried all the suggestions that I could find. Any new ones, no matter how crazy, are welcome!
| In my case the issue appeared to be that the dtype of the initial hidden state was a double and the input was a float. I don't quite understand why that is an issue, but casting the hidden state to a float solved the issue. If you have any intuition about why this might be a problem for PyTorch, do let me know in the comments or, better yet, on the official PyTorch forums.
EDIT: as that topic shows, this is a bug in PyTorch 1.6 that was fixed in 1.7. In 1.7 you get an error message instead, which will hopefully save you the trouble of debugging all your code without finding the cause of the strange behaviour.
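A minimal repro sketch of the mismatch and the fix (the GRU shapes are arbitrary, chosen only for illustration):
rnn = torch.nn.GRU(8, 8, batch_first=True)
x = torch.randn(1, 4, 8) # float32 input
h0 = torch.zeros(1, 1, 8, dtype=torch.double) # double initial hidden state
out, h = rnn(x, h0.float()) # casting the hidden state to float fixes it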
| https://stackoverflow.com/questions/63870162/ |
RuntimeError: CUDA out of memory - problem in code or GPU? | I am currently working on a computer vision project. I keep getting a runtime error that says "CUDA out of memory". I have tried all possible remedies, like reducing the batch size and image resolution, clearing the cache, deleting variables after training starts, reducing the amount of image data and so on... Unfortunately, this error doesn't stop. I have an Nvidia GeForce 940MX graphics card on my HP Pavilion laptop. I have installed CUDA 10.2 and cuDNN from the PyTorch installation page. My aim was to create a Flask website out of this model but I am stuck with this issue. Any suggestions for this problem will be helpful.
This is my code
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import os
import cv2
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import albumentations as A
from torch.utils.data import TensorDataset, DataLoader,Dataset
from torchvision import models
from collections import defaultdict
from torch.utils.data.sampler import RandomSampler
import torch.optim as optim
from torch.optim import lr_scheduler
from sklearn import model_selection
from tqdm import tqdm
import gc
# generate data from csv file
class Build_dataset(Dataset):
def __init__(self, csv, split, mode, transform=None):
self.csv = csv.reset_index(drop=True)
self.split = split
self.mode = mode
self.transform = transform
def __len__(self):
return self.csv.shape[0]
def __getitem__(self, index):
row = self.csv.iloc[index]
image = cv2.imread(row.filepath)
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if self.transform is not None:
res = self.transform(image=image)
image = res['image'].astype(np.float32)
else:
image = image.astype(np.float32)
image = image.transpose(2, 0, 1)
data = torch.tensor(image).float()
if self.mode == 'test':
return data
else:
return data, torch.tensor(self.csv.iloc[index].target).long()
# training epoch
def train_epoch(model, loader, optimizer,loss_fn,device, scheduler,n_examples):
model = model.train()
losses = []
correct_predictions = 0
for inputs, labels in tqdm(loader):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, labels)
correct_predictions += torch.sum(preds == labels)
losses.append(loss.item())
loss.backward()
optimizer.step()
optimizer.zero_grad()
# here you delete inputs and labels and then use gc.collect
del inputs, labels
gc.collect()
return correct_predictions.double() / n_examples, np.mean(losses)
# validation epoch
def val_epoch(model, loader,loss_fn, device,n_examples):
model = model.eval()
losses = []
correct_predictions = 0
with torch.no_grad():
for inputs, labels in tqdm(loader):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, labels)
correct_predictions += torch.sum(preds == labels)
losses.append(loss.item())
# here you delete inputs and labels and then use gc.collect
del inputs, labels
gc.collect()
return correct_predictions.double() / n_examples, np.mean(losses)
def train(model,device, num_epochs):
# generate data
dataset_train = Build_dataset(df_train, 'train', 'train', transform=transforms_train)
dataset_valid = Build_dataset(df_valid, 'train', 'val', transform=transforms_val)
#load data
train_loader = DataLoader(dataset_train, batch_size = 16,sampler=RandomSampler(dataset_train), num_workers=4)
valid_loader = DataLoader(dataset_valid, batch_size = 16,shuffle = True, num_workers= 4 )
dataset_train_size = len(dataset_train)
dataset_valid_size = len(dataset_valid)
optimizer = optim.Adam(model.parameters(), lr = 3e-5)
model = model.to(device)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, patience = 3,threshold = 0.001, mode = 'max')
loss_fn = nn.CrossEntropyLoss().to(device)
history = defaultdict(list)
best_accuracy = 0.0
for epoch in range(num_epochs):
print(f'Epoch {epoch+1} / {num_epochs}')
print ('-'*30)
train_acc, train_loss = train_epoch(model, train_loader, optimizer, loss_fn, device, scheduler, dataset_train_size)
print(f'Train loss {train_loss} accuracy {train_acc}')
valid_acc, valid_loss = val_epoch(model, valid_loader, loss_fn, device,dataset_valid_size)
print(f'Val loss {valid_loss} accuracy {valid_acc}')
print()
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['val_acc'].append(valid_acc)
history['val_loss'].append(valid_loss)
if valid_acc > best_accuracy:
torch.save(model.state_dict(), 'best_model_state.bin')
best_accuracy = valid_acc
print(f'Best Accuracy: {best_accuracy}')
model.load_state_dict(torch.load('best_model_state.bin'))
return model, history
if __name__ == '__main__':
#competition data -2020
data_dir = "C:\\Users\\Aniruddh\\Documents\\kaggle\\jpeg_melanoma_2020"
#competition data - 2019
data_dir2 = "C:\\Users\\Aniruddh\\Documents\\kaggle\\jpeg_melanoma_2019"
# device
device = torch.device("cuda")
# augmenting images
image_size = 384
transforms_train = A.Compose([
A.Transpose(p=0.5),
A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.RandomBrightness(limit=0.2, p=0.75),
A.RandomContrast(limit=0.2, p=0.75),
A.OneOf([
A.MedianBlur(blur_limit=5),
A.GaussianBlur(blur_limit=5),
A.GaussNoise(var_limit=(5.0, 30.0)),
], p=0.7),
A.OneOf([
A.OpticalDistortion(distort_limit=1.0),
A.GridDistortion(num_steps=5, distort_limit=1.),
A.ElasticTransform(alpha=3),
], p=0.7),
A.CLAHE(clip_limit=4.0, p=0.7),
A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=10, p=0.5),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, border_mode=0, p=0.85),
A.Resize(image_size, image_size),
A.Cutout(max_h_size=int(image_size * 0.375), max_w_size=int(image_size * 0.375), num_holes=1, p=0.7),
A.Normalize()
])
transforms_val = A.Compose([
A.Resize(image_size, image_size),
A.Normalize()
])
# create data
df_train = pd.read_csv("C:\\Users\\Aniruddh\\Documents\\kaggle\\jpeg_melanoma_2020\\train.csv") #/kaggle/input/siim-isic-melanoma-classification/train.csv
df_train.head()
df_train['is_ext'] = 0
df_train['filepath'] = df_train['image_name'].apply(lambda x: os.path.join(data_dir, 'train', f'{x}.jpg'))
# dataset from 2020 data
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('seborrheic keratosis', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('lichenoid keratosis', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('solar lentigo', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('lentigo NOS', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('cafe-au-lait macule', 'unknown'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('atypical melanocytic proliferation', 'unknown'))
# dataset from 2019 data
df_train2 = pd.read_csv('/content/drive/My Drive/siim_melanoma images/train_2019.csv')
df_train2 = df_train2[df_train2['tfrecord'] >= 0].reset_index(drop=True)
#df_train2['fold'] = df_train2['tfrecord'] % 5
df_train2['is_ext'] = 1
df_train2['filepath'] = df_train2['image_name'].apply(lambda x: os.path.join(data_dir2, 'train', f'{x}.jpg'))
df_train2['diagnosis'] = df_train2['diagnosis'].apply(lambda x: x.replace('NV', 'nevus'))
df_train2['diagnosis'] = df_train2['diagnosis'].apply(lambda x: x.replace('MEL', 'melanoma'))
#concat both 2019 and 2020 data
df_train = pd.concat([df_train, df_train2]).reset_index(drop=True)
# shuffle data
df = df_train.sample(frac=1).reset_index(drop=True)
# creating 8 different target values
new_target = {d: idx for idx, d in enumerate(sorted(df.diagnosis.unique()))}
df['target'] = df['diagnosis'].map(new_target)
mel_idx = new_target['melanoma']
df = df[['filepath','diagnosis', 'target', 'is_ext']]
class_names = list(df['diagnosis'].unique())
# splitting train and validation data by 20%
df_valid = df[:11471]
df_train = df[11472:].reset_index()
df_train = df_train.drop(columns = ['index'])
# create model
def create_model(n_classes):
model = models.resnet50(pretrained=True)
n_features = model.fc.in_features
model.fc = nn.Linear(n_features, n_classes)
return model.to(device)
# model
base_model = create_model(len(class_names))
# train model
base_model, history = train(base_model, device, num_epochs = 15)
Code Objective
The purpose of the project is to classify skin cancer images by creating 8 different target variables from the given datasets (i.e the competition was about classifying benign and malignant images but I used the diagnosis column on the dataset as my target variable as the data was really skewed). The model used is Resnet-50 from torchvision models.
These were the data used
skin images (this year competition): https://www.kaggle.com/cdeotte/jpeg-melanoma-384x384
skin images (last year competition): https://www.kaggle.com/cdeotte/jpeg-isic2019-384x384
I decided to create a Flask application out of this, but the CUDA memory was always causing a runtime error:
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 2.00 GiB total capacity; 1.21 GiB already allocated; 43.55 MiB free; 1.23 GiB reserved in total by PyTorch)
These are the details about my Nvidia GPU
Sun Sep 13 19:09:34 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 451.67 Driver Version: 451.67 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940MX WDDM | 00000000:01:00.0 Off | N/A |
| N/A 63C P8 N/A / N/A | 37MiB / 2048MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
# more information about my GPU
==============NVSMI LOG==============
Timestamp : Sun Sep 13 19:11:22 2020
Driver Version : 451.67
CUDA Version : 11.0
Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : GeForce 940MX
Product Brand : GeForce
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : N/A
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : WDDM
Pending : WDDM
Serial Number : N/A
GPU UUID : GPU-9a8c69df-26f2-2a98-3712-ea22f6add038
Minor Number : N/A
VBIOS Version : 82.08.6D.00.8C
MultiGPU Board : No
Board ID : 0x100
GPU Part Number : N/A
Inforom Version
Image Version : N/A
OEM Object : N/A
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization Mode : None
Host VGPU Mode : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x134D10DE
Bus Id : 00000000:01:00.0
Sub System Id : 0x83F9103C
GPU Link Info
PCIe Generation
Max : 3
Current : 1
Link Width
Max : 4x
Current : 4x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 0 KB/s
Fan Speed : N/A
Performance State : P8
Clocks Throttle Reasons
Idle : Not Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : N/A
HW Power Brake Slowdown : N/A
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
FB Memory Usage
Total : 2048 MiB
Used : 37 MiB
Free : 2011 MiB
BAR1 Memory Usage
Total : 256 MiB
Used : 225 MiB
Free : 31 MiB
Compute Mode : Default
Utilization
Gpu : 0 %
Memory : 0 %
Encoder : N/A
Decoder : N/A
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Aggregate
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
CBU : N/A
Total : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending Page Blacklist : N/A
Remapped Rows : N/A
Temperature
GPU Current Temp : 60 C
GPU Shutdown Temp : 99 C
GPU Slowdown Temp : 94 C
GPU Max Operating Temp : 90 C
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
Power Readings
Power Management : N/A
Power Draw : N/A
Power Limit : N/A
Default Power Limit : N/A
Enforced Power Limit : N/A
Min Power Limit : N/A
Max Power Limit : N/A
Clocks
Graphics : 405 MHz
SM : 405 MHz
Memory : 405 MHz
Video : 396 MHz
Applications Clocks
Graphics : 1006 MHz
Memory : 1001 MHz
Default Applications Clocks
Graphics : 1004 MHz
Memory : 1001 MHz
Max Clocks
Graphics : 1241 MHz
SM : 1241 MHz
Memory : 1001 MHz
Video : 1216 MHz
Max Customer Boost Clocks
Graphics : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes : None
If I try running this on the CPU, the whole system freezes to the point where I have to manually restart the computer. Also, if I try running the code with a lower image resolution, smaller batch sizes, etc., each epoch takes around 12 hours to complete on the CPU, which is definitely impractical.
| I ran your model on Kaggle with a batch_size = 48 and attached a screenshot of the requirements. An epoch takes around 30-40 mins to complete. I would say you could easily train your model with the 30+ hrs Kaggle gives.
I also tested inference with batch_size=1 and set num_workers=0 in your dataloader; the GPU usage was 1.3 GB.
I would recommend training your model on Kaggle/Colab and downloading the weights onto your local machine. Later, you could run inference on your machine with batch size = 1. Inference usually happens faster.
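For reference, a hedged sketch of the low-memory local inference I mean, reusing create_model and the saved weights from the question (dataset_test is assumed to be a Build_dataset created with mode='test'):
model = create_model(len(class_names))
model.load_state_dict(torch.load('best_model_state.bin', map_location=device))
model.eval()
test_loader = DataLoader(dataset_test, batch_size=1, num_workers=0)
with torch.no_grad():
    for faces in test_loader:
        probs = torch.softmax(model(faces.to(device)), dim=1)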
| https://stackoverflow.com/questions/63871643/ |
How to convert a torch tensor into a byte string? | I'm trying to serialize a torch tensor using protobuf and it seems using BytesIO along with torch.save() doesn't work. I have tried:
import torch
import io
x = torch.randn(size=(1,20))
buff = io.BytesIO()
torch.save(x, buff)
print(f'buffer: {buff.read()}')
to no avail as it results in b'' in the output! How should I be going about this?
| You need to seek to the beginning of the buffer before reading:
import torch
import io
x = torch.randn(size=(1,20))
buff = io.BytesIO()
torch.save(x, buff)
buff.seek(0) # <-- this is what you were missing
print(f'buffer: {buff.read()}')
gives you this magnificent output:
buffer: b'PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x12\x00archive/data.pklFB\x0e\x00ZZZZZZZZZZZZZZ\x80\x02ctorch._utils\n_rebuild_tensor_v2\nq\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02X\x0f\x00\x00\x00140417054790352q\x03X\x03\x00\x00\x00cpuq\x04K\x14tq\x05QK\x00K\x01K\x14\x86q\x06K\x14K\x01\x86q\x07\x89ccollections\nOrderedDict\nq\x08)Rq\ttq\nRq\x0b.PK\x07\x08\xf3\x08u\x13\xa8\x00\x00\x00\xa8\x00\x00\x00PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1c\x00\x0e\x00archive/data/140417054790352FB\n\x00ZZZZZZZZZZ\xba\xf3x?\xb5\xe2\xc4=)R\x89\xbfM\x08\x19\xbfo%Y\xbf\x05\xc0_\xbf\x03N4\xbe\xdd_ \xc0&\xc4\xb5?\xa7\xfd\xc4?f\xf1$?Ll\xa6?\xee\x8e\x80\xbf\x88Uq?.<\xd8?{\x08\xb2?\xb3\xa3\xba>q\xcd\xbc?\xba\xe3h\xbd\xcan\x11\xc0PK\x07\x08A\xf3\xdc>P\x00\x00\x00P\x00\x00\x00PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x003\x00archive/versionFB/\x00ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ3\nPK\x07\x08\xd1\x9egU\x02\x00\x00\x00\x02\x00\x00\x00PK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\xf3\x08u\x13\xa8\x00\x00\x00\xa8\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00archive/data.pklPK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00A\xf3\xdc>P\x00\x00\x00P\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00archive/data/140417054790352PK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\xd1\x9egU\x02\x00\x00\x00\x02\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0\x01\x00\x00archive/versionPK\x06\x06,\x00\x00\x00\x00\x00\x00\x00\x1e\x03-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\xc5\x00\x00\x00\x00\x00\x00\x00\x12\x02\x00\x00\x00\x00\x00\x00PK\x06\x07\x00\x00\x00\x00\xd7\x02\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00PK\x05\x06\x00\x00\x00\x00\x03\x00\x03\x00\xc5\x00\x00\x00\x12\x02\x00\x00\x00\x00'
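As a side note, io.BytesIO.getvalue() returns the buffer's entire contents regardless of the current stream position, so the seek can be skipped:
print(f'buffer: {buff.getvalue()}')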
| https://stackoverflow.com/questions/63880081/ |
Is my batch accumulation implementation correct? | I’d like to know if my code for training a model with batch accumulation is correct, especially the part regarding the loss calculation, because I’m not sure this is the right way to do it.
Here’s my code:
def train (start_epochs, n_epochs, best_acc, train_generator, val_generator, model, optimizer, criterion, checkpoint_path, best_model_path):
#num_epochs = 25
since = time.time()
#best_model_wts = copy.deepcopy(model.state_dict())
#best_acc = 0.0
train_loss = []
val_loss = []
train_acc = []
val_acc = []
batch_accumulation = 8
for epoch in tqdm(range(start_epochs, n_epochs+1)):
running_train_loss = 0.0
running_val_loss = 0.0
running_train_corrects = 0
running_val_corrects = 0
optimizer.zero_grad
#Training
model.train()
for i, (faces, labels) in tqdm(enumerate(train_generator)):
faces = faces.to(device)
labels = labels.to(device)
#forward
outputs = model(faces)
#predictions of the model determined using the torch.max() function, which returns the index of the maximum value in a tensor.
_, preds = torch.max(outputs[1], 1)
#pass the model outputs and the true image labels to the loss function
loss = criterion(outputs[1], labels)
#loss = loss / batch_accumulation
running_train_loss += loss.item()
# Backprop and Adam optimisation
loss.backward()
# Track the accuracy and loss
running_train_corrects += torch.sum(preds == labels.data)
if (i+1)% batch_accumulation == 0:
optimizer.step()
optimizer.zero_grad # zero the gradient buffers
# calculate average losses and accuracy
epoch_train_loss = running_train_loss / len(train_generator.dataset)
epoch_train_acc = ((running_train_corrects.double() / len(train_generator.dataset)) * 100)
train_loss.append(epoch_train_loss)
train_acc.append(epoch_train_acc)
print('Train Loss: {:.4f} Train Acc: {:.2f}%'.format(epoch_train_loss, epoch_train_acc))
#Validation
with torch.set_grad_enabled(False):
model.eval()
for i , (faces_val, labels_val) in tqdm(enumerate(val_generator)):
faces_val = faces_val.to(device)
labels_val = labels_val.to(device)
if (i+1)% batch_accumulation == 0:
outputs_val = model(faces_val)
_, preds_val = torch.max(outputs_val[1], 1)
loss_val = criterion(outputs_val[1], labels_val)
running_val_loss += loss_val.item()
#running_val_loss = running_val_loss +((1 /(i+1)) * (loss.item() - running_val_loss))
running_val_corrects += torch.sum(preds_val == labels_val.data)
# calculate average losses and accuracy
epoch_val_loss = running_val_loss / len(validation_generator.dataset)
epoch_val_acc = (running_val_corrects.double() / len(validation_generator.dataset)) * 100
val_loss.append(epoch_val_loss)
val_acc.append(epoch_val_acc)
print('Validation Loss: {:.4f} Validation Acc: {:.2f}%'.format(epoch_val_loss, epoch_val_acc))
I got strange epoch training loss values (like 456.890) and I’m not sure about the if statement in the validation part.
| There is no need for gradient accumulation (actual term) during validation phase, so this part over there:
if (i+1)% batch_accumulation == 0:
outputs_val = model(faces_val)
does not make any sense (no need for if). This technique is used solely for training to make the gradient estimation more accurate for small batches, hence we should focus on it.
Gradient accumulation
Each time we run backward(), the calculated gradients are added to the leaves of the graph. Usually, we use the mean across the whole batch (divide the sum by the number of elements in the batch). Here, we accumulate the loss, hence we should divide it by the number of accumulation steps, which gives us (actually you had it commented out):
loss = criterion(outputs[1], labels)
loss = loss / batch_accumulation
Otherwise the loss might be too big (probably the case here) and make the network unstable, even with really small learning rates.
You may also run this:
running_train_loss += loss.item()
On a per-accumulation basis.
And lastly, as pointed out by @Dishin H Goyani, zero_grad is a function, so you should run:
optimizer.zero_grad()
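For reference, a minimal accumulation loop with both fixes applied might look like this (model, optimizer, criterion, device, batch_accumulation and train_generator are assumed to exist, as in the question):
optimizer.zero_grad()
for i, (faces, labels) in enumerate(train_generator):
    outputs = model(faces.to(device))
    loss = criterion(outputs[1], labels.to(device)) / batch_accumulation # scale before accumulating
    loss.backward()
    if (i + 1) % batch_accumulation == 0:
        optimizer.step()
        optimizer.zero_grad() # note the parentheses: zero_grad() is a call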
| https://stackoverflow.com/questions/63880813/ |
Neural network layer without all connections | The weights in a dense layer of a neural network is a (n,d) matrix, and I want to force some of these weights to always be zero. I have another (n,d) matrix which is the mask of which entries can be non-zero. The idea is that the layer should not be truly dense, but have some connections missing (i.e. equal to 0).
How can achieve this while training with PyTorch (or Tensorflow)? I don't want these weights to become non-zero while training.
One method, if it doesn't support it directly, would be to zero-out the desired entries after each iteration of training.
| You can take advantage of pytorch's sparse data type:
class SparseLinear(nn.Module):
def __init__(self, in_features, out_features, sparse_indices):
super(SparseLinear, self).__init__()
self.weight = nn.Parameter(data=torch.sparse.FloatTensor(sparse_indices, torch.randn(sparse_indices.shape[1]), [in_features, out_features]), requires_grad=True)
self.bias = nn.Parameter(data=torch.randn(out_features), requires_grad=True)
def forward(self, x):
return torch.sparse.addmm(self.bias, self.weight, x, 1., 1.)
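An alternative sketch, in line with the zero-out idea from the question: keep a dense nn.Linear and re-apply a 0/1 mask of the same shape as the weight after each optimizer step (layer and mask are assumed to exist):
with torch.no_grad():
    layer.weight.mul_(mask)
Or block the masked gradients directly with a hook, so those entries never move away from zero once initialized to it:
layer.weight.register_hook(lambda grad: grad * mask)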
| https://stackoverflow.com/questions/63893602/ |
Pytorch NLP model doesn’t use GPU when making inference | I have an NLP model trained with PyTorch to be run on a Jetson Xavier. I installed jetson-stats to monitor CPU and GPU usage. When I run the Python script, only the CPU cores are under load; the GPU bar does not increase. I have searched Google with keywords like "How to check if pytorch is using the GPU?" and checked the results on stackoverflow.com etc. According to the advice given to others facing a similar issue, CUDA is available and there is a CUDA device on my Jetson Xavier. However, I don’t understand why the GPU bar does not change while the CPU core bars go to their ends.
I don’t want to use the CPU; it takes too long to compute. It seems to me that it uses the CPU, not the GPU. How can I be sure, and if it uses the CPU, how can I switch it to the GPU?
Note: the model is taken from the Hugging Face transformers library. I have tried calling the cuda() method on the model (model.cuda()). In this scenario the GPU is used, but I cannot get an output from the model; it raises an exception.
Here is the code:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
import torch
BERT_DIR = "savasy/bert-base-turkish-squad"
tokenizer = AutoTokenizer.from_pretrained(BERT_DIR)
model = AutoModelForQuestionAnswering.from_pretrained(BERT_DIR)
nlp=pipeline("question-answering", model=model, tokenizer=tokenizer)
def infer(question,corpus):
try:
ans = nlp(question=question, context=corpus)
return ans["answer"], ans["score"]
except:
ans = None
pass
return None, 0
| The problem was solved by loading the pipeline with the device parameter; device=0 places it on the first CUDA GPU (the default, device=-1, keeps it on the CPU):
nlp = pipeline("question-answering", model=BERT_DIR, device=0)
| https://stackoverflow.com/questions/63899303/ |
Using Transformer for Text-Summarization | I am using huggingface transformer models for text-summarization.
Currently I am testing different models such as T5 and Pegasus.
These models were trained to condense long texts into very short summaries, e.g. a maximum of two sentences. My task, however, requires summaries that are about half the size of the original text, so the generated summaries are too short for my purpose.
My question now is: is there a way to tell the model which content came before?
This would be kind of similar to the logic inside stateful RNNs (although I know they work completely differently).
If so, I could summarize small windows over the sentences, each time carrying along the information about what came before.
Or is this just in my head? I can't believe I am the only one who wants summaries that are shorter than the text, but longer than one or two sentences.
Thank you
| Why not transfer learning? Train them on your specific texts and summaries.
I trained T5 on a specific, limited text corpus over 5 epochs and got very good results. I adapted the code from here to my needs: https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb
Let me know if you have any specific training questions.
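A hedged sketch of the windowing idea from the question, using the standard T5 generate API (the chunking and the prompt format are my own choices, not an established recipe):
from transformers import T5Tokenizer, T5ForConditionalGeneration
tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
prev_summary = ""
summaries = []
for chunk in chunks: # assumes you split the text into windows of sentences yourself
    inp = tok("summarize: " + prev_summary + " " + chunk, return_tensors="pt", truncation=True)
    ids = model.generate(inp.input_ids, max_length=150)
    prev_summary = tok.decode(ids[0], skip_special_tokens=True)
    summaries.append(prev_summary)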
| https://stackoverflow.com/questions/63904821/ |
Specifying number of cells in LSTM layer in PyTorch | I don't fully understand the LSTM layer in PyTorch. When I instantiate an LSTM layer how can I specify the number of LSTM cells inside the layer? My first thought was that it was the "num_layers" argument, if we assume that LSTM cells are connected vertically. But if that is the case how can we implement stacked LSTM with for example two layers with 8 cells each?
| The amount of cells of an LSTM (or RNN or GRU) is the amount of timesteps your input has/needs. For example, when you want to run the word „hello“ through the LSTM function in PyTorch, you can just convert the word to a vector (with one-hot encoding or embeddings) and then pass that vector through the LSTM function. It will then, in the background, iterate through all the embedded characters („h“, „e“, „l“, ...). And each input can even have a different amount of timesteps/cells; for example, when you want to pass „hello“ and after that „Joe“, the LSTM will need a different amount of iterations (5 for hello, 3 for Joe). So as you can see, there is no need to give an amount of cells! Hope that answer satisfied you. :)
Edit
An example:
sentence = "Hey Im Joe"
embedding_size = 300
batch_size = 1 # batch-size 1 for demonstration
# create_embedding is a placeholder returning one 300-dim tensor per word
input_ = [create_embedding(word, dims=embedding_size) for word in sentence.split()]
# the LSTM will need three timesteps or cells to process that sentence
input_ = torch.stack(input_).reshape(batch_size, -1, embedding_size)
hidden_size = 256
layers = 2
lstm = nn.LSTM(input_size=embedding_size, hidden_size=hidden_size, num_layers=layers, dropout=0.5, batch_first=True)
# initialize hidden-state (must be a tuple with the following dimensions)
hidden = (torch.zeros(layers, batch_size, hidden_size), torch.zeros(layers, batch_size, hidden_size))
outputs, hidden = lstm(input_, hidden)
# outputs is now a list containing the outputs of each timestep
# for classification you can take output of the last timestep and use it further, like this
output = outputs[:, -1]
So what happens in this (outputs, hidden = lstm(input, hidden)) line?
Again pseudo code:
# inside the LSTM function
# let's say, for demonstration purposes, the embedding for "Hey" is a, for "Im" it's b and for "Joe" it's c
input_ = [a, b, c]
def LSTM(input_sentence):
hidden = ...
outputs = []
# each iteration is one timestep/cell
for embedded_word in input_sentence:
output, hidden = neural_network(embedded_word, hidden)
outputs.append(output)
# returns all outputs and hiddenstate (which you normally dont need)
return outputs, hidden
outputs, hidden = LSTM(input_)
Is it clear now what the LSTM function does and how to use it?
| https://stackoverflow.com/questions/63909619/ |
Why do torch.sin() and numpy.sin() differ by orders of magnitude at integer multiples of Pi? | Why do the trigonometric functions of PyTorch and NumPy, when evaluated at integer multiples of Pi, produce results that differ by several orders of magnitude?
>>> torch.sin(torch.ones(1)*2*np.pi)
tensor([1.7485e-07])
>>> np.sin(np.ones(1)*2*np.pi)
array([-2.4492936e-16])
| Torch defaults to 32-bit floats, while NumPy defaults to 64-bit. The rounding error you're getting in Torch is around the expected scale for 32-bit floats. Specify a different dtype if you want higher precision.
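For instance, forcing 64-bit floats on the Torch side reproduces NumPy's result:
>>> torch.sin(torch.ones(1, dtype=torch.float64)*2*np.pi)
tensor([-2.4493e-16], dtype=torch.float64)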
| https://stackoverflow.com/questions/63910283/ |
Randomly select items from two equally sized tensors | Assume that we have two equally sized tensors of size batch_size * 1. For each index in the batch dimension we want to choose randomly between the two tensors. My solution was to create an indices tensor that contains random 0 or 1 indices of size batch_size and use those to index_select from the concatenation of the two tensors. However, to do so I had to "view" the concatenated tensor, and the solution ended up being quite "ugly":
import torch
bs = 8
a = torch.zeros(bs, 1)
print("a size", a.size())
b = torch.ones(bs, 1)
c = torch.cat([a, b], dim=-1)
print(c)
print("c size", c.size())
# create bs number of random 0 and 1's
indices = torch.randint(0, 2, [bs])
print("idxs size", indices.size())
print("idxs", indices)
# use `indices` to slice the `cat`ted tensor
d = c.view(1, -1).index_select(-1, indices).view(-1, 1)
print("d size", d.size())
print(d)
I am wondering whether there is a prettier and, more importantly, more efficient solution.
| Posting two answers that I got over at the PyTorch forums
import torch
bs = 8
a = torch.zeros(bs, 1)
b = torch.ones(bs, 1)
c = torch.cat([a, b], dim=-1)
choices_flat = c.view(-1)
# index = torch.randint(choices_flat.numel(), (bs,))
# or if replace = False
index = torch.randperm(choices_flat.numel())[:bs]
select = choices_flat[index]
print(select)
import torch
bs = 8
a = torch.zeros(bs, 1)
print("a size", a.size())
b = torch.ones(bs, 1)
idx = torch.randint(2 * bs, (bs,))
d = torch.cat([a, b])[idx] # [bs, 1]
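If you specifically want an independent per-row choice between a and b (rather than sampling from the flattened pool), torch.where gives a compact alternative sketch:
choice = torch.randint(0, 2, (bs, 1)).bool()
d = torch.where(choice, b, a) # take b where choice is True, a elsewhere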
| https://stackoverflow.com/questions/63911276/ |
Where is the definition of torch.embedding in PyTorch? | I'm trying to understand how PyTorch creates embeddings and read the source code of torch.nn.functional.embedding github link.
The function returns the result of torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse).
Then, I try to understand the definition of torch.embedding but I can't find its source code in the GitHub repository. Where is it?
| Many of PyTorch's functions are implemented in C++. The entrypoint for torch.embedding is located here.
| https://stackoverflow.com/questions/63913756/ |
Is that normal to train properly with 2 sets and/or 2 diff losses function? | I have 2 training sets: one with labels and one without labels.
When training, I’m simultaneously loading one batch from the labelled set and computing the first loss function, then one batch from the unlabeled set and computing the other loss function. Finally I sum the two losses and call loss.backward().
Does this work? It seems quite uncommon to me, so I'm asking whether the autograd engine knows how to back-propagate properly (i.e. correctly) in this setup.
Thank you.
| I got an answer from a PyTorch forum discussion. The autograd engine in PyTorch works correctly with 2 or more different loss functions, so we don't need to worry about its correctness.
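Schematically, assuming both losses are computed from the same model's parameters:
loss = loss_labeled + loss_unlabeled # each term contributes its own gradients
loss.backward()
optimizer.step()
optimizer.zero_grad()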
Thanks
| https://stackoverflow.com/questions/63915186/ |
Get value from c10::Dict in Pytorch C++ | I'm using a TorchScript Model on Pytorch C++ Frontend.
The model in Python returns an output dict as Dict[str, List[torch.Tensor]].
When I use it in C++, it returns a c10::Dict<c10::IValue, c10::IValue>. What is the equivalent of this Python code:
value_a = output['key_a']
value_b = output['key_b']
in C++ to get value from c10::Dict?
I have tried this but it does not work.
torch::IValue key_a("key_a");
torch::IValue key_b("key_b");
c10::IValue value_a = output[key_a];
c10::IValue value_b = output[key_b];
std::cout << value_a << std::endl;
std::cout << value_b << std::endl;
And error:
error: type 'c10::Dict<c10::IValue, c10::IValue>' does not provide a subscript operator
| You can find header file of c10:Dict here. What you want is at method (defined here), so:
auto value_a = output.at(key_a);
Should do the trick.
Also you don't have to create torch::IValue key_ay("key_a") explicitly, this should be sufficient:
auto value_a = output.at("key_a");
| https://stackoverflow.com/questions/63917001/ |
Unittest the pytorch forward function | I want to unit-test the overridden forward function of my network model in PyTorch. So I loaded my model (pretrained from the model zoo) in the setUp method, set a seed and created a random batch. In my testForward method I tested the result of forward against shape and numel, but I also want to check a specific value, which appears to be 0. I wasn't sure about that, so I also checked my params in setUp, and they turn out not to be 0.
import unittest
import torch
from SemanticSegmentation.models.fcn8 import FCN8
class TestFCN8(unittest.TestCase):
def setUp(self):
self.model = FCN8(8, pretrained=True)
torch.manual_seed(0)
self.x = torch.rand((4, 3, 45, 45))
for param in self.model.parameters():
print(param.data)
def testForward(self):
self.assertEqual(self.model.forward(self.x).shape.numel(), 64800)
self.assertEqual(str(self.model.forward(self.x).shape), 'torch.Size([4, 8, 45, 45])')
print(self.model.named_parameters)
if __name__ == "__main__":
unittest.main()
So my question is: the shape of the tensor returned by forward is what I expect, but why is this tensor completely zero? I expected at least a few non-zero values.
The imported model is based on a VGG16 network with upscoring after conv layers 4, 8 and 16. If needed I could also present the model code.
| Ok, after tinkering around and debugging the forward function I came to the following explanation:
Some Information about the architecture
If you take classes from Andrew Ng or others, you learn not to initialize all weights to the same value, for example 0. But this is exactly what the authors of the original FCN paper do for the score layers, and they say it is fine because it doesn't change performance or yield faster convergence (FCN paper).
My Solution
So for testing purposes I initialize the score tensors to seeded random values in the test module, which I can then test against:
import unittest
import torch
from SemanticSegmentation.models.fcn8 import FCN8
class TestFCN8(unittest.TestCase):
def setUp(self):
self.model = FCN8(8, pretrained=True)
torch.manual_seed(0)
# instead of zero init for score tensors use random init
self.model.score_fr[6].weight.data.random_()
self.model.score_fr[6].bias.data.random_()
self.model.score_pool3.weight.data.random_()
self.model.score_pool3.bias.data.random_()
self.model.score_pool4.weight.data.random_()
self.model.score_pool4.bias.data.random_()
self.x = torch.rand((4, 3, 45, 45))
def testForward(self):
self.assertEqual(
self.model.forward(self.x).shape.numel(), 64800)
self.assertEqual(
list(self.model.forward(self.x).shape), [4, 8, 45, 45])
self.assertEqual(
float(self.model.forward(self.x)[3][4][44][4]), 2277257216.0)
if __name__ == "__main__":
unittest.main()
| https://stackoverflow.com/questions/63917991/ |
Whitelist tokens for text generation (XLNet, GPT-2) in huggingface-transformers | In the documentation on text generation (https://huggingface.co/transformers/main_classes/model.html#generative-models) there is the option to put
bad_words_ids (List[int], optional) – List of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True).
Is there also the option to put something along the lines of "allowed_words_ids"? The idea would be to restrict the language of the generated texts.
| I'd also suggest doing what Sahar Mills said. You can do it in the following way.
You get the whole vocab of the model you are using, e.g.
from transformers import AutoTokenizer
# Load tokenizer
checkpoint = "CenIA/distillbert-base-spanish-uncased" #Example model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
vocab = tokenizer.get_vocab()
list(vocab.keys())[:100] # to see the first 100 words
Define the words you DO want the model to be able to generate.
allowed_words = ['forzado', 'vendieron', 'verticales'] # or load them from somewhere else
Define a function to create bad_words_ids, i.e. the whole model vocab minus the allowed words. Note that generate() expects a list of token-id lists.
def create_bad_words_ids(vocab, allowed_words):
    return [[token_id] for word, token_id in vocab.items() if word not in allowed_words]
bad_words_ids = create_bad_words_ids(vocab=vocab, allowed_words=allowed_words)
print(bad_words_ids)
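Then pass it to generate; a minimal hedged usage sketch (a causal LM model and tokenized input_ids are assumed to exist):
output_ids = model.generate(input_ids, bad_words_ids=bad_words_ids)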
Hope it helps,
cheers
| https://stackoverflow.com/questions/63920887/ |
Intercepting CUDA calls | I am trying to intercept cudaMemcpy calls from the pytorch library for analysis. I noticed NVIDIA has a cuHook example in the CUDA toolkit samples. However that example requires one to modify the source code of the application itself which I cannot do in this case. So is there a way to write a hook to intercept CUDA calls without modifying the application source code?
| A CUDA runtime API call can be hooked (on linux) using the "LD_PRELOAD trick" if the application that is being run is dynamically linked to the CUDA runtime library (libcudart.so).
Here is a simple example on linux:
$ cat mylib.cpp
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>
#include <cuda_runtime.h>
cudaError_t cudaMemcpy ( void* dst, const void* src, size_t count, cudaMemcpyKind kind )
{
cudaError_t (*lcudaMemcpy) ( void*, const void*, size_t, cudaMemcpyKind) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind ))dlsym(RTLD_NEXT, "cudaMemcpy");
printf("cudaMemcpy hooked\n");
return lcudaMemcpy( dst, src, count, kind );
}
cudaError_t cudaMemcpyAsync ( void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t str )
{
cudaError_t (*lcudaMemcpyAsync) ( void*, const void*, size_t, cudaMemcpyKind, cudaStream_t) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind, cudaStream_t ))dlsym(RTLD_NEXT, "cudaMemcpyAsync");
printf("cudaMemcpyAsync hooked\n");
return lcudaMemcpyAsync( dst, src, count, kind, str );
}
$ g++ -I/usr/local/cuda/include -fPIC -shared -o libmylib.so mylib.cpp -ldl -L/usr/local/cuda/lib64 -lcudart
$ cat t1.cu
#include <stdio.h>
int main(){
int a, *d_a;
cudaMalloc(&d_a, sizeof(d_a[0]));
cudaMemcpy(d_a, &a, sizeof(a), cudaMemcpyHostToDevice);
cudaStream_t str;
cudaStreamCreate(&str);
cudaMemcpyAsync(d_a, &a, sizeof(a), cudaMemcpyHostToDevice);
cudaMemcpyAsync(d_a, &a, sizeof(a), cudaMemcpyHostToDevice, str);
cudaDeviceSynchronize();
}
$ nvcc -o t1 t1.cu -cudart shared
$ LD_LIBRARY_PATH=/usr/local/cuda/lib64 LD_PRELOAD=./libmylib.so cuda-memcheck ./t1
========= CUDA-MEMCHECK
cudaMemcpy hooked
cudaMemcpyAsync hooked
cudaMemcpyAsync hooked
========= ERROR SUMMARY: 0 errors
$
(CentOS 7, CUDA 10.2)
A simple test with pytorch seems to indicate that it works:
$ docker run --gpus all -it nvcr.io/nvidia/pytorch:20.08-py3
...
Status: Downloaded newer image for nvcr.io/nvidia/pytorch:20.08-py3
=============
== PyTorch ==
=============
NVIDIA Release 20.08 (build 15516749)
PyTorch Version 1.7.0a0+8deb4fe
...
root@946934df529b:/workspace# cat mylib.cpp
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>
#include <cuda_runtime.h>
cudaError_t cudaMemcpy ( void* dst, const void* src, size_t count, cudaMemcpyKind kind )
{
cudaError_t (*lcudaMemcpy) ( void*, const void*, size_t, cudaMemcpyKind) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind ))dlsym(RTLD_NEXT, "cudaMemcpy");
printf("cudaMemcpy hooked\n");
return lcudaMemcpy( dst, src, count, kind );
}
cudaError_t cudaMemcpyAsync ( void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t str )
{
cudaError_t (*lcudaMemcpyAsync) ( void*, const void*, size_t, cudaMemcpyKind, cudaStream_t) = (cudaError_t (*) ( void* , const void* , size_t , cudaMemcpyKind, cudaStream_t ))dlsym(RTLD_NEXT, "cudaMemcpyAsync");
printf("cudaMemcpyAsync hooked\n");
return lcudaMemcpyAsync( dst, src, count, kind, str );
}
root@946934df529b:/workspace# g++ -I/usr/local/cuda/include -fPIC -shared -o libmylib.so mylib.cpp -ldl -L/usr/local/cuda/lib64 -lcudart
root@946934df529b:/workspace# cat tt.py
import torch
device = torch.cuda.current_device()
x = torch.randn(1024, 1024).to(device)
y = torch.randn(1024, 1024).to(device)
z = torch.matmul(x, y)
root@946934df529b:/workspace# LD_LIBRARY_PATH=/usr/local/cuda/lib64 LD_PRELOAD=./libmylib.so python tt.py
cudaMemcpyAsync hooked
cudaMemcpyAsync hooked
root@946934df529b:/workspace#
(using NVIDIA NGC PyTorch container )
| https://stackoverflow.com/questions/63924563/ |
GPT2 on Hugging face(pytorch transformers) RuntimeError: grad can be implicitly created only for scalar outputs | I am trying to fine-tune GPT-2 with a custom dataset of mine. I created a basic example using the documentation from Hugging Face transformers. I receive the mentioned error. I know what it means (basically, backward is being called on a non-scalar tensor), but since I use almost only API calls, I have no idea how to fix this issue. Any suggestions?
from pathlib import Path
from absl import flags, app
import IPython
import torch
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments
from data_reader import GetDataAsPython
# this is my custom data, but i get the same error for the basic case below
# data = GetDataAsPython('data.json')
# data = [data_point.GetText2Text() for data_point in data]
# print("Number of data samples is", len(data))
data = ["this is a trial text", "this is another trial text"]
train_texts = data
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
special_tokens_dict = {'pad_token': '<PAD>'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
class BugFixDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
self.encodings = encodings
def __getitem__(self, index):
item = {key: torch.tensor(val[index]) for key, val in self.encodings.items()}
return item
def __len__(self):
return len(self.encodings['input_ids'])
train_dataset = BugFixDataset(train_encodings)
training_args = TrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=10,
)
model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True)
model.resize_token_embeddings(len(tokenizer))
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
)
trainer.train()
| I finally figured it out. The problem was that the data samples did not contain a target output. Even though GPT is self-supervised, this has to be told to the model explicitly.
You have to add the line:
item['labels'] = torch.tensor(self.encodings['input_ids'][index])
to the __getitem__ method of the Dataset class, and then it runs okay!
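Concretely, the dataset's __getitem__ then becomes (the question's method with the answer's line added):
def __getitem__(self, index):
    item = {key: torch.tensor(val[index]) for key, val in self.encodings.items()}
    item['labels'] = torch.tensor(self.encodings['input_ids'][index])
    return item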
| https://stackoverflow.com/questions/63924567/ |
Can't iterate through PyTorch DataLoader | I am trying to learn PyTorch and create my first neural network. I am using a custom dataset, here is a sample of the data:
ID_REF cg00001854 cg00270460 cg00293191 cg00585219 cg00702638 cg01434611 cg02370734 cg02644867 cg02879967 cg03036557 cg03123104 cg03670302 cg04146801 cg04570540 cg04880546 cg07044749 cg07135408 cg07303143 cg07475178 cg07553761 cg07917901 cg08016257 cg08548498 cg08715791 cg09334636 cg11153071 cg11441796 cg11642652 cg12256803 cg12352902 cg12541127 cg13313833 cg13500819 cg13975075 cg14061946 cg14086922 cg14224196 cg14530143 cg15456742 cg16230982 cg16734549 cg17166941 cg17290213 cg17292667 cg18266594 cg18335535 cg18584803 cg19273773 cg19378199 cg19523692 cg20115827 cg20558024 cg20608895 cg20899581 cg21186299 cg22115892 cg22454769 cg22549547 cg23098693 cg23193759 cg23500537 cg23606718 cg24079702 cg24888989 cg25090514 cg25344401 cg25635000 cg25726357 cg25743481 cg26019498 cg26647566 cg26792755 cg26928195 cg26940620 Age
0 0.252486 0.284724 0.243242 0.200685 0.904132 0.102795 0.473919 0.264084 0.367480 0.671434 0.075955 0.329343 0.217375 0.210861 1.000000 0.356048 0.577945 0.557148 0.249014 0.847134 0.254539 0.319858 0.220589 0.796789 0.361994 0.296101 0.105965 0.239796 0.169738 0.357586 0.365674 0.132575 0.250932 0.283227 1.000000 0.262259 0.208146 0.290623 0.113049 0.255710 0.555382 0.281046 0.168826 0.492007 0.442871 0.509569 0.219183 0.641244 0.339088 0.164062 0.227678 0.340220 0.541491 0.423010 0.621303 0.243750 0.869947 0.124120 0.317660 0.985243 0.645869 0.590888 0.841485 0.825372 0.904037 0.407343 0.223722 0.352113 0.855653 0.289593 0.428849 0.719758 0.800240 0.473586 68
1 0.867671 0.606590 0.803673 0.845942 0.086222 0.996915 0.871998 0.791823 0.877639 0.095326 0.857108 0.959701 0.688322 0.650640 0.062329 0.920434 0.687537 0.193038 0.891809 0.273775 0.583457 0.793486 0.798427 0.102910 0.773496 0.658568 0.759050 0.754877 0.787817 0.585895 0.792240 0.734543 0.854528 0.735642 0.389495 0.736709 0.600386 0.775989 0.819579 0.696350 0.110374 0.878199 0.659849 0.716714 0.771206 0.870711 0.919629 0.359592 0.677752 0.693433 0.683448 0.792423 0.933971 0.170669 0.249908 0.879879 0.111498 0.623053 0.626821 0.000000 0.157429 0.197567 0.160809 0.183031 0.202754 0.597896 0.826429 0.886736 0.086038 0.844088 0.761793 0.056548 0.270670 0.940083 21
2 0.789439 0.594060 0.857086 0.633195 0.000000 0.953293 0.832107 0.692119 0.641294 0.169303 0.935807 0.674698 0.789146 0.796555 0.208590 0.791318 0.777537 0.221895 0.804405 0.138006 0.738616 0.758083 0.749127 0.180998 0.769312 0.592938 0.578885 0.896125 0.553588 0.781393 0.898768 0.705339 0.861029 0.966552 0.274496 0.575738 0.490313 0.951172 0.833724 0.901890 0.115235 0.651489 0.619196 0.760758 0.902768 0.835082 0.610065 0.294962 0.907979 0.703284 0.775867 0.910324 0.858090 0.190595 0.041909 0.792941 0.146005 0.615639 0.761822 0.254161 0.101765 0.343289 0.356166 0.088915 0.114347 0.628616 0.697758 0.910687 0.133282 0.775737 0.809420 0.129848 0.126485 0.875580 20
3 0.615803 0.710968 0.874037 0.771136 0.199428 0.861378 0.861346 0.695713 0.638599 0.158479 0.903668 0.758718 0.581146 0.857357 0.307756 0.977337 0.805049 0.188333 0.788991 0.312119 0.706578 0.782006 0.793232 0.288111 0.691131 0.758102 0.829221 1.000000 0.742666 0.897607 0.797869 0.803221 0.912101 0.736800 0.315636 0.760577 0.609101 0.733923 0.578598 0.796944 0.096960 0.924135 0.612601 0.727117 0.905177 0.776481 0.727865 0.429820 0.666803 0.924595 0.567474 0.752196 0.742709 0.303662 0.168286 0.720899 0.099313 0.595328 0.734024 0.268583 0.293437 0.244840 0.311726 0.213415 0.418673 0.819981 0.816660 0.684730 0.146797 0.686270 0.777680 0.087826 0.335125 1.000000 23
4 0.847329 0.735766 0.858018 0.896453 0.186994 0.831964 0.762522 0.840186 0.830930 0.199264 0.788487 0.912629 0.702284 0.838771 0.065271 0.959230 0.912387 0.377203 0.794480 0.207909 0.766246 0.582117 0.902944 0.301144 0.765401 0.715115 0.646735 0.812084 0.697886 0.714310 0.890658 0.826644 0.944022 0.729517 0.530379 0.756268 0.764899 0.914573 0.825766 0.673394 0.017316 0.949335 0.614375 0.650553 0.898788 0.685396 0.823348 0.210175 0.831852 0.829067 0.858212 0.916433 0.778864 0.241186 0.144072 0.889536 0.058360 0.703567 0.852496 0.094223 0.341236 0.284903 0.231957 0.125196 0.333207 0.752592 0.899356 0.839006 0.174601 0.937948 0.716135 0.000000 0.114062 0.969760 22
I split the data into train/test/val data like this:
train_df, rest_df = train_test_split(df, test_size=0.4)
test_df, val_df = train_test_split(rest_df, test_size=0.5)
x_train_tensor = torch.tensor(train_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_train_tensor = torch.tensor(train_df['Age'].to_numpy())
x_test_tensor = torch.tensor(test_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_test_tensor = torch.tensor(test_df['Age'].to_numpy())
x_val_tensor = torch.tensor(val_df.drop('Age', axis=1).to_numpy(), requires_grad=True)
y_val_tensor = torch.tensor(val_df['Age'].to_numpy())
bs = len(train_df.index)//10
train_dl = DataLoader(train_df, bs, shuffle=True)
test_dl = DataLoader(test_df, len(test_df), shuffle=False)
val_dl = DataLoader(val_df, bs, shuffle=False)
And here is the Network so far (very basic, just to test if it works):
class Net(nn.Module):
def __init__(self):
super().__init__()
input_size = len(df.columns)-1
self.fc1 = nn.Linear(input_size, input_size//2)
self.fc2 = nn.Linear(input_size//2, input_size//4)
self.fc3 = nn.Linear(input_size//4, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
return x
net = Net()
print(net)
Here is where I get the error, on the last line:
loss = torch.nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
EPOCHS = 3
STEPS_PER_EPOCH = len(train_dl.dataset)//bs
iterator = iter(train_dl)
print(train_dl.dataset)
for epoch in range(EPOCHS):
for s in range(STEPS_PER_EPOCH):
print(iterator)
iterator.next()
ID_REF cg00001854 cg00270460 cg00293191 ... cg26928195 cg26940620 Age
29 0.781979 0.744825 0.744579 ... 0.242138 0.854054 19
44 0.185400 0.299145 0.160084 ... 0.638449 0.413286 69
21 0.085470 0.217421 0.277675 ... 0.863455 0.512334 75
4 0.847329 0.735766 0.858018 ... 0.114062 0.969760 22
20 0.457293 0.462984 0.323835 ... 0.584259 0.481060 68
33 0.784562 0.845031 0.958335 ... 0.122210 0.854005 19
25 0.258434 0.354822 0.405620 ... 0.677245 0.540463 70
27 0.737131 0.768188 0.897724 ... 0.203228 0.831175 20
37 0.002051 0.202403 0.134198 ... 0.753844 0.302229 70
10 0.737427 0.537413 0.614343 ... 0.464244 0.723953 23
0 0.252486 0.284724 0.243242 ... 0.800240 0.473586 68
32 0.927260 1.000000 0.853864 ... 0.261990 0.892503 18
7 0.035825 0.271602 0.236109 ... 1.000000 0.471256 69
17 0.000000 0.202986 0.132144 ... 0.874550 0.342981 79
18 0.282112 0.479775 0.218852 ... 0.908217 0.426143 79
11 0.708797 0.536074 0.721171 ... 0.048768 0.699540 27
15 0.686921 0.639198 0.858981 ... 0.305142 0.978350 24
38 0.246031 0.186011 0.235928 ... 0.754013 0.342380 70
30 0.814767 0.771483 0.437789 ... 0.000000 0.658354 18
43 0.247471 0.399231 0.271619 ... 0.895016 0.468336 72
46 0.000428 0.263164 0.163303 ... 0.567005 0.252806 76
3 0.615803 0.710968 0.874037 ... 0.335125 1.000000 23
5 0.777925 0.821814 0.636676 ... 0.233359 0.753266 20
34 0.316262 0.307535 0.203090 ... 0.570755 0.351226 73
23 0.133038 0.000000 0.208442 ... 0.631202 0.459593 76
6 0.746102 0.585211 0.626580 ... 0.311914 0.753994 25
1 0.867671 0.606590 0.803673 ... 0.270670 0.940083 21
47 0.444606 0.502357 0.207560 ... 0.987106 0.446959 71
[28 rows x 75 columns]
<torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7f166241c048>
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2645 try:
-> 2646 return self._engine.get_loc(key)
2647 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 13
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
6 frames
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2646 return self._engine.get_loc(key)
2647 except KeyError:
-> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key))
2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2650 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 13
I really have no idea what the error means or where to look.
I'd greatly appreciate some guidance, thank you!
| Use a NumPy array instead of a DataFrame. DataLoader fetches samples with integer indices, and indexing a DataFrame like df[13] looks up a column named 13, which is exactly the KeyError: 13 in your traceback. You can use to_numpy() to convert the DataFrame to a NumPy array:
train_dl = DataLoader(train_df.to_numpy(), bs, shuffle=True)
test_dl = DataLoader(test_df.to_numpy(), len(test_df), shuffle=False)
val_dl = DataLoader(val_df.to_numpy(), bs, shuffle=False)
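Alternatively, since you already built the tensors, you can wrap them in a TensorDataset so each batch comes pre-split into features and targets. A sketch reusing the tensors from your question (note you also don't need requires_grad=True on the inputs; gradients are tracked for the model's parameters, not the data):
from torch.utils.data import TensorDataset, DataLoader

train_ds = TensorDataset(x_train_tensor.float(), y_train_tensor.float())
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)

for xb, yb in train_dl:
    pass  # xb: feature batch, yb: target batch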
| https://stackoverflow.com/questions/63927648/ |
Determinant of a complex matrix in PyTorch | Is there a way to calculate the determinant of a complex matrix in PyTorch?
torch.det is not implemented for 'ComplexFloat'
| Unfortunately it's not implemented currently. One way would be to implement your own version or simply use np.linalg.det.
Here is a short function which computes the determinant of a complex matrix that I wrote using LU-decomposition:
def complex_det(A):
def complex_diag(A):
return torch.view_as_complex(torch.stack((A.real.diag(), A.imag.diag()),dim=1))
#Perform LU decomposition to matrix A:
A_LU, pivots = A.lu()
P, A_L, A_U = torch.lu_unpack(A_LU, pivots)
#Determinant of a product is the product of the determinants:
det = torch.prod(complex_diag(A_L)) * torch.prod(complex_diag(A_U)) * torch.det(P.real) #Could probably calculate det(P) [which is +-1] efficiently using Sylvester's determinant identity
return det
#Test it:
A = torch.view_as_complex(torch.randn(3,3,2))
complex_det(A)
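As a quick sanity check (assuming A is a CPU tensor and your PyTorch build supports .numpy() on complex tensors), the result should match NumPy:
import numpy as np
assert np.allclose(complex_det(A).numpy(), np.linalg.det(A.numpy()))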
| https://stackoverflow.com/questions/63928808/ |
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | I'm trying to implement ResNet18 on pyTorch but I'm having some troubles with it. My code is this:
device = torch.device("cuda:0")
class ResnetBlock(nn.Module):
def __init__(self, strides, nf, nf0, reps, bn):
super(ResnetBlock, self).__init__()
self.adapt = strides == 2
self.layers = []
self.relus = []
self.adapt_layer = nn.Conv2d(nf0, nf, kernel_size=1, stride=strides, padding=0) if self.adapt else None
for i in range(reps):
self.layers.append(nn.Sequential(
nn.Conv2d(nf0, nf, kernel_size=3, stride=strides, padding=1),
nn.BatchNorm2d(nf, eps=0.001, momentum=0.99),
nn.ReLU(),
nn.Conv2d(nf, nf, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(nf, eps=0.001, momentum=0.99)))
self.relus.append(nn.ReLU())
strides = 1
nf0 = nf
def forward(self, x):
for i, (layer, relu) in enumerate(zip(self.layers, self.relus)):
rama = layer(x)
if self.adapt and i == 0:
x = self.adapt_layer(x)
x = x + rama
x = relu(x)
return x
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
nn.MaxPool2d(kernel_size=2, stride=2))
self.blocks = nn.Sequential(
ResnetBlock(1, 64, 64, 2, bn),
ResnetBlock(2, 128, 64, 2, bn),
ResnetBlock(2, 256, 128, 2, bn),
ResnetBlock(2, 512, 256, 2, bn))
self.fcout = nn.Linear(512, 10)
def forward(self, x):
out = self.layer1(x)
out = self.blocks(out)
out = out.reshape(out.size(0), -1)
out = self.fcout(out)
return out
num_epochs = 50
num_classes = 10
batch_size = 50
learning_rate = 0.00001
trans = transforms.ToTensor()
train_dataset = torchvision.datasets.CIFAR10(root="./dataset_pytorch", train=True, download=True, transform=trans)
test_dataset = torchvision.datasets.CIFAR10(root="./dataset_pytorch", train=False, download=True, transform=trans)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
def weights_init(m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
nn.init.xavier_uniform_(m.weight.data)
nn.init.zeros_(m.bias.data)
model = ConvNet()
model.apply(weights_init)
model.to(device)
summary(model, (3,32,32))
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, eps=1e-6)
# Train the model
total_step = len(train_loader)
loss_list = []
acc_list = []
acc_list_test = []
for epoch in range(num_epochs):
total = 0
correct = 0
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
optimizer.zero_grad()
# Run the forward pass
outputs = model(images)
loss = criterion(outputs, labels)
loss_list.append(loss.item())
# Backprop and perform Adam optimisation
loss.backward()
optimizer.step()
# Track the accuracy
total += labels.size(0)
_, predicted = torch.max(outputs.data, 1)
correct += (predicted == labels).sum().item()
acc_list.append(correct / total)
print("Train")
print('Epoch [{}/{}], Accuracy: {:.2f}%'
.format(epoch + 1, num_epochs, (correct / total) * 100))
total_test = 0
correct_test = 0
for i, (images, labels) in enumerate(test_loader):
images = images.to(device)
labels = labels.to(device)
# Run the forward pass
outputs = model(images)
# Track the accuracy
total_test += labels.size(0)
_, predicted = torch.max(outputs.data, 1)
correct_test += (predicted == labels).sum().item()
acc_list_test.append(correct_test / total_test)
print("Test")
print('Epoch [{}/{}], Accuracy: {:.2f}%'
.format(epoch + 1, num_epochs, (correct_test / total_test) * 100))
It's weird because it's throwing me that error Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same even though I've moved both the model and the data to cuda.
I guess it's related to how I defined or used "ResnetBlock", because if I remove those blocks from ConvNet (removing the line out = self.blocks(out)), the code works. But I don't know what I'm doing wrong.
| The actual problem is that vanilla Python lists cannot be tracked by PyTorch: submodules stored in a plain list are never registered, so model.to(device) never moves their parameters to the GPU. You need to use nn.ModuleList.
From
self.layers = []
self.relus = []
To
self.layers = nn.ModuleList()
self.relus = nn.ModuleList()
(Note that for nn.Module, to() works in-place and also returns the module, so model.to(device) on its own is fine; this is unlike tensor.to(), which returns a new tensor that you must assign.)
| https://stackoverflow.com/questions/63929783/ |
How to get the total number of batch iteration from pytorch dataloader? | I have a question that How to get the total number of batch iteration from pytorch dataloader?
The following is a common code for training
for i, batch in enumerate(dataloader):
Then, is there any method to get the total number of iterations for the "for loop"?
In my NLP problem, the total number of iterations is different from int(n_train_samples/batch_size)...
For example, if I truncate the train data to only 10,000 samples and set the batch size to 1024, then 363 iterations occur in my NLP problem.
I wonder how to get the total number of iterations in the for-loop.
Thank you.
| len(dataloader) returns the total number of batches. It depends on the __len__ function of your dataset, so make sure it is set correctly.
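For example, with a map-style dataset (a self-contained sketch):
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(10000, 8))
dl = DataLoader(ds, batch_size=1024)
print(len(dl))  # ceil(10000 / 1024) == 10; with drop_last=True it would be 9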
| https://stackoverflow.com/questions/63930621/ |
Gradient accumulation in an RNN | I ran into some memory issues (GPU) when running a large RNN network, but I want to keep my batch size reasonable, so I wanted to try out gradient accumulation. In a network where you predict the output in one go that seems self-evident, but in an RNN you do multiple forward passes for each input step. Because of that, I fear that my implementation does not work as intended. I started from user albanD's excellent examples here, but I think they should be modified when using an RNN. The reason I think so is that you accumulate many more gradients, because you do multiple forwards per sequence.
My current implementation looks like this, while also allowing for AMP in PyTorch 1.6, which seems important - everything needs to be called in the right place. Note that this is just an abstract version; it might seem like a lot of code, but it is mostly comments.
def train(epochs):
"""Main training loop. Loops for `epoch` number of epochs. Calls `process`."""
for epoch in range(1, epochs + 1):
train_loss = process("train")
valid_loss = process("valid")
# ... check whether we improved over earlier epochs
if lr_scheduler:
lr_scheduler.step(valid_loss)
def process(do):
"""Do a single epoch run through the dataloader of the training or validation set.
Also takes care of optimizing the model after every `gradient_accumulation_steps` steps.
Calls `step` for each batch where it gets the loss from."""
if do == "train":
model.train()
torch.set_grad_enabled(True)
else:
model.eval()
torch.set_grad_enabled(False)
loss = 0.
for batch_idx, batch in enumerate(dataloaders[do]):
step_loss, avg_step_loss = step(batch)
loss += avg_step_loss
if do == "train":
if amp:
scaler.scale(step_loss).backward()
if (batch_idx + 1) % gradient_accumulation_steps == 0:
# Unscales the gradients of optimizer's assigned params in-place
scaler.unscale_(optimizer)
# clip in-place
clip_grad_norm_(model.parameters(), 2.0)
scaler.step(optimizer)
scaler.update()
model.zero_grad()
else:
step_loss.backward()
if (batch_idx + 1) % gradient_accumulation_steps == 0:
clip_grad_norm_(model.parameters(), 2.0)
optimizer.step()
model.zero_grad()
# return average loss
return loss / len(dataloaders[do])
def step():
"""Processes one step (one batch) by forwarding multiple times to get a final prediction for a given sequence."""
# do stuff... init hidden state and first input etc.
loss = torch.tensor([0.]).to(device)
for i in range(target_len):
with torch.cuda.amp.autocast(enabled=amp):
# overwrite previous decoder_hidden
output, decoder_hidden = model(decoder_input, decoder_hidden)
# compute loss between predicted classes (bs x classes) and correct classes for _this word_
item_loss = criterion(output, target_tensor[i])
# We calculate the gradients for the average step so that when
# we do take an optimizer.step, it takes into account the mean step_loss
# across batches. So basically (A+B+C)/3 = A/3 + B/3 + C/3
loss += (item_loss / gradient_accumulation_steps)
topv, topi = output.topk(1)
decoder_input = topi.detach()
return loss, loss.item() / target_len
The above does not seem to work as I had hoped, i.e. it still runs into out-of-memory issues very quickly. I think the reason is that step already accumulates so much information, but I am not sure.
For simplicity, I will only cover amp-enabled gradient accumulation; without amp the idea is the same. Your step as presented runs under amp, so let's stick to that.
step
In the PyTorch documentation about amp you have an example of gradient accumulation. You should do it inside step. Each time you run loss.backward(), gradients are accumulated inside the leaf tensors, which can then be optimized by the optimizer. Hence, your step should look like this (see comments):
def step():
"""Processes one step (one batch) by forwarding multiple times to get a final prediction for a given sequence."""
# You should not accumulate loss on `GPU`, RAM and CPU is better for that
# Use GPU only for calculations, not for gathering metrics etc.
loss = 0
for i in range(target_len):
with torch.cuda.amp.autocast(enabled=amp):
# where decoder_input is from?
# I assume there is one in real code
output, decoder_hidden = model(decoder_input, decoder_hidden)
# Here you divide by accumulation steps
item_loss = criterion(output, target_tensor[i]) / (
gradient_accumulation_steps * target_len
)
scaler.scale(item_loss).backward()
loss += item_loss.detach().item()
# Not sure what was topv for here
_, topi = output.topk(1)
decoder_input = topi.detach()
# No need to return loss now as we did backward above
return loss / target_len
As you detach decoder_input anyway (so it is like a totally new hidden input without history, and parameters will be optimized based on that, not based on all runs), there is no need for backward in process. Also, you probably don't need decoder_hidden: if it isn't passed to the network, a torch.tensor filled with zeros is passed implicitly.
Also, we should divide by gradient_accumulation_steps * target_len, as that's how many backwards we will run before a single optimization step.
As some of your variables are ill-defined, I assume you just sketched a scheme of what's going on.
Also, if you want the history to be kept, you shouldn't detach decoder_input; in that case it would look like this:
def step():
"""Processes one step (one batch) by forwarding multiple times to get a final prediction for a given sequence."""
loss = 0
for i in range(target_len):
with torch.cuda.amp.autocast(enabled=amp):
output, decoder_hidden = model(decoder_input, decoder_hidden)
item_loss = criterion(output, target_tensor[i]) / (
gradient_accumulation_steps * target_len
)
_, topi = output.topk(1)
decoder_input = topi
loss += item_loss
scaler.scale(loss).backward()
return loss.detach().cpu() / target_len
This effectively backpropagates through the RNN over multiple runs and will probably raise OOM; I'm not sure what you are after here. If that's the case there's not much you can do AFAIK, as the RNN computations are simply too long to fit into the GPU.
process
Only relevant part of this code is presented, so it would be:
loss = 0.0
for batch_idx, batch in enumerate(dataloaders[do]):
# Here everything is detached from graph so we're safe
avg_step_loss = step(batch)
loss += avg_step_loss
if do == "train":
if (batch_idx + 1) % gradient_accumulation_steps == 0:
# You can use unscale as in the example in PyTorch's docs
# just like you did
scaler.unscale_(optimizer)
# clip in-place
clip_grad_norm_(model.parameters(), 2.0)
scaler.step(optimizer)
scaler.update()
# IMO in this case optimizer.zero_grad is more readable
# but it's a nitpicking
optimizer.zero_grad()
# return average loss
return loss / len(dataloaders[do])
Question-like
[...] in an RNN you do multiple forward passes for each input step.
Because of that, I fear that my implementation does not work as
intended.
It does not matter. For each forward pass you should usually do one backward pass (that seems to be the case here; see step above for possible options). After that we (usually) don't need the loss connected to the graph, as we already performed backpropagation, got our gradients, and are ready to optimize the parameters.
That loss needs to have history, as it goes back to the process loop
where backward will be called on it.
No need to call backward in process as presented.
| https://stackoverflow.com/questions/63934070/ |
IPython Widgets Separated by Character Rather Than Word | As you can tell in the picture below, the table is separated by characters rather than full words.
def apply(f):
text = f
text = re.sub(r'\W+', ' ', text)
res = LM().check_probabilities(text, topk=20)
l = str(res)
paraphrase_widget = widgets.SelectMultiple(
options=l,
description='Paraphrases',
disabled=False,
layout= widgets.Layout(width='100%')
)
display(paraphrase_widget)
return {"result": res}
apply("In order to")
| The issue here is in unpacking PyTorch's prediction and passing those results to the widget in the proper format (a list of tuples). Here's how you can do that:
# Modify your widget to the following
paraphrase_widget = widgets.SelectMultiple(
options=res['pred_topk'][2],
description='Paraphrases',
disabled=False,
layout= widgets.Layout(width='100%', height="300px")
)
Here's what this looks like for me:
| https://stackoverflow.com/questions/63946820/ |
Generate random vectors with a given (numerical) distribution matrix | I'm trying to come up with a fast and smart way of generating random vectors from a distribution matrix, much like what is being discussed here:
Generate random numbers with a given (numerical) distribution
But the key difference is that I have a distribution matrix, rather than just a single vector.
Now obviously I could just create a for loop and loop over each vector in my matrix, but that doesn't seem very pythonic, or fast, so I'm kinda hoping that there is a better way of doing it.
To give a quick example of what I want to do:
Given a one-probability matrix
p = [[0.2, 0.4, 0.4],[0.1, 0.7, 0.2],[0.44, 0.5, 0.06],...]
I wish to draw elements, where each element gets selected with the probability in the probability matrix. (Essentially I want to generate a one-hot encoding from my one-probability matrix).
Which could for instance look like this given the above probabilities:
t = [2,1,2,...]
I need to do this for long sequences, and I need to do it millions of times, but only 1 time for each sequence each time. (Data augmentation for deep learning)
Does anyone have a good way of doing this?
| You could use inverse transform sampling. Compute a cumulative distribution along each row of your p matrix, sample one uniform random number per row, then count how many cumulative values in that row fall below the draw; that count is the sampled index. In code:
p = np.array([[0.2, 0.4, 0.4],[0.1, 0.7, 0.2],[0.44, 0.5, 0.06]])
u = np.random.rand(p.shape[0], 1)  # column vector, so it broadcasts row-wise
idxs = (p.cumsum(1) < u).sum(1)
then the idxs will be sampled according to the rows of p. e.g.:
np.histogram((p[0].cumsum() < np.random.rand(10000,1)).sum(1), bins=3)
# array([1977, 4018, 4005]), ...
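Since you said everything needs to be in PyTorch, the same idea translates directly, and torch.multinomial can even do the row-wise sampling for you (a sketch with your example matrix):
import torch

p = torch.tensor([[0.2, 0.4, 0.4], [0.1, 0.7, 0.2], [0.44, 0.5, 0.06]])

# inverse transform sampling, as above
u = torch.rand(p.shape[0], 1)
idxs = (p.cumsum(1) < u).sum(1)

# or draw one sample per row directly
idxs = torch.multinomial(p, num_samples=1).squeeze(1)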
| https://stackoverflow.com/questions/63948265/ |
"@" for tensor multiplication using pytorch | This article https://towardsdatascience.com/understand-kaiming-initialization-and-implementation-detail-in-pytorch-f7aa967e9138 about intelligent weights initialization uses the syntax
x@w
to signify tensor (/matrix) multiplication. I had not seen that before and instead had presumed we would need to "spell it out" as :
torch.mm(x, w.t())
What is required to use the former (nicer) syntax? That article did not show a complete set of the imports they were using.
| The @ operator requires Python 3.5 or above (PEP 465). The following are equivalent:
a = torch.rand(2,2)
b = torch.rand(2,2)
c = a.mm(b)
print(c)
c = torch.mm(a, b)
print(c)
c = torch.matmul(a, b)
print(c)
c = a @ b # Python 3.5+
print(c)
Output:
tensor([[0.2675, 0.8140],
[0.0415, 0.1644]])
tensor([[0.2675, 0.8140],
[0.0415, 0.1644]])
tensor([[0.2675, 0.8140],
[0.0415, 0.1644]])
tensor([[0.2675, 0.8140],
[0.0415, 0.1644]])
I like to use mm for matrix-matrix multiplication and mv for matrix-vector multiplication.
To get the transpose, I like the easy a.T syntax.
One more thing to add:
a = torch.rand(2,2,2)
b = torch.rand(2,2,2)
c = torch.matmul(a, b)
print(c)
c = a @ b # Python 3.5+
print(c)
Output:
tensor([[[0.2951, 0.3021],
[0.8663, 1.0430]],
[[0.2674, 1.3792],
[0.0895, 0.9703]]])
tensor([[[0.2951, 0.3021],
[0.8663, 1.0430]],
[[0.2674, 1.3792],
[0.0895, 0.9703]]])
mm does not work for rank > 2 (tensors of rank 3 or more). For higher ranks, just use matmul or the @ notation.
| https://stackoverflow.com/questions/63949828/ |
How to mutate weights of a NN in pytorch | Im playing around with genetic algorithms with pytorch and I'm looking for a more efficient way of mutating the weights of the network (applying a small modification to them)
Right now I have a suboptimal solution where I loop through the parameters and apply a random modification.
child_agent = network()
for param in child_agent.parameters():
if len(param.shape) == 4: # weights of Conv2D
for i0 in range(param.shape[0]):
for i1 in range(param.shape[1]):
for i2 in range(param.shape[2]):
for i3 in range(param.shape[3]):
param[i0][i1][i2][i3] += mutation_power * np.random.randn()
elif len(param.shape) == 2: # weights of linear layer
for i0 in range(param.shape[0]):
for i1 in range(param.shape[1]):
param[i0][i1] += mutation_power * np.random.randn()
elif len(param.shape) == 1: # biases of linear layer or conv layer
for i0 in range(param.shape[0]):
param[i0] += mutation_power * np.random.randn()
This solution is bound to my architecture and needs recoding if I decide to add more layers. Is there any way to do this more efficiently and cleanly? Preferably, it should work regardless of what the architecture of my network looks like.
Thanks
| PyTorch and NumPy are tensor oriented, i.e. you perform operations on multiple items contained within a multidimensional array-like object at once.
You can change your whole code to this single line:
import torch
child_agent = network()
for param in child_agent.parameters():
param.data += mutation_power * torch.randn_like(param)
randn_like (docs here) creates random normal tensor with the same shape as param.
Also, if this parameter requires grad (which it probably does), you should modify its data field.
MCVE:
import torch
mutation_power = 0.4
child_agent = torch.nn.Sequential(
torch.nn.Conv2d(1, 3, 3, padding=1), torch.nn.Linear(10, 20)
)
for param in child_agent.parameters():
param.data += mutation_power * torch.randn_like(param)
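An equivalent idiom that avoids touching .data directly is to wrap the update in torch.no_grad():
with torch.no_grad():
    for param in child_agent.parameters():
        param.add_(mutation_power * torch.randn_like(param))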
| https://stackoverflow.com/questions/63951120/ |
cnn wrong prediction even though model shows good accuracy in training and validation data | I have used the skin cancer classification competition data in Kaggle. There are 4 labels and the entire data is imbalanced. I ran the resnet 18 model on a 10 fold cross validation split to train the data and each fold was given around 2 epochs. The code has been attached below.
Basically the model gave 98.2% accuracy with 0.07 loss value in the train data and 98.1% accuracy and 0.06 loss value in the validation data. So this seemed pretty good.
However, the problem is prediction.py (code attached below). When I try to predict, the model keeps giving the result as [0], even if it's a training image.
Is there something wrong with my code?
Expected result:
if the image is the input, the output should be either 0,1,2 or 3
model.py(where the training happens)
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import os
import cv2
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import albumentations as A
from torch.utils.data import TensorDataset, DataLoader,Dataset
from torchvision import models
from collections import defaultdict
from torch.utils.data.sampler import RandomSampler
import torch.optim as optim
from torch.optim import lr_scheduler
from sklearn import model_selection
from tqdm import tqdm
import gc
# generate data from csv file
class Build_dataset(Dataset):
def __init__(self, csv, split, mode, transform=None):
self.csv = csv.reset_index(drop=True)
self.split = split
self.mode = mode
self.transform = transform
def __len__(self):
return self.csv.shape[0]
def __getitem__(self, index):
row = self.csv.iloc[index]
image = cv2.imread(row.filepath)
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if self.transform is not None:
res = self.transform(image=image)
image = res['image'].astype(np.float32)
else:
image = image.astype(np.float32)
image = image.transpose(2, 0, 1)
data = torch.tensor(image).float()
if self.mode == 'test':
return data
else:
return data, torch.tensor(self.csv.iloc[index].target).long()
# training data
def train_epoch(model, loader, optimizer,loss_fn,device, scheduler,n_examples):
model = model.train()
losses = []
correct_predictions = 0
for inputs, labels in tqdm(loader):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, labels)
correct_predictions += torch.sum(preds == labels)
losses.append(loss.item())
loss.backward()
optimizer.step()
optimizer.zero_grad()
# here you delete inputs and labels and then use gc.collect
del inputs, labels
gc.collect()
return correct_predictions.double() / n_examples, np.mean(losses)
# validation data
def val_epoch(model, loader,loss_fn, device,n_examples):
model = model.eval()
losses = []
correct_predictions = 0
with torch.no_grad():
for inputs, labels in tqdm(loader):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, labels)
correct_predictions += torch.sum(preds == labels)
losses.append(loss.item())
# here you delete inputs and labels and then use gc.collect
del inputs, labels
gc.collect()
return correct_predictions.double() / n_examples, np.mean(losses)
def train(fold, model,device, num_epochs):
df_train = df[df.kfold != fold].reset_index(drop=True)
df_valid = df[df.kfold == fold].reset_index(drop=True)
# generate data
dataset_train = Build_dataset(df_train, 'train', 'train', transform=transforms_train)
dataset_valid = Build_dataset(df_valid, 'train', 'val', transform=transforms_val)
#load data
train_loader = DataLoader(dataset_train, batch_size = 64,sampler=RandomSampler(dataset_train),
num_workers=4)
valid_loader = DataLoader(dataset_valid, batch_size = 32,shuffle = True, num_workers= 4 )
dataset_train_size = len(dataset_train)
dataset_valid_size = len(dataset_valid)
optimizer = optim.Adam(model.parameters(), lr = 1e-4)
model = model.to(device)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, patience = 3,threshold = 0.001, mode =
'max')
loss_fn = nn.CrossEntropyLoss().to(device)
history = defaultdict(list)
best_accuracy = 0.0
for epoch in range(num_epochs):
print(f'Epoch {epoch+1} / {num_epochs}')
print ('-'*30)
train_acc, train_loss = train_epoch(model, train_loader, optimizer, loss_fn, device,
scheduler, dataset_train_size)
print(f'Train loss {train_loss} accuracy {train_acc}')
valid_acc, valid_loss = val_epoch(model, valid_loader, loss_fn, device,dataset_valid_size)
print(f'Val loss {valid_loss} accuracy {valid_acc}')
print()
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['val_acc'].append(valid_acc)
history['val_loss'].append(valid_loss)
if valid_acc > best_accuracy:
print('saving model')
torch.save(model.state_dict(), f'best_model_{fold}.bin')
best_accuracy = valid_acc
print(f'Best Accuracy: {best_accuracy}')
model.load_state_dict(torch.load(f'best_model_{fold}.bin'))
return model, history
if __name__ == '__main__':
#competition data -2020
data_dir = "../input/jpeg-melanoma-384x384"
#competition data - 2019
data_dir2 = "../input/jpeg-isic2019-384x384"
# device
device = torch.device("cuda")
# augmenting images
image_size = 384
transforms_train = A.Compose([
A.Transpose(p=0.5),
A.VerticalFlip(p=0.5),
A.HorizontalFlip(p=0.5),
A.RandomBrightness(limit=0.2, p=0.75),
A.RandomContrast(limit=0.2, p=0.75),
A.OneOf([
A.MedianBlur(blur_limit=5),
A.GaussianBlur(blur_limit=5),
A.GaussNoise(var_limit=(5.0, 30.0)),
], p=0.7),
A.OneOf([
A.OpticalDistortion(distort_limit=1.0),
A.GridDistortion(num_steps=5, distort_limit=1.),
A.ElasticTransform(alpha=3),
], p=0.7),
A.CLAHE(clip_limit=4.0, p=0.7),
A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=10, p=0.5),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, border_mode=0, p=0.85),
A.Resize(image_size, image_size),
A.Cutout(max_h_size=int(image_size * 0.375), max_w_size=int(image_size * 0.375), num_holes=1,
p=0.7),
A.Normalize()
])
transforms_val = A.Compose([
A.Resize(image_size, image_size),
A.Normalize()
])
# create data
df_train = pd.read_csv(os.path.join(data_dir, "train.csv")) #/kaggle/input/siim-isic-melanoma-classification/train.csv
df_train.head()
df_train['is_ext'] = 0
df_train['filepath'] = df_train['image_name'].apply(lambda x: os.path.join(data_dir, 'train', f'{x}.jpg'))
# dataset from 2020 data
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('seborrheic keratosis', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('lichenoid keratosis', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('solar lentigo', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('lentigo NOS', 'BKL'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('cafe-au-lait macule', 'unknown'))
df_train['diagnosis'] = df_train['diagnosis'].apply(lambda x: x.replace('atypical melanocytic proliferation', 'unknown'))
# shuffle data
df = df_train.sample(frac=1).reset_index(drop=True)
# creating 8 different target values
new_target = {d: idx for idx, d in enumerate(sorted(df.diagnosis.unique()))}
df['target'] = df['diagnosis'].map(new_target)
mel_idx = new_target['melanoma']
# creating 10 fold cross validation data
df = df_train.sample(frac=1).reset_index(drop=True)
df['kfold'] = -1
y = df_train.target.values
kf = model_selection.StratifiedKFold(n_splits=10,shuffle=True)
idx = kf.get_n_splits(X=df,y=y)
print(idx)
for fold,(x,y) in enumerate(kf.split(X=df,y=y)):
df.loc[y,'kfold'] = fold
df = df[['filepath','diagnosis', 'target', 'is_ext', 'kfold']]
class_names = list(df['diagnosis'].unique())
# create model
def create_model(n_classes):
model = models.resnet18(pretrained=True)
n_features = model.fc.in_features
model.fc = nn.Linear(n_features, n_classes)
return model.to(device)
base_model = create_model(len(class_names)) # model ready
# run the model
for i in range(10):
#train
base_model, history = train(i, base_model, device, num_epochs = 2) # train data
prediction.py
from torchvision import models
import torch
import torch.nn as nn
import albumentations as A
import cv2
import os
import numpy as np
device = torch.device("cuda")
MODEL = None
MODEL_PATH = "../input/prediction/best_model_4.bin"
def create_model(n_classes):
model = models.resnet18(pretrained=True)
n_features = model.fc.in_features
model.fc = nn.Linear(n_features, n_classes)
return model.to(device)
# generate the data to tensor with transform application
# converting the image to tensor by using the transforms function
class get_image:
def __init__(self, image_path, targets, transform = None):
self.image_path = image_path
self.targets = targets
self.transform = transform
def __len__(self):
return len(self.image_path)
def __getitem__(self, item):
targets = self.targets[item]
resize = 384
image = cv2.imread(self.image_path[item])
image = cv2.resize(image, (resize, resize))
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if self.transform is not None:
res = self.transform(image = image)
image = res['image'].astype(np.float32)
image = image.transpose(2, 0, 1)
data = torch.tensor(image).float()
targets = torch.tensor(targets)
return data, targets
# load the data by using torch data
# predict values
# predict function
def predict(image_path, model, model_path):
image_size = 384
transforms_val = A.Compose([
A.Resize(image_size, image_size),
A.Normalize()
])
test_images = [image_path]
test_targets = [0]
test_data = get_image(
image_path = test_images,
targets = test_targets,
transform=transforms_val)
# loading the data
test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle = False,
num_workers=0)
model = create_model(n_classes = 4)
model.load_state_dict(torch.load(model_path))
model.to(device)
model.eval()
prediction = []
with torch.no_grad():
for test_data, test_target in test_dataloader:
test_data = test_data.to(device)
test_target = test_target.to(device)
outputs = model(test_data)
_,preds = torch.max(outputs.cpu(), 1)
#prediction.extend(preds)
prediction = np.vstack((preds)).ravel()
return prediction
def upload_predict():
image_file = "../input/whatever/ISIC_0075663.jpg"
if image_file:
pred = predict(image_file, MODEL, MODEL_PATH)
print(pred)
return pred
The labels and their counts are given right here:
3 27126
2 5193
1 584
0 223
Here 0 is considered malignant type cancer and the other labels are of different types.
Here is the link to the data: https://www.kaggle.com/cdeotte/jpeg-melanoma-384x384
| I think you might have the answer to your question! You said:
There are 4 labels and the entire data is imbalanced
Your own counts confirm it: label 3 accounts for about 27,000 of the roughly 33,000 samples, while label 0 has only 223. With that kind of skew, a network can reach ~98% train and validation accuracy by mostly predicting the dominant classes, so the accuracy you see says almost nothing about the minority classes, and at prediction time the model collapses to a single output.
So the problem isn't with your code. You must balance your dataset by upsampling minority classes, downsampling the majority class, weighting the loss per class, or using some sort of model ensemble; see https://elitedatascience.com/imbalanced-classes. Check out credit card fraud detection tutorials such as https://towardsdatascience.com/credit-card-fraud-detection-1b3b3b44109b.
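As one example, a minimal sketch of loss weighting using the class counts you posted (inverse-frequency weights are just one possible scheme):
import torch
import torch.nn as nn

counts = torch.tensor([223., 584., 5193., 27126.])  # labels 0..3 from the question
weights = counts.sum() / (len(counts) * counts)     # rarer classes get larger weights
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
criterion = nn.CrossEntropyLoss(weight=weights.to(device))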
| https://stackoverflow.com/questions/63957454/ |
How to transform pytorch tensor to view such as ((value0, row0, column0), (value1, row0, column1)...)? | How do I transform PyTorch tensor to view such as ((value0, row0, column0), (value1, row0, column1)...)? For example, if I have a tensor
a = torch.tensor([[5, 7],
[9, 11]])
The function should return:
((5, 0, 0), (7, 0, 1), (9, 1, 0), (11, 1, 1)). Also, I want it function to be applied for constants, 1D, 3D, 4D, etc. tensors, e.g. for 3D I want it to output (value0, dim1, dim2, dim3).
I tried to use for loops but didn't realize how to use them for this task: I can make a function only for a constant number of dimensions, such as only for 2D.
| welcome to StackOverflow !
If the assumption that your tensor contains only non-zero values, then I guess the easiest way is to use to_sparse :
a = torch.tensor([[5, 7], [9, 11]])
def coords(t):
sp_t = t.to_sparse()
idx = sp_t.indices().T
vals = sp_t.values()
return torch.cat([vals.view(-1,1), idx],1)
coords(a)
>>> tensor([[ 5, 0, 0],
[ 7, 0, 1],
[ 9, 1, 0],
[11, 1, 1]])
If that cannot be assumed, please tell me, a workaround with repeat and repeat_interleave should be possible
| https://stackoverflow.com/questions/63959545/ |
Using AWS Sagemaker for model performance without creating endpoint | I've been using Amazon Sagemaker Notebooks to build a pytorch model for an NLP task.
I know you can use Sagemaker to train, deploy, hyper parameter tuning, and model monitoring.
However, it looks like you have to create an inference endpoint in order to monitor the model's inference performance.
I already have a EC2 instance setup to perform inference tasks on our model, which is currently on a development box and rather not use an endpoint to make
Is it possible to use Sagemaker to train, run hyperparam tuning and model eval without creating an endpoint.
| If you don't want to keep an inference endpoint up, one option is to use SageMaker Processing to run a job that takes your trained model and test dataset as input, performs inference and computes evaluation metrics, and saves them to S3 in a JSON file.
This Jupyter notebook example steps through (1) preprocessing training and test data, (2) training a model, then (3) evaluating the model
| https://stackoverflow.com/questions/63960011/ |
Reshaping order in PyTorch - Fortran-like index ordering | In numpy, there is an ordering feature for reshaping arrays, by default it is C, but you can specify other ordering like F:
a = np.arange(6).reshape((3, 2))
f = np.reshape(a, (2, 3), order='F') # Fortran-like index ordering
c = np.reshape(a, (2, 3))
print('a= \n', a)
print('f= \n', f)
print('c= \n', c)
the result:
a=
[[0 1]
[2 3]
[4 5]]
f=
[[0 4 3]
[2 1 5]]
c=
[[0 1 2]
[3 4 5]]
There is no such option in torch.reshape or tensor.view for reshaping in F order.
Is there any way to do that F order reshape in PyTorch? I need everything to be in PyTorch.
| I don't think PyTorch has built-in support for this. That said, you can achieve the desired result using Tensor.permute. It may not be very efficient, though: permute itself only returns a view, but the subsequent reshape on the resulting non-contiguous tensor has to make a copy.
def reshape_fortran(x, shape):
if len(x.shape) > 0:
x = x.permute(*reversed(range(len(x.shape))))
return x.reshape(*reversed(shape)).permute(*reversed(range(len(shape))))
Example usage:
a = torch.arange(6).reshape(3, 2)
f = reshape_fortran(a, (2, 3))
c = a.reshape(2, 3)
which results in
a =
tensor([[0, 1],
[2, 3],
[4, 5]])
f =
tensor([[0, 4, 3],
[2, 1, 5]])
c =
tensor([[0, 1, 2],
[3, 4, 5]])
| https://stackoverflow.com/questions/63960352/ |
Creating pytorch Tensors from `torch` or `numpy` vectors | I am trying to create some testing torch tensors by assembling the dimensions from vectors calculated via basic math functions. As a precursor: assembling Tensors from primitive python arrays does work:
import torch
import numpy as np
torch.Tensor([[1.0, 0.8, 0.6],[0.0, 0.5, 0.75]])
>> tensor([[1.0000, 0.8000, 0.6000],
[0.0000, 0.5000, 0.7500]])
In addition we can assemble Tensors from numpy arrays
https://pytorch.org/docs/stable/tensors.html:
torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
However assembling from calculated vectors is eluding me. Here are some of the attempts made:
X = torch.arange(0,6.28)
x = X
torch.Tensor([[torch.cos(X),torch.tan(x)]])
torch.Tensor([torch.cos(X),torch.tan(x)])
torch.Tensor([np.cos(X),np.tan(x)])
torch.Tensor([[np.cos(X),np.tan(x)]])
torch.Tensor(np.array([np.cos(X),np.tan(x)]))
All of the above have the following error:
ValueError: only one element tensors can be converted to Python scalars
What is the correct syntax?
Update A comment requested showing x / X. They're actually set to the same thing (I changed my mind mid-course about which to use)
In [56]: x == X
Out[56]: tensor([True, True, True, True, True, True, True])
In [51]: x
Out[51]: tensor([0., 1., 2., 3., 4., 5., 6.])
In [52]: X
Out[52]: tensor([0., 1., 2., 3., 4., 5., 6.])
| torch.arange returns a torch.Tensor as seen below -
x = torch.arange(0,6.28)
x
>> tensor([0., 1., 2., 3., 4., 5., 6.])
Similarly, torch.cos(x) and torch.tan(x) return instances of torch.Tensor
The idiomatic way to combine a sequence of tensors along a new dimension in torch is torch.stack
torch.stack([torch.cos(x), torch.tan(x)])
Output
>> tensor([[ 1.0000, 0.5403, -0.4161, -0.9900, -0.6536, 0.2837, 0.9602],
[ 0.0000, 1.5574, -2.1850, -0.1425, 1.1578, -3.3805, -0.2910]])
If you prefer to concatenate along axis=0 (yielding a single 1-D tensor here), use torch.cat([torch.cos(x), torch.tan(x)]) instead.
| https://stackoverflow.com/questions/63964730/ |
What does the .numpy() function do? | I tried searching for the documentation online but I can't find anything that gives me an answer. What does .numpy() function do? The example code given is:
y_true = []
for X_batch, y_batch in mnist_test:
y_true.append(y_batch.numpy()[0].tolist())
| Both in PyTorch and TensorFlow, the .numpy() method is pretty straightforward: it converts a tensor object into a numpy.ndarray object. This implicitly means the converted data will from then on be processed on the CPU (in PyTorch, the tensor must already live on the CPU and be detached from the autograd graph).
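For PyTorch specifically, a quick sketch of that behavior (note that the returned array shares memory with the tensor):
import torch

t = torch.tensor([1.0, 2.0, 3.0])
a = t.numpy()    # zero-copy: shares memory with t (CPU tensors only)
a[0] = 10.0
print(t)         # tensor([10., 2., 3.]) since both use the same buffer
# A CUDA tensor, or one that requires grad, must be converted first:
# y_batch.detach().cpu().numpy()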
| https://stackoverflow.com/questions/63968868/ |
Matrix Multiplication with PyTorch | I'm sorry if this is a basic question, but I am reasonably new to Pytorch and have been trying to perform linear regression. I've been stuck on the following problem for about 3 hours and have tried everything, so I hope you're able to help.
I want to predict grades using a set of four inputs (time studied, travel time, failures and absenses). For my outputs I have grades on 3 tests. There are 395 students in the dataset.
I've listed all my code at the bottom, but I'll paste the relevant part here too:
So far I've created input and output tensors and have created a model for matrix multiplication. Here's my code:
w = torch.randn(395,4, requires_grad=True)
b = torch.randn(4, requires_grad=True)
print(w)
print(b)
def model(x):
return x @ w.t() + b
predictions = model(inputs)
print(predictions)
I know this isn't the linear regression yet, but I'm really struggling with the matrix multiplication aspect of it. Whenever I run the print(predictions) code I get the following message:
RuntimeError Traceback (most recent call last)
<ipython-input-277-c6db141153ac> in <module>
----> 1 preds = model(inputs)
2 print(preds)
<ipython-input-276-3b94bfbc599e> in model(x)
1 def model(x):
----> 2 return x @ w.t() + b
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out
I have a feeling the numbers for w and b are wrong (395,4) and (4), but I have no idea why or what to change them to. Is there any chance someone could point me in the right direction please?
Thanks in advance!
Here's my whole code:
'''
import torch
import numpy as np
from numpy import genfromtxt
data = np.genfromtxt('student-mat.csv', delimiter=',',dtype=float)
data
travel_time = data[1:, 0:1]
study_time = data[1:, 1:2]
failures = data[1:, 2:3]
absenses = data[1:, 3:4]
grade_one = data[1:, 4:5]
grade_two = data[1:, 5:6]
grade_three = data[1:, 6:7]
data_input = data[1:, 0:4]
output = data[1:, 4:7]
inputs = torch.from_numpy(data_input)
outputs = torch.from_numpy(grade_one)
print(inputs)
print(grade_one)
w = torch.randn(395,4, requires_grad=True)
b = torch.randn(4, requires_grad=True)
print(w)
print(b)
def model(x):
return x @ w.t() + b
preds = model(inputs)
print(preds)
Dataset - https://archive.ics.uci.edu/ml/datasets/Student+Performance
| The error message says it all. The tensors involved contain elements of different data types. By default, w and b have elements of type torch.float32, while data_input is a NumPy array with the Python default floating point type, i.e. double. That datatype will be preserved when you convert with from_numpy. Try using dtype=np.float32 in your np.genfromtxt call.
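Concretely, either of these resolves the dtype mismatch (a sketch based on the code in the question):
import numpy as np
import torch

data = np.genfromtxt('student-mat.csv', delimiter=',', dtype=np.float32)
# or, keeping the existing loading code, convert afterwards:
inputs = torch.from_numpy(data_input).float()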
| https://stackoverflow.com/questions/63969683/ |
How to check if model in on CUDA? | I would like to check if model is on CUDA. How to do that?
import torch
import torchvision
model = torchvision.models.resnet18()
model.to('cuda')
Seems that model.is_cuda() is not working.
| This code should do it:
import torch
import torchvision
model = torchvision.models.resnet18()
model.to('cuda')
next(model.parameters()).is_cuda
Out:
True
Note there is no is_cuda() method inside nn.Module.
Also note model.to('cuda') is the same as model.cuda() and both are inplace.
On the other hand moving the data.to('cuda') is not inplace and you typically call:
data = data.to('cuda')
to move the data to CUDA.
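If you also want to know which device exactly, you can query it directly:
print(next(model.parameters()).device)  # e.g. cuda:0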
| https://stackoverflow.com/questions/63970104/ |
Error installing torchvision (python 3.6) | I am trying to install torchvision by writing
pip3 install torchvision
but I get the following message
Requirement already satisfied: torchvision in ./venv/lib/python3.6/site-packages (0.7.0)
Requirement already satisfied: numpy in ./venv/lib/python3.6/site-packages (from torchvision) (1.19.2)
Requirement already satisfied: pillow>=4.1.1 in ./venv/lib/python3.6/site-packages (from torchvision) (7.2.0)
Collecting torch==1.6.0 (from torchvision)
Could not find a version that satisfies the requirement torch==1.6.0 (from torchvision) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.0.post4, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.1.0.post2, 1.2.0, 1.3.0, 1.3.0.post2, 1.3.1, 1.4.0)
No matching distribution found for torch==1.6.0 (from torchvision)
I currently have torch==1.4.0 installed but it is apparently not sufficient to install torchvision. When I tried the code pip3 install torch==1.6.0
I got the following error
Collecting torch==1.6.0
Could not find a version that satisfies the requirement torch==1.6.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.0.post4, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.1.0.post2, 1.2.0, 1.3.0, 1.3.0.post2, 1.3.1, 1.4.0)
No matching distribution found for torch==1.6.0
Does someone know how I can fix this?
| I have solved the issue. I had torch==1.4.0 installed, as that is the latest version my machine supports, but when I tried to install torchvision, pip went straight for the latest release (0.7.0), which requires torch==1.6.0. I found on the PyTorch website that torch==1.4.0 is paired with torchvision==0.5.0. So I ran pip3 install torch==1.4.0 torchvision==0.5.0, which uninstalled torchvision 0.7.0 and installed 0.5.0. Now it is working fine. Hope this explanation helps.
| https://stackoverflow.com/questions/63972921/ |
Setting the same seed for torch, random number and numpy throughout all the modules | I am trying to set the same seed throughout all the project.
Below are the parameters I am setting in my main file, in which all other modules will be imported -
seed = 42
os.environ['PYTHONHASHSEED'] = str(seed)
# Torch RNG
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# Python RNG
np.random.seed(seed)
random.seed(seed)
My project directory looks like below -
├── Combined_Files_without_label5.csv
├── __pycache__
│ ├── dataset.cpython-37.pyc
│ ├── datasets.cpython-37.pyc
│ └── testing.cpython-37.pyc
├── datasets.py
├── import_packages
│ ├── __init__.py
│ ├── __pycache__
│ │ ├── __init__.cpython-37.pyc
│ │ ├── dataset.cpython-37.pyc
│ │ ├── dataset_class.cpython-37.pyc
│ │ ├── dataset_partition.cpython-37.pyc
│ │ └── visualising.cpython-37.pyc
│ ├── dataset_class.py
│ ├── dataset_partition.py
│ └── visualising.py
├── main.py
Now, the problem is I am importing the module from dataset_partition.py and the function needs a seed value there. e.g -
df_train, df_temp, y_train, y_temp = train_test_split(X,
y,
stratify=y,
test_size=(1.0 - frac_train), # noqa
random_state=seed)
Now, my questions are: 1) If I just remove the random_state parameter from the above statement, will it take the seed from my main file? If not, then how do I set it?
2) Will all the other functions that require a seed, like torch.manual_seed and torch.cuda.manual_seed(seed), behave in the same way? (If not, then how do I resolve it?)
|
1) If I just remove the random_state parameter from the above statement,
will it take the seed from my main file?
Yes, as the docs for default (None) value say:
Use the global random state instance from numpy.random. Calling the
function multiple times will reuse the same instance, and will produce
different results.
Assuming you set these seeds in your main module before anything else runs, this will take effect before any other function you use from your package, and you are fine.
2) Will all the other functions that require a seed,
like torch.manual_seed and torch.cuda.manual_seed(seed), behave in
the same way? (If not, then how to resolve it)
Yes, those set the global seeds for Python and PyTorch, and you are also fine here.
| https://stackoverflow.com/questions/63973563/ |
How to get only specific classes from PyTorch's FashionMNIST dataset? | The FashionMNIST dataset has 10 different output classes. How can I get a subset of this dataset with only specific classes? In my case, I only want images of sneaker, pullover, sandal and shirt classes (their classes are 7,2,5 and 6 respectively).
This is how I load my dataset.
train_dataset_full = torchvision.datasets.FashionMNIST(data_folder, train = True, download = True, transform = transforms.ToTensor())
The approach I’ve followed is below.
Iterate through the dataset, one by one, then compare the 1st element (i.e. class) in the returned tuple to my required class. I’m stuck here. If the value returned is true, how can I append/add this observation to an empty dataset?
sneaker = 0
pullover = 0
sandal = 0
shirt = 0
for i in range(60000):
if train_dataset_full[i][1] == 7:
sneaker += 1
elif train_dataset_full[i][1] == 2:
pullover += 1
elif train_dataset_full[i][1] == 5:
sandal += 1
elif train_dataset_full[i][1] == 6:
shirt += 1
Now, in place of sneaker += 1, pullover += 1, sandal += 1 and shirt += 1 I want to do something like this empty_dataset.append(train_dataset_full[i]) or something similar.
If the above approach is incorrect, please suggest another method.
| Finally found the answer.
dataset_full = torchvision.datasets.FashionMNIST(data_folder, train = True, download = True, transform = transforms.ToTensor())
# Selecting classes 7, 2, 5 and 6
idx = (dataset_full.targets==7) | (dataset_full.targets==2) | (dataset_full.targets==5) | (dataset_full.targets==6)
dataset_full.targets = dataset_full.targets[idx]
dataset_full.data = dataset_full.data[idx]
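If you go on to train a 4-class classifier on this subset, note that the surviving labels (7, 2, 5, 6) usually need remapping to a contiguous 0..3 range. A sketch building on the code above (the mapping order is arbitrary):
import torch

label_map = {7: 0, 2: 1, 5: 2, 6: 3}
dataset_full.targets = torch.tensor([label_map[int(t)] for t in dataset_full.targets])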
| https://stackoverflow.com/questions/63975130/ |
pytorch LSTM model not learning | I created a simple LSTM model to predict Uniqlo closing price. The problem is, my model doesn't seem to learn anything. This is the link to my notebook
This is my model creation class (I tried the relu activation function before and got the same outcome):
class lstm(torch.nn.Module):
def __init__(self,hidden_layers):
super(lstm,self).__init__()
self.hidden_layers = hidden_layers
self.lstm = torch.nn.LSTM(input_size = 2,hidden_size = 100,num_layers = self.hidden_layers,batch_first=True)
self.hidden1 = torch.nn.Linear(100,80)
self.dropout1 = torch.nn.Dropout(0.1)
self.hidden2 = torch.nn.Linear(80,60)
self.dropout2 = torch.nn.Dropout(0.1)
self.output = torch.nn.Linear(60,1)
def forward(self,x):
batch = len(x)
h = torch.randn(self.hidden_layers,batch,100).requires_grad_().cuda()
c = torch.randn(self.hidden_layers,batch,100).requires_grad_().cuda()
x,(ho,co)= self.lstm(x.view(batch,10,2),(h.detach(),c.detach()))
x = torch.reshape(x[:,-1,:],(batch,-1))
x = self.hidden1(x)
x = torch.nn.functional.tanh(x)
x = self.dropout1(x)
x = self.hidden2(x)
x = torch.nn.functional.tanh(x)
x = self.dropout2(x)
x = self.output(x)
return x
model = lstm(10)
This is my training loss plot:
training loss
This is my validation loss plot:
validation loss
This is my ground truth (blue) vs prediction (orange):
ground truth vs prediction
Can anyone please point out what I did wrong?
| You fit the scaler on the whole dataset. This is not a good strategy, since it leaks validation/test information into training; fit it on the training data only.
You don't have to scale the targets. Use them directly, apply a log transform, or use returns.
About the hidden state and cell memory: why do you enable gradient tracking and then detach them right after? You don't have to detach the hidden state and cell memory when feeding the LSTM layer; they can simply participate in backpropagation.
If I understand what you did, you predict the next close price using the last 10 open prices and volumes. I don't think you can get good results with this configuration. You should formalize the problem better.
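A minimal sketch of the first point, assuming sklearn's MinMaxScaler and a chronological train/test split (train_df, test_df and the column names are hypothetical):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_df[['Open', 'Volume']])  # fit on the training split only
test_scaled = scaler.transform(test_df[['Open', 'Volume']])        # reuse the training statistics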
| https://stackoverflow.com/questions/63976297/ |
Understanding behavior of index_put in PyTorch | I am trying to understand the behavior of index_put in PyTorch, but the documentation is not clear to me.
Given
a = torch.zeros(2, 3)
a.index_put([torch.tensor([1, 0]), torch.tensor([1, 1])], torch.tensor(1.))
it returns
tensor([[1., 1., 0.],
        [0., 0., 0.]])
While given
a = torch.zeros(2, 3)
a.index_put([torch.tensor([0, 0]), torch.tensor([1, 1])], torch.tensor(1.))
it returns
tensor([[0., 1., 0.],
        [0., 0., 0.]])
I am wondering what exactly the rule of index_put is. What if I want to put three values into a, such that it returns
tensor([[0., 1., 1.],
        [0., 1., 0.]])
Any help is appreciated!
| I copied your examples here with argument names inserted and with the correct outputs (yours were swapped):
a.index_put(indices=[torch.tensor([1, 0]), torch.tensor([1, 1])], values=torch.tensor(1.))
tensor([[0., 1., 0.],
[0., 1., 0.]])
a.index_put(indices=[torch.tensor([0, 0]), torch.tensor([0, 1])], values=torch.tensor(1.))
tensor([[1., 1., 0.],
        [0., 0., 0.]])
What this method does is insert the value(s) into the locations of the original tensor a indicated by indices. indices is a list of two tensors: the row coordinates and the column coordinates of the insertions. values may be a single value or a 1-d tensor.
To obtain the desired output, use:
a.index_put(indices=[torch.tensor([0,0,1]), torch.tensor([1, 2, 1])], values=torch.tensor(1.))
tensor([[0., 1., 1.],
[0., 1., 0.]])
Moreover, you can pass multiple values in the values argument to insert them into the indicated positions:
a.index_put(indices=[torch.tensor([0,0,1]), torch.tensor([1, 2, 1])], values=torch.tensor([1., 2., 3.]))
tensor([[0., 1., 2.],
[0., 3., 0.]])
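Note (an added aside, not from the original answer): index_put returns a new tensor, while the in-place variant index_put_ modifies a directly:
a = torch.zeros(2, 3)
a.index_put_([torch.tensor([0, 0, 1]), torch.tensor([1, 2, 1])], torch.tensor(1.))
# a is now tensor([[0., 1., 1.],
#                  [0., 1., 0.]])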
| https://stackoverflow.com/questions/63977774/ |
Output activation function for tensors for multilabel classification | My expected label is a tensor like tensor([[0, 1, 0, 1], [1, 1, 1, 0]]).
The output from my model (note that I stopped training just to obtain the values so they do not accurately represent the activations of neurons) is also a tensor like tensor([[-10.6964, -13.8998, 0.8348, -45.7040], [-10.3260, -13.8385, -9.2342, -5.3424]])
Am I wrong in using BCEWithLogitsLoss directly on the outputs and labels? Would I need to convert the output tensor to a binary one similar to the expected label prior to using BCEWithLogitsLoss? I understand that BCEWithLogitsLoss is just BCELoss + Sigmoid activation. How can I obtain values of the type of the expected label tensor and what loss should I use in that case?
| In the case of multi-label classification, BCELoss is a common choice. BCEWithLogitsLoss works directly on the outputs and labels, in the sense that it expects the logits as one input and the classes (0 or 1) as the second input.
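A minimal sketch using the example values from the question (no thresholding is needed at training time):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.tensor([[-10.6964, -13.8998, 0.8348, -45.7040],
                       [-10.3260, -13.8385, -9.2342, -5.3424]])
targets = torch.tensor([[0., 1., 0., 1.],
                        [1., 1., 1., 0.]])
loss = criterion(logits, targets)  # operates on raw logits and float labels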
You can find more details regarding this loss and the expected label tensor here:
BCEWithLogitsLoss
| https://stackoverflow.com/questions/63980931/ |
Understanding ‘backward()’: How to code the PyTorch function ‘.backward()’ from scratch? | I'm a newbie learning deep learning, and I'm stuck trying to understand what '.backward()' from PyTorch does, since it does most of the work there. Therefore, I'm trying to understand what the backward function does in detail, so I'm going to try to code what the function does step by step. Are there any resources you can recommend (books, videos, GitHub repos) to start coding the function? Thank you for your time and hopefully for your help.
| backward() calculates the gradients with respect to (w.r.t.) the graph leaves.
The grad() function is more general: it can calculate the gradients w.r.t. any inputs (leaves included).
I implemented a grad() function some time ago; you may check it below. It uses the power of automatic differentiation (AD).
import math
class ADNumber:

    def __init__(self, val, name=""):
        self.name = name
        self._val = val
        self._children = []

    def __truediv__(self, other):
        new = ADNumber(self._val / other._val, name=f"{self.name}/{other.name}")
        self._children.append((1.0 / other._val, new))
        other._children.append((-self._val / other._val**2, new))  # first derivative of 1/x is -1/x^2
        return new

    def __mul__(self, other):
        new = ADNumber(self._val * other._val, name=f"{self.name}*{other.name}")
        self._children.append((other._val, new))
        other._children.append((self._val, new))
        return new

    def __add__(self, other):
        if isinstance(other, (int, float)):
            other = ADNumber(other, str(other))
        new = ADNumber(self._val + other._val, name=f"{self.name}+{other.name}")
        self._children.append((1.0, new))
        other._children.append((1.0, new))
        return new

    def __sub__(self, other):
        new = ADNumber(self._val - other._val, name=f"{self.name}-{other.name}")
        self._children.append((1.0, new))
        other._children.append((-1.0, new))
        return new

    @staticmethod
    def exp(self):
        new = ADNumber(math.exp(self._val), name=f"exp({self.name})")
        self._children.append((math.exp(self._val), new))  # first derivative of exp is exp itself
        return new

    @staticmethod
    def sin(self):
        new = ADNumber(math.sin(self._val), name=f"sin({self.name})")
        self._children.append((math.cos(self._val), new))  # first derivative of sin is cos
        return new

    def grad(self, other):
        if self == other:
            return 1.0
        result = 0.0
        for child in other._children:
            result += child[0] * self.grad(child[1])
        return result


A = ADNumber  # shortcuts
sin = A.sin
exp = A.exp


def print_childs(f, wrt):  # print the children of f, with respect to wrt
    for e in f._children:
        print("child:", wrt, "->", e[1].name, "grad: ", e[0])
        print_childs(e[1], e[1].name)


x1 = A(1.5, name="x1")
x2 = A(0.5, name="x2")
f = (sin(x2) + 1) / (x2 + exp(x1)) + x1 * x2

print_childs(x2, "x2")
print("\ncalculated gradient for the function f with respect to x2:", f.grad(x2))
Out:
child: x2 -> sin(x2) grad: 0.8775825618903728
child: sin(x2) -> sin(x2)+1 grad: 1.0
child: sin(x2)+1 -> sin(x2)+1/x2+exp(x1) grad: 0.20073512936690338
child: sin(x2)+1/x2+exp(x1) -> sin(x2)+1/x2+exp(x1)+x1*x2 grad: 1.0
child: x2 -> x2+exp(x1) grad: 1.0
child: x2+exp(x1) -> sin(x2)+1/x2+exp(x1) grad: -0.05961284871202578
child: sin(x2)+1/x2+exp(x1) -> sin(x2)+1/x2+exp(x1)+x1*x2 grad: 1.0
child: x2 -> x1*x2 grad: 1.5
child: x1*x2 -> sin(x2)+1/x2+exp(x1)+x1*x2 grad: 1.0
calculated gradient for the function f with respect to x2: 1.6165488003791766
| https://stackoverflow.com/questions/63982778/ |
How can I convert SmoothedValue into float for plotting using pyplot? | I am trying to plot the loss-vs-epoch graph for my neural network using matplotlib.pyplot, but I am encountering a problem. After each epoch, the losses are gathered in an array as SmoothedValue objects (from utils.py). Right now I am trying to get the loss after each epoch and store it in an array so that I can later use it as an axis for pyplot. I do this the following way:
for epoch in range(num_epochs):
    epoch_list.append(epoch)
    # train for one epoch, printing every 10 iterations
    metric_logger = train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    loss_list = metric_logger.meters.get('loss')
    loss_axis.append(loss_list)
    print("Loss: ", loss_list.value)
    print("Loss: ", loss_list.deque)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    coco_evaluator = evaluate(model, data_loader_test, device=device)
After training each epoch, I get the value corresponding to the key 'loss' in the dictionary metric_logger.meters, which I then append to my array loss_axis. However, the elements in metric_logger.meters are of type SmoothedValue, which my program can't interpret for plotting. How can I convert this SmoothedValue type to float so that I can plot it?
| It should work if you replace loss_axis.append(loss_list) with loss_axis.append(loss_list.value).
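For completeness, a minimal sketch of the plotting step, assuming epoch_list and loss_axis are the lists built in the question's loop (with plain floats collected via loss_list.value):
import matplotlib.pyplot as plt

plt.plot(epoch_list, loss_axis)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()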
| https://stackoverflow.com/questions/63987990/ |
pytorch - how to troubleshoot device (cpu \ gpu) settings of tensors \ models | I have a torch model that I'm trying to port from CPU to be device independent.
Setting the device parameter when creating tensors, or calling model.to(device) to move a full model to the target device, solves part of the problem, but there are some "left behind" tensors (like variables created during the forward call).
Is there a way to detect these without using an interactive debugger?
Something like tracing tensor creation to allow discovery of tensors that are created on the wrong device?
| You could check the garbage collector:
import gc
import torch
s = torch.tensor([2], device='cuda:0')
t = torch.tensor([1])
for obj in gc.get_objects():
    if torch.is_tensor(obj):
        print(obj)
Output:
tensor([2], device='cuda:0')
tensor([1])
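Building on this, a hedged sketch for the original use case: assuming CUDA is the intended device, you can flag any tensor left behind on the CPU:
for obj in gc.get_objects():
    if torch.is_tensor(obj) and obj.device.type == 'cpu':
        print(type(obj).__name__, tuple(obj.shape), obj.device)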
| https://stackoverflow.com/questions/63993407/ |
Is it possible to train a neural network using a loss function on unseen data (data different from input data)? | Normally, a loss function may be defined as
L(y_hat, y) or L(f(X), y), where f is the neural network, X is the input data, y is the target.
Is it possible to implement (preferably in PyTorch) a loss function that depends not only on the input data X, but also on other data X' (X' != X)?
For example, let's say I have a neural network f, input data (X,y) and X'. Can I construct a loss function such that
f(X) is as close as possible to y, and also
f(X') > f(X)?
The first part can be easily implemented (PyTorch: nn.MSELoss()), while the second part seems much harder.
P.S: this question is a reformulation of Multiple regression while avoiding line intersections using neural nets, which was closed. In the original question, input data and photos with a theoretical example are available.
| Yes it is possible.
For instance, you can add a loss term using ReLU as follows:
loss = nn.MSELoss()(f(X), y) + lambd * nn.ReLU()(f(X) - f(X')).mean()
where lambd is a hyperparameter and the .mean() reduces the per-sample ReLU term to a scalar.
Note that this corresponds to f(X') >= f(X), but it's easily modifiable to the strict f(X') > f(X) by adding a small positive margin constant inside the ReLU, i.e. nn.ReLU()(f(X) - f(X') + eps) with eps > 0.
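A self-contained sketch of the idea (the linear f, the shapes and lambd below are placeholders, not from the original question):
import torch
import torch.nn as nn

f = nn.Linear(3, 1)                                # stand-in for the network
X, X_prime = torch.randn(8, 3), torch.randn(8, 3)  # X_prime plays the role of X'
y = torch.randn(8, 1)
lambd = 0.1

loss = nn.MSELoss()(f(X), y) + lambd * torch.relu(f(X) - f(X_prime)).mean()
loss.backward()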
| https://stackoverflow.com/questions/63997404/ |
BCEWithLogitsLoss: Trying to get binary output for predicted label as a tensor, confused with output layer | Each element of my dataset has a multi-label tensor like [1, 0, 0, 1] with varying combinations of 1's and 0's. In this scenario, since I have 4 labels, I set the output layer of my neural network to 4 neurons. In doing so with BCEWithLogitsLoss, I obtain an output tensor like [3, 2, 0, 0] when I call model(inputs), which is in the range (0, 3), since I specified the output layer to have 4 output neurons. This does not match the format of what the target is expected to be, although when I change the number of output neurons to 2, I get a shape mismatch error. What needs to be done to fix this?
| When using BCEWithLogitsLoss you make a 1-d prediction per output binary label.
In your example, you have 4 binary labels to predict, and therefore your model outputs a 4-d vector, where each entry represents the prediction of one of the binary labels.
Using BCEWithLogitsLoss you implicitly apply Sigmoid to your outputs:
This loss combines a Sigmoid layer and the BCELoss in one single class.
Therefore, if you want to get the predicted probabilities of your model, you need to add a torch.sigmoid on top of your prediction. The sigmoid function will convert your predicted logits to probabilities.
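To turn the logits into a binary tensor matching the label format, a minimal sketch (the 0.5 threshold is an assumption, not part of the original answer):
probs = torch.sigmoid(outputs)  # outputs = model(inputs), the raw logits
preds = (probs > 0.5).long()    # e.g. tensor([[0, 0, 1, 0], ...]) in the target format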
| https://stackoverflow.com/questions/64002566/ |
torch.einsum doesn't accept float tensors? | I get the following error:
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out
I use torch.einsum as follows:
mu = torch.einsum('ijl, akij -> akl', idxs, activation_map)
I don't understand this, as in the documentation they are using float tensors too (https://pytorch.org/docs/stable/generated/torch.einsum.html). Also, choosing a long tensor is not an option, as all values in activation_map are between 0 and 1.
| It seems like your first argument, idxs, is of type Long.
All input tensors to torch.einsum should be Float.
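A hedged fix along those lines is to cast idxs before the call:
mu = torch.einsum('ijl, akij -> akl', idxs.float(), activation_map)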
| https://stackoverflow.com/questions/64008153/ |
How can I import my CNN-model in another code? | This is my "model"; I just added a piece of it here for understanding purposes. I want this model to be imported in another piece of code named "trainer".
import torch.nn as nn
import torch
class Signet(nn.Module):
    def __init__(self, num_users=10):
        super(Signet, self).__init__()
        self.num_classes = num_users
        self.forward_util_layer = {}
This is the other piece of code. Here "model" is what I want to import, as I mentioned above.
import model
class Trainer:
    def __init__(self, trainData, testData, num_users=10):
        self.trainData = trainData
        self.testData = testData
What are the ways to import it? Where should I save my model.ipynb so that my trainer.ipynb can import it? Or is there a way to call it directly in my trainer.ipynb code?
| Move your model code to a model.py file. Keep that file in the same directory as your trainer.ipynb.
Now, to load the model in the ipynb file, follow these steps.
import sys
sys.path.insert(0, '/path/to/model/folder')
from model import Signet
m = Signet(15)
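As an added aside: if you edit model.py while the notebook kernel is running, reload the module before re-importing:
import importlib
import model
importlib.reload(model)
from model import Signet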
| https://stackoverflow.com/questions/64008600/ |
Best way to define a scalar using nn.Parameter in Pytorch | In my CNN, at some stage I want to multiply a feature map by a scalar which should be learnt by the network. Which of the following is the best way to do it, or are they all the same? The scalar has to be initialised to 5.
# Method 1
def __init__(self):
    super(..., self).__init__()
    ...
    ...
    self.alpha = nn.Parameter(5)
    ...

def forward(self, x):
    ...
    x = x * self.alpha
    return x

# Method 2
def __init__(self):
    super(..., self).__init__()
    ...
    ...
    self.alpha = nn.Parameter(torch.tensor(5))
    ...

def forward(self, x):
    ...
    x = x * self.alpha
    return x

# Method 3
def __init__(self):
    super(..., self).__init__()
    ...
    ...
    self.alpha = nn.Parameter(torch.ones(1) * 5)
    ...

def forward(self, x):
    ...
    x = x * self.alpha
    return x
If all are the same, I would prefer Method 1 and let the CNN learn the appropriate multiplier alpha for the feature map x. I hope that in all cases alpha will be a float32 tensor initialised to 5. I am using PyTorch 1.3.1.
Yours sincerely,
Mohit
| The third option will work, since the parameter constructor needs a float. Parameters are updated by the optimizer, so they need to have gradients, unlike buffers.
Buffers are managed by yourself, not by the optimizer.
You may play with this experimental code.
BS = 2

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(BS, 2))
        self.bias = nn.Parameter(torch.zeros(BS))
        self.alpha = nn.Parameter(torch.tensor(5.))

    def forward(self, x):
        return x @ self.weights + self.bias

m = M()
m.parameters()
list(m.parameters())
Out:
[Parameter containing:
tensor([[-0.5627, 2.3067],
[ 1.3821, -0.1893]], requires_grad=True), Parameter containing:
tensor([0., 0.], requires_grad=True), Parameter containing:
tensor(5., requires_grad=True)]
Here I directly set the value 5. to the parameter alpha, and I added a few more parameters for fun.
You could also do as explained by Shai:
self.register_parameter(name='alpha', param=torch.nn.Parameter(torch.tensor(5.)))
You may ask why we have nn.Module.register_parameter when we could just use the nn.Parameter() approach.
nn.Module.register_parameter takes the name and the tensor, and first checks whether the name is already in the module's dictionary, while the plain nn.Parameter() assignment doesn't make such a check.
| https://stackoverflow.com/questions/64009286/ |
pytorch conv2d with weights | I have these 2 variables:
result - a tensor of size 1 x 251 x 20
kernel - a tensor of size 1 x 10 x 10
when I run the command:
from torch.nn import functional as F
result = F.conv2d(result, kernel)
I get the error:
RuntimeError: expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]
I am not giving any stride, what am I doing wrong?
| import torch
import torch.nn.functional as F
image = torch.rand(16, 3, 32, 32)
filter = torch.rand(1, 3, 5, 5)
out_feat_F = F.conv2d(image, filter, stride=1, padding=0)
print(out_feat_F.shape)
Out:
torch.Size([16, 1, 28, 28])
Which is equivalent with:
import torch
import torch.nn
image = torch.rand(16, 3, 32, 32)
conv_filter = torch.nn.Conv2d(in_channels=3, out_channels=1, kernel_size=5, stride=1, padding=0)
output_feature = conv_filter(image)
print(output_feature.shape)
Out:
torch.Size([16, 1, 28, 28])
Padding is 0 by default and stride is 1 by default.
The filter's last two dimensions in the first example correspond to the kernel size in the second example.
kernel_size=5 is the same as kernel_size=(5, 5).
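As for the original error (an added note): F.conv2d expects 4-d tensors of shape (batch, channels, height, width), while the question's result and kernel are 3-d, which is most likely why PyTorch complains about the stride. A minimal sketch, assuming the leading 1 in your shapes is the channel dimension:
result = torch.rand(1, 251, 20)
kernel = torch.rand(1, 10, 10)
out = F.conv2d(result.unsqueeze(0), kernel.unsqueeze(0))  # shapes (1, 1, 251, 20) and (1, 1, 10, 10)
print(out.shape)  # torch.Size([1, 1, 242, 11])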
| https://stackoverflow.com/questions/64010424/ |