instruction | input | output | source
---|---|---|---|
Why does code A run well whereas B gets an error? | I encountered a question while trying to understand a line of code:
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)].
Here I am going to give the code that can reproduce it:
import torch # v1.5.1
scores = [torch.rand(480, 640)]
keypoints = [torch.randint(0, 480, [978, 2])]
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] # label A, OK
for s, k in zip(scores, keypoints):
    print(s[tuple(k.t())]) # label B, Error
>>> IndexError: too many indices for tensor of dimension 1
I think the above two kinds of code (A and B respectively) are almost the same, but the latter threw an error. I discussed it with my roommate but we have no idea what's going on.
Could someone help me? Thanks in advance!
| You can just change the order of A and B, for you have already changed "scores" by the time you execute B: the comprehension in A rebinds "scores" to a list of 1-D tensors, so indexing those again with a 2-D tuple raises the IndexError. Reordered, it works.
import torch
scores = [torch.rand(2, 3)]
keypoints = [torch.randint(0, 2, [4, 2])]
for s, k in zip(scores, keypoints):
    print(s[tuple(k.t())]) # label B, Error (now ok)
scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] # label A, OK
print(scores)
The output is
[tensor([0.3175, 0.6934, 0.2842, 0.3175])]
[tensor([0.3175, 0.6934, 0.2842, 0.3175])]
which is exactly the same.
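Alternatively, bind the comprehension's result to a new name so the original scores stays intact for B; a minimal sketch:

import torch

scores = [torch.rand(2, 3)]
keypoints = [torch.randint(0, 2, [4, 2])]

# A: store the indexed result under a new name instead of rebinding `scores`
indexed = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]

# B: still sees the original 2-D tensors, so it no longer errors
for s, k in zip(scores, keypoints):
    print(s[tuple(k.t())])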
| https://stackoverflow.com/questions/63102655/ |
How to display graphs of loss and accuracy on pytorch using matplotlib | I am new to pytorch, and I would like to know how to display graphs of loss and accuracy, and how exactly I should store these values, knowing that I'm applying a CNN model for image classification using CIFAR10.
Here is my current implementation:
def train(num_epochs, optimizer, criterion, model):
    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(trainloader):
            # origin shape: [4, 3, 32, 32] = 4, 3, 1024
            # input_layer: 3 input channels, 6 output channels, 5 kernel size
            images = images.to(device)
            labels = labels.to(device)
            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)
            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (i+1) % 2000 == 0:
                print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
    PATH = './cnn.pth'
    torch.save(model.state_dict(), PATH)
def test():
    with torch.no_grad():
        n_correct = 0
        n_samples = 0
        n_class_correct = [0 for i in range(10)]
        n_class_samples = [0 for i in range(10)]
        for images, labels in testloader:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            # max returns (value, index)
            _, predicted = torch.max(outputs, 1)
            n_samples += labels.size(0)
            n_correct += (predicted == labels).sum().item()
            for i in range(batch_size):
                label = labels[i]
                pred = predicted[i]
                if (label == pred):
                    n_class_correct[label] += 1
                n_class_samples[label] += 1
        acc = 100.0 * n_correct / n_samples
        print(f'Accuracy of the network: {acc} %')
        for i in range(10):
            acc = 100.0 * n_class_correct[i] / n_class_samples[i]
            print(f'Accuracy of {classes[i]}: {acc} %')
        test_score = np.mean([100 * n_class_correct[i] / n_class_samples[i] for i in range(10)])
        print("the score test is : {0:.3f}%".format(test_score))
    return acc
| What you need to do is average the loss over all the batches in an epoch, append the result to a list after every epoch, and then plot it. The implementation would be something like this:
import numpy as np
import matplotlib.pyplot as plt

def my_plot(epochs, loss):
    plt.plot(epochs, loss)

def train(num_epochs, optimizer, criterion, model):
    loss_vals = []
    for epoch in range(num_epochs):
        epoch_loss = []
        for i, (images, labels) in enumerate(trainloader):
            # rest of the code
            loss.backward()
            epoch_loss.append(loss.item())
            # rest of the code
        # rest of the code
        loss_vals.append(sum(epoch_loss) / len(epoch_loss))
        # rest of the code

# plotting
my_plot(np.linspace(1, num_epochs, num_epochs).astype(int), loss_vals)
# e.g. my_plot([1, 2, 3, 4, 5], [100, 90, 60, 30, 10])
You can do a similar calculation for accuracy.
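For accuracy, the same per-epoch pattern works. A minimal sketch, reusing the names from the question's train()/test() code (predicted comes from torch.max(outputs, 1)):

acc_vals = []
for epoch in range(num_epochs):
    n_correct = 0
    n_samples = 0
    for images, labels in trainloader:
        # ... forward pass, loss, backward and optimizer step as above ...
        _, predicted = torch.max(outputs, 1)
        n_samples += labels.size(0)
        n_correct += (predicted == labels).sum().item()
    acc_vals.append(100.0 * n_correct / n_samples)

my_plot(range(1, num_epochs + 1), acc_vals)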
| https://stackoverflow.com/questions/63106109/ |
"error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory" when running TRTorch sample | I'm trying to compile my pytorch model with TRTorch engine.
I've installed TRTorch according to this link.
When the sample code is run (with the below command from this link), the given error arises:
sudo bazel run //cpp/trtorchexec -- $(realpath /home/TRTorch/tests/modules/alexnet_scripted.jit.pt) "(1,3,227,227)"
error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
Also, the LD_LIBRARY_PATH is set correctly.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT/TensorRT-7.0.0.11/lib
More info:
TRTorch: latest version (python package and binary)
TensorRT: 7.0.0.11
Pytorch: 1.5.1
CUDA: 10.2
Python: 3.6
| I asked this question on the TRTorch GitHub and fixed it by setting LD_LIBRARY_PATH on the sudo command line itself (sudo resets the environment and drops LD_LIBRARY_PATH in particular, so exporting it beforehand has no effect):
sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT/TensorRT-7.0.0.11/lib bazel run //cpp/trtorchexec $(realpath tests/models/alexnet_traced.jit.pt) "(32 3 227 227)"
The issue is available here.
| https://stackoverflow.com/questions/63109740/ |
Is there a power function for tensors in pytorch? | Is there an equivalent of numpy.power in pytorch?
The function basically raises each element in the first tensor to the power represented in each corresponding element in the second tensor.
| You are looking for torch.pow.
As mentioned by @Szymon Maszke, you can also use the ** operator:
y = torch.pow(x, y)
# the same as
y = x ** y
# and also the same as
y = x.pow(y)
where the exponent (y on the right-hand side) can be a scalar or a tensor with a shape broadcastable to x's shape.
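A quick runnable check of the broadcasting behavior (just a sketch):

import torch

x = torch.tensor([[1., 2.], [3., 4.]])
print(x ** 2)                                 # scalar exponent: [[1., 4.], [9., 16.]]
print(torch.pow(x, torch.tensor([2., 3.])))   # exponent broadcast over columns: [[1., 8.], [9., 64.]]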
| https://stackoverflow.com/questions/63110621/ |
Expected device cuda:0 but got device cpu in PyTorch when I have already assigned the device to be cuda | I have the following neural network code, and I get an "Expected device cuda:0 but got device cpu in PyTorch" error and I can't figure out why. I assign the device to be cuda, and the print line returns cuda. I've also tried assigning the device as device = cuda:0 just in case, but that had no effect. Here's the code:
def run():
    torch.multiprocessing.freeze_support()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(device)
    metabolites = pd.read_excel("testmetabolitedata.xlsx")
    subject_metadata = pd.read_excel("testsubj.xlsx")
    metabolitesdf = pd.DataFrame(data=metabolites)
    metabolitesdf = metabolitesdf.iloc[:, 1:9153]
    subjectsdf = pd.DataFrame(data=subject_metadata)
    n_samples, n_metabolites = metabolitesdf.shape
    print(n_samples)

    # genotypes of the target gene
    print(subjectsdf['SLCO1B1_rs4149056'])
    genotypes = subjectsdf['SLCO1B1_rs4149056']
    print(genotypes)
    # print('{} unique genotypes'.format(len(set(genotypes))))
    labels = [1 if g == 1 else 0 for g in genotypes]
    print('{} samples with genotype 1 out of {} samples ({:.1%})'.format(sum(labels), len(labels),
                                                                         sum(labels) / len(labels)))

    # Insert 0 into index 0 (first) into the list for the first row with column names
    labels.insert(0, 0)

    # log transform
    log_metabol = np.log10(metabolitesdf + 1)

    # Split data into training and validation 70% / 30%
    data = torch.utils.data.TensorDataset(torch.Tensor(np.array(log_metabol)),
                                          torch.Tensor(labels))
    train, val = torch.utils.data.random_split(data, [int(0.7 * len(data)),
                                                      len(data) - int(0.7 * len(data))])
    print('{:.0f}/{} training/total ({:.1%}) in training set, {:.0f}/{} val/total ({:.1%}) in validation set'.format(
        train[:][1].sum(), len(train), train[:][1].sum() / len(train),
        val[:][1].sum(), len(val), val[:][1].sum() / len(val)))
    class MultiLayerPredictor(torch.nn.Module):
        def __init__(self, input_shape, output_shape=1, hidden_dim=1024, **kwargs):
            super().__init__()
            self.fc1 = torch.nn.Linear(in_features=input_shape, out_features=hidden_dim)
            self.bn1 = torch.nn.BatchNorm1d(hidden_dim)
            self.fc2 = torch.nn.Linear(in_features=hidden_dim, out_features=hidden_dim)
            self.bn2 = torch.nn.BatchNorm1d(hidden_dim)
            self.fc3 = torch.nn.Linear(in_features=hidden_dim, out_features=output_shape)

        def forward(self, x):
            l1 = torch.relu(self.bn1(self.fc1(x)))
            l2 = torch.relu(self.bn2(self.fc2(l1)))
            return torch.sigmoid(self.fc3(l2)).reshape(-1)
    # load the training and validation sets
    print("Load training and validation data ")
    train_loader = torch.utils.data.DataLoader(train, batch_size=128,
                                               shuffle=True, num_workers=10, pin_memory=True)
    val_loader = torch.utils.data.DataLoader(val, batch_size=128,
                                             shuffle=False, num_workers=10, pin_memory=True)
    print("Loading complete, create model")
    model3 = MultiLayerPredictor(input_shape=n_metabolites).to(device)
    print("Model created! Moving to optimizer")
    optimizer3 = torch.optim.SGD(model3.parameters(), lr=1e-2)
    print("Optimizer done")
    objective3 = torch.nn.BCELoss()
    epochs = 30
    print_stats_interval = 10
    log3 = []
    print("Moving to training loop")
    for epoch in range(epochs):
        loss = n_correct = 0
        model3.train()
        for batch, target in train_loader:
            batch = batch.view(-1, n_metabolites).to(device)
            optimizer3.zero_grad()
            outputs = model3(batch)  # stack trace shows the issue being either on this line
            train_loss = objective3(outputs, target)  # or this line
            loss += train_loss.item()
            n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
            train_loss.backward()
            optimizer3.step()
        loss = loss / len(train_loader)
        acc = (n_correct.float() / len(train)).numpy()
        epoch += 1
        model3.eval()
        val_loss = val_n_correct = 0
        with torch.no_grad():
            for batch, target in val_loader:
                batch = batch.view(-1, n_metabolites).to(device)
                outputs = model3(batch)
                val_loss += objective3(outputs, target)
                val_n_correct += (target == (outputs.reshape(-1) > 0.5).float()).sum()
        val_loss = (val_loss / len(val_loader)).numpy()
        val_acc = (val_n_correct.float() / len(val)).numpy()
        if (epoch % print_stats_interval) == 0 or epoch == epochs:
            print(f'epoch={epoch:.0f}, loss={loss:.5f}, val_loss={np.round(val_loss,5):.5f}, acc={np.round(acc,5):.5f}, val_acc={np.round(val_acc,5):.5f}')
        log3.append((epoch, loss, val_loss, acc, val_acc))
    log3 = pd.DataFrame(log3, columns=['epoch', 'loss', 'val_loss', 'acc', 'val_acc'])
    plt.figure(figsize=(6, 3))
    plt.plot(log3['epoch'], log3['loss'], label='Training')
    plt.plot(log3['epoch'], log3['val_loss'], label='Validation')
    plt.xlabel('Epoch'); plt.ylabel('Loss')
    plt.legend()
    val_log_mutations = val_hcc[:][0].numpy().reshape(-1)
    val_true_labels = val_hcc[:][1].numpy() + 0
    res = model3(val_hcc[:][0])
    predictions = (res.detach().numpy().reshape(-1) > 0.5) + 0
    correct = (val_true_labels == predictions) + 0
    n_correct = correct.sum()
    print('{}/{} ({:.1%}) in the validation set'.format(n_correct, len(correct), n_correct / len(correct)))
    print('Majority classifier accuracy: {:.1%}'.format((len(correct) - val_true_labels.sum()) / len(correct)))

if __name__ == '__main__':
    run()
What's going on here? The stack trace here:
Traceback (most recent call last):
File "//ad..fi/home/h/h/Desktop/neuralnet/neuralnet_train.py", line 142, in <module>
run()
File "//ad..fi/home/h/h/Desktop/neuralnet/neuralnet_train.py", line 99, in run
train_loss = objective3(outputs, target)
File "C:\Users\h\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\h\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\loss.py", line 516, in forward
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File "C:\Users\h\AppData\Roaming\Python\Python38\site-packages\torch\nn\functional.py", line 2378, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(
RuntimeError: expected device cuda:0 but got device cpu
PS Microsoft.PowerShell.Core\FileSystem::\\ad..fi\home\h\h\Desktop\neuralnet>
| Also move the targets to CUDA in both the training and validation for loops.
for batch, target in train_loader:
    batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
    ...

for batch, target in val_loader:
    batch, target = batch.view(-1, n_metabolites).to(device), target.to(device)
    ...
| https://stackoverflow.com/questions/63111030/ |
Problem with pytorch stack function with the labels of a set of images | I am trying to run the following code: https://github.com/kamenbliznashki/generative_models/blob/master/ssvae.py#L174. Unfortunately I am encountering a few problems (lines 315-316).
More specifically I have a list of tensor images for example:
[test_dataloader.dataset[i][0] for i in [0,1,2]]
That I want to stack
torch.stack([test_dataloader.dataset[i][0] for i in [0,1,2]], dim=0)
#tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]],
Everything works perfectly fine in this case. But I also have a label with those images, and the stack function is not working anymore. I guess the reason is that test_dataloader.dataset[i][1] for i in [0,1,2] returns an int for every image, but pytorch doesn't seem to like that. I think there was an update in pytorch since the author pushed his code; it used to work in December 2018.
torch.stack([test_dataloader.dataset[i][1] for i in [0,1,2]], dim=0).to(args.device)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-63-ac227397e18c> in <module>
----> 1 torch.stack([test_dataloader.dataset[i][1] for i in [0,1,2]], dim=0).to(args.device)
TypeError: expected Tensor as element 0 in argument 0, but got int
I have tried
torch.stack(torch.FloatTensor([test_dataloader.dataset[i][1] for i in [0,1,2]]), dim=0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-43-7c4b1dd41c0f> in <module>
----> 1 torch.stack([torch.FloatTensor(test_dataloader.dataset[i][1]) for i in [0,1,2]], dim=0)
RuntimeError: stack expects each tensor to be equal size, but got [7] at entry 0 and [2] at entry 1
I have also tried, where my end goal is to one_hot those values:
def one_hot(x, label_size):
    out = torch.zeros(len(x), label_size).to(x.device)
    out[torch.arange(len(x)), x.squeeze()] = 1
    return out
y = torch.FloatTensor([test_dataloader.dataset[i][1] for i in [0,1,2]])
# > tensor([7., 2., 1.])
y = one_hot(y, args.y_dim)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-60-2357ef4d4795> in <module>
----> 1 y = one_hot(y, args.y_dim)
<ipython-input-16-cf3479f01a91> in one_hot(x, label_size)
67 def one_hot(x, label_size):
68 out = torch.zeros(len(x), label_size).to(x.device)
---> 69 out[torch.arange(len(x)), x.squeeze()] = 1
70 return out
IndexError: tensors used as indices must be long, byte or bool tensors
| The problem with the last attempt is easy to solve. Change from torch.FloatTensor to torch.LongTensor:
y = torch.tensor([test_dataloader.dataset[i][1] for i in [0,1,2]], dtype=torch.long)
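Putting it together, a minimal sketch (with made-up labels standing in for test_dataloader.dataset[i][1]; torch.nn.functional.one_hot is a built-in alternative):

import torch

labels = [7, 2, 1]  # e.g. [test_dataloader.dataset[i][1] for i in [0, 1, 2]]
y = torch.tensor(labels, dtype=torch.long)

def one_hot(x, label_size):
    out = torch.zeros(len(x), label_size)
    out[torch.arange(len(x)), x] = 1
    return out

print(one_hot(y, 10))
# built-in alternative: torch.nn.functional.one_hot(y, num_classes=10)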
| https://stackoverflow.com/questions/63117172/ |
Bidirectional RNN Implementation pytorch | Hi I am trying to understand bidirectional RNN.
class RNN(nn.Module):

    def __init__(self, n_vocab, n_embed, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(n_vocab+1, n_embed)  ## n_vocab is unique words in dictionary ## n_embed is hyperparameter
        self.rnn = nn.RNN(n_embed, hidden_size, num_layers=1, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = x  # input batch_size * seq_length
        batch_size = x.size(0)
        # print('Batch Size is', batch_size)
        x = self.embedding(x)    # batch_size x seq_length x embedding_dimension
        x, hidden = self.rnn(x)  # batch_size x seq_length x hidden_size
        return x, hidden
I am returning both the hidden state and the output. While going through tutorials, some say that I need to concatenate the hidden states (torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)) and some take the output at the last time step (x[:,-1,:]), but the two give different results.
What is the correct way of handling a bidirectional RNN?
| Both ways are correct, depending on different conditions. If nn.RNN is bidirectional (as it is in your case), it will output a hidden state of shape (num_layers * num_directions, batch, hidden_size), and you need to concatenate the final hidden states of the two directions (left-to-right and right-to-left): torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1). Note that this is not the same as x[:,-1,:]: the output at the last time step contains the forward direction's final state, but only the backward direction's state after it has seen the last token alone, whereas hidden[-1] is the backward direction's state after reading the whole sequence. That is why the two approaches give different results.
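A small sketch to verify the relationship between the output and the hidden state of a bidirectional nn.RNN (the sizes here are arbitrary):

import torch
import torch.nn as nn

rnn = nn.RNN(8, 16, num_layers=1, batch_first=True, bidirectional=True)
x = torch.randn(4, 10, 8)  # (batch, seq, features)
out, hidden = rnn(x)       # out: (4, 10, 32), hidden: (2, 4, 16)

# forward direction's final state == output at the LAST time step (first half)
print(torch.allclose(hidden[-2], out[:, -1, :16]))  # True
# backward direction's final state == output at the FIRST time step (second half)
print(torch.allclose(hidden[-1], out[:, 0, 16:]))   # True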
| https://stackoverflow.com/questions/63121983/ |
How do I make custom pytorch datasets structured like the torchvision datasets? | I'm new to pytorch and I'm trying to reuse a Fashion MNIST CNN (from deeplizard) to categorize my timeseries data. I'm finding it hard to understand the structure of datasets, because following this official tutorial and this SO question as best I can, I'm getting something too simple. I think this is because I don't understand OOP very well. The dataset I've made works fine in my CNN for training but then trying to analyse the results with their code I get stuck.
So I create a dataset from two pytorch tensors called features [4050, 1, 150, 6] and targets[4050]:
train_dataset = TensorDataset(features, targets) # create your dataset
train_dataloader = DataLoader(train_dataset, batch_size=50, shuffle=False) # create your dataloader
print(train_dataset.__dict__.keys()) # list the attributes
I get this printed output from inspecting the attributes
dict_keys(['tensors'])
But in the Fashion MNIST tutorial they access the data like this:
train_set = torchvision.datasets.FashionMNIST(
root='./data'
,train=True
,download=True
,transform=transforms.Compose([
transforms.ToTensor()
])
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=1000, shuffle=True)
print(train_set.__dict__.keys()) # list the attributes
And you get this printed output from inspecting the attributes
dict_keys(['root', 'transform', 'target_transform', 'transforms',
'train', 'data', 'targets'])
My dataset works fine for training but when I get to later analysis parts of the tutorial, they want me to access parts of the dataset and I get an error:
# Analytics
prediction_loader = torch.utils.data.DataLoader(train_dataset, batch_size=50)
train_preds = get_all_preds(network, prediction_loader)
preds_correct = train_preds.argmax(dim=1).eq(train_dataset.targets).sum().item()
print('total correct:', preds_correct)
print('accuracy:', preds_correct / len(train_set))
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-73-daa87335a92a> in <module>
4 prediction_loader = torch.utils.data.DataLoader(train_dataset, batch_size=50)
5 train_preds = get_all_preds(network, prediction_loader)
----> 6 preds_correct = train_preds.argmax(dim=1).eq(train_dataset.targets).sum().item()
7
8 print('total correct:', preds_correct)
AttributeError: 'TensorDataset' object has no attribute 'targets'
Can anyone tell me what's going on here? Is this something I need to change in how I make the datasets, or can I rewrite the analysis code somehow to access the right part of the dataset?
| The equivalent of .targets for TensorDatasets would be train_dataset.tensors[1].
The implementation of TensorDataset is very simple:
class TensorDataset(Dataset[Tuple[Tensor, ...]]):
    r"""Dataset wrapping tensors.

    Each sample will be retrieved by indexing tensors along the first dimension.

    Arguments:
        *tensors (Tensor): tensors that have the same size of the first dimension.
    """
    tensors: Tuple[Tensor, ...]

    def __init__(self, *tensors: Tensor) -> None:
        assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
        self.tensors = tensors

    def __getitem__(self, index):
        return tuple(tensor[index] for tensor in self.tensors)

    def __len__(self):
        return self.tensors[0].size(0)
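So the failing line in the analytics code becomes, as a sketch (assuming the second tensor holds the class labels):

preds_correct = train_preds.argmax(dim=1).eq(train_dataset.tensors[1]).sum().item()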
| https://stackoverflow.com/questions/63125160/ |
How to translate the neural network of MLP from tensorflow to pytorch | I have built up an MLP neural network using 'Tensorflow', which is stated as follow:
model_mlp=Sequential()
model_mlp.add(Dense(units=35, input_dim=train_X.shape[1], kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=10, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=1))
I want to convert the above MLP code to PyTorch. How do I do it? I tried it as follows:
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(train_X.shape[1], 35)
        self.fc2 = nn.Linear(35, 86)
        self.fc3 = nn.Linear(86, 86)
        self.fc4 = nn.Linear(86, 10)
        self.fc5 = nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = self.fc5(x)
        return x

    def predict(self, x_test):
        x_test = torch.from_numpy(x_test).float()
        x_test = self.forward(x_test)
        return x_test.view(-1).data.numpy()

model = MLP()
I use the same dataset, but the two programs give different answers. The code written in TensorFlow always produces much better results than the code written in PyTorch. I wonder if my code in PyTorch is incorrect.
In case my PyTorch code is correct, I wonder how to explain the differences. I am looking forward to any replies.
| Welcome to pytorch!
I guess the problem is with the initialization of your network. That is how I would do it:
import torch
import torch.nn as nn
from torch.optim import Adam

def init_weights(m):
    if type(m) == nn.Linear:
        torch.nn.init.xavier_normal_(m.weight)  # initialize with Xavier normal (called Glorot in TensorFlow)
        m.bias.data.fill_(0.01)  # initialize bias with a constant

class MLP(nn.Module):
    def __init__(self, input_dim):
        super(MLP, self).__init__()
        self.mlp = nn.Sequential(nn.Linear(input_dim, 35), nn.ReLU(),
                                 nn.Linear(35, 86), nn.ReLU(),
                                 nn.Linear(86, 86), nn.ReLU(),
                                 nn.Linear(86, 10), nn.ReLU(),
                                 nn.Linear(10, 1))  # no activation on the output, matching the Keras model; BCEWithLogitsLoss expects raw logits

    def forward(self, x):
        y = self.mlp(x)
        return y

model = MLP(input_dim)
model.apply(init_weights)

optimizer = Adam(model.parameters())
loss_func = nn.BCEWithLogitsLoss()

# training loop
for data, label in dataloader:
    optimizer.zero_grad()
    pred = model(data)
    loss = loss_func(pred, label)
    loss.backward()
    optimizer.step()
Notice that in pytorch we do not call model.forward(x), but model(x). That is because nn.Module applies hooks in .__call__() that are used in the backward pass.
You can check the documentation of weight initialization here: https://pytorch.org/docs/stable/nn.init.html
| https://stackoverflow.com/questions/63128213/ |
Weighted Average of PyTorch Tensors | I have two Pytorch tensors of the form [y11, y12] and [y21, y22]. How do I get the weighted mean of the two tensors?
| You can add two tensors using torch.add and then take the mean of the output tensor using torch.mean.
Assuming a weight of 0.6 for tensor1 and 0.4 for tensor2,
example:
tensor1 = torch.tensor([y11, y12]) * 0.6      # multiplying with weight
tensor2 = torch.tensor([y21, y22]) * 0.4      # multiplying with weight
pt_addition_result_ex = tensor1.add(tensor2)  # addition of the two weighted tensors
torch.mean(pt_addition_result_ex)             # mean of the output tensor
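For an arbitrary number of tensors and weights, the same idea can be written with torch.stack; a small sketch with concrete numbers:

import torch

tensors = torch.stack([torch.tensor([1., 2.]), torch.tensor([3., 4.])])
weights = torch.tensor([0.6, 0.4])
weighted = (weights[:, None] * tensors).sum(dim=0)  # tensor([1.8000, 2.8000])
torch.mean(weighted)                                # tensor(2.3000)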
| https://stackoverflow.com/questions/63128641/ |
How to get input tensor shape of an unknown PyTorch model | I am writing a python script, which converts any deep learning models from popular frameworks (TensorFlow, Keras, PyTorch) to ONNX format. Currently I have used tf2onnx for tensorflow and keras2onnx for keras to ONNX conversion, and those work.
Now PyTorch has integrated ONNX support, so I can save ONNX models from PyTorch directly. But the problem is I will need input tensor shape for that model, in order to save it in ONNX format. As you already might have guessed, I am writing this script to convert unknown deep learning models.
Here is PyTorch's tutorial for ONNX conversion. There it's written:
Limitations
The ONNX exporter is a trace-based exporter, which means that it operates by executing your model once, and exporting the operators which were actually run during this run. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won't be accurate.
Similarly, a trace might be valid only for a specific input size (which is one reason why we require explicit inputs on tracing). Most of the operators export size-agnostic versions and should work on different batch sizes or input sizes. We recommend examining the model trace and making sure the traced operators look reasonable.
The code snippet I am using is this:
import torch
def convert_pytorch2onnx(self):
"""pytorch -> onnx"""
model = torch.load(self._model_file_path)
# Don't know how to get this INPUT_SHAPE
dummy_input = torch.randn(INPUT_SHAPE)
torch.onnx.export(model, dummy_input, self._onnx_file_path)
return
So how do I know the INPUT_SHAPE of the input tensor of that unknown PyTorch model? Or is there any other way to convert the PyTorch model to ONNX?
| You can follow this as a starting point to debug:
list(model.parameters())[0].shape # weights of the first layer, in the format (N, C, kernel dimensions), e.g. 64, 3, 7, 7
After that, take N and C and create a dummy tensor from them, adding singleton H and W dimensions by indexing with None, as in this toy example:
import torch
import torchvision
net = torchvision.models.resnet18(pretrained = True)
shape_of_first_layer = list(net.parameters())[0].shape #shape_of_first_layer
N,C = shape_of_first_layer[:2]
dummy_input = torch.Tensor(N,C)
dummy_input = dummy_input[..., :, None, None]  # adding None for height and width
torch.onnx.export(net, dummy_input, './alpha')
| https://stackoverflow.com/questions/63131273/ |
How to use PyTorch's torchaudio in Android? | Starting from here, I downloaded the tutorial project and got it to build and run. Then I tried adding this to the project's app build.gradle (after upping the pytorch version to 1.5.0):
implementation 'org.pytorch:pytorch_android_torchaudio:1.5.0'
And I got this error:
Could not find org.pytorch:pytorch_android_torchaudio:1.5.0.
Anyone else had any luck getting PyTorch's torchaudio to work in Android?
| Got this answer from here:
There is no android package dedicated for torchaudio. You build your model or pipeline in Python, then dump it as a Torchscript file, then load it from your app and run it with Torchscript runtime. Please refer to the following.
https://pytorch.org/mobile/home/
https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html
| https://stackoverflow.com/questions/63135485/ |
Reinforcement learning actor predicting same actions during initial training | I have a reinforcement learning Actor Critic model with an LSTM.
During initial training it gives the same action value for all states.
Can someone with AI/RL expertise please tell me whether this is normal behavior during training?
Also, could you let me know what a reasonable size would be for the LSTM and linear layers if I have state_dimension = 50 and action_dimension = 3?
Thanks in advance
| This can be caused by numerous things:
1 - Check the weights initialization.
2 - Check the interface through which the model makes its inference, and make sure nothing other than the activation of that specific neuron drives the action choice.
3 - Check your reward function. Avoid too large negative rewards. Also, make sure that always taking the same action is not an obvious way for the agent to avoid negative rewards.
| https://stackoverflow.com/questions/63139124/ |
ImportError: cannot import name 'AutoModelWithLMHead' from 'transformers' | This is literally all the code that I am trying to run:
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelWithLMHead.from_pretrained("microsoft/DialoGPT-small")
I am getting this error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-14-aad2e7a08a74> in <module>
----> 1 from transformers import AutoModelWithLMHead, AutoTokenizer
2 import torch
3
4 tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
5 model = AutoModelWithLMHead.from_pretrained("microsoft/DialoGPT-small")
ImportError: cannot import name 'AutoModelWithLMHead' from 'transformers' (c:\python38\lib\site-packages\transformers\__init__.py)
What do I do about it?
| I solved it! Apparently AutoModelWithLMHead was removed in my version.
Now you need to use AutoModelForCausalLM for causal language models, AutoModelForMaskedLM for masked language models and AutoModelForSeq2SeqLM for encoder-decoder models.
So in my case code looks like this:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
| https://stackoverflow.com/questions/63141267/ |
Sagemaker Training Job Not Uploading/Saving Training Model to S3 Output Path | Ok I've been dealing with this issue in Sagemaker for almost a week and I'm ready to pull my hair out. I've got a custom training script paired with a data processing script in a BYO algorithm Docker deployment type scenario. It's a Pytorch model built with Python 3.x, and the BYO Docker file was originally built for Python 2, but I can't see an issue with the problem that I am having.....which is that after a successful training run Sagemaker doesn't save the model to the target S3 bucket.
I've searched far and wide and can't seem to find an applicable answer anywhere. This is all done inside a Notebook instance. Note: I am using this as a contractor and don't have full permissions to the rest of AWS, including downloading the Docker image.
Dockerfile:
FROM ubuntu:18.04
MAINTAINER Amazon AI <[email protected]>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
    wget \
    python-pip \
    python3-pip \
    nginx \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*
RUN wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && \
pip3 install future numpy torch scipy scikit-learn pandas flask gevent gunicorn && \
rm -rf /root/.cache
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
COPY decision_trees /opt/program
WORKDIR /opt/program
Docker Image Build:
%%sh
algorithm_name="name-this-algo"
cd container
chmod +x decision_trees/train
chmod +x decision_trees/serve
account=$(aws sts get-caller-identity --query Account --output text)
region=$(aws configure get region)
region=${region:-us-east-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
Env setup and session start:
common_prefix = "pytorch-lstm"
training_input_prefix = common_prefix + "/training-input-data"
batch_inference_input_prefix = common_prefix + "/batch-inference-input-data"
import os
from sagemaker import get_execution_role
import sagemaker as sage
sess = sage.Session()
role = get_execution_role()
print(role)
Training Directory, Image, and Estimator Setup, then a fit call:
TRAINING_WORKDIR = "a/local/directory"
training_input = sess.upload_data(TRAINING_WORKDIR, key_prefix=training_input_prefix)
print ("Training Data Location " + training_input)
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/image-that-works:working'.format(account, region)
tree = sage.estimator.Estimator(image,
role, 1, 'ml.p2.xlarge',
output_path="s3://sagemaker-directory-that-definitely/exists",
sagemaker_session=sess)
tree.fit(training_input)
The above script is working, for sure. I have print statements in my script and they are printing the expected results to the console. This runs as it's supposed to, finishes up, and says that it's deploying model artifacts when IT DEFINITELY DOES NOT.
Model Deployment:
model = tree.create_model()
predictor = tree.deploy(1, 'ml.m4.xlarge')
This throws an error that the model can't be found. A call to aws sagemaker describe-training-job shows that the training was completed but I found that the time it took to upload the model was super fast, so obviously there's an error somewhere and it's not telling me. Thankfully it's not just uploading it to the aether.
{
"Status": "Uploading",
"StartTime": 1595982984.068,
"EndTime": 1595982989.994,
"StatusMessage": "Uploading generated training model"
},
Here's what I've tried so far:
I've tried uploading it to a different bucket. I figured my permissions were the problem, so I pointed it to one that I knew allowed me to upload, as I had done before with that bucket. No dice.
I tried backporting the script to Python 2.x, but that caused more problems than it probably would have solved, and I don't really see how that would be the problem anyways.
I made sure the Notebook's IAM role has sufficient permissions, and it does have a SagemakerFullAccess policy
What bothers me is that there's no error log I can see. If I could be directed to that I would be happy too, but if there's some hidden Sagemaker kungfu that I don't know about I would be forever grateful.
EDIT
The training job runs and prints to both the Jupyter cell and CloudWatch as expected. I've since lost the cell output in the notebook but below is the last few lines in CloudWatch. The first number is the epoch and the rest are various custom model metrics.
| Can you verify from the training job logs that your training script is running? It doesn't look like your Docker image would respond to the command train, which is what SageMaker requires, and so I suspect that your model isn't actually getting trained/saved to /opt/ml/model.
AWS documentation about how SageMaker runs the Docker container: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-dockerfile.html
edit: summarizing from the comments below - the training script must also save the model to /opt/ml/model (the model isn't saved automatically).
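For reference, here is a minimal sketch of what the train entrypoint needs to do. The /opt/ml paths are the SageMaker conventions from the linked docs; build_and_train_model and the model filename are placeholders for your own code:

#!/usr/bin/env python3
# /opt/program/train -- SageMaker runs this as `train` inside the container
import os
import torch

input_dir = "/opt/ml/input/data/training"  # "training" is the default channel name used by fit()
model_dir = "/opt/ml/model"                # everything saved here is tarred and uploaded to the S3 output path

def main():
    model = build_and_train_model(input_dir)  # placeholder for your actual training code
    os.makedirs(model_dir, exist_ok=True)
    torch.save(model.state_dict(), os.path.join(model_dir, "model.pth"))

if __name__ == "__main__":
    main()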
| https://stackoverflow.com/questions/63145277/ |
AttributeError: 'Sequential' object has no attribute 'size' | I'm new in pytorch and just try to write a network. The data.shape is (204,6170) and the last 5 cols are some labels. The number in data is float number like 0.030822.
#%%
from sklearn.feature_selection import RFE
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
import torch.functional as F
#%%
data = pd.read_table("table.log")
data_x = data.iloc[:, 0:(data.shape[1]-5)]
data_y = data.loc[:, 'target']
X_train, X_test, y_train, y_test = train_test_split(data_x,data_y,test_size=0.2,random_state=0)
#%%
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
lr.fit(X_train,y_train)
rfe1 = RFE(estimator=lr,n_features_to_select=2000)
rfe1 = rfe1.fit(X_train,y_train)
#%%
x_train_rfe1 = X_train[X_train.columns[rfe1.support_]]
print(x_train_rfe1.head())
class testmodel(nn.Module):
    def __init__(self):
        super(testmodel, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1500, 500, 1500, 0, 0),
            nn.ReLU(),
            nn.Conv1d(500, 100, 500, 0),
            nn.ReLU(),
            nn.Conv1d(100, 20, 100, 0),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.conv
        return x
#%%
x_train_rfe1 = torch.Tensor(x_train_rfe1.values)
y_train = torch.Tensor(y_train.values.astype(np.int64))
model = testmodel()
y = model(x_train_rfe1)
criterion = nn.MSELoss()
loss = criterion(y, y_train)
print(loss)
error message:
Traceback (most recent call last):
File "<input>", line 7, in <module>
File "/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 431, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/functional.py", line 2203, in mse_loss
if not (target.size() == input.size()):
File "/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 575, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Sequential' object has no attribute 'size'
Where is the error?
Is a network usually written like this?
How could I improve it?
| You never run your input tensor x through your conv sequential layer in forward.
def forward(self, x):
    x = self.conv(x)
    return x
Do some PyTorch tutorials; they will help you get the basics down: https://pytorch.org/tutorials/
| https://stackoverflow.com/questions/63145378/ |
multiplying a vector (1 x N) by a tensor (N x M x M) | I am looking for a matrix operation in numpy or preferably in pytorch that allows one to multiply a vector (1 x N) by a tensor (N x M x M) and get (1 x M x M). This is easily accomplished using a for loop, but the for loop does not allow back propagation during training. I tried using matmul in numpy and pytorch (and several others such as dot and bmm), but could not get any to work. Here is an example (where M=2, but is 256 in my use case) of what I am trying to do:
a = np.array([1,2,3])
b = np.array([[[1,2],[3,4]],[[5,6],[7,8]],[[9,10],[11,12]]])
I would like to perform the operation: 1*[[1,2],[3,4]] + 2*[[5,6],[7,8]] + 3*[[9,10],[11,12]], which can be achieved with a for loop like this:
for i in range(3):
    matrix_sum += a[i]*b[i]
Any advice or solution would be greatly appreciated.
| Numpy and pytorch were built upon matrix multiplications!
Torch example:
A = torch.rand(1, N)
B = torch.rand(N, M, M)
C = A @ B.transpose(0, 1)
C.transpose_(0, 1)
C.shape
torch.Size([1, M, M])
And similarly for numpy:
A = np.random.randn(1, N)
B = np.random.randn(N, M, M)
C = A @ B.transpose(1, 0, 2)
C = C.transpose(1, 0, 2)
C.shape
(1, M, M)
Edit: For the einsum lovers:
Pytorch and numpy handle einsum pretty much in the same way:
torch.einsum('i,ijk->jk', A, B)
np.einsum('i,ijk->jk', A, B)
Pytorch einsum documentation: https://pytorch.org/docs/master/generated/torch.einsum.html
Numpy einsum documentation: https://numpy.org/doc/stable/reference/generated/numpy.einsum.html
| https://stackoverflow.com/questions/63147507/ |
PyTorch 1.5.0 CUDA 10.2 installation via pip always installs CUDA 9.2 | I am currently installing an environment and need pytorch 1.5.0 with CUDA 10.2 . CUDA drivers are set up and all is fine, but the pytorch download via pip is broken.
The official doc for the previous versions says the installation should go as folllows:
CUDA 10.2
pip install torch==1.5.0 torchvision==0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
CUDA 10.1
pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
CUDA 9.2
pip install torch==1.5.0+cu92 torchvision==0.6.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
CPU only
pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
(source: https://pytorch.org/get-started/previous-versions/)
but when I try to install the first one, for CUDA 10.2, it installs the one for CUDA 9.2:
$ pip install torch==1.5.0 torchvision==0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.5.0
Downloading https://download.pytorch.org/whl/cu92/torch-1.5.0%2Bcu92-cp36-cp36m-linux_x86_64.whl (603.7 MB)
|████████████████████████████████| 603.7 MB 985 bytes/s
Collecting torchvision==0.6.0
Downloading https://download.pytorch.org/whl/cu92/torchvision-0.6.0%2Bcu92-cp36-cp36m-linux_x86_64.whl (6.5 MB)
|████████████████████████████████| 6.5 MB 600 kB/s
Requirement already satisfied: future in /home/---/site-packages (from torch==1.5.0) (0.18.2)
Requirement already satisfied: numpy in /home/---/python3.6/site-packages (from torch==1.5.0) (1.19.1)
Requirement already satisfied: pillow>=4.1.1 in /home/---/site-packages (from torchvision==0.6.0) (7.2.0) Installing collected packages: torch, torchvision
Successfully installed torch-1.5.0+cu92
torchvision-0.6.0+cu92
So, it downloads and installs the wrong version. Explicitly adding +cu102 to the version, like with the other versions, doesn't work either, since it then gives the error:
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.5.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.0.post4, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.2.0+cpu, 1.2.0+cu92, 1.3.0, 1.3.0+cpu, 1.3.0+cu100, 1.3.0+cu92, 1.3.1, 1.3.1+cpu, 1.3.1+cu100, 1.3.1+cu92, 1.4.0, 1.4.0+cpu, 1.4.0+cu100, 1.4.0+cu92, 1.5.0, 1.5.0+cpu, 1.5.0+cu101, 1.5.0+cu92, 1.5.1, 1.5.1+cpu, 1.5.1+cu101, 1.5.1+cu92, 1.6.0, 1.6.0+cpu, 1.6.0+cu101, 1.6.0+cu92)
ERROR: No matching distribution found for torch==1.5.0+cu102
Manually downloading the wheel and editing https://download.pytorch.org/whl/cu92/torch-1.5.0%2Bcu92-cp36-cp36m-linux_x86_64.whl by deleting the cu92 part or replacing it with cu102 doesn't work either, and results in an Error 403 from the pytorch server.
Sadly, I do rely on pip in this case and can't use the conda install command. Does anybody have a solution to this problem?
| You can always manually download and install the .whl files:
PyTorch 1.5.0 (cu102, py3.6, linux)
https://download.pytorch.org/whl/cu102/torch-1.5.0-cp36-cp36m-linux_x86_64.whl
TorchVision 0.6.0 (cu102, py3.6, linux)
https://download.pytorch.org/whl/cu102/torchvision-0.6.0-cp36-cp36m-linux_x86_64.whl
You can check these links on https://download.pytorch.org/whl/torch_stable.html. Scroll down to cu102/ section. You'll find other versions there, if you need.
| https://stackoverflow.com/questions/63152023/ |
Pytorch: How to load csv files in several folders | My data look like:
folder 1
part0001.csv
part0002.csv
...
part0199.csv
folder 2
part0001.csv
part0002.csv
...
part0199.csv
folder 3
part0001.csv
part0002.csv
...
part0199.csv
Update:
Each .csv file is about 100 MB. Both the features and the label are in the same .csv file. Each .csv file looks as follows.
feat1 feat2 label
1 1 3 0
2 3 4 1
3 2 5 0
...
I want to load the samples in the .csv files in batches.
| You have to build a dataset that loads them. (docs: https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset)
Example:
import torch
from torch.utils.data import Dataset
import glob2
import pandas as pd

class CustomDataset(Dataset):
    def __init__(self, root):
        self.root = root
        # make a list containing the path to all your csv files
        self.paths = glob2.glob('src/**/*.csv')

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        data = pd.read_csv(self.paths[idx])
        x = data['features']
        y = data['labels']
        return x, y
That is the basic version; you can modify it to sample random examples from each csv file or to pre-process the data before training.
Edit
If you are just interested in one line from each csv, there are three things you can do.
Pre-process your data and save it as one big .csv file, and load it all in memory before training. That will save you from the trouble of reloading heavy files.
(If the previous is not possible because the final file would not fit in memory) Pre-process your data and save one .csv file per datapoint. That will still require your dataloader to read from disk, but at least you will be loading lighter files this time.
(If preprocessing your data is not an option) Keep as much as you can in memory to avoid reloading files.
There aren't many secrets to implementing the first two solutions. The code for solution 3 should look something like this:
import torch
import numpy as np
from torch.utils.data import Dataset
import glob2
import pandas as pd

class CustomDataset(Dataset):
    def __init__(self, root):
        self.root = root
        # make a list containing the path to all your csv files
        self.paths = glob2.glob('src/**/*.csv')
        # dict to keep loaded data in memory:
        self.cache = {}

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        """This getitem will load data and keep it in memory during training."""
        data = self.cache.get(idx, None)
        if data is None:
            data = pd.read_csv(self.paths[idx])
            try:
                # cache data into memory
                self.cache[idx] = data
            except MemoryError:
                # we may be using too much memory
                del self.cache[list(self.cache.keys())[0]]
        rnd_idx = np.random.randint(len(data))
        x = data['features'][rnd_idx]
        y = data['labels'][rnd_idx]
        return x, y
| https://stackoverflow.com/questions/63155127/ |
Pytorch model.load call ambiguity | Simple one:
what does this do?
model.load_state_dict({name:
    weights_before[name] + (weights_after[name] - weights_before[name]) * outerstepsize
    for name in weights_before})
Thank you very much!
| load_state_dict loads learnable parameters into a neural network from a dictionary.
Each layer has its respective name and parameters. In this case the dict comprehension iterates over the parameter names in weights_before and weights_after; for each name, the loaded value is weights_before[name] plus the difference (weights_after[name] - weights_before[name]) scaled by outerstepsize. In other words, the new weights are a linear interpolation between the weights before and after training: outerstepsize = 0 keeps the old weights, outerstepsize = 1 takes the new ones (a pattern used, for example, in Reptile-style meta-learning).
You can check out more in PyTorch docs.
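A tiny numeric sketch of the interpolation, using a hypothetical one-parameter model:

import torch
import torch.nn as nn

model = nn.Linear(1, 1, bias=False)
weights_before = {"weight": torch.tensor([[0.0]])}
weights_after = {"weight": torch.tensor([[1.0]])}
outerstepsize = 0.25

model.load_state_dict({name:
    weights_before[name] + (weights_after[name] - weights_before[name]) * outerstepsize
    for name in weights_before})
print(model.weight)  # 0.25, i.e. a quarter of the way from weights_before toward weights_after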
| https://stackoverflow.com/questions/63158646/ |
How to run pytorch with NVIDIA "cuda toolkit" version instead of the official conda "cudatoolkit" version? | Some questions came up from https://superuser.com/questions/1572640/do-i-need-to-install-cuda-separately-after-installing-the-nvidia-display-driver. One of these questions:
Does conda pytorch need a different version than the official non-conda / non-pip cuda toolkit at https://developer.nvidia.com/cuda-toolkit?
In other words: Can I use the NVIDIA "cuda toolkit" for a pytorch installation?
Context:
If you go through the "command helper" at https://pytorch.org/get-started/locally/, you can choose between cuda versions 9.2, 10.1, 10.2 and None.
Taking 10.2 can result in:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
Taking "None" builds the following command, but then you also cannot use cuda in pytorch:
conda install pytorch torchvision cpuonly -c pytorch
Could I then use NVIDIA "cuda toolkit" version 10.2 as the conda cudatoolkit in order to make this command the same as if it was executed with cudatoolkit=10.2 parameter?
The question arose since pytorch installs a different version (10.2 instead of the most recent NVIDIA 11.0), and the conda install takes an additional 325 MB. If both versions were 11.0 and the installation size were smaller, you might not even notice the possible difference. But now it is clear that conda carries its own cuda version, which is independent from the NVIDIA one.
| You can try to install PyTorch via Pip:
pip install torch torchvision
It is also official way of installing, available in "command helper" at https://pytorch.org/get-started/locally/.
It uses preinstalled CUDA and doesn't download own CUDA Toolkit.
Also you can choose the version of CUDA to install PyTorch for:
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/63163178/ |
Why is PyTorch handling similar weight update assignments differently? | The below code does work updating the weights using:
w -= lr * w.grad
But when updating the weights using
w = w - lr * w.grad
it throws:
element 0 of tensors does not require grad and does not have a grad_fn
Why is that and shouldn't both assignments be equal?
import torch
X = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
y = torch.tensor([2, 4, 6, 8], dtype=torch.float32)
w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)
epochs = 10
lr = 0.002
for epoch in range(1, epochs + 1):
    y_pred = w * X
    loss = ((y_pred - y)**2).mean()
    loss.backward()
    print(w.grad)
    with torch.no_grad():
        ### Option 1 - doesn't work
        w = w - lr * w.grad
        ### Option 2 - does work
        w -= lr * w.grad
        w.grad.zero_()
| The difference is that -= is an in-place op and the alternative is not. Therefore, when using -= inside the .no_grad() context, the variable will compute the operation, but the gradient won't take that op into account.
When you perform a normal subtraction, you'd expect a SubBackward as grad_fn:
import torch
x = torch.tensor([3.], requires_grad=True)
print(x)
# >>> tensor([3.], requires_grad=True)
x = x - 2
print(x)
# >>> tensor([1.], grad_fn=<SubBackward0>)
and indeed, this is what we get. But, if we try -= inside the .no_grad() context:
with torch.no_grad():
    x -= 2

print(x)
# >>> tensor([1.], requires_grad=True)
we get the expected result (i.e., 1), but no backward function (ofc, we specified that with .no_grad()). Note that it still has requires_grad=True. However, if we try to run this in-place op out of the .no_grad() context, this is what happens:
x -= 2
# >>> Traceback (most recent call last):
# >>> File "<stdin>", line 1, in <module>
# >>> RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
and if we try to run the normal subtraction inside the .no_grad() context, we will get:
x = x - 2
print(x)
# >>> tensor([-1.])
a tensor without requires_grad; and that is the reason you get the error when using this option.
| https://stackoverflow.com/questions/63163827/ |
AttributeError: 'numpy.ndarray' object has no attribute 'dim' with pytorch and scipy.optimize minimize | I have the following small code snippet:
import torch
from scipy.optimize import minimize
def f(x):
    return torch.norm(x)
x = torch.tensor([1.0, 1.0])
y = minimize(f, x)
print(y)
However, this results in this error message:
> AttributeError Traceback (most recent call
> last) <ipython-input-17-cb070be7a142> in <module>
> 6
> 7 x = [1.0, 1.0]
> ----> 8 y = minimize(f, x)
> 9 print(y)
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\_minimize.py in
> minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints,
> tol, callback, options)
> 610 return _minimize_cg(fun, x0, args, jac, callback, **options)
> 611 elif meth == 'bfgs':
> --> 612 return _minimize_bfgs(fun, x0, args, jac, callback, **options)
> 613 elif meth == 'newton-cg':
> 614 return _minimize_newtoncg(fun, x0, args, jac, hess, hessp, callback,
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\optimize.py in
> _minimize_bfgs(fun, x0, args, jac, callback, gtol, norm, eps, maxiter, disp, return_all, finite_diff_rel_step, **unknown_options) 1100
> 1101 sf = _prepare_scalar_function(fun, x0, jac, args=args,
> epsilon=eps,
> -> 1102 finite_diff_rel_step=finite_diff_rel_step) 1103 1104 f =
> sf.fun
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\optimize.py in
> _prepare_scalar_function(fun, x0, jac, args, bounds, epsilon, finite_diff_rel_step, hess)
> 260 # calculation reduces overall function evaluations.
> 261 sf = ScalarFunction(fun, x0, args, grad, hess,
> --> 262 finite_diff_rel_step, bounds, epsilon=epsilon)
> 263
> 264 return sf
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py
> in __init__(self, fun, x0, args, grad, hess, finite_diff_rel_step,
> finite_diff_bounds, epsilon)
> 74
> 75 self._update_fun_impl = update_fun
> ---> 76 self._update_fun()
> 77
> 78 # Gradient evaluation
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py
> in _update_fun(self)
> 164 def _update_fun(self):
> 165 if not self.f_updated:
> --> 166 self._update_fun_impl()
> 167 self.f_updated = True
> 168
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py
> in update_fun()
> 71
> 72 def update_fun():
> ---> 73 self.f = fun_wrapped(self.x)
> 74
> 75 self._update_fun_impl = update_fun
>
> ~\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py
> in fun_wrapped(x)
> 68 def fun_wrapped(x):
> 69 self.nfev += 1
> ---> 70 return fun(x, *args)
> 71
> 72 def update_fun():
>
> <ipython-input-17-cb070be7a142> in f(x)
> 3
> 4 def f(x):
> ----> 5 return torch.norm(x)
> 6
> 7 x = [1.0, 1.0]
>
> ~\Anaconda3\lib\site-packages\torch\functional.py in norm(input, p,
> dim, keepdim, out, dtype)
> 738 (tensor(3.7417), tensor(11.2250))
> 739 """
> --> 740 ndim = input.dim()
> 741
> 742 # catch default case
>
> AttributeError: 'numpy.ndarray' object has no attribute 'dim'
I'm not sure why this is happening as I don't think I'm converting anything to a numpy.ndarray.
Any help would be appreciated.
| You can use NumPy instead of torch.
import numpy
from scipy.optimize import minimize

def f(x):
    return numpy.linalg.norm(x)

x = numpy.array([1.0, 1.0])
y = minimize(f, x)
print(y)
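If you would rather keep the torch computation, an alternative sketch is to convert inside f, since scipy.optimize.minimize always passes a numpy.ndarray to the objective:

import torch
import numpy
from scipy.optimize import minimize

def f(x):
    x = torch.from_numpy(x)      # scipy hands f a numpy array
    return torch.norm(x).item()  # return a plain float for scipy

y = minimize(f, numpy.array([1.0, 1.0]))
print(y)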
| https://stackoverflow.com/questions/63164627/ |
PyTorch training with dropout and/or batch-normalization | A model should be set in the evaluation mode for inference by calling model.eval().
Do we need to also do this during training before getting the model outputs? Like within a training epoch if the network contains one or more dropout and/or batch-normalization layers.
If this is not done, might the output of the forward pass in the training epoch be affected by the randomness in the dropout?
Many example codes do not do this and something along these lines is the common approach:
for t in range(num_epochs):
# forward pass
yhat = model(x)
# get the loss
loss = criterion(yhat , y)
# backward pass, optimizer step
optimizer.zero_grad()
loss.backward()
optimizer.step()
For example here is an example code to look at : convolutional_neural_network/main.py
Should this instead be?
for t in range(num_epochs):
# forward pass
model.eval() # disable dropout etc
yhat = model(x)
# get the loss
loss = criterion(yhat , y)
# backward pass, optimizer step
model.train()
optimizer.zero_grad()
loss.backward()
optimizer.step()
| TLDR:
Should this instead be?
No!
Why?
More explanation:
Different Modules behave differently depending on whether they are in training or evaluation/test mode.
BatchNorm and Dropout are only two examples of such modules, basically any module that has a training phase follows this rule.
When you do .eval(), you are signaling all modules in the model to shift operations accordingly.
Update
The answer is that during training you should not use eval mode, and yes, as long as you have not set eval mode, dropout will be active and act randomly in each forward pass. Similarly, all other modules that have two phases will behave accordingly. That is, BN will always update the mean/var for each pass; also, if you use a batch_size of 1, it will error out, since BN cannot be computed on a batch of 1.
As pointed out in the comments, you should not call eval() before the forward pass during training: it effectively disables all modules that have different phases for train/test mode, such as BN and Dropout (basically any module that has updateable/learnable parameters, or that impacts network topology like dropout), and you will not see them contributing to your network's learning. So don't code like that!
Let me explain a bit what happens during training:
When you are in training mode, all of the modules that make up your model may have two modes, training and test mode. These modules either have learnable parameters that need to be updated during training, like BN, or affect network topology in a sense, like Dropout (by disabling some features during the forward pass). Some modules, such as ReLU(), only operate in one mode and thus do not change when modes change.
When you are in training mode, you feed an image; it passes through the layers until it faces a dropout, and here some features are disabled, thus their responses to the next layer are omitted. The output goes through other layers until it reaches the end of the network and you get a prediction.
The network may make correct or wrong predictions, which will accordingly update the weights. If the answer was right, the features/combinations of features that resulted in the correct answer will be positively affected, and vice versa.
So during training you do not need and should not disable dropout, as it affects the output and should be affecting it so that the model learns a better set of features.
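For completeness, the conventional training/validation pattern looks roughly like this sketch:

for epoch in range(num_epochs):
    model.train()              # dropout active, BN updates its running stats
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()               # dropout off, BN uses its running stats
    with torch.no_grad():
        for x, y in val_loader:
            val_loss = criterion(model(x), y)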
I hope this makes it a bit more clear for you. if you still feel you need more, say so in the comments.
| https://stackoverflow.com/questions/63167099/ |
How can I check the image sizes at each step of the CNN? | I'm trying to classify cats and dogs with a CNN in PyTorch.
While building a few layers and processing images, I found that the final feature map size doesn't match the size I calculated.
So I tried to check the feature map size step by step through the CNN by printing shapes, but it doesn't work.
I heard TensorFlow lets you check tensor sizes between steps; how can I do that here?
What I want is :
def __init__(self):
super(CNN, self).__init__()
conv1 = nn.Conv2d(1, 16, 3, 1, 1)
conv1_1 = nn.Conv2d(16, 16, 3, 1, 1)
pool1 = nn.MaxPool2d(2)
conv2 = nn.Conv2d(16, 32, 3, 1, 1)
conv2_1 = nn.Conv2d(32, 32, 3, 1, 1)
pool2 = nn.MaxPool2d(2)
conv3 = nn.Conv2d(32, 64, 3, 1, 1)
conv3_1 = nn.Conv2d(64, 64, 3, 1, 1)
conv3_2 = nn.Conv2d(64, 64, 3, 1, 1)
pool3 = nn.MaxPool2d(2)
self.conv_module = nn.Sequential(
conv1,
nn.ReLU(),
conv1_1,
nn.ReLU(),
pool1,
# check first result size
conv2,
nn.ReLU(),
conv2_1,
nn.ReLU(),
pool2,
# check second result size
conv3,
nn.ReLU(),
conv3_1,
nn.ReLU(),
conv3_2,
nn.ReLU(),
pool3,
# check third result size
pool4,
# check fourth result size
pool5
# check fifth result size
)
If there's any other way to check feature size at every step, please give some advice.
Thanks in advance.
| To do that you shouldn't use nn.Sequential. Just initialize your layers in __init__() and call them in the forward function. In the forward function you can print the shapes out. For example like this:
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(...)
        self.maxpool1 = nn.MaxPool2d(2)  # kernel size 2 is just an example
        self.conv2 = nn.Conv2d(...)
        self.maxpool2 = nn.MaxPool2d(2)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.maxpool1(x)
print(x.size())
x = self.conv2(x)
x = F.relu(x)
x = self.maxpool2(x)
print(x.size())
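With the layer arguments filled in, a quick shape check could then be (a sketch; the input shape is just an example):
model = CNN()
dummy = torch.randn(1, 1, 64, 64)  # batch of one 64x64 single-channel image
_ = model(dummy)                   # prints the size after each pooling stage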
Hope that's what you're looking for!
| https://stackoverflow.com/questions/63167919/ |
How can I use Numba for Pytorch tensors? | I am new to Numba and I need to use Numba to speed up some Pytorch functions. But I find even a very simple function does not work :(
import torch
import numba
@numba.njit()
def vec_add_odd_pos(a, b):
res = 0.
for pos in range(len(a)):
if pos % 2 == 0:
res += a[pos] + b[pos]
return res
x = torch.tensor([3, 4, 5.])
y = torch.tensor([-2, 0, 1.])
z = vec_add_odd_pos(x, y)
But the following error appears
def vec_add_odd_pos(a, b):
res = 0.
^
This error may have been caused by the following argument(s):
argument 0: cannot determine Numba type of <class 'torch.Tensor'>
argument 1: cannot determine Numba type of <class 'torch.Tensor'>
Can anyone help me? A link with more examples would also be appreciated. Thanks.
| Numba supports NumPy arrays but not torch tensors. There is, however, a bridge, Tensor.numpy():
Returns self tensor as a NumPy ndarray. This tensor and the returned
ndarray share the same underlying storage. Changes to self tensor will
be reflected in the ndarray and vice versa.
That means you have to call jitted functions as:
...
z = vec_add_odd_pos(x.numpy(), y.numpy())
If z should be a torch.Tensor as well, torch.from_numpy is what we need:
Creates a Tensor from a numpy.ndarray.
The returned tensor and ndarray share the same memory. Modifications
to the tensor will be reflected in the ndarray and vice versa. The
returned tensor is not resizable.
...
For our code that means
...
z = torch.from_numpy(vec_add_odd_pos(x.numpy(), y.numpy()))
should be called.
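Putting it together, a minimal working sketch (note that this particular function returns a scalar, so the result is a plain Python float; wrap it with torch.tensor(z) if you need a tensor back):
import torch
import numba

@numba.njit
def vec_add_odd_pos(a, b):
    res = 0.
    for pos in range(len(a)):
        if pos % 2 == 0:
            res += a[pos] + b[pos]
    return res

x = torch.tensor([3., 4., 5.])
y = torch.tensor([-2., 0., 1.])
z = vec_add_odd_pos(x.numpy(), y.numpy())
print(z)  # 7.0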
| https://stackoverflow.com/questions/63169760/ |
convert nn.Softmax to torch.tensor | I have a softmax function at the end of my Neural Network.
I want to have the probabilities as a torch.tensor. For that I am using torch.tensor(nn.softmax(x)) and getting the error RuntimeError: Could not infer dtype of Softmax.
Could you tell me what I am doing wrong here, or is there another way to do it?
| nn.Softmax is a class. You can use it like this:
import torch
x = torch.tensor([10., 3., 8.])
softmax = torch.nn.Softmax(dim=0)
probs = softmax(x)
or, you can use the Functional API torch.nn.functional.softmax:
import torch
x = torch.tensor([10., 3., 8.])
probs = torch.nn.functional.softmax(x, dim=0)
They are equivalent. In both cases, you can check that type(probs) is <class 'torch.Tensor'>.
| https://stackoverflow.com/questions/63174079/ |
Multidimensional tensor product in PyTorch | In PyTorch I have two tensors of dimensions [K,L,M] and [M,L,N]. I want to perform a standard tensor convolution product of those tensors along the middle two dimensions to obtain a [K,N] tensor. I couldn't find official documentation on how to perform this operation; perhaps it would be better done in some other library and then converted back to a PyTorch tensor?
| If by convolution you actually mean something like contraction, you're probably looking for torch.tensordot. You can specify the indices that should be contracted.
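For example, a minimal sketch contracting a [K,L,M] tensor with an [M,L,N] tensor over the shared L and M dimensions:
import torch

K, L, M, N = 2, 3, 4, 5
a = torch.randn(K, L, M)
b = torch.randn(M, L, N)
# pair a's dims (1, 2) = (L, M) with b's dims (1, 0) = (L, M)
c = torch.tensordot(a, b, dims=([1, 2], [1, 0]))
print(c.shape)  # torch.Size([2, 5])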
| https://stackoverflow.com/questions/63180601/ |
How can I access layers in a pytorch module by index? | I am trying to write a pytorch module with multiple layers. Since I need the intermediate outputs I cannot put them all in a Sequential as usual. On the other hand, since there are many layers, what I have in mind is to put the layers in a list and access them by index in a loop. The code below describes what I am trying to achieve:
import torch
import torch.nn as nn
import torch.optim as optim
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer_list = []
self.layer_list.append(nn.Linear(2,3))
self.layer_list.append(nn.Linear(3,4))
self.layer_list.append(nn.Linear(4,5))
def forward(self, x):
res_list = [x]
for i in range(len(self.layer_list)):
res_list.append(self.layer_list[i](res_list[-1]))
return res_list
model = MyModel()
x = torch.randn(4,2)
y = model(x)
print(y)
optimizer = optim.Adam(model.parameters())
The forward method works fine, but when I want to set an optimizer the program says
ValueError: optimizer got an empty parameter list
It appears that the layers in the list are not registered here. What can I do?
| If you put your layers in a python list, pytorch does not register them correctly. You have to do so using ModuleList (https://pytorch.org/docs/master/generated/torch.nn.ModuleList.html).
ModuleList can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible by all Module methods.
Your code should be something like:
import torch
import torch.nn as nn
import torch.optim as optim
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer_list = nn.ModuleList() # << the only changed line! <<
self.layer_list.append(nn.Linear(2,3))
self.layer_list.append(nn.Linear(3,4))
self.layer_list.append(nn.Linear(4,5))
def forward(self, x):
res_list = [x]
for i in range(len(self.layer_list)):
res_list.append(self.layer_list[i](res_list[-1]))
return res_list
By using ModuleList you make sure all layers are registered in the computational graph.
There is also a ModuleDict that you can use if you want to index your layers by name. You can check pytorch's containers here: https://pytorch.org/docs/master/nn.html#containers
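For instance, a minimal ModuleDict sketch (reusing this answer's toy layer sizes):
layers = nn.ModuleDict({
    'fc1': nn.Linear(2, 3),
    'fc2': nn.Linear(3, 4),
})
out = layers['fc2'](layers['fc1'](x))  # index layers by name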
| https://stackoverflow.com/questions/63184926/ |
Can I make my custom pytorch modules behave differently when train() or eval() are called? | According to the official documents, using train() or eval() will have effects on certain modules. However, now I wish to achieve a similar thing with my custom module, i.e. it does something when train() is turned on, and something different when eval() is turned on. How can I do this?
| Yes, you can.
As you can see in the source code, eval() and train() are basically changing a flag called self.training (note that it is called recursively):
def train(self: T, mode: bool = True) -> T:
self.training = mode
for module in self.children():
module.train(mode)
return self
def eval(self: T) -> T:
return self.train(False)
This flag is available in every nn.Module. If your custom module inherits this base class, then it is quite simple to achieve what you want:
import torch.nn as nn
class MyCustomModule(nn.Module):
def __init__(self):
super().__init__()
# [...]
    def forward(self, x):
        if self.training:
            ...  # train() -> training logic
        else:
            ...  # eval() -> inference logic
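Switching modes then works exactly like for built-in modules:
m = MyCustomModule()
m.train()  # m.training == True -> training branch
m.eval()   # m.training == False -> inference branch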
| https://stackoverflow.com/questions/63185157/ |
error while importing pytorch module. (The specified module could not be found.) | I just installed Python 3.8 via the Anaconda installer and installed PyTorch using the command
conda install pytorch torchvision cpuonly -c pytorch
When I try to import torch, I get this error message.
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\chunc\anaconda3\lib\site-packages\torch\lib\asmjit.dll" or one of its dependencies.
I can see the DLL files are still in the directory.
I ran Dependency Walker and it gave me this result.
I have been stuck on this problem for a day.
What should I do if I want to use the PyTorch module?
| I had the same problem. You should check whether you installed the Microsoft Visual C++ Redistributable, because if you didn't, that can lead to this DLL load failure.
Here is a link to download it: https://aka.ms/vs/16/release/vc_redist.x64.exe
| https://stackoverflow.com/questions/63187161/ |
How to understand "-input[range(target.shape[0]), target]"? | In the official example of PyTorch, it gives a loss function as follows.
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
How to understand the grammar of "input[range(target.shape[0]), target]" in the above function?
"input" has a torch.Size([64, 10]) and "target" has a torch.Size([64]). Why use "range" function here?
The range function is used here as a shortcut to create a list/generator of row indices starting from 0 up to (but not including) target.shape[0], which is 64. So it is shorthand for essentially [0, 1, 2, ..., 63].
To make this explicit you could do the following:
def nll(input, target):
minputlist = list(range(target.shape[0]))
print(minputlist )
return -input[minputlist, target].mean()
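This is NumPy-style advanced indexing: pairing the row indices [0, ..., 63] with the class indices in target selects one element per row, i.e. the log-probability of the correct class for each sample. A small sketch:
import torch

input = torch.tensor([[0.1, 0.9],
                      [0.8, 0.2],
                      [0.3, 0.7]])
target = torch.tensor([1, 0, 0])
print(input[range(target.shape[0]), target])
# tensor([0.9000, 0.8000, 0.3000]) -- input[0,1], input[1,0], input[2,0]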
| https://stackoverflow.com/questions/63189222/ |
Memory leak issue using PyTorch IterableDataset with zarr | I'm trying to build a pytorch project on an IterableDataset with zarr as storage backend.
class Data(IterableDataset):
def __init__(self, path, start=None, end=None):
super(Data, self).__init__()
store = zarr.DirectoryStore(path)
self.array = zarr.open(store, mode='r')
if start is None:
start = 0
if end is None:
end = self.array.shape[0]
assert end > start
self.start = start
self.end = end
def __iter__(self):
return islice(self.array, self.start, self.end)
This works quite nicely with small test datasets, but once I move to my actual dataset (480 000 000 x 290) I'm running into a memory leak. I've tried logging the Python heap periodically as everything slows to a crawl, but I couldn't see anything increasing in size abnormally, so the lib I used (pympler) didn't actually catch the memory leak.
I'm kind of at my wits' end, so if anybody has any idea how to debug this further, it would be greatly appreciated.
Cross-posted on PyTorch Forums.
| Turns out that I had an issue in my validation routine:
with torch.no_grad():
for batch in tqdm(testloader, **params):
x = batch[:, 1:].to(device)
y = batch[:, 0].unsqueeze(0).T
y_test_pred = torch.sigmoid(sxnet(x))
y_pred_tag = torch.round(y_test_pred)
y_pred_list.append(y_pred_tag.cpu().numpy())
y_list.append(y.numpy())
I originally thought that I am well clear of running into troubles with appending my results to lists, but the issue is that the result of .numpy was an array of arrays (since the original datatype was a 1xn Tensor).
Adding .flatten() on the numpy arrays has fixed this issue and the RAM consumption is now as I originally provisioned.
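Concretely, based on the snippet above, the fix amounts to:
y_pred_list.append(y_pred_tag.cpu().numpy().flatten())
y_list.append(y.numpy().flatten())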
| https://stackoverflow.com/questions/63192550/ |
How deep a Neural Network is required for 12 inputs ranging from -5000 to 5000 in A3C Reinforcement Learning | I am trying to use A3C with an LSTM for an environment where the state has 12 inputs ranging from -5000 to 5000.
I am using an LSTM layer of size 12 and then 2 fully connected hidden layers of size 256, then 1 fc for 3 action dim and 1 fc for 1 value function.
The reward is in range (-1,1).
However during initial training I am unable to get good results.
My question is: is this neural network good enough for this kind of environment?
Below is the code for Actor Critic
class ActorCritic(torch.nn.Module):
def __init__(self, params):
super(ActorCritic, self).__init__()
self.state_dim = params.state_dim
self.action_space = params.action_dim
self.hidden_size = params.hidden_size
state_dim = params.state_dim
self.lstm = nn.LSTMCell(state_dim, state_dim)
self.lstm.bias_ih.data.fill_(0)
self.lstm.bias_hh.data.fill_(0)
lst = [state_dim]
for i in range(params.layers):
lst.append(params.hidden_size)
self.hidden = nn.ModuleList()
for k in range(len(lst)-1):
self.hidden.append(nn.Linear(lst[k], lst[k+1]))
for layer in self.hidden:
layer.apply(init_weights)
self.critic_linear = nn.Linear(params.hidden_size, 1)
self.critic_linear.apply(init_weights)
self.actor_linear = nn.Linear(params.hidden_size, self.action_space)
self.actor_linear.apply(init_weights)
self.train()
def forward(self, inputs):
inputs, (hx, cx) = inputs
inputs = inputs.reshape(1,-1)
hx, cx = self.lstm(inputs, (hx, cx))
x = hx
for layer in self.hidden:
x = torch.tanh(layer(x))
return self.critic_linear(x), self.actor_linear(x), (hx, cx)
class Params():
def __init__(self):
self.lr = 0.0001
self.gamma = 0.99
self.tau = 1.
self.num_processes = os.cpu_count()
self.state_dim = 12
self.action_dim = 3
self.hidden_size = 256
self.layers = 2
self.epochs = 10
self.lstm_layers = 1
self.lstm_size = self.state_dim
self.num_steps = 20
self.window = 50
| Since you only have 12 inputs, make sure you don't use too many parameters; also try changing the activation function.
I don't use Torch, so I can't fully follow the model architecture.
Why is your first layer an LSTM? Is your data a time series?
Try using only Dense layers:
1 Dense layer with 12 neurons plus the output layer, or
2 Dense layers with 12 neurons each plus the output layer.
As for the activation function, use leaky ReLU, since your data goes down to -5000; alternatively, you can make your data non-negative by adding 5000 to all samples.
| https://stackoverflow.com/questions/63195873/ |
How many epochs are required for training a model with an LSTM? | I have an Actor-Critic TD3 model with an LSTM in my AI.
For every training step, I create batches of sequential data and train my AI.
Can an expert please let me know whether I also need epochs for this AI? And in general, how many epochs can I run with this code? Since I am creating many batches in one training step, is it feasible to have epochs as well?
Below is the training-step code
def train(
self,
replay_buffer,
iterations,
batch_size=50,
discount=0.99,
tau=0.005,
policy_noise=0.2,
noise_clip=0.5,
policy_freq=2,
):
b_state = torch.Tensor([])
b_next_state = torch.Tensor([])
b_done = torch.Tensor([])
b_reward = torch.Tensor([])
b_action = torch.Tensor([])
for it in range(iterations):
# print ('it: ', it, ' iterations: ', iterations)
# Step 4: We sample a batch of transitions (s, s', a, r) from the memory
(batch_states, batch_next_states, batch_actions,
batch_rewards, batch_dones) = \
replay_buffer.sample(batch_size)
batch_states = batch_states.astype(float)
batch_next_states = batch_next_states.astype(float)
batch_actions = batch_actions.astype(float)
batch_rewards = batch_rewards.astype(float)
batch_dones = batch_dones.astype(float)
state = torch.from_numpy(batch_states)
next_state = torch.from_numpy(batch_next_states)
action = torch.from_numpy(batch_actions)
reward = torch.from_numpy(batch_rewards)
done = torch.from_numpy(batch_dones)
b_size = 1
seq_len = state.shape[0]
batch = b_size
input_size = state_dim
state = torch.reshape(state, ( 1,seq_len, state_dim))
next_state = torch.reshape(next_state, ( 1,seq_len, state_dim))
done = torch.reshape(done, ( 1,seq_len, 1))
reward = torch.reshape(reward, ( 1, seq_len, 1))
action = torch.reshape(action, ( 1, seq_len, action_dim))
b_state = torch.cat((b_state, state),dim=0)
b_next_state = torch.cat((b_next_state, next_state),dim=0)
b_done = torch.cat((b_done, done),dim=0)
b_reward = torch.cat((b_reward, reward),dim=0)
b_action = torch.cat((b_action, action),dim=0)
# state = torch.reshape(state, (seq_len, 1, state_dim))
# next_state = torch.reshape(next_state, (seq_len, 1,
# state_dim))
# done = torch.reshape(done, (seq_len, 1, 1))
# reward = torch.reshape(reward, (seq_len, 1, 1))
# action = torch.reshape(action, (seq_len, 1, action_dim))
# b_state = torch.cat((b_state, state),dim=1)
# b_next_state = torch.cat((b_next_state, next_state),dim=1)
# b_done = torch.cat((b_done, done),dim=1)
# b_reward = torch.cat((b_reward, reward),dim=1)
# b_action = torch.cat((b_action, action),dim=1)
print("dim state:",b_state.shape)
# for h and c shape (num_layers * num_directions, batch, hidden_size)
ha0 = torch.zeros(lstm_layers, b_state.shape[0], state_dim)
ca0 = torch.zeros(lstm_layers, b_state.shape[0], state_dim)
hc0 = torch.zeros(lstm_layers, b_state.shape[0], state_dim + action_dim)
cc0 = torch.zeros(lstm_layers, b_state.shape[0], state_dim + action_dim)
# Step 5: From the next state s', the Actor target plays the next action a'
b_next_action = self.actor_target(b_next_state, (ha0, ca0))
b_next_action = b_next_action[0]
# Step 6: We add Gaussian noise to this next action a' and we clamp it in a range of values supported by the environment
noise = torch.Tensor(b_next_action).data.normal_(0,
policy_noise)
noise = noise.clamp(-noise_clip, noise_clip)
b_next_action = (b_next_action + noise).clamp(-self.max_action,
self.max_action)
# Step 7: The two Critic targets take each the couple (s', a') as input and return two Q-values Qt1(s',a') and Qt2(s',a') as outputs
result = self.critic_target(b_next_state, b_next_action, (hc0,cc0))
target_Q1 = result[0]
target_Q2 = result[1]
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2).double()
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
target_Q = b_reward + (1 - b_done) * discount * target_Q
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
b_action_reshape = torch.reshape(b_action, b_next_action.shape)
result = self.critic(b_state, b_action_reshape, (hc0, cc0))
current_Q1 = result[0]
current_Q2 = result[1]
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) \
+ F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
out = self.actor(b_state, (ha0, ca0))
out = out[0]
(actor_loss, hx, cx) = self.critic.Q1(b_state, out, (hc0,cc0))
actor_loss = -1 * actor_loss.mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Actor target by polyak averaging
for (param, target_param) in zip(self.actor.parameters(),
self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau)
* target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Critic target by polyak averaging
for (param, target_param) in zip(self.critic.parameters(),
self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau)
* target_param.data)
| First I will say this, generally reinforcement learning requires A LOT of training. This is however heavily dependent on the complexity of the problem you are trying to solve. The situation may be made worse if your model is complicated (e.g. when using a LSTM). When training an agent to play Atari games you can expect needing up to 1 million episodes (depending on the game and the approach used).
With regard to epochs, if you mean repeated use of a particular episode (or collection of episodes) for training, then it depends on whether you're using an on-policy or off-policy approach (for off-policy this is like experience replay; "epochs" is the wrong word). Actor-Critic methods are generally on-policy, which means they require fresh data at each stage of training. Once an episode has been used for training it should not be used again. For more information about the difference between on- and off-policy, I recommend taking a look at Sutton's book.
| https://stackoverflow.com/questions/63203574/ |
Pytorch training loss function throws: "TypeError: 'Tensor' object is not callable" | I use Python 3.x, and pytorch 1.5.0 with a GPU. I am trying to write a simple multinomial logistic regression using mnist data.
My issue is the loss() function throws a TypeError: 'Tensor' object is not callable while looping through the training batches. The thing that baffles me is that the error does not show up in the first iteration of the loop, but for the second batch, I get the full error below:
Traceback (most recent call last):
File "/snap/pycharm-community/207/plugins/python-ce/helpers/pydev/pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/207/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/pytorch_tutorial/Pytorch_feed_fwd_310720.py", line 78, in <module>
loss = loss(preds,ys)
TypeError: 'Tensor' object is not callable
The loss() function here is simply loss = nn.CrossEntropyLoss(). The full code is below. Any pointers would be very welcome.
for epoch in range(5):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
xs, ys = data
opt.zero_grad()
preds = net(xs)
loss = loss(preds,ys)
loss.backward()
opt.step()
# print statistics
running_loss += loss.item()
if i % 1000 == 999: # print every 1000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('epoch {}, loss {}'.format(epoch, loss.item()))
a=1
| It is because you are rebinding the name loss inside the loop: on the first iteration, loss = loss(preds, ys) replaces the nn.CrossEntropyLoss() module with the loss tensor it returns, so on the second iteration you end up calling a tensor.
Rename one of them, e.g. change loss = loss(preds, ys) to _loss = loss(preds, ys) (and update the later uses of loss accordingly).
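A minimal reworked loop, keeping the loss module under a separate name such as criterion:
criterion = nn.CrossEntropyLoss()
for epoch in range(5):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        xs, ys = data
        opt.zero_grad()
        preds = net(xs)
        loss = criterion(preds, ys)  # 'criterion' is never shadowed
        loss.backward()
        opt.step()
        running_loss += loss.item()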
| https://stackoverflow.com/questions/63204176/ |
From where can I get a detailed description of all the methods for the model in pytorch torchvision? | I am a beginner with PyTorch. When I read the source code of a project about Mask R-CNN, I don't know where I can get information about some methods that I don't understand. The official documentation doesn't seem very detailed.
# load an instance segmentation model pre-trained pre-trained on COCO
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
Just like the code above, I could not get detailed information about the roi_heads attribute from the model documentation. Where can I learn about it?
| You won't be able to find such a thing in the documentation. You'll have to dive into the source code. Object Detection APIs, especially anchor-based two-stage approaches, are a little bit complex, and they tend to have too many components and hyper-parameters. The PyTorch team already did an incredible job making this API modular and kinda easy-to-use. In the specific case of roi_heads you can take a look here to learn more about it. In general, all components can be found in torchvision/models/detection.
Anyway, you can always open an issue, requesting them to expand the documentation. Or we can even do it ourselves and make a pull request :)
| https://stackoverflow.com/questions/63207013/ |
Error Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select | I have the following code taken directly from here with some pretty little modifications:
import pandas as pd
import torch
import json
from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config
from torch import cuda
df = pd.read_pickle('df_final.pkl')
model = T5ForConditionalGeneration.from_pretrained('t5-base')
tokenizer = T5Tokenizer.from_pretrained('t5-base')
device = 'cuda' if cuda.is_available() else 'cpu'
text = ''.join(df[(df['col1'] == 'type') & (df['col2'] == 2)].col3.to_list())
preprocess_text = text.strip().replace("\n","")
t5_prepared_Text = "summarize: "+preprocess_text
#print ("original text preprocessed: \n", preprocess_text)
tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors="pt", max_length = 500000).to(device)
# summmarize
summary_ids = model.generate(tokenized_text,
num_beams=4,
no_repeat_ngram_size=2,
min_length=30,
max_length=100,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print ("\n\nSummarized text: \n",output)
When executing the model_generate() part i get an error like this:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-12-e8e9819a85dc> in <module>
12 min_length=30,
13 max_length=100,
---> 14 early_stopping=True).to(device)
15
16 output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
~\Anaconda3\lib\site-packages\torch\autograd\grad_mode.py in decorate_no_grad(*args, **kwargs)
47 def decorate_no_grad(*args, **kwargs):
48 with self:
---> 49 return func(*args, **kwargs)
50 return decorate_no_grad
51
~\Anaconda3\lib\site-packages\transformers\generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs)
383 encoder = self.get_encoder()
384
--> 385 encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
386
387 # Expand input ids if num_beams > 1 or num_return_sequences > 1
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\transformers\modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states, return_dict)
701 if inputs_embeds is None:
702 assert self.embed_tokens is not None, "You have to intialize the model with valid token embeddings"
--> 703 inputs_embeds = self.embed_tokens(input_ids)
704
705 batch_size, seq_length = input_shape
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~\Anaconda3\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
I've searched this error and found some other threads like this one and this one, but they didn't help me much since their case seems to be completely different. In my case there are no custom instances or classes created, so I don't know how to fix this or where the error comes from.
Could you please tell me where the error is coming from and how I can fix it?
Thank you very much in advance.
| Try explicitly moving your model to the GPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = T5ForConditionalGeneration.from_pretrained('t5-base').to(device)
| https://stackoverflow.com/questions/63211463/ |
PyTorch: How to use `torch.einsum()` to find the trace between the dot product of a nested tensor and another tensor | Suppose I have a nested tensor A:
import numpy as np
import torch
X = np.array([[1, 3, 2], [2, 3, 5], [1, 2, 3]])
X = torch.DoubleTensor(X)
rows = X.shape[0]
cols = X.shape[1]
A = torch.matmul(X.view(rows, cols, 1),
X.view(rows, 1, cols))
A
Output:
tensor([[[ 1., 3., 2.],
[ 3., 9., 6.],
[ 2., 6., 4.]],
[[ 4., 6., 10.],
[ 6., 9., 15.],
[10., 15., 25.]],
[[ 1., 2., 3.],
[ 2., 4., 6.],
[ 3., 6., 9.]]], dtype=torch.float64)
And I have another tensor B:
B = torch.DoubleTensor([[11., 21, 31], [31, 51, 31], [41, 51, 21]])
B
Output:
tensor([[11., 21., 31.],
[31., 51., 31.],
[41., 51., 21.]])
How do I use torch.einsum() to find the trace value between the dot product of each of the nested tensor in A and tensor B. For eg. the trace value of the dot product between the 1st nested tensor in A:
[[ 1., 3., 2.],
[ 3., 9., 6.],
[ 2., 6., 4.]]
and B:
tensor([[11., 21., 31.],
[31., 51., 31.],
[41., 51., 21.]])
and similarly with the other 2 nested tensors in A.
My results tensor will be a tensor with just 3 trace values. Is there a way to do this without looping over each of the nested tensor in A (with say a for loop)?
Ps:
I know the code to find the trace value between the dot product of 2 tensors is:
torch.einsum('ij,ji->', X, Y).item()
If you know how to do this with numpy.einsum(), please let me know too. I might just need to tweak numpy.einsum() a little to make it work for PyTorch tensors.
| It's quite simple, you need to add the 'batch dimension' of A:
torch.einsum('bij,ji->b', A, B)
The output is
tensor([1346., 3290., 1216.])
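The same subscript string works with numpy.einsum as well, e.g. (a sketch, converting the tensors first):
import numpy as np
np.einsum('bij,ji->b', A.numpy(), B.numpy())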
| https://stackoverflow.com/questions/63213658/ |
PyTorch Linear MNIST model training error | I am creating a binary classifier based on the MNIST dataset using PyTorch. I want my classifier to classify only between 0s and 1s; however, when I train it, the error doesn't decrease and the loss becomes negative.
Here's the error and loss at the first few iterations:
I was obviously expecting better results.
Here is the code I am using:
# Loading the MNISR data reduced to the 0/1 examples
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
mnist_train = datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor())
mnist_test = datasets.MNIST("./data", train=False, download=True, transform=transforms.ToTensor())
train_idx = mnist_train.train_labels <= 1
try:
mnist_train.train_data = mnist_train.train_data[train_idx]
except AttributeError:
mnist_train._train_data = mnist_train.train_data[train_idx]
try:
mnist_train.train_labels = mnist_train.train_labels[train_idx]
except AttributeError:
mnist_train._train_labels = mnist_train.train_labels[train_idx]
test_idx = mnist_test.test_labels <= 1
try:
mnist_test.test_data = mnist_test.test_data[test_idx]
except AttributeError:
mnist_test._test_data = mnist_test.test_data[test_idx]
try:
mnist_test.test_labels = mnist_test.test_labels[test_idx]
except AttributeError:
mnist_test._test_labels = mnist_test.test_labels[test_idx]
train_loader = DataLoader(mnist_train, batch_size = 100, shuffle=True)
test_loader = DataLoader(mnist_test, batch_size = 100, shuffle=False)
# Creating a simple linear classifier
import torch
import torch.nn as nn
import torch.optim as optim
# do a single pass over the data
def epoch(loader, model, opt=None):
total_loss, total_err = 0.,0.
for X,y in loader:
yp = model(X.view(X.shape[0], -1))[:,0]
loss = nn.BCEWithLogitsLoss()(yp, y.float())
if opt:
opt.zero_grad()
loss.backward()
opt.step()
total_err += ((yp > 0) * (y==0) + (yp < 0) * (y==1)).sum().item()
total_loss += loss.item() * X.shape[0]
return total_err / len(loader.dataset), total_loss / len(loader.dataset)
model = nn.Linear(784, 1)
opt = optim.SGD(model.parameters(), lr=1)
print("Train Err", "Train Loss", "Test Err", "Test Loss", sep="\t")
for i in range(10):
train_err, train_loss = epoch(train_loader, model, opt)
test_err, test_loss = epoch(test_loader, model)
print(*("{:.6f}".format(i) for i in (train_err, train_loss, test_err, test_loss)), sep="\t")
I don't know why my error does not decrease nor why my loss keeps getting more negative. Does anyone spot the error?
| I found the error. My initial code to select only 1s and 0s from the MNIST dataset didn't work. So obviously, applying BCEWithLogitsLoss to a non-binary dataset was making the model fail.
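For reference, a sketch of filtering that works on recent torchvision versions, where the underlying attributes are data and targets rather than train_data/train_labels:
mask = mnist_train.targets <= 1
mnist_train.data = mnist_train.data[mask]
mnist_train.targets = mnist_train.targets[mask]

mask = mnist_test.targets <= 1
mnist_test.data = mnist_test.data[mask]
mnist_test.targets = mnist_test.targets[mask]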
| https://stackoverflow.com/questions/63215140/ |
Perform a different neighbourhood operation for specified pixels | I have an HxW "feature map", F. Let us assume that it is a HxWx1 map. Through some other operation, I have a set of pixels that are of interest to me, (say N pixels). Each of these pixels is associated with a different value, thus my set is of the form Nx3 where each pixel is of the form x, y and val. Note that this val is different from the feature map value at the location.
Here is my question. Is it possible to vectorize a neighbourhood operation for each of these points? For each pixel n from N, I wish to multiply the corresponding val to its 3x3 neighbourhood in the feature map F. For the 3x3 neighbourhood, this gives a new 3x3 set of elements new val. I want to replace the x y with the pixel with the maximum of new val (multiplied feature map) in the 3x3 window.
This sounds similar to a convolution (slight abuse of terminology here) followed by a max pool operation, but not exactly since each pixel location has a different val to be multiplied.
Sample input and output, and walkthrough for required solution
Let us assume H=10 and W=10
Here is a sample F
0.635955 0.922379 0.993406 0.007837 0.818661 0.983730 0.199866 0.757519 0.073152 0.015831
0.397718 0.097353 0.231351 0.177886 0.343099 0.419940 0.017342 0.087294 0.402266 0.366337
0.978686 0.476594 0.067836 0.148977 0.058994 0.810586 0.542894 0.797419 0.386559 0.225982
0.479860 0.033354 0.353366 0.431562 0.336208 0.674272 0.398151 0.713732 0.598623 0.829230
0.940838 0.869564 0.287100 0.669844 0.631836 0.748982 0.762292 0.597999 0.540236 0.758802
0.925995 0.141296 0.466772 0.672663 0.929746 0.544029 0.991860 0.197474 0.762866 0.798973
0.543519 0.128332 0.624323 0.876569 0.050709 0.223705 0.708381 0.380842 0.818092 0.163447
0.283125 0.329618 0.283481 0.672950 0.136922 0.897785 0.385479 0.764824 0.132671 0.091148
0.661984 0.369459 0.501181 0.352681 0.554113 0.133283 0.593048 0.108534 0.397813 0.836065
0.654929 0.928576 0.539204 0.931213 0.344114 0.591214 0.126809 0.456681 0.036531 0.725228
My structure of pixels, let us say N=3
The three values in the order of row,col,val: (for simplicity I assume x is rows, and y is cols, though it isn't necessarily the case). This is completely independent of the feature map in the previous step.
3,2,0.38
4,4,0.602
7,5,0.9647
The neighborhood around (3,2) is:
[[0.4765941 , 0.06783561, 0.14897662],
[0.03335438, 0.35336647, 0.4315618 ],
[0.86956374, 0.28709952, 0.66984412]]
Thus val * neighborhood yields. (here val is 0.38)
[[0.18110576, 0.02577753, 0.05661112],
[0.01267466, 0.13427926, 0.16399349],
[0.33043422, 0.10909782, 0.25454077]]
The location of max value here is (2,0) i.e. (1,-1) with respect to center pixel. Thus my updated (x,y) should be (3,2) + (1,-1) = (4,1).
Similarly for the other two, the updated pixels are : (5,4) and (7,5)
How can I parallelize this entire thing?
(Hopefully to be loaded onto a GPU using Pytorch, but not necessarily, I have not come to that stage yet.)
Note: I had asked this question a few days ago, but it was poorly framed without proper info. Hopefully this solves the issue.
Edit: For this specific instance, F can be produced as a random array:
F = np.random.rand(10,10)
| If I understand correctly, you want this:
import numpy as np
from skimage.util.shape import view_as_windows

# pixels is the Nx3 array of (row, col, val); F is the HxW feature map
idx = pixels[:, 0:2].astype(int)
print((np.unravel_index((view_as_windows(F,(3,3))[tuple(idx.T-1)]*pixels[:,-1][:,None,None]).reshape(-1,9).argmax(1),(3,3))+idx.T).T-1)
#if you need to replace the values of F with new values
F[tuple(idx.T)] = (view_as_windows(F,(3,3))[tuple(idx.T-1)]*pixels[:,-1][:,None,None]).reshape(-1,9).max(1)
I assumed your window shape is (3,3). Of course, you can change it. And if you need to deal with edge neighborhoods, pad your F with enough 0s (depending on your window size) using np.pad before using the view_as_windows.
output:
[[4 1]
[5 4]
[7 5]]
| https://stackoverflow.com/questions/63215665/ |
RuntimeError("grad can be implicitly created only for scalar outputs") | I have following code for train function for training an A3C.
I am stuck with following error.
RuntimeError("grad can be implicitly created only for scalar outputs")
at line (policy_loss + 0.5 * value_loss).backward()
Here is my code; can someone please help check what is wrong with it?
def train(rank, params, shared_model, optimizer,ticker):
torch.manual_seed(params.seed + rank) # shifting the seed with rank to asynchronize each training agent
print(ticker)
try:
ohlcv = pd.read_csv(ticker + '.csv')
data = ohlcv.copy()
data['rsi'] = ab.RSI(data['Close'],14)
data['adx'] = ab.ADX(data,20)
data = ab.BollBnd(data,20)
data['BBr'] = data['Close']/data['BB_dn']
data['macd_signal'] = ab.MACD(data)
data['macd_r'] = data['macd_signal']/data['Close']
data['ema20'] = ab.EMA(np.asarray(data['Close']), 20)
data['ema20_r'] = data['ema20']/data['Close']
data['Close'] = data['Close']/data['Close'].max()
data = data.iloc[:,[4,7,8,13,15,17]]
data = data.dropna()
data = torch.DoubleTensor(np.asarray(data))
env = ENV(state_dim, action_dim, data)
model = ActorCritic(env.observation_space, env.action_space)
state = env.reset()
done = True
episode_length = 0
while True:
episode_length += 1
model.load_state_dict(shared_model.state_dict())
if done:
cx = Variable(torch.zeros(1, state_dim)) # the cell states of the LSTM are reinitialized to zero
hx = Variable(torch.zeros(1, state_dim)) # the hidden states of the LSTM are reinitialized to zero
else:
cx = Variable(cx.data)
hx = Variable(hx.data)
values = []
log_probs = []
rewards = []
entropies = []
for step in range(params.num_steps):
value, action_values, (hx, cx) = model((Variable(state.unsqueeze(0)), (hx, cx)))
prob = F.softmax(action_values,-1)
log_prob = F.log_softmax(action_values,-1)
entropy = -(log_prob * prob).sum(1)
entropies.append(entropy)
action = prob.multinomial(num_samples = action_dim).data
log_prob = log_prob.gather(1, Variable(action))
values.append(value)
log_probs.append(log_prob)
state, reward, done = env.step(action)
done = (done or episode_length >= params.max_episode_length)
reward = max(min(reward, 1), -1) # clamping the reward between -1 and +1
if done:
episode_length = 0
state = env.reset()
rewards.append(reward)
if done:
break
R = torch.zeros(1, 1)
if not done: # if we are not done:
value, _, _ = model((Variable(state.unsqueeze(0)), (hx, cx)))
R = value.data
values.append(Variable(R))
policy_loss = torch.zeros(1, 1)
value_loss = torch.zeros(1, 1)
R = Variable(R)
gae = torch.zeros(1, 1)
for i in reversed(range(len(rewards))):
R = params.gamma * R + rewards[i]
advantage = R - values[i]
print("advantage:",advantage)
value_loss = value_loss + 0.5 * advantage.pow(2) # computing the value loss
TD = rewards[i] + params.gamma * values[i + 1].data - values[i].data # computing the temporal difference
gae = gae * params.gamma * params.tau + TD # gae = sum_i (gamma*tau)^i * TD(i) with gae_i = gae_(i+1)*gamma*tau + (r_i + gamma*V(state_i+1) - V(state_i))
print("gae:",gae)
policy_loss = policy_loss - log_probs[i] * Variable(gae) - 0.01 * entropies[i] # computing the policy loss
print("policy_loss:",policy_loss)
optimizer.zero_grad() # initializing the optimizer
los = policy_loss + 0.5 * value_loss
print("los",los.shape)
(policy_loss + 0.5 * value_loss).backward()
torch.nn.utils.clip_grad_norm(model.parameters(), 40) # clamping the values
ensure_shared_grads(model, shared_model)
optimizer.step() # running the optimization step
except Exception as e:
print(e)
traceback.print_exc()
var = traceback.format_exc()
Below are the outputs:
advantage: tensor([[-1.0750]], grad_fn=<ThSubBackward>)
gae: tensor([[-1.0750]])
policy_loss: tensor([[-25.8590, -26.1414, -25.9023, -25.2628]], grad_fn=<ThSubBackward>)
los torch.Size([1, 4])
RuntimeError: grad can be implicitly created only for scalar outputs
PS E:\ML\Breakout_a3c\Code_With_Comments>
| The pytorch error you get means "you can only call backward on scalars, i.e. 0-dimensional tensors". Here, according to your prints, policy_loss is not scalar; it's a 1x4 matrix. As a consequence, so is policy_loss + 0.5 * value_loss. Thus your call to backward yields an error.
You probably forgot to reduce your losses to a scalar (with functions like norm or MSELoss ...). See example here
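For instance, a minimal fix here would be to reduce before calling backward (whether .mean() or .sum() is appropriate depends on how your loss is formulated):
loss = (policy_loss + 0.5 * value_loss).mean()
loss.backward()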
The reason it does not work is the way the gradient propagation works internally (it's basically a Jacobian multiplication engine). You can call backward on a non-scalar tensor, but then you have to provide a gradient yourself, like:
# loss is 1x4
loss = policy_loss + 0.5 * value_loss
# explicit gradient backprop with non-scalar tensor
loss.backward(torch.ones(1,4))
You should really not do that without a good understanding of how Pytorch's Autograd works and what it means.
PS: next time, please provide a minimal working example :)
| https://stackoverflow.com/questions/63218710/ |
Masked Matrix multiplication | I have an A = N x M matrix and another array B = N x P x M, where P is typically 9 or 15. Each vector a from A has to be multiplied with each pi from B in the same row, to get an output of dimensions N x P.
I am using numpy and Python and would be performing this operation on a GPU.
For a tiny example, let N=4, M=5, P=3.
Let A be:
array([[0.18503431, 0.2628188 , 0.26343728, 0.8356702 , 0.47581551],
[0.70827725, 0.04006919, 0.58975722, 0.90874113, 0.43946412],
[0.40669507, 0.63328008, 0.95832881, 0.59041436, 0.63578578],
[0.12129919, 0.74470057, 0.62271405, 0.97760796, 0.6499647 ]])
Let B be:
array([[[4.29031165e-01, 6.17324572e-01, 6.54726975e-02, 1.72218768e-02, 3.53970827e-01],
[3.38821841e-01, 3.80128792e-01, 7.70995505e-01, 7.38437494e-03, 5.87395036e-02],
[4.75661932e-01, 3.75617802e-01, 1.28564731e-01, 3.66302247e-01, 6.70953890e-01]],
[[8.96228996e-02, 1.67135584e-02, 4.56921778e-01, 8.25731354e-01, 7.66242539e-01],
[5.16651815e-01, 4.27179773e-01, 9.34673912e-01, 2.04687170e-01, 7.68417953e-01],
[5.90980849e-01, 5.03013376e-01, 8.41765736e-02, 8.08221224e-01, 7.76765422e-01]],
[[3.25802668e-01, 8.58148960e-01, 9.47505735e-01, 1.01405305e-01, 8.34114717e-01],
[1.65308159e-01, 9.74572631e-01, 2.69886016e-01, 7.44036253e-02, 4.73350521e-01],
[8.59030672e-01, 3.96972621e-01, 7.34687493e-01, 2.84647032e-02, 7.19723378e-01]],
[[1.35751242e-01, 1.74882898e-01, 5.48875709e-01, 7.33443675e-01, 4.05282650e-01],
[8.41298770e-01, 6.24323279e-01, 5.83482185e-01, 4.28514313e-01, 1.96797205e-01],
[7.93345700e-04, 3.01441721e-01, 7.59451146e-01, 9.09102382e-01, 7.11518948e-01]]])
This is how I want my output to be:
[[np.dot(a[0], b[0][0]), np.dot(a[0], b[0][1]), np.dot(a[0], b[0][2])],
[np.dot(a[1], b[1][0]), np.dot(a[1], b[1][1]), np.dot(a[1], b[1][2])],
[np.dot(a[2], b[2][0]), np.dot(a[2], b[2][1]), np.dot(a[2], b[2][2])],
[np.dot(a[3], b[3][0]), np.dot(a[3], b[3][1]), np.dot(a[3], b[3][2])]]
Doing this manually gives:
[[0.44169455751462816, 0.3998276862221848, 0.845960080871557],
[1.4207326179275017, 1.4579799277670968, 1.564201768913105],
[2.174162453912622, 1.287925491552765, 1.779226448174152],
[1.4689343122491012, 1.4771555510001255, 2.0487088726424365]]
Since I want to do this on a GPU, this obviously calls for converting my problem into matrix multiplication (this is true if I dont use a GPU as well for that matter).
But I do not exactly know how to convert it to that.
One idea that I had was to reshape B to be Q x M where Q=NxP. And then perform some sort of sparse multiplication, where for every row i of a boolean sparse matrix, I turn on (0:P) + P*ith elements. (Drawing it out makes sense), however I certainly feel that there is a much more elegant way to do this as creating sparse matrices and performing the operations might take time, and that the sparsity of my matrix is not random at all.
How can I solve this elegantly.
Note that I cannot do some operations such as broadcasting/repeating my A matrix P times and performing the huge matrix multiplication and picking out relevant values, since typically N and M will be quite huge (2000ish and 256 respectively), but P will be quite small; hence doing a global matrix multiplication for all vectors means I will be doing >95% unnecessary computations!
| You can use einsum here to compute this efficiently.
np.einsum('ij,ikj->ik', A, B) # or torch.einsum
>>> np.allclose(np.einsum('ij,ikj->ik', A, B), manual)
True
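Equivalently, a batched matrix multiplication gives the same result (a sketch; matmul broadcasting works for both NumPy arrays and torch tensors):
out = (B @ A[..., None]).squeeze(-1)  # (N,P,M) @ (N,M,1) -> (N,P)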
| https://stackoverflow.com/questions/63218951/ |
What does "Variable" means in python? Is it a standard function? | I have some python code that I have to read and understand.
In one line I find
img = Variable(torch.from_numpy(img.transpose(2, 0, 1)[np.newaxis,:,:,:]).cuda().float(), volatile=True)
What is this Variable I am seeing? When I use the IDE to find the definition it says 'No definition found for Variable', which makes me suspect it is a standard function in Python. I obviously cannot google "Variable" for Python because I will get countless definitions of what a variable is in Python.
Has anyone seen a line like this before? Where Variable is being used as a function?
| Variable is not an inbuilt class. It is under module torch.autograd
A Variable wraps a Tensor. It supports nearly all the APIβs defined by a Tensor. Variable also provides a backward method to perform backpropagation. For example, to backpropagate a loss function to train model parameter x, we use a variable loss to store the value computed by a loss function. Then, we call loss.backward which computes the gradients βloss/βx for all trainable parameters. PyTorch will store the gradient results back in the corresponding variable x.
Source
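For context, Variable has been deprecated since PyTorch 0.4; plain tensors support autograd directly, so the two forms below are equivalent:
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)  # old style
x = torch.ones(2, 2, requires_grad=True)            # modern equivalent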
| https://stackoverflow.com/questions/63222174/ |
PyTorch softmax return | I am new to PyTorch and have been following this tutorial for reinforcement learning. My environment is a custom Pacman game that does not use the gym environments. The game loop is taken care of. An object in this Pacman game allows access to the state data. I use this data to send input to my Deep Q Network.
First I change the input from a python list into a tensor so that my Deep Q Network can take it as an input. Here is how I convert my python list into a tensor:
#self.getFeatures() returns a dictionary, so I grab the values and convert it to a list
input = torch.FloatTensor(list(self.getFeatures(gameState, bestAction).values())) \
.unsqueeze(0) \
.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
Then I pass in this input to my policy Deep Q Network:
test_net = self.policy_net(input).max(1)[1].view(1, 1)
Below is my Deep Q Network:
class DQN(nn.Module):
def __init__(self, feature_size, action_size):
super(DQN, self).__init__()
self.input = nn.Linear(feature_size, 12)
self.hidden1 = nn.Linear(12, 5)
self.hidden2 = nn.Linear(5, action_size)
# Called with either one element to determine next action, or a batch
# during optimization. Returns tensor([[left0exp,right0exp]...]).
def forward(self, x):
x = F.relu(self.input(x))
x = F.relu(self.hidden1(x))
x = F.softmax(self.hidden2(x), dim=1)
return x
With input tensor([[0., 1., 1., 0., 1.]]) this test_net returns this tensor([[0]]). I don't know what to get from this. I was under the impression that softmax returned a probability of each action. I have 5 actions available in my action space. I don't know what to do with my output from test_net. I want to get an action selection from this test_net, but am getting an integer.
My questions are, should the input be in a different shape? Am I converting my python input list into a tensor correctly? I have 5 features, which are tensor([[0., 1., 1., 0., 1.]]). Should the output tensor([[0]]) be a float and not a 0?
| Softmax indeed assigns a probability for each action, but you are calling .max(1)[1] after you get the results from DQN, which computes max and argmax along axis 1 (.max(1)) and selects argmax ([1]). Afterwards, you also viewed it into a (1,1) shape, that's why in the end you have a 2d tensor with only one cell, containing the index that has the largest probability given by the network.
Try calling the DQN instance directly, it will return the full softmax output.
| https://stackoverflow.com/questions/63222615/ |
PyTorch Gradient not Working after adding to("cuda:0") | I'm going to give a context-free set of code. The code works before adding "to(device)".
def get_input_layer(word_idx) :
x = torch.zeros(vocabulary_size).float().to(device)
x[word_idx] = 1.0
return x
embedding_dims = 5
device = torch.device("cuda:0")
W1 = Variable(torch.randn(embedding_dims, vocabulary_size).float(), requires_grad=True).to(device)
W2 = Variable(torch.randn(vocabulary_size, embedding_dims).float(), requires_grad=True).to(device)
num_epochs = 100
learning_rate = 0.001
x = Variable(get_input_layer(data)).float().to(device)
y_true = Variable(torch.from_numpy(np.array([target])).long()).to(device)
z1 = torch.matmul(W1, x).to(device)
z2 = torch.matmul(W2, z1).to(device)
log_softmax = F.log_softmax(z2, dim=0).to(device)
loss = F.nll_loss(log_softmax.view(1,-1), y_true).to(device)
loss_val += loss.data
loss.backward().to(device)
## Optimize values. This is done by hand rather than using the optimizer function
W1.data -= learning_rate * W1.grad.data
W2.data -= learning_rate * W2.grad.data
I get
Traceback (most recent call last):
File "<input>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'data'
Which triggers specifically on the line
W1.data -= learning_rate * W1.grad.data
Checking, this is confirmed because W1.grad is None for some reason.
And this loops after clearing the gradients. This works just fine if I remove all of the .to(device). What is it that I'm doing wrong in trying to run this on my GPU?
Thank you for your time.
| This happens because .to returns a new, non-leaf tensor. You should set requires_grad after transferring to the desired device. Also, the Variable interface has been deprecated for a long time, since before pytorch 1.0. It doesn't do anything (except in this case act as an overly complicated way to set requires_grad).
Consider
W1 = Variable(torch.randn(embedding_dims, vocabulary_size).float(), requires_grad=True).to(device)
The problem here is that there are two different tensors. Breaking it down we could rewrite what you're doing as
W1a = Variable(torch.randn(embedding_dims, vocabulary_size).float(), requires_grad=True)
W1 = W1a.to(device)
Observe that W1a requires a gradient but W1 is derived from W1a so it isn't considered a leaf tensor, therefore the .grad attribute of W1a will be updated but W1 won't be. In your code you no longer have a direct reference to W1a so you won't have access the gradients.
Instead you can do
W1 = torch.randn(embedding_dims, vocabulary_size).float().to(device)
W1.requires_grad_(True)
which will properly set W1 to be a leaf tensor after being transferred to a different device.
Note that for your specific case we could also just make use of the device, dtype, and requires_grad arguments for torch.randn and simply do
W1 = torch.randn(embedding_dims, vocabulary_size, dtype=torch.float, device=device, requires_grad=True)
Most pytorch functions which initialize new tensors support these three arguments which can help avoid the issues you've encountered.
To respond to OP's additional question in the comments:
Is there a good spot I would've come across this in the documentation?
AFAIK the documentation doesn't specifically address this issue. It's kind of a combination between how variables in python work and how autograd mechanics work in pytorch.
Assuming you have a good understand of variables in python then you can reach the conclusions of this answer yourself by first reading Tensor.is_leaf, in particular
they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
And furthermore the documentation for Tensor.to which states
If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.
Since Tensor.to returns a copy, and a copy is an operation, then it should be clear from the documentation that the W1 tensor in the original code is not a leaf tensor.
| https://stackoverflow.com/questions/63222657/ |
Converting a list of lists and scalars to a list of PyTorch tensors throws warning | I was converting a list of lists to a PyTorch tensor and got a warning message. The conversion itself isn't difficult. For example:
>>> import torch
>>> thing = [[1, 2, 3, 4, 5], [2, 3], 2, 3]
>>> thing_tensor = list(map(torch.tensor, thing))
I get the warning:
home/user1/files/module.py:1: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
and am wondering what the reason may be. Is there any other way to convert the data into a tensor that I'm not aware of? Thanks.
| I tried to reproduce your warning but failed to do so. However, I could get the same warning if I replaced the lists in thing with tensors.
I'll go over why it is better to use x.clone().detach() rather than torch.tensor(x) to make a copy:
On my version of pytorch, using torch.tensor will work to create a copy that is no longer related to the computational graph and that does not occupy the same place in memory. However, this behaviour could change in future versions, which is why you should use a command that will remain valid. I'll illustrate the problems that come with not being detached or with occupying the same spot in memory.
Not being detached:
x = torch.tensor([0.],requires_grad=True)
y = x.clone()
y[0] = 1
z = 2 * y
z.backward()
print(x, x.grad)
tensor([0.], requires_grad=True) tensor([0.])
As you can see the gradient of x is being updated while the computation is done on y, but changing the value of y won't change the value of x because they don't occupy the same memory space.
Occupying the same memory space:
x = torch.tensor([0.],requires_grad=True)
y = x.detach().requires_grad_(True)
z = 2 * y
z.backward()
y[0] = 1
print(x, x.grad)
tensor([1.], requires_grad=True) None
In this case, the gradients are not updated but changing the value of y changes the value of x because they occupy the same memory space.
Best practice:
As suggested by the warning, the best practice is to both detach and clone the tensor :
x = torch.tensor([0.],requires_grad=True)
y = x.clone().detach().requires_grad_(True)
z = 2 * y
z.backward()
y[0] = 1
print(x, x.grad)
tensor([0.], requires_grad=True) None
This ensures that future modifications and computations from y won't affect x
| https://stackoverflow.com/questions/63223223/ |
Failed to generate adversarial examples using trained NSGA-Net PyTorch models | I have used NSGA-Net neural architecture search to generate and train several architectures. I am trying to generate PGD adversarial examples using my trained PyTorch models. I tried using both Adversarial Robustness Toolbox 1.3 (ART) and torchattacks 2.4 but I get the same error.
These few lines of code describe the main functionality of my code and what I am trying to achieve here:
# net is my trained NSGA-Net PyTorch model
# Defining PGA attack
pgd_attack = PGD(net, eps=4 / 255, alpha=2 / 255, steps=3)
# Creating adversarial examples using validation data and the defined PGD attack
for images, labels in valid_data:
images = pgd_attack(images, labels).cuda()
outputs = net(images)
So here is what the error generally looks like:
Traceback (most recent call last):
File "torch-attacks.py", line 296, in <module>
main()
File "torch-attacks.py", line 254, in main
images = pgd_attack(images, labels).cuda()
File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attack.py", line 114, in __call__
images = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torchattacks\attacks\pgd.py", line 57, in forward
outputs = self.model(adv_images)
File "\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\codes\NSGA\nsga-net\models\macro_models.py", line 79, in forward
x = self.gap(self.model(x))
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
input = module(input)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\codes\NSGA\nsga-net\models\macro_decoder.py", line 978, in forward
x = self.first_conv(x)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "\Anaconda3\envs\GPU\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #2 'weight' in call to _thnn_conv2d_forward
I have used the same code with a simple PyTorch model and it worked, but I am using NSGA-Net, so I haven't designed the model myself. I also tried using .float() on both the model and the inputs and still got the same error.
Keep in mind that I only have access to the following files:
torch-attacks.py
macro_models.py
macro_decoder.py
| You should convert images to the desired type (images.float() in your case). Labels must not be converted to any floating type. They are allowed to be either int or long tensors.
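A minimal sketch of how that fix could look in your loop (reusing valid_data, pgd_attack and net from your code):
for images, labels in valid_data:
    images = images.float().cuda()  # cast the inputs to float32 to match the conv weights
    labels = labels.long().cuda()   # labels stay an integer (long) tensor
    adv_images = pgd_attack(images, labels)
    outputs = net(adv_images)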
| https://stackoverflow.com/questions/63223579/ |
Issue with running multiple models in PyTorch Lightning | I am developing a system which needs to train dozens of individual models (>50) using Lightning, each with their own TensorBoard plots and logs. My current implementation has one Trainer object per model and it seems like I'm running into this error when I go over ~90 Trainer objects. Interestingly, the error only appears when I run the .test() method, not during .fit():
Traceback (most recent call last):
File "lightning/main_2.py", line 193, in <module>
main()
File "lightning/main_2.py", line 174, in main
new_trainer.test(model=new_model, test_dataloaders=te_loader)
File "\Anaconda3\envs\pyenv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1279, in test
results = self.__test_given_model(model, test_dataloaders)
File "\Anaconda3\envs\pyenv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1343, in __test_given_model
self.set_random_port(force=True)
File "\Anaconda3\envs\pyenv\lib\site-packages\pytorch_lightning\trainer\distrib_data_parallel.py", line 398, in set_random_port
default_port = RANDOM_PORTS[-1]
IndexError: index -1 is out of bounds for axis 0 with size 0
As I just started with Lightning, I am not sure if having one Trainer/model is the best approach. However, I require individual plots from each model, and it seems that if I use a single trainer for multiple models the results get overridden.
For reference, I'm defining different lists of trainers as such:
for i in range(args["num_users"]):
trainer_list_0.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
trainer_list_1.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
trainer_list_2.append(Trainer(max_epochs=args["epochs"], gpus=1, default_root_dir=args["save_path"],
fast_dev_run=args["fast_dev_run"], weights_summary=None))
As for training:
for i in range(args["num_users"]):
trainer_list_0[i].fit(model_list_0[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
trainer_list_1[i].fit(model_list_1[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
trainer_list_2[i].fit(model_list_2[i], train_dataloader=dataloader_list[i],
val_dataloaders=val_loader)
And testing:
for i in range(args["num_users"]):
trainer_list_0[i].test(test_dataloaders=te_loader)
trainer_list_1[i].test(test_dataloaders=te_loader)
trainer_list_2[i].test(test_dataloaders=te_loader)
Thanks!
| As far as I know, only one model per Trainer is expected. You can explicitly pass a TensorBoardLogger object to Trainer with a pre-defined experiment name and version so as to keep the plots separate (see docs).
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
logger = TensorBoardLogger("tb_logs", name="my_model", version="version_XX")
trainer = Trainer(logger=logger)
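In your setup, that could mean one logger per model inside the loop. A hypothetical sketch reusing args and the lists from your code:
for i in range(args["num_users"]):
    logger = TensorBoardLogger("tb_logs", name="model_list_0", version=f"user_{i}")
    trainer_list_0.append(Trainer(max_epochs=args["epochs"], gpus=1,
                                  default_root_dir=args["save_path"], logger=logger))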
The problem you have faced is related to ddp module somehow. Its source code contains the following lines [1], [2]:
RANDOM_PORTS = RNG1.randint(10000, 19999, 1000)
def set_random_port(self, force=False):
...
default_port = RANDOM_PORTS[-1]
RANDOM_PORTS = RANDOM_PORTS[:-1]
if not force:
default_port = os.environ.get('MASTER_PORT', default_port)
I'm not sure why you're facing the issue with 90+ Trainers, but you could try to drop this line:
RANDOM_PORTS = RANDOM_PORTS[:-1]
| https://stackoverflow.com/questions/63227656/ |
How to use the past with HuggingFace Transformers GPT-2? | I have:
context = torch.tensor(context, dtype=torch.long, device=self.device)
context = context.unsqueeze(0)
generated = context
with torch.no_grad():
past_outputs = None
for i in trange(num_words):
print(i, num_words)
inputs = {"input_ids": generated}
outputs, past_outputs = self.model(
**inputs,
past=past_outputs
)
next_token_logits = outputs[
0, -1, :] / (temperature if temperature > 0 else 1.0)
# reptition penalty from CTRL
# (https://arxiv.org/abs/1909.05858)
for _ in set(generated.view(-1).tolist()):
next_token_logits[_] /= repetition_penalty
filtered_logits = top_k_top_p_filtering(
next_token_logits, top_k=top_k, top_p=top_p)
if temperature == 0: # greedy sampling:
next_token = torch.argmax(filtered_logits).unsqueeze(0)
else:
next_token = torch.multinomial(
F.softmax(filtered_logits, dim=-1), num_samples=1)
generated = torch.cat(
(generated, next_token.unsqueeze(0)), dim=1)
This works for the first iteration, but then I get an error for the next iteration:
File "/Users/shamoon/Sites/wordblot/packages/ml-server/generator.py", line 143, in sample_sequence
past=past_outputs
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 601, in forward
output_hidden_states=output_hidden_states,
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 470, in forward
position_embeds = self.wpe(position_ids)
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/Users/shamoon/.local/share/virtualenvs/ml-server-EdimT5-E/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
Is there something I'm doing wrong?
| I did:
outputs, past_outputs = self.models[model_name](
context,
past=past_outputs
)
context = next_token.unsqueeze(0)
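In other words, once past is supplied, only the newly generated token is fed in on subsequent steps. A minimal sketch of the resulting loop (assuming the transformers API of that era, where the model returns (logits, past)):
generated = context
past_outputs = None
with torch.no_grad():
    for _ in range(num_words):
        outputs, past_outputs = self.model(context, past=past_outputs)
        next_token = torch.argmax(outputs[0, -1, :]).unsqueeze(0)
        generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
        context = next_token.unsqueeze(0)  # only the new token goes in next time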
| https://stackoverflow.com/questions/63232732/ |
Manually adjusting parameters of a torch.nn.Module | Suppose I had a simple neural network defined by:
import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(2,2)
self.fc2 = nn.Linear(2,2)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
net = Net()
Doing the following:
for param in net.parameters():
print(param.data)
results in something like:
tensor([[-0.0776, 0.2409],
[ 0.3478, -0.6820]])
tensor([-0.6311, 0.2323])
tensor([[-0.5466, 0.0341],
[ 0.5822, 0.7005]])
tensor([-0.5624, 0.3278])
Let's say that I have a tensor([[0,0],[0,1]]) and I wanted to replace the first
param.data with my custom tensor.
Is this possible and if so, how can I do this?
| Should be possible, even though I don't know a reason why you would want to do this haha. Anyways, this should be it:
replace_with = torch.tensor([[0., 0.], [0., 1.]])  # floats, to match the parameter dtype
for parameter in net.parameters():
parameter.data = replace_with
break
Now the first element of the parameters should be your custom tensor. I hope that solves your issue :)
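If you would rather target a specific layer and keep autograd out of the assignment, a sketch of an alternative (using the Net from the question):
with torch.no_grad():
    net.fc1.weight.copy_(torch.tensor([[0., 0.], [0., 1.]]))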
| https://stackoverflow.com/questions/63238498/ |
Invoking SageMaker Endpoint for PyTorch Model | I'm trying to call my SageMaker model endpoint both from Postman and the AWS CLI. The endpoint's status is "in service" but whenever I try to call it it gives me an error. When I try to use the predict function in the SageMaker notebook and provide it a numpy array (ex. np.array([1,2,3,4])), it successfully gives me an output. I'm unsure what I'm doing wrong.
$ aws2 sagemaker-runtime invoke-endpoint \
$ --endpoint-name=pytorch-model \
$ --body=1,2 \
$ --content-type=text/csv \
$ --cli-binary-format=raw-in-base64-out \
$ output.json
An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model with message "tensors used as indices must be long, byte or bool tensors
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 125, in transform
result = self._transform_fn(self._model, input_data, content_type, accept)
File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 215, in _default_transform_fn
prediction = self._predict_fn(data, model)
File "/opt/ml/model/code/pytorch-model-reco.py", line 268, in predict_fn
return torch.argsort(- final_matrix[input_data, :], dim = 1)
IndexError: tensors used as indices must be long, byte or bool tensors
| The clue is in the final few lines of your stacktrace:
File "/opt/ml/model/code/pytorch-model-reco.py", line 268, in predict_fn
return torch.argsort(- final_matrix[input_data, :], dim = 1)
IndexError: tensors used as indices must be long, byte or bool tensors
In your predict_fn in pytorch-model-reco.py on line 268, you're trying to use input_data as indices for final_matrix, but input_data is the wrong type.
I would guess there is some type casting that your predict_fn should be doing when the input type is text/csv. This type casting is happening outside of the predict_fn when your input type is numpy data. Taking a look at the sagemaker_inference source code might reveal more.
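Purely as an illustration, a hypothetical cast inside predict_fn (final_matrix stands for whatever your script builds on line 268, and the exact structure of input_data depends on your input_fn):
import torch

def predict_fn(input_data, model):
    # Hypothetical: CSV bodies are often parsed into float arrays,
    # so cast the indices to long before using them for indexing.
    idx = torch.as_tensor(input_data).long()
    return torch.argsort(-final_matrix[idx, :], dim=1)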
| https://stackoverflow.com/questions/63243154/ |
Loading model from checkpoint is not working | I trained a vanilla vae which I modified from this repository. When I try and use the trained model I am unable to load the weights using load_from_checkpoint. It seems there is a mismatch between my checkpoint object and my lightningModule object.
I have set up an experiment (VAEXperiment) using a pytorch-lightning LightningModule. I try to load the weights into the network with:
#building a new model
model = VanillaVAE(**config['model_params'])
model.build_layers()
#loading the weights
experiment = VAEXperiment(model, config['exp_params'])
experiment.load_from_checkpoint(path_to_checkpoint, config['exp_params'])
I also tried:
checkpoint = torch.load(path_to_checkpoint, map_location=lambda storage, loc: storage)
model.load_state_dict(checkpoint['state_dict'])
But I get an error
Unexpected key(s) in state_dict: "model.encoder.0.0.weight", "model.encoder.0.0.bias"...
I also followed the issue on
https://github.com/PyTorchLightning/pytorch-lightning/issues/924
https://github.com/PyTorchLightning/pytorch-lightning/issues/2798
Why am I getting this error? Is it because of the encoder and decoder modules in my model? Based on the issues log on git it seems that the error was resolved. What am I doing wrong?
| Posting the answer from comments:
experiment.load_state_dict(checkpoint['state_dict'])
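In context, the full flow could look like this (a sketch using the names from the question; the "model." prefix on the checkpoint keys comes from the model attribute of the LightningModule, which is why loading the state dict into the bare VanillaVAE fails with unexpected keys):
model = VanillaVAE(**config['model_params'])
model.build_layers()

experiment = VAEXperiment(model, config['exp_params'])
checkpoint = torch.load(path_to_checkpoint, map_location=lambda storage, loc: storage)
experiment.load_state_dict(checkpoint['state_dict'])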
| https://stackoverflow.com/questions/63243359/ |
What should be the Query Q, Key K and Value V vectors/matrics in torch.nn.MultiheadAttention? | Following an amazing blog, I implemented my own self-attention module. However, I found PyTorch has already implemented a multi-head attention module. The input to the forward pass of the MultiheadAttention module includes Q (which is query vector) , K (key vector), and V (value vector). It is strange that PyTorch wouldn't just take the input embedding and compute the Q, K, V vectors on the inside. In the self-attention module that I implemented, I compute this Q, K, V vectors from the input embeddings multiplied by the Q, K, V weights. At this point, I am not sure what the Q, K, and V vector inputs that MultiheadAttention module requires. Should they be Q, K, and V weights or vectors and should these be normal vectors, or should these be Parameters?
| If you look at the implementation of Multihead attention in pytorch, Q, K and V are learned during the training process. In most cases they should be smaller than the embedding vectors. So you just need to define their dimensions; everything else is taken care of by the module. You have two choices:
kdim: total number of features in key. Default: None.
vdim: total number of features in value. Default: None.
The query vector has the size of your embedding.
Note: if kdim and vdim are None, they will be set to embed_dim such that query, key, and value have the same number of features.
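A minimal usage sketch (all dimensions here are hypothetical; note that by default the inputs are (seq_len, batch, embed_dim)):
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=64, num_heads=4)
query = torch.randn(10, 2, 64)        # (target_len, batch, embed_dim)
key = value = torch.randn(12, 2, 64)  # (source_len, batch, embed_dim)
out, attn_weights = mha(query, key, value)
print(out.shape)  # torch.Size([10, 2, 64])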
For more details, look at the source code : https://pytorch.org/docs/master/_modules/torch/nn/modules/activation.html#MultiheadAttention
Especially this class: class MultiheadAttention(Module):
| https://stackoverflow.com/questions/63248948/ |
Fast.ai learn.export() stopped working with no apparent code change | I have been training and exporting several versions of a model on colab. My export code has always been this:
model.export('model.pkl')
I have been able to reload the model and make predictions to confirm everything works, like this:
x = load_learner('', 'model.pkl')
x.predict()
All of a sudden, the export code is behaving differently. I am still able to export and load_learner with no problem on colab, but when I download the model file from colab to my local machine and try to run locally, I get this error:
Traceback (most recent call last):
File "process.py", line 127, in <module>
predict_utterances_in_transcript(i, path_to_processed_and_predicted_transcript_dir, audio_id)
File "process.py", line 94, in predict_utterances_in_transcript
predict()
File "/Users/src/predict.py", line 9, in predict_utterances
model = load_learner('', 'models/model.pkl')
File "/Users//anaconda3/envs/nlp/lib/python3.6/site-packages/fastai/basic_train.py", line 621, in load_learner
state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
File "/Users//anaconda3/envs/nlp/lib/python3.6/site-packages/torch/serialization.py", line 586, in load
with _open_zipfile_reader(f) as opened_zipfile:
File "/Users//anaconda3/envs/nlp/lib/python3.6/site-packages/torch/serialization.py", line 246, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
AttributeError: 'PosixPath' object has no attribute 'tell'
Also, I have been able to open and read model pickle files in a text editor just fine until now. The broken model files cannot be read because an error says they are corrupted.
Nothing has changed in my code or my environment. Any ideas?
| I had the exact same problem. Moving from torch 1.5 to torch 1.6 solved it for me. After that, the downloaded pkl file worked like a charm.
After installation, in a python console, import torch and check the torch version. It should return: '1.6.0'
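For example:
import torch
print(torch.__version__)  # should print '1.6.0'
If I remember correctly, the underlying cause is that torch 1.6 switched torch.save to a new zip-based serialization format that torch 1.5 cannot read, which matches the _open_zipfile_reader frames in your traceback.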
Best regards,
StΓ©ph
| https://stackoverflow.com/questions/63256567/ |
Calculate number of parameters in neural network | I am wondering whether the number of parameters in models like ResNet18, Vgg16, and DenseNet201 would change if we change the input size to the model?
I did measure the number of parameters with the following command
pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
Also, I have tried this snippet, and the number of parameters did not change for different input sizes:
import torchvision.models as models
from torchsummary import summary  # assuming summary comes from the torchsummary package
model= models.resnet18(pretrained = False)
model.cuda()
summary(model, (1,64,64))
| No it would not. Parameters of a model have the purpose of processing the input as it propagates inside the network pipeline.
The parameters are mostly trained to serve their purpose, which is defined by the training task. Consider an increase in the number of parameters based on the input: what would their values be? Would they be random? How would these new parameters with new values affect the inference of the model?
Such a sudden, random change to the fine-tuned, well-trained parameters of the model would be impractical. Maybe there are some other algorithms that I am unaware of, that change their parameter collection based on input. But the architectures that have been mentioned in question do not support such functionality.
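A quick check makes this concrete, since counting parameters never touches the input at all:
import torchvision.models as models

model = models.resnet18(pretrained=False)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)  # ~11.7 million, regardless of the input size you later feed in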
| https://stackoverflow.com/questions/63260899/ |
What is the difference between Tensor.size and Tensor.shape in PyTorch? | What is the difference between Tensor.size and Tensor.shape in Pytorch?
I want to get the number of elements and the dimensions of a Tensor. For example, for a tensor with dimensions 2 by 3 by 4, I expect 24 for the number of elements and (2, 3, 4) for the dimensions.
Thanks.
| .shape is an alias for .size(), and was added to more closely match numpy, see this discussion here.
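For your example:
import torch

t = torch.zeros(2, 3, 4)
print(t.size())   # torch.Size([2, 3, 4])
print(t.shape)    # torch.Size([2, 3, 4]), the same thing
print(t.numel())  # 24, the total number of elements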
| https://stackoverflow.com/questions/63263292/ |
DataLoader create dataset with pytorch | I have a folder with subfolders (classes), with images inside each subfolder.
data
|_ classe1
|_ image1
|_ image2
|_ classe2
|_ ...
My goal is to create a dataset (train + test set) to train my model with pytorch resnet.
I have an error, and I don't know how to solve it because I don't really understand the DataLoader structure, so I tried this:
I have this:
dataset = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['data']}
batch_size = 32
validation_split = .3
shuffle_dataset = True
random_seed= 42
# Creating data indices for training and validation splits:
dataset_size = len(dataset)
indices = list(range(dataset_size))
split = int(np.floor(validation_split * dataset_size))
if shuffle_dataset :
np.random.seed(random_seed)
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
valid_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
sampler=train_sampler)
validation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
sampler=valid_sampler)
dataloaders_dict = {'train': train_loader, 'val': validation_loader}
But when I try to run my model, I get this error:
Epoch 0/99
----------
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-79-8c30eb5e6a01> in <module>()
3
4 # Train and evaluate
----> 5 model_ft, hist = train_model(model_ft, dataloaders_dict, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=False)
4 frames
<ipython-input-56-9421c2d39473> in train_model(model, dataloaders, criterion, optimizer, num_epochs, is_inception)
22
23 # Iterate over data.
---> 24 for inputs, labels in dataloaders[phase]:
25 inputs = inputs.to(device)
26 labels = labels.to(device)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
401 def _next_data(self):
402 index = self._next_index() # may raise StopIteration
--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
404 if self._pin_memory:
405 data = _utils.pin_memory.pin_memory(data)
/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
KeyError: 0
Any suggestions? Any errors detected?
| The problem most likely comes from your first line, where your dataset is actually a dict containing one element (a pytorch dataset). This would be better:
x = 'data'
dataset = datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
I assume data_transforms['data'] is a transformation of the expected type (as detailed here).
The KeyError is probably raised when pytorch tries to get a tensor from your "dataset" (the dict), which merely contains one element.
By the way, pytorch provides the torch.utils.data.random_split function so that you don't have to do the train/test split yourself. You may want to look it up.
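A sketch of how it could replace the manual index shuffling (reusing dataset, validation_split and batch_size from your code):
from torch.utils.data import random_split

n_val = int(len(dataset) * validation_split)
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True)
validation_loader = torch.utils.data.DataLoader(val_set, batch_size=batch_size)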
| https://stackoverflow.com/questions/63268886/ |
Convert a List of Tensors to a Tensor | I have a list of tensors like this:
[tensor(-2.9222, grad_fn=<SqueezeBackward1>), tensor(-2.8192, grad_fn=<SqueezeBackward1>), tensor(-3.1894, grad_fn=<SqueezeBackward1>), tensor(-2.9048, grad_fn=<SqueezeBackward1>)]
I want it to be in this form:
tensor([-0.5575, -0.9004, -0.8491, ..., -0.7345, -0.6729, -0.7553],
grad_fn=<SqueezeBackward1>)
How can I do this?
I appreciate the help.
| Since these tensors are 0-dimensional, torch.cat will not work, but you can use torch.stack (which creates a new dimension along which to concatenate):
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
torch.stack([a,b], dim=0)
>>>tensor([1.,2.], grad_fn=<StackBackward>)
| https://stackoverflow.com/questions/63270966/ |
Unable to install Pytorch in Ubuntu | I'm using the following command to install pytorch in my conda environment.
conda install pytorch=0.4.1 cuda90 -c pytorch
However, I'm getting the following error
Solving environment: failed
PackagesNotFoundError: The following packages are not available from
current channels:
pytorch=0.4.1
cuda90
Current channels:
https://conda.anaconda.org/pytorch/linux-32
https://conda.anaconda.org/pytorch/noarch
https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-32
https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/noarch
https://repo.anaconda.com/pkgs/main/linux-32
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/linux-32
To search for alternate channels that may provide the conda package
you're looking for, navigate to
https://anaconda.org
How can I sort this out?
I have ofcourse installed cuda 9 and nvcc works.
| Go directly to the pytorch website and follow the instructions for your setup, and it will tell you the exact command required to install: pytorch - get started
For example, the install selector on that page generates the exact command for your OS, package manager, Python version and CUDA version (a screenshot of the selector was originally shown here).
If you're looking for older versions of PyTorch, the version history and commands to install can be found here - Installing Previous Versions of PyTorch
If this doesn't work for you, your last option is to build from source yourself. Here's the GitHub repo for version 0.4.1 - pytorch at 0.4.1. The steps to install from source are outlined on the repo here.
| https://stackoverflow.com/questions/63272687/ |
Dumping Image data and load using pytorch dataloader | I want to dump the data so that I can load it back for training my model.
My code snippet for dumping the data:
for batch_idx, (image, label) in enumerate(dataloader):
image, label = image.to(device), label.to(device)
perturbed_image = attack.perturb(image, label)
#---------- Classifier ----------
predict_A = classifier(perturbed_image)
pred_label = torch.max(predict_A.data, 1)[1]
if pred_label != label:
adv_data.append( (perturbed_image.to("cpu"), label.to("cpu")) )
Is there any other way I can dump it correctly so as to load it with torch.utils.data.DataLoader?
| The most straightforward approach would be to use torch.save to save the actual tensors of perturbed_image and label as binary files, and then use a custom Dataset. Note that the saved tensors are not single images/labels, but rather batches of images/labels; your new custom Dataset should account for that.
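A sketch of what that could look like, assuming adv_data holds (image_batch, label_batch) pairs as built in your loop, with the labels as 1-D tensors:
import torch
from torch.utils.data import Dataset, DataLoader

torch.save(adv_data, 'adv_data.pt')  # dump once

class AdvDataset(Dataset):
    def __init__(self, path):
        batches = torch.load(path)
        # flatten the saved batches into individual samples
        self.images = torch.cat([img for img, _ in batches], dim=0)
        self.labels = torch.cat([lbl for _, lbl in batches], dim=0)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

loader = DataLoader(AdvDataset('adv_data.pt'), batch_size=32, shuffle=True)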
| https://stackoverflow.com/questions/63275597/ |
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 128 256, but got 2-dimensional input of size [32, 128] instead | I am working on creating an image generator using conditional GAN as the base model. I've run across an error that I don't understand how to debug, even after searching for solutions online. I'm not sure if I should change the settings for training or do some adjustment to my model, or something else. Any help on what to do would be appreciated.
The CGAN model I am using:
class Generator(nn.Module):
def __init__(self, classes, channels, img_size, latent_dim):
super(Generator, self).__init__()
self.classes = classes
self.channels = channels
self.img_size = img_size
self.latent_dim = latent_dim
self.img_shape = (self.channels, self.img_size, self.img_size)
self.label_embedding = nn.Embedding(self.classes, self.classes) # process label information, behave as a lookup table
self.model = nn.Sequential(
*self._create_layer_1(self.latent_dim + self.classes, 128, False),
*self._create_layer_2(128, 256),
*self._create_layer_2(256, 512),
*self._create_layer_2(512, 1024),
nn.Linear(1024, int(np.prod(self.img_shape))),
nn.Tanh()
)
def _create_layer_1(self, size_in, size_out, normalize=True):
layers = [nn.Linear(size_in, size_out)]
if normalize:
layers.append(nn.BatchNorm1d(size_out))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
def _create_layer_2(self, size_in, size_out, normalize=True):
layers = [nn.ConvTranspose2d(size_in, size_out, 4, 2, 1, bias=False)]
if normalize:
layers.append(nn.BatchNorm1d(size_out))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
def forward(self, noise, labels):
z = torch.cat((self.label_embedding(labels), noise), -1)
x = self.model(z)
x = x.view(x.size(0), *self.img_shape)
return x
class Discriminator(nn.Module):
def __init__(self, classes, channels, img_size, latent_dim):
super(Discriminator, self).__init__()
self.classes = classes
self.channels = channels
self.img_size = img_size
self.latent_dim = latent_dim
self.img_shape = (self.channels, self.img_size, self.img_size)
self.label_embedding = nn.Embedding(self.classes, self.classes)
self.adv_loss = torch.nn.BCELoss()
self.model = nn.Sequential(
*self._create_layer_1(self.classes + int(np.prod(self.img_shape)), 1024, False, True),
*self._create_layer_2(1024, 512, True, True),
*self._create_layer_2(512, 256, True, True),
*self._create_layer_2(256, 128, False, False),
*self._create_layer_1(128, 1, False, False),
nn.Sigmoid()
)
def _create_layer_1(self, size_in, size_out, drop_out=True, act_func=True):
layers = [nn.Linear(size_in, size_out)]
if drop_out:
layers.append(nn.Dropout(0.4))
if act_func:
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
def _create_layer_2(self, size_in, size_out, drop_out=True, act_func=True):
layers = [nn.Conv2d(size_in, size_out, 4, 2, 1, bias=False)]
if drop_out:
layers.append(nn.Dropout(0.4))
if act_func:
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
def forward(self, image, labels):
x = torch.cat((image.view(image.size(0), -1), self.label_embedding(labels)), -1)
return self.model(x)
def loss(self, output, label):
return self.adv_loss(output, label)
Code for initializing the model:
class Model(object):
def __init__(self,
name,
device,
data_loader,
classes,
channels,
img_size,
latent_dim,
style_dim=3):
self.name = name
self.device = device
self.data_loader = data_loader
self.classes = classes
self.channels = channels
self.img_size = img_size
self.latent_dim = latent_dim
self.style_dim = style_dim
self.netG = cganG(self.classes, self.channels, self.img_size, self.latent_dim)
self.netG.to(self.device)
self.netD = cganD(self.classes, self.channels, self.img_size, self.latent_dim)
self.netD.to(self.device)
self.optim_G = None
self.optim_D = None
@property
def generator(self):
return self.netG
@property
def discriminator(self):
return self.netD
def create_optim(self, lr, alpha=0.5, beta=0.999):
self.optim_G = torch.optim.Adam(filter(lambda p: p.requires_grad,
self.netG.parameters()),
lr=lr,
betas=(alpha, beta))
self.optim_D = torch.optim.Adam(filter(lambda p: p.requires_grad,
self.netD.parameters()),
lr=lr,
betas=(alpha, beta))
def _to_onehot(self, var, dim):
res = torch.zeros((var.shape[0], dim), device=self.device)
res[range(var.shape[0]), var] = 1.
return res
def train(self,
epochs,
log_interval=100,
out_dir='',
verbose=True):
self.netG.train()
self.netD.train()
viz_z = torch.zeros((self.data_loader.batch_size, self.latent_dim), device=self.device)
viz_noise = torch.randn(self.data_loader.batch_size, self.latent_dim, device=self.device)
nrows = self.data_loader.batch_size // 8
viz_label = torch.LongTensor(np.array([num for _ in range(nrows) for num in range(8)])).to(self.device)
viz_onehot = self._to_onehot(viz_label, dim=self.classes)
viz_style = torch.zeros((self.data_loader.batch_size, self.style_dim), device=self.device)
total_time = time.time()
for epoch in range(epochs):
batch_time = time.time()
for batch_idx, (data, target) in enumerate(self.data_loader):
data, target = data.to(self.device), target.to(self.device)
batch_size = data.size(0)
real_label = torch.full((batch_size, 1), 1., device=self.device)
fake_label = torch.full((batch_size, 1), 0., device=self.device)
# Train G
self.netG.zero_grad()
z_noise = torch.randn(batch_size, self.latent_dim, device=self.device)
x_fake_labels = torch.randint(0, self.classes, (batch_size,), device=self.device)
x_fake = self.netG(z_noise, x_fake_labels)
y_fake_g = self.netD(x_fake, x_fake_labels)
g_loss = self.netD.loss(y_fake_g, real_label)
g_loss.backward()
self.optim_G.step()
# Train D
self.netD.zero_grad()
y_real = self.netD(data, target)
d_real_loss = self.netD.loss(y_real, real_label)
y_fake_d = self.netD(x_fake.detach(), x_fake_labels)
d_fake_loss = self.netD.loss(y_fake_d, fake_label)
d_loss = (d_real_loss + d_fake_loss) / 2
d_loss.backward()
self.optim_D.step()
if verbose and batch_idx % log_interval == 0 and batch_idx > 0:
print('Epoch {} [{}/{}] loss_D: {:.4f} loss_G: {:.4f} time: {:.2f}'.format(
epoch, batch_idx, len(self.data_loader),
d_loss.mean().item(),
g_loss.mean().item(),
time.time() - batch_time))
vutils.save_image(data, os.path.join(out_dir, 'real_samples.png'), normalize=True)
with torch.no_grad():
viz_sample = self.netG(viz_noise, viz_label)
vutils.save_image(viz_sample, os.path.join(out_dir, 'fake_samples_{}.png'.format(epoch)), nrow=8, normalize=True)
batch_time = time.time()
torch.save(self.netG.state_dict(), os.path.join(out_dir, 'netG_{}.pth'.format(epoch)))
torch.save(self.netD.state_dict(), os.path.join(out_dir, 'netD_{}.pth'.format(epoch)))
self.save_to(path=out_dir, name=self.name, verbose=False)
if verbose:
print('Total train time: {:.2f}'.format(time.time() - total_time))
Code for setting up the training:
def main():
device = torch.device("cuda:0" if FLAGS.cuda else "cpu")
if FLAGS.train:
dataloader = torch.utils.data.DataLoader(
dset.ImageFolder(FLAGS.data_dir, transforms.Compose([
transforms.Resize(FLAGS.img_size),
transforms.CenterCrop(FLAGS.img_size),
transforms.ToTensor()
])),
batch_size=FLAGS.batch_size,
shuffle=True,
num_workers=4,
pin_memory=True
)
model = Model(FLAGS.model, device, dataloader, FLAGS.classes, FLAGS.channels, FLAGS.img_size, FLAGS.latent_dim)
model.create_optim(FLAGS.lr)
# Train
print("Start training...\n")
model.train(FLAGS.epochs, FLAGS.log_interval, FLAGS.out_dir, True)
if __name__ == '__main__':
from utils import boolean_string
parser.add_argument('--cuda', type=boolean_string, default=True, help='enable CUDA.')
parser.add_argument('--train', type=boolean_string, default=True, help='train mode or eval mode.')
parser.add_argument('--data_dir', type=str, default='../datasets', help='Directory for dataset.')
parser.add_argument('--out_dir', type=str, default='output', help='Directory for output.')
parser.add_argument('--epochs', type=int, default=800, help='number of epochs')
parser.add_argument('--batch_size', type=int, default=32, help='size of batches')
parser.add_argument('--lr', type=float, default=0.0002, help='learning rate')
parser.add_argument('--latent_dim', type=int, default=62, help='latent space dimension')
parser.add_argument('--classes', type=int, default=25, help='number of classes')
parser.add_argument('--img_size', type=int, default=128, help='size of images')
parser.add_argument('--channels', type=int, default=3, help='number of image channels')
Settings:
PyTorch version: 1.1.0
CUDA version: 9.0.176
Args | Type | Value
--------------------------------------------------
cuda | bool | True
train | bool | True
resume | bool | False
data_dir | str | ../datasets
out_dir | str | output
epochs | int | 800
batch_size | int | 32
lr | float | 0.0002
latent_dim | int | 62
classes | int | 25
img_size | int | 128
channels | int | 3
Image size as input:
torch.Size([32, 3, 128, 128])
Model structure:
Generator(
(label_embedding): Embedding(25, 25)
(model): Sequential(
(0): Linear(in_features=87, out_features=128, bias=True)
(1): LeakyReLU(negative_slope=0.2, inplace)
(2): ConvTranspose2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(3): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(4): LeakyReLU(negative_slope=0.2, inplace)
(5): ConvTranspose2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): LeakyReLU(negative_slope=0.2, inplace)
(8): ConvTranspose2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(9): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): LeakyReLU(negative_slope=0.2, inplace)
(11): Linear(in_features=1024, out_features=49152, bias=True)
(12): Tanh()
)
)
Discriminator(
(label_embedding): Embedding(25, 25)
(adv_loss): BCELoss()
(model): Sequential(
(0): Linear(in_features=49177, out_features=1024, bias=True)
(1): LeakyReLU(negative_slope=0.2, inplace)
(2): Conv2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(3): Dropout(p=0.4)
(4): LeakyReLU(negative_slope=0.2, inplace)
(5): Conv2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(6): Dropout(p=0.4)
(7): LeakyReLU(negative_slope=0.2, inplace)
(8): Conv2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(9): Linear(in_features=128, out_features=1, bias=True)
(10): Sigmoid()
)
)
The error I got:
File "main.py", line 121, in <module>
main()
File "main.py", line 56, in main
model.train(FLAGS.epochs, FLAGS.log_interval, FLAGS.out_dir, True)
File "build_gan.py", line 123, in train
x_fake = self.netG(z_noise, x_fake_labels)
File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "cgan.py", line 42, in forward
x = self.model(z)
File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 796, in forward
output_padding, self.groups, self.dilation)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 128 256, but got 2-dimensional input of size [32, 128] instead
I am using my own image dataset with 3 channels and 25 classes. I have tried to change the image size and kernel size but still got the same error. Any help on what should I do to debug would be highly appreciated.
| The issue is actually with your model architecture. You are trying to place a conv2d layer just after a linear fully connected layer. The _create_layer_1 function produces a 1d output, and you are trying to feed this 1d output to a conv2d layer, which expects a multidimensional input.
From your code, the simplest way to make it work in a single go would be to remove the "_create_layer_2" function completely from the generator class and use the _create_layer_1 function to define all your layers (so that all layers are fully connected). Also do this for your discriminator.
If you still need to use conv2d, you should reshape its input into a 4-dimensional (N, C, H, W) tensor, and you have to flatten it back to 1d before your final linear layer. Or you could ditch the first nn.Linear layer and start with conv2d altogether.
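For illustration, a hypothetical mini-generator showing the reshape/flatten pattern (all sizes are arbitrary):
import torch
import torch.nn as nn

class GeneratorSketch(nn.Module):
    def __init__(self, latent_dim=62):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.deconv = nn.ConvTranspose2d(128, 3, 4, 2, 1)

    def forward(self, z):
        x = self.fc(z)             # (N, 128*8*8), a 1d feature vector per sample
        x = x.view(-1, 128, 8, 8)  # reshape to (N, C, H, W) before any conv layer
        x = self.deconv(x)         # (N, 3, 16, 16)
        return torch.tanh(x)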
To summarise
As you are designing GANs you might have experience developing CNNs. The point is that you don't simply mix conv2d/conv layers with linear layers without a proper flatten/reshape in between.
Cheers
| https://stackoverflow.com/questions/63275709/ |
How does a pytorch dataset object know whether it has hit the end when used in a for loop? | I am writing a custom pytorch dataset. In __init__ the dataset object loads a file that contains certain data. But in my program I only wish to access part of the data (to achieve train/valid cut, if it helps). Originally I thought this behavior was controlled by overriding __len__, but it turned out that modifying __len__ does not help. A simple example is as follows:
from torch.utils.data import Dataset, DataLoader
import torch
class NewDS(Dataset):
def __init__(self):
self.data = torch.randn(10,2) # suppose there are 10 items in the data file
def __len__(self):
return len(self.data)-5 # But I only want to access the first 5 items
def __getitem__(self, index):
return self.data[index]
ds = NewDS()
for i, x in enumerate(ds):
print(i)
The output is 0 through 9, while the desired behavior would be 0 through 4.
How does this dataset object know that the enumeration has hit the end when used in a for loop like this? Any other method to achieve a similar effect is also welcome.
| You can use torch.utils.data.Subset to get a subset of your data:
top_five = torch.utils.data.Subset(ds, indices=range(5)) # Get first five items
for i, x in enumerate(top_five):
print(i)
0
1
2
3
4
When you iterate over the dataset directly, Python calls __getitem__ with increasing indices and stops only when __getitem__ raises an IndexError (surfaced as StopIteration from the iterator); the overridden __len__ is never consulted, which is why your plain loop runs from 0 through 9.
len(ds) # Returned modified length
5
# `enumerate` calls the `next` method on the iterator each time through the loop.
# When no more data is available, a StopIteration exception is raised instead.
iter_ds = iter(ds)
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds))
print(next(iter_ds)) # 11th call: StopIteration is raised as no items are left to iterate
Output:
tensor([-1.5952, -0.0826])
tensor([-2.2254, 0.2461])
tensor([-0.8268, 0.1956])
tensor([ 0.3157, -0.3403])
tensor([0.8971, 1.1255])
tensor([0.3922, 1.3184])
tensor([-0.4311, -0.8898])
tensor([ 0.1128, -0.5708])
tensor([-0.5403, -0.9036])
tensor([0.6550, 1.6777])
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-99-7a9910e027c3> in <module>
10 print(next(iter_ds))
11
---> 12 print(next(iter_ds)) #11th time StopIteration exception raised as no item left to iterate
StopIteration:
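Note that a DataLoader, unlike direct iteration, does respect the overridden __len__: its default sampler draws indices from range(len(ds)), so wrapping the dataset yields only the first five items:
for i, x in enumerate(DataLoader(ds, batch_size=1)):
    print(i)  # prints 0 through 4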
| https://stackoverflow.com/questions/63278696/ |
huggingface transformers: truncation strategy in encode_plus | encode_plus in huggingface's transformers library allows truncation of the input sequence. Two parameters are relevant: truncation and max_length. I'm passing a paired input sequence to encode_plus and need to truncate the input sequence simply in a "cut off" manner, i.e., if the whole sequence consisting of both inputs text and text_pair is longer than max_length it should just be truncated correspondingly from the right.
It seems that neither of the truncation strategies allows me to do this; instead, longest_first removes tokens from the longest sequence (which could be either text or text_pair, but not simply from the right or end of the sequence, e.g., if text is longer than text_pair, it seems this would remove tokens from text first), only_first and only_second remove tokens from only the first or second (hence, also not simply from the end), and do_not_truncate does not truncate at all. Or did I misunderstand this, and is longest_first actually what I'm looking for?
| No, longest_first is not the same as cutting from the right. When you set the truncation strategy to longest_first, the tokenizer will compare the lengths of text and text_pair every time a token needs to be removed and remove a token from the longer one. This could, for example, mean that it first cuts 3 tokens from text_pair and then cuts the remaining tokens that need to be removed alternately from text and text_pair. An example:
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
seq1 = 'This is a long uninteresting text'
seq2 = 'What could be a second sequence to the uninteresting text'
print(len(tokenizer.tokenize(seq1)))
print(len(tokenizer.tokenize(seq2)))
print(tokenizer(seq1, seq2))
print(tokenizer(seq1, seq2, truncation= True, max_length = 15))
print(tokenizer.decode(tokenizer(seq1, seq2, truncation= True, max_length = 15)['input_ids']))
Output:
9
13
{'input_ids': [101, 2023, 2003, 1037, 2146, 4895, 18447, 18702, 3436, 3793, 102, 2054, 2071, 2022, 1037, 2117, 5537, 2000, 1996, 4895, 18447, 18702, 3436, 3793, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
{'input_ids': [101, 2023, 2003, 1037, 2146, 4895, 18447, 102, 2054, 2071, 2022, 1037, 2117, 5537, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
[CLS] this is a long unint [SEP] what could be a second sequence [SEP]
As far as I can tell from your question you are actually looking for only_second because it cuts from the right (which is text_pair):
print(tokenizer(seq1, seq2, truncation= 'only_second', max_length = 15))
Output:
{'input_ids': [101, 2023, 2003, 1037, 2146, 4895, 18447, 18702, 3436, 3793, 102, 2054, 2071, 2022, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
It throws an exception when your text input alone is already longer than the specified max_length. That is correct in my opinion, because in this case it is no longer a sequence-pair input.
Just in case only_second doesn't meet your requirements, you can simply create your own truncation strategy. As an example, here is only_second done by hand:
tok_seq1 = tokenizer.tokenize(seq1)
tok_seq2 = tokenizer.tokenize(seq2)
maxLengthSeq2 = myMax_len - len(tok_seq1) - 3 #number of special tokens for bert sequence pair
if len(tok_seq2) > maxLengthSeq2:
tok_seq2 = tok_seq2[:maxLengthSeq2]
input_ids = [tokenizer.cls_token_id]
input_ids += tokenizer.convert_tokens_to_ids(tok_seq1)
input_ids += [tokenizer.sep_token_id]
token_type_ids = [0]*len(input_ids)
input_ids += tokenizer.convert_tokens_to_ids(tok_seq2)
input_ids += [tokenizer.sep_token_id]
token_type_ids += [1]*(len(tok_seq2)+1)
attention_mask = [1]*len(input_ids)
print(input_ids)
print(token_type_ids)
print(attention_mask)
Output:
[101, 2023, 2003, 1037, 2146, 4895, 18447, 18702, 3436, 3793, 102, 2054, 2071, 2022, 102]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
| https://stackoverflow.com/questions/63280435/ |
Measuring uncertainty using MC Dropout on pytorch | I am trying to implement Bayesian CNN using Mc Dropout on Pytorch,
the main idea is that by applying dropout at test time and running over many forward passes, you get predictions from a variety of different models.
I've found an implementation of MC Dropout, but I really did not get how they applied this method and how exactly they chose the correct prediction from the list of predictions.
here is the code
def mcdropout_test(model):
model.train()
test_loss = 0
correct = 0
T = 100
for data, target in test_loader:
if args.cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
output_list = []
for i in xrange(T):
output_list.append(torch.unsqueeze(model(data), 0))
output_mean = torch.cat(output_list, 0).mean(0)
test_loss += F.nll_loss(F.log_softmax(output_mean), target, size_average=False).data[0] # sum up batch loss
pred = output_mean.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nMC Dropout Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
train()
mcdropout_test()
I have replaced
data, target = Variable(data, volatile=True), Variable(target)
by adding
with torch.no_grad(): at the beginning
And this is how I have defined my CNN
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 192, 5, padding=2)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(192, 192, 5, padding=2)
self.fc1 = nn.Linear(192 * 8 * 8, 1024)
self.fc2 = nn.Linear(1024, 256)
self.fc3 = nn.Linear(256, 10)
self.dropout = nn.Dropout(p=0.3)
nn.init.xavier_uniform_(self.conv1.weight)
nn.init.constant_(self.conv1.bias, 0.0)
nn.init.xavier_uniform_(self.conv2.weight)
nn.init.constant_(self.conv2.bias, 0.0)
nn.init.xavier_uniform_(self.fc1.weight)
nn.init.constant_(self.fc1.bias, 0.0)
nn.init.xavier_uniform_(self.fc2.weight)
nn.init.constant_(self.fc2.bias, 0.0)
nn.init.xavier_uniform_(self.fc3.weight)
nn.init.constant_(self.fc3.bias, 0.0)
def forward(self, x):
x = self.pool(F.relu(self.dropout(self.conv1(x)))) # recommended to add the relu
x = self.pool(F.relu(self.dropout(self.conv2(x)))) # recommended to add the relu
x = x.view(-1, 192 * 8 * 8)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(self.dropout(x)))
x = self.fc3(self.dropout(x)) # no activation function needed for the last layer
return x
Can anyone help me to get the right implementation of the Monte Carlo Dropout method on CNN?
| Implementing MC Dropout in Pytorch is easy. All that is needed to be done is to set the dropout layers of your model to train mode. This allows for different dropout masks to be used during the different various forward passes. Below is an implementation of MC Dropout in Pytorch illustrating how multiple predictions from the various forward passes are stacked together and used for computing different uncertainty metrics.
import sys
import numpy as np
import torch
import torch.nn as nn
def enable_dropout(model):
""" Function to enable the dropout layers during test-time """
for m in model.modules():
if m.__class__.__name__.startswith('Dropout'):
m.train()
def get_monte_carlo_predictions(data_loader,
forward_passes,
model,
n_classes,
n_samples):
""" Function to get the monte-carlo samples and uncertainty estimates
through multiple forward passes
Parameters
----------
data_loader : object
data loader object from the data loader module
forward_passes : int
number of monte-carlo samples/forward passes
model : object
pytorch model
n_classes : int
number of classes in the dataset
n_samples : int
number of samples in the test set
"""
dropout_predictions = np.empty((0, n_samples, n_classes))
softmax = nn.Softmax(dim=1)
for i in range(forward_passes):
predictions = np.empty((0, n_classes))
model.eval()
enable_dropout(model)
for i, (image, label) in enumerate(data_loader):
image = image.to(torch.device('cuda'))
with torch.no_grad():
output = model(image)
output = softmax(output) # shape (n_samples, n_classes)
predictions = np.vstack((predictions, output.cpu().numpy()))
dropout_predictions = np.vstack((dropout_predictions,
predictions[np.newaxis, :, :]))
# dropout predictions - shape (forward_passes, n_samples, n_classes)
# Calculating mean across multiple MCD forward passes
mean = np.mean(dropout_predictions, axis=0) # shape (n_samples, n_classes)
# Calculating variance across multiple MCD forward passes
variance = np.var(dropout_predictions, axis=0) # shape (n_samples, n_classes)
epsilon = sys.float_info.min
# Calculating entropy across multiple MCD forward passes
entropy = -np.sum(mean*np.log(mean + epsilon), axis=-1) # shape (n_samples,)
# Calculating mutual information across multiple MCD forward passes
mutual_info = entropy - np.mean(np.sum(-dropout_predictions*np.log(dropout_predictions + epsilon),
axis=-1), axis=0) # shape (n_samples,)
Moving on to the implementation which is posted in the question above, multiple predictions from T different forward passes are obtained by first setting the model to train mode (model.train()). Note that this is not desirable because unwanted stochasticity will be introduced in the predictions if there are layers other than dropout such as batch-norm in the model. Hence the best way is to just set the dropout layers to train mode as shown in the snippet above.
| https://stackoverflow.com/questions/63285197/ |
Converting a pytorch model to nn.Module for exporting to onnx for lens studio | I am trying to convert pix2pix to a pb or onnx that can run in Lens Studio. Lens studio has strict requirements for the models. I am trying to export this pytorch model to onnx using this guide provided by lens studio. The issue is that the pytorch model found here uses its own base class, whereas the example uses nn.Module, and therefore it doesn't have methods/variables that the torch.onnx.export function needs to run. So far I've run into a missing variable called training and a missing method called train.
Would it be worth it to try to modify the base model, or should I try to build it from scratch using nn.Module? Is there a way to make the pix2pix model inherit from both the abstract base class and nn.module? Am I not understanding the situation? The reason why I want to do it using the lens studio tutorial is because I have gotten it to export onnx different ways but Lens Studio wont accept those for various reasons.
Also this is my first time asking a SO question (after 6 years of coding), let me know if I make any mistakes and I can correct them. Thank you.
This is the important code from the tutorial creating a pytorch model for Lens Studio:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super().__init__()
self.layer = nn.Conv2d(in_channels=3, out_channels=1,
kernel_size=3, stride=2, padding=1)
def forward(self, x):
out = self.layer(x)
out = nn.functional.interpolate(out, scale_factor=2,
mode='bilinear', align_corners=True)
out = torch.nn.functional.softmax(out, dim=1)
return out
I'm not going to include all the code from the pytorch model because it's large, but the beginning of base_model.py is
import os
import torch
from collections import OrderedDict
from abc import ABC, abstractmethod
from . import networks
class BaseModel(ABC):
"""This class is an abstract base class (ABC) for models.
To create a subclass, you need to implement the following five functions:
-- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
-- <set_input>: unpack data from dataset and apply preprocessing.
-- <forward>: produce intermediate results.
-- <optimize_parameters>: calculate losses, gradients, and update network weights.
-- <modify_commandline_options>: (optionally) add model-specific options and set default options.
"""
def __init__(self, opt):
"""Initialize the BaseModel class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
When creating your custom class, you need to implement your own initialization.
In this function, you should first call <BaseModel.__init__(self, opt)>
Then, you need to define four lists:
-- self.loss_names (str list): specify the training losses that you want to plot and save.
-- self.model_names (str list): define networks used in our training.
-- self.visual_names (str list): specify the images that you want to display and save.
-- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
"""
self.opt = opt
self.gpu_ids = opt.gpu_ids
self.isTrain = opt.isTrain
self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') # get device name: CPU or GPU
self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir
if opt.preprocess != 'scale_width': # with [scale_width], input images might have different sizes, which hurts the performance of cudnn.benchmark.
torch.backends.cudnn.benchmark = True
self.loss_names = []
self.model_names = []
self.visual_names = []
self.optimizers = []
self.image_paths = []
self.metric = 0 # used for learning rate policy 'plateau'
and for pix2pix_model.py
import torch
from .base_model import BaseModel
from . import networks
class Pix2PixModel(BaseModel):
""" This class implements the pix2pix model, for learning a mapping from input images to output images given paired data.
The model training requires '--dataset_mode aligned' dataset.
By default, it uses a '--netG unet256' U-Net generator,
a '--netD basic' discriminator (PatchGAN),
and a '--gan_mode' vanilla GAN loss (the cross-entropy objective used in the orignal GAN paper).
pix2pix paper: https://arxiv.org/pdf/1611.07004.pdf
"""
@staticmethod
def modify_commandline_options(parser, is_train=True):
"""Add new dataset-specific options, and rewrite default values for existing options.
Parameters:
parser -- original option parser
is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
Returns:
the modified parser.
For pix2pix, we do not use image buffer
The training objective is: GAN Loss + lambda_L1 * ||G(A)-B||_1
By default, we use vanilla GAN loss, UNet with batchnorm, and aligned datasets.
"""
# changing the default values to match the pix2pix paper (https://phillipi.github.io/pix2pix/)
parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned')
if is_train:
parser.set_defaults(pool_size=0, gan_mode='vanilla')
parser.add_argument('--lambda_L1', type=float, default=100.0, help='weight for L1 loss')
return parser
def __init__(self, opt):
"""Initialize the pix2pix class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
(Also, a sidenote: if you see this and it looks like there is no easy way out, let me know; I know what it's like seeing someone who is getting started in something go in too deep too early on.)
| You can definitely have your model inherit from both the base class and torch.nn.Module (python allows multiple inheritance). However, you should take care about conflicts if both inherited classes have functions with identical names (I can see at least one: their base provides an eval function, and so does nn.Module).
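A bare-bones sketch of what that multiple inheritance could look like (Pix2PixModel and BaseModel as in the repo; untested):
import torch.nn as nn

class Pix2PixModel(BaseModel, nn.Module):
    def __init__(self, opt):
        nn.Module.__init__(self)       # set up the nn.Module machinery first
        BaseModel.__init__(self, opt)  # then the repo's own initialization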
However since you do not need the CycleGan, and a lot of the code is compatibility with their training environment, you'd probably better just re-implement the pix2pix. Just steal the code, have it inherit from nn.Module, copy-paste useful/mandatory functions from the base class, and have everything translated into clean pytorch code. You already have the forward function (which is the only requirement for a pytorch module).
All the subnetworks they use (like the resnet blocks) seem to inherit from nn.Module already so there is nothing to change here (double check that though)
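A minimal skeleton of that approach could look like this (the layers shown are placeholders, not the real pix2pix generator; you would paste in their U-Net from networks.py instead):
import torch
import torch.nn as nn

class Pix2Pix(nn.Module):
    def __init__(self):
        super().__init__()
        # replace with the generator definition copied from their networks.py
        self.net_g = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net_g(x)

model = Pix2Pix()
out = model(torch.randn(1, 3, 256, 256))  # behaves like any plain nn.Module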
| https://stackoverflow.com/questions/63291760/ |
PyTorch: `torch.chunk` source code Github location | I canβt seem to find the source code for torch.chunk in PyTorchβs Github page or in the documentation.
Anyone knows where this is in PyTorchβs Github page?
| Open the main GitHub project https://github.com/pytorch/pytorch
Then, on that page, press the keyboard shortcut t: you will enter file finder mode (introduced in 2011). (https://github.com/pytorch/pytorch/find/master)
Type "chunk.h": the first result will be your file.
If the file is not the right one (header but not source), then you need a search:
https://github.com/search?q=%22chunk%22+%22const+Tensor%22+repo%3Apytorch%2Fpytorch+extension%3Acpp+language%3AC%2B%2B+language%3AC%2B%2B&type=Code
Search for chunk and "const Tensor" helps to refine rapidly the search results to TensorShape.cpp#chunk(), as noted in the comment.
| https://stackoverflow.com/questions/63294918/ |
How to write torch.device('cuda' if torch.cuda.is_available() else 'cpu') as a full if else statement? | I'm a beginner to Pytorch and wanted to type this statement as a whole if else statement:-
torch.device('cuda' if torch.cuda.is_available() else 'cpu')
Can somebody help me?
| Here is the code as a whole if-else statement. The one-liner
torch.device('cuda' if torch.cuda.is_available() else 'cpu')
expands to:
if torch.cuda.is_available():
    torch.device('cuda')
else:
    torch.device('cpu')
Since you probably want to store the device for later, you might want something like this instead:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
which expands to:
if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
Here is a post and discussion about the ternary operator in Python: https://stackoverflow.com/a/2802748/13985765
From that post:
value_when_true if condition else value_when_false
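Once stored, the device variable is typically used like this (a small sketch; MyModel is a placeholder for your own network):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = MyModel().to(device)       # move the model's parameters to the device
x = torch.randn(8, 3).to(device)   # move data to the same device before the forward pass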
| https://stackoverflow.com/questions/63302534/ |
How to represent those types of values in a tensor? | I have a dataset like this:
(0, 1), UpDownUpUp
(2, 3), UpUpUpDownDownDown
(0, 2), DownUp
(0, Undefined), DownUp
How to represent this type of data in a PyTorch tensor? So I can then train a neural network with it?
| Here is one way to do it:
You keep the values as they are, replacing the undefined with 9 (we will use 9 to represent [do nothing])
You encode the labels into integers 5, 6
Convert your variables into lists
For every batch, take the length of the longest sequence
Pad the input and the labels with 9s to match the length you found at previous step
Resulting dataset will look like the following input and target pairs:
[0, 1, 9, 9, 9, 9], [5, 6, 5, 5, 9, 9]
[2 ,3, 9, 9, 9, 9], [5, 5, 5, 6, 6, 6]
[0, 2, 9, 9, 9, 9], [6, 5, 9, 9, 9, 9]
[0, 9, 9, 9, 9, 9], [6, 5, 9, 9, 9, 9]
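A minimal sketch of steps 3-5 using torch.nn.utils.rnn.pad_sequence, with the encodings described above (9 = pad / "do nothing", 5/6 = the two labels):
import torch
from torch.nn.utils.rnn import pad_sequence

PAD = 9
inputs = [torch.tensor([0, 1]), torch.tensor([2, 3]), torch.tensor([0, 2]), torch.tensor([0])]
targets = [torch.tensor([5, 6, 5, 5]), torch.tensor([5, 5, 5, 6, 6, 6]),
           torch.tensor([6, 5]), torch.tensor([6, 5])]

x = pad_sequence(inputs, batch_first=True, padding_value=PAD)   # (4, 2)
y = pad_sequence(targets, batch_first=True, padding_value=PAD)  # (4, 6)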
| https://stackoverflow.com/questions/63303822/ |
TypeError: 'Image' object does not support item assignment | I am running the model below and after splitting the image and trying to assign the image back I get the error message stating.
sample[:, :, 0] = 0
TypeError: 'Image' object does not support item assignment
I have tried different approaches to assigning the image back to the sample but that didn't help. Any help would be greatly appreciated. Many Thanks
Here's the code I am using to do the data loading:
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms, utils
from torchvision.transforms import functional
#from torchvision.transforms import Grayscalei
import pandas as pd
import pdb
import cv2
from cv2 import imread
from cv2 import resize
class CellsDataset(Dataset):
# a very simple dataset
def __init__(self, root_dir, transform=None, return_filenames=False):
self.root = root_dir
self.transform = transform
self.return_filenames = return_filenames
self.files = [os.path.join(self.root,filename) for filename in os.listdir(self.root)]
self.files = [path for path in self.files
if os.path.isfile(path) and os.path.splitext(path)[1]=='.png']
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
path = self.files[idx]
sample = Image.open(path)
sample = np.asarray(sample, dtype=np.uint8)
sample = cv2.resize(src=sample, dsize=(1024, 1024))
sample = functional.to_grayscale(sample, num_output_channels=3)
sample[:, :, 0] = 0
sample[:, :, 1] = 0
if self.transform:
sample = self.transform(sample)
if self.return_filenames:
return sample, path
else:
return sample
def duplicate_and_transform(self,transforms):
currfilelen = self.__len__()
for pind in range(0,currfilelen-1):
path = self.files[pind]
sample = Image.open(path)
samplet = transforms(sample)
# save the transformed images and save the associated counts:
newpath = path[:-4]+'trans.png' #path[:-4] is the path without the .png extension.
samplet.save(newpath,"PNG")
metadata = pd.read_csv(os.path.join(self.root,'gt_red.csv'))
metadata.set_index('filename',inplace=True)
count = metadata['count'][os.path.split(path)[-1]]
samplet.save(newpath,"PNG")
f = open(self.root+'gt.csv','a')
basepath = os.path.basename(path) # strip away the directory list and just get the filename
f.write(basepath[:-4]+'trans.png,'+str(count)+'\n')
# return the new file names and add to self.files.
self.files.append(newpath)
#pdb.set_trace()
| You are directly operating on a PIL object. You have to convert it to a numpy array first and then manipulate the values:
sample = Image.open(path)
sample = np.asarray(sample, dtype=np.uint8)
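Concretely, since functional.to_grayscale returns a PIL image, do the grayscale conversion first and only then convert to an array. A sketch of a fixed __getitem__ body (note np.asarray on a PIL image can yield a read-only array, hence the .copy()):
sample = Image.open(path)
sample = functional.to_grayscale(sample, num_output_channels=3)  # still a PIL image
sample = np.asarray(sample, dtype=np.uint8).copy()               # writable ndarray
sample = cv2.resize(src=sample, dsize=(1024, 1024))
sample[:, :, 0] = 0
sample[:, :, 1] = 0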
| https://stackoverflow.com/questions/63307053/ |
Pytorch counting tensor | I am trying to create a Python function that, given inputs dim and m, generates the tensor of size [m ** dim, dim] of the form
[[1,...,1,1],
[1,...,1,2],
...
[1,...,1,m],
[1,...,2,1],
[1,...,2,2],
...
[m,...,m,m]]
What is the best way of doing this in Pytorch?
Thanks
| I have solved this myself using the following code:
import torch
import numpy as np
def mat_gen(dim, m):
return torch.from_numpy(np.array(np.meshgrid(*[np.arange(1, m + 1, 1) for i in range(dim)])).T.reshape(m ** dim, dim))
Here is another function that uses Pytorch only:
import torch
def mat_gen(dim, m):
return torch.stack(torch.meshgrid(*[torch.arange(1, m + 1, 1) for i in range(dim)])).T.reshape(m ** dim, dim)
| https://stackoverflow.com/questions/63309922/ |
Inverse transform | I am not able to find an efficient way to give batch input to this function and return the batch output. I want to do this during the training of my neural network.
Inverse_Norm = transforms.Normalize(
mean = [-m/s for m, s in zip(mean, std)],
std = [1/s for s in std]
)
inverse_norm_input = Inverse_Norm(input)
| Assuming a tensor of shape (B, C, ...) where mean and std are iterables of length C, you can use broadcasting semantics to operate across a batch tensor. For example
import torch
def batch_inverse_normalize(x, mean, std):
# represent mean and std to 1, C, 1, ... tensors for broadcasting
reshape_shape = [1, -1] + ([1] * (len(x.shape) - 2))
mean = torch.tensor(mean, device=x.device, dtype=x.dtype).reshape(*reshape_shape)
std = torch.tensor(std, device=x.device, dtype=x.dtype).reshape(*reshape_shape)
return x * std + mean
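Example usage (the mean/std values below are just the usual ImageNet statistics, for illustration):
x = torch.rand(8, 3, 32, 32)  # a batch of normalized images
out = batch_inverse_normalize(x,
                              mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])
print(out.shape)  # torch.Size([8, 3, 32, 32])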
| https://stackoverflow.com/questions/63311675/ |
Can you integrate opencv SIFT with a tensorflow model? | I am trying to create a CNN, but using the SIFT algorithm instead of any pooling layers.
Problem is I can't seem to find any Python implementation of the algorithm in Tensorflow or PyTorch. The only implementation I have seen of it is with opencv.
Is it possible to use the opencv SIFT implementation as a layer in a Tensorflow CNN Model?
If so, how would you go about creating it?
| While this is an interesting idea, I believe it has numerous issues that make it somewhere between highly impractical and impossible.
Layers of a network have to be differentiable with respect to their input to allow any gradients to be calculated, which are then used to update the weights.
While I think it might be possible to write a fully differentiable sift implementation, this alone will be impractical.
Further SIFT does not have a constant number of outputs and takes a long time to compute, which would slow down the training a lot.
The only practical way to use SIFT with neural networks would be to first run SIFT and then use the top N detected keypoints as input for the first layer. However, I'm not sure this would be successful.
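If you want to try that second approach, a rough sketch of the preprocessing step might look like this (cv2.SIFT_create requires OpenCV >= 4.4; the image path and N are placeholders):
import cv2
import numpy as np
import torch

gray = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp, desc = sift.detectAndCompute(gray, None)

# keep the N strongest keypoints to get a fixed-size network input
N = 64
order = np.argsort([-k.response for k in kp])[:N]
feats = torch.from_numpy(desc[order])  # shape (N, 128) if enough keypoints were found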
| https://stackoverflow.com/questions/63313842/ |
How would I apply a nn.conv1d manually, given an input matrix and weight matrix? | I am trying to understand how a nn.conv1d processes an input for a specific example related to audio processing in a WaveNet model.
I have input data of shape (1,1,8820), which passes through an input layer (1,16,1), to output a shape of (1,16,8820).
That part I understand, because you can just multiply the two matrices. The next layer is a conv1d, kernel size=3, input channels=16, output channels=16, so the state dict shows a matrix with shape (16,16,3) for the weights. When the input of (1,16,8820) goes through that layer, the result is another (1,16,8820).
What multiplication steps occur within the layer to apply the weights to the audio data? In other words, if I wanted to apply the layer(forward calculations only) using only the input matrix, the state_dict matrix, and numpy, how would I do that?
This example is using the nn.conv1d layer from Pytorch. Also, if the same layer had a dilation=2, how would that change the operations?
| A convolution is a specific type of "sliding window operation": that is, applying the same function/operation on overlapping sliding windows of the input.
In your example, you treat each 3 overlapping temporal samples (each in 16 dimensions) as an input to 16 filters. Therefore, you have a weight matrix of 3x16x16.
You can think of it as "unfolding" the (1, 16, 8820) signal into (1, 16*3, 8820) sliding windows. Then multiplying by 16*3 x 16 weight matrix to get an output of shape (1, 16, 8820).
Padding, dilation and strides affect the way the "sliding windows" are formed.
See nn.Unfold for more information.
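For a concrete check, here is a minimal sketch (shapes and seed are illustrative) that reproduces nn.Conv1d with kernel_size=3 and padding=1 by unfolding the padded signal into sliding windows and contracting with the weight tensor:
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 16, 8820)
conv = torch.nn.Conv1d(16, 16, kernel_size=3, padding=1)

x_pad = F.pad(x, (1, 1))          # (1, 16, 8822)
windows = x_pad.unfold(2, 3, 1)   # (1, 16, 8820, 3) sliding windows of length 3
# conv.weight has shape (16, 16, 3); contract over in-channels and kernel taps
manual = torch.einsum('bclk,ock->bol', windows, conv.weight) + conv.bias[None, :, None]

print(torch.allclose(manual, conv(x), atol=1e-5))  # True
With dilation=2, the three taps are read 2 samples apart (positions t, t+2, t+4), so the effective receptive field widens while the weight shape stays (16, 16, 3).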
| https://stackoverflow.com/questions/63321573/ |
Pinned memory in LibTorch | I might be missing something really fundamental but I couldn't find any explanation in the documentation or online
I'm trying to copy a GPU at::Tensor to a pinned tensor on the CPU, but once I copy it, the CPU tensor is no longer pinned. I assume it just creates a new copy of the GPU tensor and assigns it, but if that's the case, how do you copy to a pre-allocated pinned memory?
My testing code:
at::Tensor gpu = at::randn({1025,1025}, device(at::kCUDA));
at::Tensor pinned = at::empty(gpu.sizes(), device(at::kCPU).pinned_memory(true));
std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
pinned = gpu.to(at::kCPU);
std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
The output is
Is Pinned: true
Is Pinned: false
This also happens with torch:: instead of at::
Tested on Ubuntu 16.04 using LibTorch 1.5.0 compiled from sources
| I found a way, and that's using the copy_ function
...
//pinned = gpu.to(torch::kCPU, true);
pinned.copy_(gpu); // in-place copy into the pre-allocated pinned destination
std::cout << "Is Pinned: " << std::boolalpha << pinned.is_pinned() << std::endl;
This outputs
Is Pinned: true
Is Pinned: true
I guess it makes sense, since the to function returns a new tensor rather than writing into an existing one. Though I would expect some variant of to to allow it.
Oh well, it works this way.
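For reference, the Python API behaves the same way (a small sketch):
import torch

gpu = torch.randn(1025, 1025, device='cuda')
pinned = torch.empty(gpu.size(), pin_memory=True)

pinned.copy_(gpu)          # in-place copy keeps the destination pinned
print(pinned.is_pinned())  # True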
| https://stackoverflow.com/questions/63324584/ |
Welch Transformation to keep the dimensions | I am trying to extract the scipy.signal.welch signal from a temporal 1d time series and since I am not adept in signal processing I have no idea why the dimensions shrink when returned.
I need to concatenate the temporal to the spectral as another channel so if the temporal is of shape:
[batches, channels, sample_length]
then I expect to get after the concatenation:
[batches, 2*channels, sample_length]
but when I try to train my model an error is thrown because the size of the spectral doesn't match the temporal (the temporal size is 16):
size mismatch, m1: [2 x 9], m2: [16 x 16]
I tried to look at the documentation but they don't mention why it shrinks it and how it can be avoided.
| As @SleuthEye said, I set return_onesided to False like so:
scipy.signal.welch(temporal, axis=-1, return_onesided=False, nperseg=temporal.shape[0])
| https://stackoverflow.com/questions/63324935/ |
Finding column index of a PyTorch tensor that has most 1's | I have a PyTorch tensor a shaped like below:
import torch
a = torch.tensor([[[1., 0., 0., 0.]],
[[0., 1., 0., 0.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]],
[[0., 0., 0., 1.]],
[[1., 0., 0., 0.]]])
Each row of the tensor a has 4 elements, 1's and 0's. Say I index the row and the column of this tensor accordingly. So for instance, the entry in the row 0 (uppermost row) is [[1., 0., 0., 0.]], whereas the entry in the column 3 (rightmost column) is [[0., 0., 0., 1., 0., 1., 0.]].
From a given tensor, I want to identify the index of the column where 1. appear most frequently. For example, for the tensor a, the index of such a column would be 0. If there are ties in the number of 1.'s, I still would like to get all of those tied column indices.
How can I do this task easily on Python?
Thank you,
| If your matrix only contains 0 and 1, you can sum the elements of each column and then search for the sum that is the largest:
import numpy as np
# sum over columns (collapse the row and singleton dims)
sumsi = torch.sum(a, dim=(0, 1))
# find where the maximum occurs
col_idx = np.where(sumsi == np.max(sumsi))
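A pure-PyTorch variant that returns all tied column indices (a small sketch on the same a):
col_sums = a.sum(dim=(0, 1))                          # tensor([4., 1., 0., 2.]) for the example above
best = (col_sums == col_sums.max()).nonzero(as_tuple=True)[0]
print(best)                                           # tensor([0]); all ties would be listed here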
| https://stackoverflow.com/questions/63329989/ |
PyTorch DatasetLoader tensor should be a torch tensor. Got | data_dir="D:\ML-ComputerVision\Datasets"
train_transforms=transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(100),
transforms.RandomHorizontalFlip(),
transforms.Normalize([0.5,],[0.5,]),
transforms.ToTensor()])
test_transforms=transforms.Compose([transforms.Normalize([0.5,],[0.5,]),
transforms.ToTensor()])
train_data=datasets.ImageFolder(data_dir + "/Train",transform=train_transforms)
test_data=datasets.ImageFolder(data_dir + "/Test",transform=test_transforms)
trainloader=torch.utils.data.DataLoader(train_data,batch_size=32,shuffle=True)
testloader=torch.utils.data.DataLoader(test_data,batch_size=32,shuffle=False)
images, labels = next(iter(trainloader)) # <-- Error line
I am getting tensor should be a torch tensor. Got <class 'PIL.Image.Image'> error even though I have implemented transforms.ToTensor(). Any ideas how it can be fixed?
| The way you chained your transforms, transforms.Normalize is applied before transforms.ToTensor. But even though RandomRotation, RandomResizedCrop and RandomHorizontalFlip are image transforms that work on PIL images, transforms.Normalize only works on tensors (documentation here).
Simply putting ToTensor before Normalize should work.
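So the fixed training pipeline looks like this (only ToTensor and Normalize swap places):
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(100),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),             # PIL image -> float tensor
                                       transforms.Normalize([0.5,],[0.5,])])  # now receives a tensor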
| https://stackoverflow.com/questions/63331800/ |
Is normalization necessary for regression problem in Neural network | I am learning how to build a neural network using PyTorch.
This formula is the target of my code:
y =2X^3 + 7X^2 - 8*X + 120
It is a regression problem.
I used this because it is simple and the output can be calculated so that I can ensure my neural network is able to predict output with the given input.
However, I met some problem during training.
The problem occurs in this line of code:
loss = loss_func(prediction, outputs)
The loss computed in this line is NAN (not a number)
I am using MSEloss as the loss function. 100 datasets are used for training the ANN model. The input X_train is ranged from -1000 to 1000.
I believed that the problem is owing to the value of X_train and MSEloss. X_train should be scaled into some values between 0 and 1 so that MSEloss can compute the loss.
However, is it possible to train the ANN model without scaling the input into value between 0 and 1 in a regression problem?
Here is my code, it does not use MinMaxScaler and it print the loss with NAN:
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torch.autograd import Variable
#Load datasets
dataset = pd.read_csv('test_100.csv')
x_temp_train = dataset.iloc[:79, :-1].values
y_temp_train = dataset.iloc[:79, -1:].values
x_temp_test = dataset.iloc[80:, :-1].values
y_temp_test = dataset.iloc[80:, -1:].values
#Turn into tensor
X_train = torch.FloatTensor(x_temp_train)
Y_train = torch.FloatTensor(y_temp_train)
X_test = torch.FloatTensor(x_temp_test)
Y_test = torch.FloatTensor(y_temp_test)
#Define a Artifical Neural Network
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.linear = nn.Linear(1,1) #input=1, output=1, bias=True
def forward(self, x):
x = self.linear(x)
return x
net = Net()
print(net)
#Define a Loss function and optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()
#Training
inputs = Variable(X_train)
outputs = Variable(Y_train)
for i in range(100): #epoch=100
prediction = net(inputs)
loss = loss_func(prediction, outputs)
optimizer.zero_grad() #zero the parameter gradients
loss.backward() #compute gradients(dloss/dx)
optimizer.step() #updates the parameters
if i % 10 == 9: #print every 10 mini-batches
#plot and show learning process
plt.cla()
plt.scatter(X_train.data.numpy(), Y_train.data.numpy())
plt.plot(X_train.data.numpy(), prediction.data.numpy(), 'r-', lw=2)
plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 10, 'color': 'red'})
plt.pause(0.1)
plt.show()
Thanks for your time.
|
Is normalization necessary for regression problem in Neural Network?
No.
But...
I can tell you that MSELoss works with non-normalised values. You can tell because:
>>> import torch
>>> torch.nn.MSELoss()(torch.randn(1)-1000, torch.randn(1)+1000)
tensor(4002393.)
MSE is a very well-behaved loss function, and you can't really get NaN without giving it a NaN. I would bet that your model is giving a NaN output.
The two most common causes of a NaN are: an accidental divide by 0, and absurdly large weights/gradients.
I ran a variant of your code on my machine using:
x = torch.randn(79, 1)*1000
y = 2*x**3 + 7*x**2 - 8*x + 120
And it got to NaN in about 20 training steps due to absurdly large weights.
A model can get absurdly large weights if the learning rate is too large. You may think 0.2 is not too large, but that's a typical learning rate people use for normalised data, which forces their gradients to be fairly small. Since you are not using normalised data, let's calculate how large your gradients are (roughly).
First, your x is on the order of 1e3, your expected output y scales as x^3, then MSE calculates (pred - y)^2. So your loss is on the scale of (1e3^3)^2 = 1e18. This propagates to your gradients, and recall that weight updates are += gradient*learning_rate, so it's easy to see why your weights fairly quickly explode outside of float precision.
How to fix this? Well you could use a learning rate of 2e-7. Or you could just normalise your data. I recommend normalising your data; it has other nice properties for training and avoids these kinds of problems.
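As a sketch, standardizing with statistics from the training split only (reusing the variable names from your code):
x_mean, x_std = X_train.mean(), X_train.std()
y_mean, y_std = Y_train.mean(), Y_train.std()

X_train = (X_train - x_mean) / x_std
Y_train = (Y_train - y_mean) / y_std
# apply the SAME training statistics to the test split
X_test = (X_test - x_mean) / x_std
Y_test = (Y_test - y_mean) / y_std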
| https://stackoverflow.com/questions/63333466/ |
PyTorch - get list of sums of 2D tensors in 3D tensor | I have a 3D tensor consisting of 2D tensors, e. g.:
t = torch.tensor([[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]],
[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]],
[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]]
])
I need a list or tensor of sums of those 2D tensors, e. g.: sums = [3, 3, 3]. So far I have:
sizes = [torch.sum(t[i]) for i in range(t.shape[0])]
I think this can be done with PyTorch only, but I've tried using torch.sum() with all possible dimensions and I always get sums over the individual fields of those 2D tensors, e. g.:
[[0, 0, 3],
[0, 3, 0],
[3, 0, 0]]
How can this be done in PyTorch?
| You can do it all at once by passing the dims as a tuple. Each 2D matrix lives in dims 1 and 2, so sum over those:
t.sum(dim=(1, 2))
tensor([3, 3, 3])
or, for a plain Python list:
t.sum(dim=(1, 2)).tolist()
[3, 3, 3]
| https://stackoverflow.com/questions/63335526/ |
Error invalid syntax in pytorch nn module | I have an error in my pytorch code, and i really don't understand why.
File "<ipython-input-11-89006c750b74>", line 3
def GaussianBlur(torch.nn.Module):
^
SyntaxError: invalid syntax
The rest of the code is here
import torch.nn as nn
def GaussianBlur(torch.nn.Module):
def __init__(self, kernel_size, std_dev):
self.kernel_size = kernel_size
self.std_dev = std_dev
def forward(self, img):
image = np.array(img)
image_blur = cv2.GaussianBlur(image, self.kernel_size, self.std_dev)
return Image.fromarray(image_blur)
Anyone knows what's going on? Thanks
| def is for functions, class for classes. This should work :
class GaussianBlur(torch.nn.Module):
| https://stackoverflow.com/questions/63337117/ |
Training a CNN model with DataLoader on a GPU in PyTorch | I am trying to classify chest x-rays into two categories: 'normal', and 'pneumonia'. My training and test sets are DataLoader objects with num_workers=0, pin_memory=True. Cuda is available on my device (GTX 1060 6GB). After creating the CNN I call
model = CNN().cuda(). When I try to train the model I get RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'. What do I have to change in order to train the model on my GPU?
The code:
root = 'chest_xray/'
train_data = datasets.ImageFolder(os.path.join(root, 'train'), transform=train_transform)
test_data = datasets.ImageFolder(os.path.join(root, 'test'), transform=test_transform)
train_loader = DataLoader(train_data,batch_size=10,shuffle=True,num_workers=0,pin_memory=True)
test_loader = DataLoader(test_data,batch_size=10,shuffle=False,num_workers=0,pin_memory=True)
class_names = train_data.classes
class CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 12, 5, 1)
self.conv2 = nn.Conv2d(12, 24, 5, 1)
self.conv3 = nn.Conv2d(24, 30, 5, 1)
self.conv4 = nn.Conv2d(30, 36, 5, 1)
self.fc1 = nn.Linear(58*58*36, 256)
self.fc2 = nn.Linear(256, 144)
self.fc3 = nn.Linear(144, 84)
self.fc4 = nn.Linear(84, 16)
self.fc5 = nn.Linear(16, 2)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv3(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv4(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 58*58*36)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = self.fc5(x)
return F.log_softmax(x, dim=1)
model = CNN().cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
epochs = 2
train_losses = []
train_correct = []
for i in range(epochs):
trn_correct = 0
tst_correct = 0
for b, (X_train, y_train) in enumerate(train_loader):
y_pred = model(X_train)
loss = criterion(y_pred, y_train)
predicted = torch.max(y_pred.data, 1)[1]
batch_correct = (predicted == y_train).sum()
trn_correct += batch_correct
optimizer.zero_grad()
loss.backward()
optimizer.step()
if b % 200 == 0:
print(f'epoch: {i+1} batch: {b} progress: {10*b/len(train_data)} loss: {loss.item()} accuracy: {10*trn_correct/b}%')
train_losses.append(loss)
train_correct.append(trn_correct)
| After this line:
for b, (X_train, y_train) in enumerate(train_loader):
Add the following:
X_train, y_train = X_train.cuda(), y_train.cuda()
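A device-agnostic variant of the same fix (a common pattern, equivalent here):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = CNN().to(device)

for b, (X_train, y_train) in enumerate(train_loader):
    X_train, y_train = X_train.to(device), y_train.to(device)
    y_pred = model(X_train)
    # ... rest of the loop unchanged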
| https://stackoverflow.com/questions/63339144/ |
when we reshape numpy array, how is the stride size inferred? | I have an 1x1024 1-d array (flattened image). To see the image, I want to reshape its size as 32x32.
I can easily achieve this by doing x.reshape(-1,32) and it works as I intended. It doesn't scramble the image: it reads the 1d array with a 32-wide stride each time.
Say this time, there are 4 images, sized 32 by 8. What's the safe way to reshape this?
What is the logic behind how strides are defined? Is it always starting from the largest dimension (say, 3d->2d->1d)? It seems like it...
In [2]: a = np.arange(1024)
In [3]: a.reshape(4,32,8)
Out[3]:
array([[[ 0, 1, 2, ..., 5, 6, 7],
[ 8, 9, 10, ..., 13, 14, 15],
[ 16, 17, 18, ..., 21, 22, 23],
...,
[ 232, 233, 234, ..., 237, 238, 239],
[ 240, 241, 242, ..., 245, 246, 247],
[ 248, 249, 250, ..., 253, 254, 255]],
[[ 256, 257, 258, ..., 261, 262, 263],
[ 264, 265, 266, ..., 269, 270, 271],
[ 272, 273, 274, ..., 277, 278, 279],
...,
[ 488, 489, 490, ..., 493, 494, 495],
[ 496, 497, 498, ..., 501, 502, 503],
[ 504, 505, 506, ..., 509, 510, 511]],
[[ 512, 513, 514, ..., 517, 518, 519],
[ 520, 521, 522, ..., 525, 526, 527],
[ 528, 529, 530, ..., 533, 534, 535],
...,
[ 744, 745, 746, ..., 749, 750, 751],
[ 752, 753, 754, ..., 757, 758, 759],
[ 760, 761, 762, ..., 765, 766, 767]],
[[ 768, 769, 770, ..., 773, 774, 775],
[ 776, 777, 778, ..., 781, 782, 783],
[ 784, 785, 786, ..., 789, 790, 791],
...,
[1000, 1001, 1002, ..., 1005, 1006, 1007],
[1008, 1009, 1010, ..., 1013, 1014, 1015],
[1016, 1017, 1018, ..., 1021, 1022, 1023]]])
In [4]: a.reshape(4,-1,8)
Out[4]:
array([[[ 0, 1, 2, ..., 5, 6, 7],
[ 8, 9, 10, ..., 13, 14, 15],
[ 16, 17, 18, ..., 21, 22, 23],
...,
[ 232, 233, 234, ..., 237, 238, 239],
[ 240, 241, 242, ..., 245, 246, 247],
[ 248, 249, 250, ..., 253, 254, 255]],
[[ 256, 257, 258, ..., 261, 262, 263],
[ 264, 265, 266, ..., 269, 270, 271],
[ 272, 273, 274, ..., 277, 278, 279],
...,
[ 488, 489, 490, ..., 493, 494, 495],
[ 496, 497, 498, ..., 501, 502, 503],
[ 504, 505, 506, ..., 509, 510, 511]],
[[ 512, 513, 514, ..., 517, 518, 519],
[ 520, 521, 522, ..., 525, 526, 527],
[ 528, 529, 530, ..., 533, 534, 535],
...,
[ 744, 745, 746, ..., 749, 750, 751],
[ 752, 753, 754, ..., 757, 758, 759],
[ 760, 761, 762, ..., 765, 766, 767]],
[[ 768, 769, 770, ..., 773, 774, 775],
[ 776, 777, 778, ..., 781, 782, 783],
[ 784, 785, 786, ..., 789, 790, 791],
...,
[1000, 1001, 1002, ..., 1005, 1006, 1007],
[1008, 1009, 1010, ..., 1013, 1014, 1015],
[1016, 1017, 1018, ..., 1021, 1022, 1023]]])
In [5]: a.reshape(4,8,32)
Out[5]:
array([[[ 0, 1, 2, ..., 29, 30, 31],
[ 32, 33, 34, ..., 61, 62, 63],
[ 64, 65, 66, ..., 93, 94, 95],
...,
[ 160, 161, 162, ..., 189, 190, 191],
[ 192, 193, 194, ..., 221, 222, 223],
[ 224, 225, 226, ..., 253, 254, 255]],
[[ 256, 257, 258, ..., 285, 286, 287],
[ 288, 289, 290, ..., 317, 318, 319],
[ 320, 321, 322, ..., 349, 350, 351],
...,
[ 416, 417, 418, ..., 445, 446, 447],
[ 448, 449, 450, ..., 477, 478, 479],
[ 480, 481, 482, ..., 509, 510, 511]],
[[ 512, 513, 514, ..., 541, 542, 543],
[ 544, 545, 546, ..., 573, 574, 575],
[ 576, 577, 578, ..., 605, 606, 607],
...,
[ 672, 673, 674, ..., 701, 702, 703],
[ 704, 705, 706, ..., 733, 734, 735],
[ 736, 737, 738, ..., 765, 766, 767]],
[[ 768, 769, 770, ..., 797, 798, 799],
[ 800, 801, 802, ..., 829, 830, 831],
[ 832, 833, 834, ..., 861, 862, 863],
...,
[ 928, 929, 930, ..., 957, 958, 959],
[ 960, 961, 962, ..., 989, 990, 991],
[ 992, 993, 994, ..., 1021, 1022, 1023]]])
| reshape does not reorder the underlying values. The array is stored as a 1d array of bytes, plus shape, strides and dtype, which are used to view it as a particular multidimensional array.
You can look at the strides attribute:
In [513]: arr = np.arange(1024)
In [514]: arr.shape, arr.strides
Out[514]: ((1024,), (8,))
In [515]: arr1=arr.reshape(32,32);arr1.shape, arr1.strides
Out[515]: ((32, 32), (256, 8))
In [516]: arr1=arr.reshape(4,32,8);arr1.shape, arr1.strides
Out[516]: ((4, 32, 8), (2048, 64, 8))
With 1d, it just steps 8 bytes at a time (the size of the int64)
With 2d, 256=32*8; to traverse rows it has to step 256 bytes
With 3d, 2048 = 32 * 8 * 8; the step between blocks.
And for fun, look at a transpose:
In [517]: arr1=arr.reshape(4,32,8).T;arr1.shape, arr1.strides
Out[517]: ((8, 32, 4), (8, 64, 2048))
shape has been reversed, and so has strides.
Often when reshaping an image array into blocks, we need to reshape into small blocks, do partial transpose, and reshape to a target. The first reshape and transpose create a view, just playing with the shapes and strides. But the last reshape often requires a copy.
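For example, here is a sketch of that block pattern on the same 32x32 image:
import numpy as np

img = np.arange(1024).reshape(32, 32)
# view the image as a 4x4 grid of 8x8 blocks
blocks = img.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3)  # (4, 4, 8, 8), still a view
flat = blocks.reshape(16, 8, 8)  # the view is non-contiguous, so this reshape copies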
| https://stackoverflow.com/questions/63343933/ |
Word-sense disambiguation based on sets of words using pre-trained embeddings | I am interested in identifying the WordNet synset IDs for each word in a set of tags.
The words in the set provide the context for the word sense disambiguation, such as:
{mole, skin}
{mole, grass, fur}
{mole, chemistry}
{bank, river, river bank}
{bank, money, building}
I know of the lesk algorithm and libraries, such as pywsd, which is based on 10+ year old tech (which may still be cutting edge -- that is my question).
Are there better performing algorithms by now that make sense of pre-trained embeddings, like GloVe, and maybe the distances of these embeddings to each other?
Are there ready-to-use implementations of such WSD algorithms?
I know this question is close to the danger zone of asking for subjective preferences - as in this 5-year old thread. But I am not asking for an overview of options or the best software for a problem.
| Transfer learning, particularly models like Allen AI's ELMo, OpenAI's GPT, and Google's BERT, allowed researchers to smash multiple benchmarks with minimal task-specific fine-tuning and provided the rest of the NLP community with pretrained models that could easily (with less data and less compute time) be fine-tuned and implemented to produce state-of-the-art results.
These representations will help you accurately retrieve results matching the intended sense and contextual meaning, even if there's no keyword or phrase overlap.
To start off, embeddings are simply (moderately) low dimensional representations of a point in a higher dimensional vector space.
By translating a word to an embedding it becomes possible to model the semantic importance of a word in a numeric form and thus perform mathematical operations on it.
When this first became possible with the word2vec model, it was an amazing breakthrough. From there, many more advanced models surfaced which capture not only a static semantic meaning but also a contextualized meaning. For instance, consider the two sentences below:
I like apples.
I like Apple macbooks
Note that the word apple has a different semantic meaning in each sentence. Now with a contextualized language model, the embedding of the word apple would have a different vector representation which makes it even more powerful for NLP tasks.
Contextual embeddings like BERT offer an advantage over models like Word2Vec: while each word has a fixed representation under Word2Vec regardless of the context within which the word appears, BERT produces word representations that are dynamically informed by the words around them.
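A quick way to see this in practice (a sketch using the Hugging Face transformers library, v4+; the model name and sentences are just examples):
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vec(sentence, word):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

v1 = word_vec("i ate a juicy apple", "apple")
v2 = word_vec("apple released a new laptop", "apple")
print(torch.cosine_similarity(v1, v2, dim=0))  # noticeably below 1.0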
| https://stackoverflow.com/questions/63348829/ |
Conda environment : several environment files - specify cpu-only version of Pytorch | I am using conda 4.8.3 with Python 3.7, I am writing environment files to specify the dependencies of my project. I would like to write several files to be able to install several environments:
main.yml : containing the dependencies of my project, and pytorch CPU-only version
dev.yml : containing dev tools (mypy, flake8, pytest ..)
gpu.yml : containing pytorch-GPU (with a specified version of CUDA)
To get a basic (CPU) installation one would write: conda env update --file main.yml
To get an installation with GPU compatibility one would then add: conda env update --file gpu.yml
Here is my question : at the moment I cannot find the right way to specify the 'CPU-only' criteria for pytorch in an environment file, does anyone know if it is doable?
The command usually used for that purpose is conda install pytorch torchvision cpuonly -c pytorch, but I cannot find a way to specify it in the yml file.
On the pytorch channel site, there is a pytorch-cpu package, but its version is quite outdated (1.1.0, while the current main is 1.6.0).
Here is my main.yml environment file:
name: my_env
channels:
- intel
- conda-forge
- pytorch
dependencies:
- numpy
- scipy
- scikit-image
- matplotlib
- wxpython
- colorama
- dill
- protobuf
- pytorch # How to specify the 'cpu' criteria here??
- torchvision
- pip:
- -r env/requirements.txt
| In case anyone else is looking for the answer -- I tried what AMC suggested in a comment above. I can confirm that adding the line:
- cpuonly
to my environment.yml file forced the CPU version of pytorch to be downloaded.
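So a CPU-only main.yml might look like this (a trimmed sketch of the relevant entries):
name: my_env
channels:
  - pytorch
  - conda-forge
dependencies:
  - pytorch
  - torchvision
  - cpuonly   # pins pytorch to the CPU build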
| https://stackoverflow.com/questions/63356412/ |
ImportError: TensorBoard logging requires TensorBoard version 1.15 or above | I follow the tutorials in pytorch.org
It raises the error "TensorBoard logging requires TensorBoard version 1.15 or above", but I have already installed TensorBoard.
Here is the code:
#from torch.utils.tensorboard import SummaryWriter
from tensorboardX import SummaryWriter
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
#get some random training images
dataiter = iter(trainloader)
images , labels = dataiter.next()
#create grid of images
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid,one_channel=True)
writer.add_image('four_fashion_images',img_grid)
writer.add_graph(net, images)
writer.close()
Error:
ImportError Traceback (most recent call last)
<ipython-input-12-d38808675cb4> in <module>
----> 1 writer.add_graph(net, images)
2 writer.close()
~\anaconda3\envs\torch2\lib\site-packages\tensorboardX\writer.py in add_graph(self, model, input_to_model, verbose)
791
792 """
--> 793 from torch.utils.tensorboard._pytorch_graph import graph
794 self._get_file_writer().add_graph(graph(model, input_to_model, verbose))
795
~\anaconda3\envs\torch2\lib\site-packages\torch\utils\tensorboard\__init__.py in <module>
2 from distutils.version import LooseVersion
3 if not hasattr(tensorboard, '__version__') or LooseVersion(tensorboard.__version__) < LooseVersion('1.15'):
----> 4 raise ImportError('TensorBoard logging requires TensorBoard version 1.15 or above')
5 del LooseVersion
6 del tensorboard
ImportError: TensorBoard logging requires TensorBoard version 1.15 or above
Environment:
tensorboard 2.3.0 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorboardx 2.1 pypi_0 pypi
tensorflow 1.2.1 py36_0 defaults
pytorch 1.6.0 py3.6_cuda102_cudnn7_0 pytorch
torchvision 0.7.0 py36_cu102 pytorch
future 0.18.2 py36_1 defaults
protobuf 3.12.3 py36h33f27b4_0 defaults
I used from torch.utils.tensorboard import SummaryWriter at the beginning, but it raised the same error as above. Then I switched to from tensorboardX import SummaryWriter.
| Uninstall tensorflow, tensorboard, tensorboardx and tensorboard-plugin-wit.
Install only tensorboard with conda after that.
If this doesn't work, recreate your conda environment only with tensorboard. If you need tensorflow as well install it beforehand.
EDIT:
tensorboard-plugin-wit is a dependency of tensorboard and should be installed automatically as per their pypi description when installing tensorboard itself.
| https://stackoverflow.com/questions/63357718/ |
rolling statistics in numpy or pytroch | I have a tensors data of sensors, each tensor is of shape (4,1500)
This is 1500 timepoints and for each time point I have 4 features.
I want to "smooth" the sequences with rolling average or other rolling statistics. The end goal is to try to improve an lstm autoencoder with rolling statistics instead of the long raw sequence.
I am familiar with rolling windows of pandas and currently I am doing this:
#tensor shape:
data.shape
(4,1500)
#convert data to numpy array and then to dataframe and perform rolling mean
rolled_data=pd.DataFrame(data.numpy().swapaxes(1,0)).rolling(10).mean()[::10]
rolled_data.shape
(150, 4)
# convert back the dataframe to tensor
tensor_rolled_data=torch.Tensor(rolled_data.to_numpy().swapaxes(1,0))
tensor_rolled_data.shape
torch.Size([4, 150])
My question is: is there a better way to do it? A function in numpy/torch that can do rolling statistics in a cleaner or more efficient way?
| Since you're striding the output by the size of the window, this is actually more akin to downsampling by averaging than to computing rolling statistics. We can take advantage of the fact that there are no overlaps by simply reshaping the initial tensor.
Using Tensor.reshape
Assuming your data tensor has a shape divisible by 10 then you can just reshape the tensor to shape (4, 150, 10) and compute the statistic along the last dimension. For example
win_size = 10
tensor_rolled_data = data.reshape(data.shape[0], -1, win_size).mean(dim=2)
This solution doesn't give exactly the same results as your tensor_rolled_data since in this solution the first entry will contain the mean of the first 10 samples, the second entry will contain the mean of the second 10 samples, etc... The pandas solution is a "causal filter" so the first entry will contain the mean of the 10 most recent samples up to and including sample 0, the second will contain the 10 most recent samples up to and including sample 10, etc... (Note that the first entry is nan in the pandas solution since less than 10 preceding samples exist).
If this difference is unacceptable you can recreate the pandas result by first padding with 9 nan values and clipping off the last 9 samples.
import torch.nn.functional as F
win_size = 10
# pad with `nan` to match behavior of pandas
data_padded = F.pad(data[None, :, :-(win_size - 1)], (win_size - 1, 0), 'constant', float('nan')).squeeze(0)
# find mean of groups of N samples
tensor_rolled_data = data_padded.reshape(data.shape[0], -1, win_size).mean(dim=2)
Using Tensor.unfold
To address the comment about what to do when there are overlaps. If you're only interested in the mean statistic then there are a number of ways to compute this (e.g. convolution, average pooling, tensor unfolding). That said, Tensor.unfold gives the most general solution since it could be used to compute any statistic over a window. For example
# same as first example above
win_size = 10
tensor_rolled_data = data.unfold(dimension=1, size=win_size, step=win_size).mean(dim=2)
or
# same as second example above
import torch.nn.functional as F
win_size = 10
data_padded = F.pad(data.unsqueeze(0), (win_size - 1, 0), 'constant', float('nan')).squeeze(0)
tensor_rolled_data = data_padded.unfold(dimension=1, size=win_size, step=win_size).mean(dim=2)
In the above cases, unfolding produces the same result as reshape since size and step are equal. However, unlike reshape, unfolding also supports size != step.
win_size = 10
stride = 2
tensor_rolled_data = data.unfold(1, win_size, stride).mean(dim=2).mean(dim=2)
# produces shape [4, 746]
or you can pad the front of the features with win_size - 1 values to achieve the same result as pandas.
import torch.nn.functional as F
win_size = 10
stride = 2
data_padded = F.pad(data.unsqueeze(0), (win_size - 1, 0), 'constant', float('nan')).squeeze(0)
tensor_rolled_data = data_padded.unfold(1, win_size, stride).mean(dim=2)
# produces shape [4, 750]
Note In practice you probably don't want to pad with NaN since this will probably become quite a headache. Instead you could use zero padding, 'replicate' padding, or 'mirror' padding.
| https://stackoverflow.com/questions/63361688/ |
Numpy from an array, for each element create a matrix N*M with all values set as the element without for loops | I have a numpy array like np.array([1, 2, 3])
Without using for loops but just use numpy or pytorch methods, i want a matrix with dimension len(array) * N * M composed by matrices N*M that the first matrix is composed by all ones, the second only by two values and the third only by 3 values.
For istance
N = 4 M = 3
[[[1,1,1,1],[1,1,1,1],[1,1,1,1]],
[[2,2,2,2],[2,2,2,2],[2,2,2,2]],
[[3,3,3,3],[3,3,3,3],[3,3,3,3]]]
I tried different methods to achive this matrix like unsqueeze and repeat but i wasn't able to find a solution, any suggestion?
| Here you go (with a = np.array([1, 2, 3]) and M, N = 3, 4 from the question):
np.ones((len(a),M,N)) * a[:,None,None]
Or without multiplication:
np.full((len(a),M,N), a[:,None,None])
Output:
array([[[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[2., 2., 2., 2.],
[2., 2., 2., 2.],
[2., 2., 2., 2.]],
[[3., 3., 3., 3.],
[3., 3., 3., 3.],
[3., 3., 3., 3.]]])
Note the shape here and given in your expected output is len(a) * M * N, not len(a) * N * M. Swap M,N in np.ones if needed.
| https://stackoverflow.com/questions/63367579/ |
How to compute the validation loss? (Simple linear regression) | I am currently learning how to use PyTorch to build a neural network. I have learned keras before and I would like to do the same thing in PyTorch like 'model.fit' and plotting a graph containing both training loss and validation loss.
In order to know whether the model is underfitting or not, I have to plot a graph to compare the training loss and validation loss.
However, I cannot compute the right validation loss. I know that gradients should only be updated during training and it should not be updated during testing/validation. With no change in gradients, does it mean the loss will not change? Sorry, my concept is not clear enough. But I think not, loss should be computed by comparing expected output and prediction using loss function.
In my code, 80 datasets are used for training and 20 datasets are used for validation. In my code, the neural network is prediction this formula: y =2X^3 + 7X^2 - 8*X + 120
It is easy to compute so I use this for learning how to build neural network through PyTorch.
Here is my code:
import torch
import torch.nn as nn #neural network model
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torch.autograd import Variable
from sklearn.preprocessing import MinMaxScaler
#Load datasets
dataset = pd.read_csv('test_100.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, -1:].values
X_scaler = MinMaxScaler()
Y_scaler = MinMaxScaler()
print(X_scaler.fit(X))
print(Y_scaler.fit(Y))
X = X_scaler.transform(X)
Y = Y_scaler.transform(Y)
x_temp_train = X[:79]
y_temp_train = Y[:79]
x_temp_test = X[80:]
y_temp_test = Y[80:]
X_train = torch.FloatTensor(x_temp_train)
Y_train = torch.FloatTensor(y_temp_train)
X_test = torch.FloatTensor(x_temp_test)
Y_test = torch.FloatTensor(y_temp_test)
D_in = 1 # D_in is input features
H = 24 # H is hidden dimension
D_out = 1 # D_out is output features.
#Define a Artifical Neural Network model
class Net(nn.Module):
#------------------Two Layers------------------------------
def __init__(self, D_in, H, D_out):
super(Net, self).__init__()
self.linear1 = nn.Linear(D_in, H)
self.linear2 = nn.Linear(H, D_out)
def forward(self, x):
h_relu = self.linear1(x).clamp(min=0)
prediction = self.linear2(h_relu)
return prediction
model = Net(D_in, H, D_out)
print(model)
#Define a Loss function and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.2) #2e-7
#Training
inputs = Variable(X_train)
outputs = Variable(Y_train)
inputs_val = Variable(X_test)
outputs_val = Variable(Y_test)
loss_values = []
val_values = []
epoch = []
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
optimizer.zero_grad() #zero the parameter gradients
model.eval() # Set model to evaluate mode
for i in range(50): #epoch=50
if phase == 'train':
model.train()
prediction = model(inputs)
loss = criterion(prediction, outputs)
print('train loss')
print(loss)
loss_values.append(loss.detach())
optimizer.zero_grad() #zero the parameter gradients
epoch.append(i)
loss.backward() #compute gradients(dloss/dx)
optimizer.step() #updates the parameters
elif phase == 'val':
model.eval()
prediction_val = model(inputs_val)
loss_val = criterion(prediction_val, outputs_val)
print('validation loss')
print(loss)
val_values.append(loss_val.detach())
optimizer.zero_grad() #zero the parameter gradients
plt.plot(epoch,loss_values)
plt.plot(epoch, val_values)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','validation'], loc='upper left')
plt.show()
Here is the result:
train loss
tensor(0.9788, grad_fn=<MseLossBackward>)
tensor(2.0834, grad_fn=<MseLossBackward>)
tensor(3.2902, grad_fn=<MseLossBackward>)
tensor(0.8851, grad_fn=<MseLossBackward>)
tensor(0.0832, grad_fn=<MseLossBackward>)
tensor(0.0402, grad_fn=<MseLossBackward>)
tensor(0.0323, grad_fn=<MseLossBackward>)
tensor(0.0263, grad_fn=<MseLossBackward>)
tensor(0.0217, grad_fn=<MseLossBackward>)
tensor(0.0181, grad_fn=<MseLossBackward>)
tensor(0.0153, grad_fn=<MseLossBackward>)
tensor(0.0132, grad_fn=<MseLossBackward>)
tensor(0.0116, grad_fn=<MseLossBackward>)
tensor(0.0103, grad_fn=<MseLossBackward>)
tensor(0.0094, grad_fn=<MseLossBackward>)
tensor(0.0087, grad_fn=<MseLossBackward>)
tensor(0.0081, grad_fn=<MseLossBackward>)
tensor(0.0077, grad_fn=<MseLossBackward>)
tensor(0.0074, grad_fn=<MseLossBackward>)
tensor(0.0072, grad_fn=<MseLossBackward>)
tensor(0.0070, grad_fn=<MseLossBackward>)
tensor(0.0068, grad_fn=<MseLossBackward>)
tensor(0.0067, grad_fn=<MseLossBackward>)
tensor(0.0067, grad_fn=<MseLossBackward>)
tensor(0.0066, grad_fn=<MseLossBackward>)
tensor(0.0065, grad_fn=<MseLossBackward>)
tensor(0.0065, grad_fn=<MseLossBackward>)
tensor(0.0065, grad_fn=<MseLossBackward>)
tensor(0.0064, grad_fn=<MseLossBackward>)
tensor(0.0064, grad_fn=<MseLossBackward>)
tensor(0.0064, grad_fn=<MseLossBackward>)
tensor(0.0064, grad_fn=<MseLossBackward>)
tensor(0.0063, grad_fn=<MseLossBackward>)
tensor(0.0063, grad_fn=<MseLossBackward>)
tensor(0.0063, grad_fn=<MseLossBackward>)
tensor(0.0063, grad_fn=<MseLossBackward>)
tensor(0.0063, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0062, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
validation loss
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
tensor(0.0061, grad_fn=<MseLossBackward>)
Train Loss Vs. Validation Loss
The validation loss is a flat line. It is not what I want.
| You should run validation after each training epoch to "validate" your model's capabilities. Also, in the validation phase the model parameters don't change, so it is expected that you get a constant loss when all your validation passes run back-to-back at the end. Your code should be structured as follows:
training epoch 1
validation
training epoch 2
validation
...
Don't forget to use loss.item() instead of loss alone when accumulating and averaging your losses, because loss is a tensor carrying a grad_fn, not a plain float value.
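A minimal restructuring of your loop along those lines (a sketch built from your own variables):
for i in range(50):  # epochs
    # ---- train ----
    model.train()
    prediction = model(inputs)
    loss = criterion(prediction, outputs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    loss_values.append(loss.item())

    # ---- validate ----
    model.eval()
    with torch.no_grad():
        loss_val = criterion(model(inputs_val), outputs_val)
    val_values.append(loss_val.item())
    epoch.append(i)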
| https://stackoverflow.com/questions/63369206/ |
How to find the source code of torch.solve? | I am concerned whether torch.solve() examine the condition of the coefficient matrix for a linear system and employ desirable preconditionings; thus I am curious about its implementation details. I have read through several answers trying to track down the source file but in vain. I hope somebody can help me to locate its definition in the ATen library.
| I think it just uses LAPACK for CPU and CUBLAS for GPU, since torch.solve is listed under "BLAS and LAPACK Operations" on the official docs.
Then we're looking for wrapper code, which I believe is this part.
| https://stackoverflow.com/questions/63369340/ |
Why does my autoencoder for de-noising in PyTorch learns to zero out everything? | I need to build a de-noising autoencoder using pytorch for the task of cleaning out signals.
For example, I can take the cosine function and sample it in intervals, where I have two parameters, B and K: B is the number of intervals I take in each example and K is how many sampling points (equally spaced) are in each interval. So, for instance, I can take B = 5 intervals and measure K = 8 points in each interval; hence the distance between each point is 2pi / 8 and I have a total of 40 points. The number of functions I try to generalize over is L, and I treat these like different channels. Then I add a random starting position for each example (to make it slightly different), then random noise, and send it to the autoencoder to train.
The thing is, no matter the architecture or learning rate, it gradually learns to output nothing but zeros. The autoencoder is super simple so I don't reckon there's a problem with it, but rather a problem with how I generate my data.
I attach both of the codes anyways:
class ConvAutoencoder(nn.Module):
def __init__(self, enc_channels, dec_channels):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
encoder_layers = []
decoder_layers = []
in_channels = enc_channels[0]
for i in range(1, len(enc_channels)):
out_channels = enc_channels[i]
encoder_layers += [nn.ConvTranspose2d(in_channels, out_channels, kernel_size=1, bias=True),
nn.ReLU()]
in_channels = out_channels
in_channels = dec_channels[0]
for i in range(1, len(dec_channels)):
out_channels = dec_channels[i]
decoder_layers += [nn.ConvTranspose2d(in_channels, out_channels, kernel_size=1, bias=True),
nn.ReLU()]
in_channels = out_channels
self.encoder = nn.Sequential(*encoder_layers)
self.decoder = nn.Sequential(*decoder_layers)
def forward(self, x):
if len(x.shape) == 3:
x = x.unsqueeze(dim=-1)
res = self.decoder(self.encoder(x)).squeeze(-1)
return res
And the data generation is as follows:
def generate_data(batch_size: int, intervals: int, sample_length: int, channels_functions, noise_scale=1)->torch.tensor:
channels = len(channels_functions)
mul_term = 2 * np.pi / sample_length # each sample is 2pi and equally distance
# each example is K * B long
positions = np.arange(0, sample_length * intervals)
x = positions * mul_term
# creating random start points (from negative to positive)
random_starting_pos = (np.random.rand(batch_size) - 0.5) * 10000
start_pos_mat = np.tile(random_starting_pos , (sample_length * intervals, 1))
start_pos_mat = np.tile(start_pos_mat , (channels, 1)).T
start_pos_mat = np.reshape(start_pos_mat , (batch_size, channels, sample_length * intervals))
X = np.tile(x, (channels, 1))
X = np.repeat(X[np.newaxis, :, :], batch_size, axis=0)
X += start_pos_mat #adding the random starting position
# apply each function to a different channel
for i, function in enumerate(channels_functions):
X[:, i, :] = function(X[:, i, :])
clean = X
noise = np.random.normal(scale=noise_scale, size=clean.shape)
noisy = clean + noise
# normalizing each sample
row_sums = np.linalg.norm(clean, axis=2)
clean = clean / row_sums[:, :, np.newaxis]
row_sums = np.linalg.norm(noisy, axis=2)
noisy = noisy / row_sums[:, :, np.newaxis]
clean = torch.from_numpy(clean)
noisy = torch.from_numpy(noisy)
return clean, noisy
Edit - Added the entire training loop:
if __name__ == '__main__':
func_list = [lambda x: np.cos(x),
lambda x: np.cos((x**4) / 10),
lambda x: np.sin(x**3 * np.cos(x**2)),
lambda x: 0.25*np.cos(x**2) - 10*np.sin(0.25*x)]
L = len(func_list)
K = 3
B = 4
enc_channels = [L, 64, 128, 256]
num_epochs = 100
model = models.ConvAutoencoder(enc_channels, enc_channels[::-1])
criterion = torch.nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=1e-5)
for epoch in range(num_epochs):
clean, noisy = util.generate_data(128, K, B, func_list)
# ===================forward=====================
output = model(noisy.float())
loss = criterion(output.float(), clean.float())
# ===================backward====================
optimizer.zero_grad()
loss.backward()
optimizer.step()
# ===================log========================
print('epoch [{}/{}], loss:{:.4f}'.format(epoch + 1, num_epochs, loss.data))
if epoch % 10 == 0:
show_clean, show_noisy = util.generate_data(1, K, B, func_list)
print("clean\n{}".format(show_clean))
print("noisy\n{}".format(show_noisy))
print("denoised\n{}".format(model(show_noisy.float())))
sure enough after like 10 epochs the model outputs:
clean vector
tensor([[[ 0.3611, -0.1905, -0.3611, 0.1905, 0.3611, -0.1905, -0.3611,
0.1905, 0.3611, -0.1905, -0.3611, 0.1905],
[ 0.3387, -0.0575, -0.2506, -0.3531, -0.3035, 0.3451, 0.3537,
-0.2416, 0.2652, -0.3126, -0.3203, -0.1707],
[-0.0369, 0.4412, -0.1323, 0.1802, -0.2943, 0.3590, 0.4549,
0.0827, -0.0164, 0.4350, -0.1413, -0.3395],
[ 0.3997, 0.3516, 0.2451, 0.1136, -0.0458, -0.1944, -0.3225,
-0.3925, -0.3971, -0.3382, -0.2457, -0.1153]]], dtype=torch.float64)
noisy vector
tensor([[[-0.1071, -0.0671, 0.0993, -0.2029, 0.1587, -0.4407, -0.0867,
-0.2598, 0.2426, -0.6939, -0.3011, -0.0870],
[ 0.0889, -0.3415, -0.1434, -0.2393, -0.4708, 0.0144, 0.2620,
-0.1186, 0.6424, 0.0886, -0.2192, -0.1562],
[ 0.1989, 0.2794, 0.0848, -0.2729, -0.2168, 0.1475, 0.5294,
0.4788, 0.1754, 0.2333, -0.0549, -0.3665],
[ 0.3611, 0.3535, 0.1957, 0.1980, -0.1115, -0.1912, -0.2713,
-0.4087, -0.3669, -0.3675, -0.2991, -0.1356]]], dtype=torch.float64)
denoised vector
tensor([[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]], grad_fn=<SqueezeBackward1>)
Thanks
| The problem is that you are using a ReLU in the final layer; however, your target (clean) contains negative values. With a ReLU in the final layer, you cannot produce negative outputs.
Simply replace your decoder with this:
for i in range(1, len(dec_channels)):
out_channels = dec_channels[i]
if i == len(dec_channels) - 1:
# last layer
decoder_layers += [nn.ConvTranspose2d(in_channels, out_channels, kernel_size=1, bias=True)]
else:
decoder_layers += [nn.ConvTranspose2d(in_channels, out_channels, kernel_size=1, bias=True),
nn.ReLU()]
in_channels = out_channels
And then I would suggest using the L2 loss.
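For a signal-reconstruction target like this, that is a one-line change in your training script (everything else stays the same):
criterion = torch.nn.MSELoss()  # L2 reconstruction loss instead of BCELoss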
| https://stackoverflow.com/questions/63377050/ |
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' | I'm running into an issue while calculating the loss for my Neural Net. I'm not sure why the program expects a long object because all my Tensors are in float form. I looked at threads with similar errors and the solution was to cast Tensors as floats instead of longs, but that wouldn't work in my case because all my data is already in float form when passed to the network.
Here's my code:
# Dataloader
from torch.utils.data import Dataset, DataLoader
class LoadInfo(Dataset):
def __init__(self, prediction, indicator):
self.prediction = prediction
self.indicator = indicator
def __len__(self):
return len(self.prediction)
def __getitem__(self, idx):
data = torch.tensor(self.indicator.iloc[idx, :],dtype=torch.float)
data = torch.unsqueeze(data, 0)
label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.float)
sample = {'data': data, 'label': label}
return sample
# Trainloader
test_train = LoadInfo(train_label, train_indicators)
trainloader = DataLoader(test_train, batch_size=64,shuffle=True, num_workers=1,pin_memory=True)
# The Network
class NetDense2(nn.Module):
def __init__(self):
super(NetDense2, self).__init__()
self.rnn1 = nn.RNN(11, 100, 3)
self.rnn2 = nn.RNN(100, 500, 3)
self.fc1 = nn.Linear(500, 100)
self.fc2 = nn.Linear(100, 20)
self.fc3 = nn.Linear(20, 3)
def forward(self, x):
x1, h1 = self.rnn1(x)
x2, h2 = self.rnn2(x1)
x = F.relu(self.fc1(x2))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# Allocate / Transfer to GPU
dense2 = NetDense2()
dense2.cuda()
# Optimizer
import torch.optim as optim
criterion = nn.CrossEntropyLoss() # specify the loss function
optimizer = optim.SGD(dense2.parameters(), lr=0.001, momentum=0.9,weight_decay=0.001)
# Training
dense2.train()
loss_memory = []
for epoch in range(50): # loop over the dataset multiple times
running_loss = 0.0
for i, samp in enumerate(trainloader):
# get the inputs
ins = samp['data']
targets = samp['label']
tmp = []
tmp = torch.squeeze(targets.float())
ins, targets = ins.cuda(), tmp.cuda()
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = dense2(ins)
loss = criterion(outputs, targets) # The loss
loss.backward()
optimizer.step()
# keep track of loss
running_loss += loss.data.item()
I get the error from above in the line " loss = criterion(outputs, targets) "
 | As per the documentation and the official example on the PyTorch website, the targets passed to nn.CrossEntropyLoss() must have dtype torch.long (they are class indices, not probabilities).
# official example
import torch
import torch.nn as nn
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
# if you replace dtype=torch.long with torch.float here, you will get the same error
output = loss(input, target)
output.backward()
update this line in your code as
label = torch.tensor(self.prediction.iloc[idx, :],dtype=torch.long) #updated torch.float to torch.long
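Alternatively, if you would rather not touch the dataset class, you could cast at the call site instead (a sketch; this assumes targets holds class indices):
loss = criterion(outputs, targets.long())  # cast to int64 just before computing the loss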
| https://stackoverflow.com/questions/63383347/ |
Memory Issue while following LM tutorial | SPECS:
OS: Windows 10
CUDA: 10.1
GPU: RTX 2060 6G VRAM (x2)
RAM: 32GB
tutorial: https://huggingface.co/blog/how-to-train
Hello, I am trying to train my own language model and I have had some memory issues. I first tried to run this code in PyCharm on my computer and then tried to replicate it in my Colab Pro notebook.
First, my code
from transformers import RobertaConfig, RobertaTokenizerFast, RobertaForMaskedLM, LineByLineTextDataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments
config = RobertaConfig(vocab_size=60000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6,
type_vocab_size=1)
tokenizer = RobertaTokenizerFast.from_pretrained("./MODEL DIRECTORY", max_len=512)
model = RobertaForMaskedLM(config=config)
print("making dataset")
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="./total_text.txt", block_size=128)
print("making c")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(output_dir="./MODEL DIRECTORY", overwrite_output_dir=True, num_train_epochs=1,
per_gpu_train_batch_size=64, save_steps=10000, save_total_limit=2)
print("Building trainer")
trainer = Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset,
prediction_loss_only=True)
trainer.train()
trainer.save_model("./MODEL DIRECTORY")
"./total_text.txt" being a 1.7GB text file.
PyCharm Attempt
In PyCharm this code builds the dataset but then throws an error saying that my preferred GPU is running out of memory and that Torch is already using 3.7 GiB of it.
I tried:
import gc and running a garbage collection to try to flush whatever was held on my GPU
Decreasing my batch size for my gpu (training only happened on a batch size of 8 resulting in 200,000+ epochs that all took 1.17 seconds)
Setting my os.environ["CUDA_VISIBLE_OBJECTS"] ="" so that torch would have to use my CPU and not my GPU. Still threw same gpu memory error...
So, succumbing to the fact that torch, for the time being, insisted on using my GPU, I decided to move to Colab.
Colab Attempt
Colab has a different issue with my code: it does not have the memory to build the dataset and crashes due to RAM shortages. I purchased a Pro account, which increased the usable RAM to 25 GB, but it still runs out of memory.
Cheers!
 | I came to the conclusion that my text file for training was way too big. In the other examples I found, the training text was around 300 MB, not 1.7 GB. In both instances I was asking PyCharm and Colab to pull off a very resource-intensive task.
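If anyone else hits this, one workaround is to shard the corpus and train on one shard at a time. A minimal sketch (the shard size and file names are hypothetical):
# split total_text.txt into ~300 MB shards so each run fits in memory
shard_size = 300 * 1024 * 1024  # target bytes per shard
shard, written = 0, 0
out = open(f"shard_{shard}.txt", "w", encoding="utf-8")
with open("total_text.txt", encoding="utf-8") as src:
    for line in src:
        if written >= shard_size:
            out.close()
            shard, written = shard + 1, 0
            out = open(f"shard_{shard}.txt", "w", encoding="utf-8")
        out.write(line)
        written += len(line.encode("utf-8"))
out.close()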
| https://stackoverflow.com/questions/63387831/ |
PyTorch calculate MSE and MAE | I would like to calculate the MSE and MAE of the model below. The model is calculating the MSE after each Epoch. What do I need to do to get the overall MSE value, please? Can I use the same code to calculate the MAE? Many Thanks in advance
model.eval()
for images, paths in tqdm(loader_test):
images = images.to(device)
targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B
targets = targets.float().to(device)
# forward pass:
output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap)
preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B)
# logging:
loss = torch.mean((preds - targets)**2)
count_error = torch.abs(preds - targets).mean()
mean_test_error += count_error
writer.add_scalar('test_loss', loss.item(), global_step=global_step)
writer.add_scalar('test_count_error', count_error.item(), global_step=global_step)
global_step += 1
average_accuracy = 0
mean_test_error = mean_test_error / len(loader_test)
writer.add_scalar('mean_test_error', mean_test_error.item(), global_step=global_step)
average_accuracy += mean_test_error
average_accuracy = average_accuracy /len(loader_test)
print("Average accuracy: %f" % average_accuracy)
print("Test count error: %f" % mean_test_error)
if mean_test_error < best_test_error:
best_test_error = mean_test_error
torch.save({'state_dict':model.state_dict(),
'optimizer_state_dict':optimizer.state_dict(),
'globalStep':global_step,
'train_paths':dataset_train.files,
'test_paths':dataset_test.files},checkpoint_path)
 | First of all, you would want to keep your batch size at 1 during the test phase for simplicity.
This may be task-specific, but for a heat-map regression model the error metrics are MAE = (1/N) * sum(|pred_i - target_i|) and RMSE = sqrt((1/N) * sum((pred_i - target_i)^2)):
This means that in your code, you should change the lines where you calculate MAE as following
error = torch.abs(preds - targets).sum().data
squared_error = ((preds - targets)*(preds - targets)).sum().data
running_mae += error
running_mse += squared_error
and later, after the epoch ends,
mse = math.sqrt(running_mse / len(loader_test))  # strictly speaking, the square root makes this the RMSE
mae = running_mae / len(loader_test)
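For completeness, a minimal sketch of the initialization these lines assume (placed before the test loop; the names mirror the snippet above):
import math

running_mae = 0.0  # accumulates the sum of absolute errors over the test set
running_mse = 0.0  # accumulates the sum of squared errors over the test set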
| https://stackoverflow.com/questions/63391113/ |
Any ideas on how to efficiently create this matrix/mask? | I want to efficiently build a torch tensor or numpy array for a matrix that is a shifting window of 1s.
So for example the matrix below would be a window=3.
The diagonal element has 3 ones to its right and 3 ones to its left, but it doesn't wrap around like a circulant matrix, so row 1 just has 4 ones.
Has anyone got any ideas, this is to be used as a mask.
| Pytorch provides the tensor.diagonal method, which gives you access to any diagonal of a tensor. To assign a value to the resulting view of your tensor, you can use tensor.copy_. That would give you something like :
def circulant(n, window):
circulant_t = torch.zeros(n,n)
# offsets 0, 1, ..., window, -1, -2, ..., -window
offsets = [0] + [i for i in range(1, window + 1)] + [-i for i in range(1, window + 1)]
for offset in offsets:
#size of the 1-tensor depends on the length of the diagonal
circulant_t.diagonal(offset=offset).copy_(torch.ones(n-abs(offset)))
return circulant_t
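A quick sanity check (the expected band mask is shown as a comment):
print(circulant(6, 2))
# tensor([[1., 1., 1., 0., 0., 0.],
#         [1., 1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 1., 0.],
#         [0., 1., 1., 1., 1., 1.],
#         [0., 0., 1., 1., 1., 1.],
#         [0., 0., 0., 1., 1., 1.]])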
| https://stackoverflow.com/questions/63393651/ |
differing results when using model to infer on a batch vs individual with pytorch | I have a neural network which takes input tensor of dimension (batch_size, 100, 1, 1) and produces an output tensor of dimension (batch_size, 3, 64, 64).
I have differing results when using model to infer on a batch of two elements and on inferring on elements individually.
With the below code I initialize a pytorch tensor of dimension (2, 100, 1, 1). I pass this tensor through the model and I take the first element of the model output and store in variable result1. For result2 I just directly run the first element of my original input tensor through my model.
inputbatch=torch.randn(2, Z_DIM, 1, 1, device=device)
inputElement=inputbatch[0].unsqueeze(0)
result1=model(inputbatch)[0]
result2=model(inputElement)
My expectation was that result1 and result2 would be the same, but they are entirely different. Could anyone explain why the two outputs differ?
 | This is probably because your model contains layers whose behavior depends on the batch: either stochastic, training-specific ones that you have not disabled (e.g. dropout, which model.eval() turns off) or ones such as batch norm, which in training mode uses statistics computed over the whole batch.
To test the above, use:
model = model.eval()
before obtaining result1.
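More generally, deterministic inference is usually wrapped like this (a sketch):
model.eval()            # dropout off; batch norm uses its running statistics
with torch.no_grad():   # no autograd bookkeeping needed for inference
    result1 = model(inputbatch)[0]
    result2 = model(inputElement)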
| https://stackoverflow.com/questions/63400559/ |
PyTorch loop through epoch then output final values for all epoch | The code below calculates the MSE and MAE values but I have an issue where the values for MAE and MSE don't get store_MAE and store MSE after the end of each epoch. It appears to use the values of the last epoch only. Any idea what I need to do in the code to save the values for each epoch I hope this makes sense. Thanks for your help
global_step = 0
best_test_error = 10000
MAE_for_all_epochs = []
MSE_for_all_epochs = []
for epoch in range(4):
print("Epoch %d" % epoch)
model.train()
for images, paths in tqdm(loader_train):
images = images.to(device)
targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B
targets = targets.float().to(device)
# forward pass:
output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap)
preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B)
# backward pass:
loss = torch.mean((preds - targets)**2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# logging:
count_error = torch.abs(preds - targets).mean()
writer.add_scalar('train_loss', loss.item(), global_step=global_step)
writer.add_scalar('train_count_error', count_error.item(), global_step=global_step)
print("Step %d, loss=%f, count error=%f" % (global_step,loss.item(),count_error.item()))
global_step += 1
mean_test_error = 0
model.eval()
for images, paths in tqdm(loader_test):
images = images.to(device)
targets = torch.tensor([metadata['count'][os.path.split(path)[-1]] for path in paths]) # B
targets = targets.float().to(device)
# forward pass:
output = model(images) # B x 1 x 9 x 9 (analogous to a heatmap)
preds = output.sum(dim=[1,2,3]) # predicted cell counts (vector of length B)
# logging:
#error = torch.abs(preds - targets).sum().data
#squared_error = ((preds - targets)*(preds - targets)).sum().data
#runnning_mae += error
#runnning_mse += squared_error
loss = torch.mean((preds - targets)**2)
count_error = torch.abs(preds - targets).mean()
mean_test_error += count_error
writer.add_scalar('test_loss', loss.item(), global_step=global_step)
writer.add_scalar('test_count_error', count_error.item(), global_step=global_step)
global_step += 1
#store_MAE = 0
#store_MSE = 0
mean_test_error = mean_test_error / len(loader_test)
#store_MAE += mean_test_error
MAE_for_all_epochs = np.append(MAE_for_all_epochs, mean_test_error)
mse = math.sqrt(loss / len(loader_test))
#store_MSE +=mse
MSE_for_all_epochs = np.append(MSE_for_all_epochs, mse)
print("Test count error: %f" % mean_test_error)
print("MSE: %f" % mse)
if mean_test_error < best_test_error:
best_test_error = mean_test_error
torch.save({'state_dict':model.state_dict(),
'optimizer_state_dict':optimizer.state_dict(),
'globalStep':global_step,
'train_paths':dataset_train.files,
'test_paths':dataset_test.files},checkpoint_path)
print("MAE Total: %f" % store_MAE)
print("MSE Total: %f" % store_MSE)
model_mae= MAE_for_all_epochs / epoch
model_mse= MSE_for_all_epochs / epoch
print("Model MAE: %f" % model_mae)
print("Model MSE: %f" % model_mse)
| np.append() will work for your case like this,
#outside epochs loop
MAE_for_all_epochs = []
#inside loop
#replace this store_MAE with relevant variable
MAE_for_all_epochs = np.append(MAE_for_all_epochs, store_MAE)
Edit: a toy example of this usage:
import numpy as np
all_var = []
for e in range(1, 10):
var1 = np.random.random(1)
all_var = np.append(all_var, var1)
print(all_var)
# output : [0.07660848 0.46824825 0.09432051 0.79462902 0.97798061 0.67299183 0.50996432 0.13084029 0.95100381]
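As a side note, a plain Python list is often simpler here, since list.append is O(1) while np.append copies the whole array on every call. A sketch (the random value stands in for the real per-epoch metric):
import numpy as np

num_epochs = 4
MAE_for_all_epochs = []
for epoch in range(num_epochs):
    mean_test_error = float(np.random.random())  # stand-in for the real epoch metric
    MAE_for_all_epochs.append(mean_test_error)
mae_array = np.array(MAE_for_all_epochs)  # convert once at the end if needed
print(mae_array)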
| https://stackoverflow.com/questions/63403181/ |
Given a set t of tuples containing elements from the set S, what is the most efficient way to build another set whose members are not contained in t? | For example, suppose I had an (n,2) dimensional tensor t whose elements are all from the set S containing random integers. I want to build another tensor d with size (m,2) where individual elements in each tuple are from S, but the whole tuples do not occur in t.
E.g.
S = [0,1,2,3,7]
t = [[0,1],
[7,3],
[3,1]]
d = some_algorithm(S,t)
/*
d =[[2,1],
[3,2],
[7,4]]
*/
What is the most efficient way to do this in python? Preferably with pytorch or numpy, but I can work around general solutions.
In my naive attempt, I just use
d = np.random.choice(S,(m,2))
non_dupes = [i not in t for i in d]
d = d[non_dupes]
But both t and S are incredibly large, and this takes an enormous amount of time (not to mention, rarely results in a (m,2) array). I feel like there has to be some fancy tensor thing I can do to achieve this, or maybe making a large hash map of the values in t so checking for membership in t is O(1), but this produces the same issue just with memory. Is there a more efficient way?
An approximate solution is also okay.
 | My naive attempt would be a base-transformation function that reduces the problem to an integer-set problem:
definitions and assumptions:
let S be a set (unique elements)
let L be the number of elements in S
let t be a set of M-tuples with elements from S
the original order of the elements in t is irrelevant
let I(x) be the index function of the element x in S
let x[n] be the n-th tuple-member of an element of t
let f(x) be our base-transform function (and f^-1 its inverse)
Since S is a set, we can write each element of t as an M-digit number in base L, using the indices of the elements from S as digits.
for M=2 the transformation looks like
f(x) = I(x[1])*L^1 + I(x[0])*L^0
f^-1(x) is also rather trivial: x mod L gives the index of the least-significant digit; take floor(x/L) and repeat until all indices are extracted, then look up the values in S and construct the tuple.
Since you can now represent t as an integer set (read: hashtable), computing the inverse set d becomes rather trivial:
loop over the codes 0 to L^M - 1 and ask your hashtable whether each one is in t; every code that is not belongs to d
if L^M is too large to enumerate, you can also just draw random codes and test them against the hashtable to obtain a subset of the inverse of t
does this help you?
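A minimal Python sketch of this scheme (encoding, rejection sampling against the hashtable, and decoding; all names here are illustrative):
import numpy as np

def sample_complement(S, t, m, M=2, max_tries=100):
    """Draw m M-tuples over S that do not occur in t."""
    L = len(S)
    index = {x: i for i, x in enumerate(S)}  # I(x)
    # f: encode every tuple of t as a base-L integer for O(1) membership tests
    taken = {sum(index[x] * L**k for k, x in enumerate(tup)) for tup in t}
    out = []
    for _ in range(max_tries):
        for c in np.random.randint(0, L**M, size=m):
            if c not in taken and len(out) < m:
                taken.add(int(c))  # also avoids duplicates within d
                out.append(int(c))
        if len(out) == m:
            break
    # f^-1: decode base-L integers back into tuples of elements of S
    def decode(c):
        digits = []
        for _ in range(M):
            digits.append(S[c % L])
            c //= L
        return tuple(digits)
    return [decode(c) for c in out]

S = [0, 1, 2, 3, 7]
t = {(0, 1), (7, 3), (3, 1)}
print(sample_complement(S, t, 3))  # e.g. [(2, 1), (3, 2), (7, 7)]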
| https://stackoverflow.com/questions/63406415/ |
Something wrong with my checkpoint file when using torch.load() | When I use torch.load to load one checkpoint:
torch.load('./latest_net_G.pth', map_location='cpu')
I got the runtime error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/opt/conda/lib/python3.7/tarfile.py in nti(s)
186 s = nts(s, "ascii", "strict")
--> 187 n = int(s.strip() or "0", 8)
188 except ValueError:
ValueError: invalid literal for int() with base 8: '_v2\nq\x03(('
During handling of the above exception, another exception occurred:
InvalidHeaderError Traceback (most recent call last)
/opt/conda/lib/python3.7/tarfile.py in next(self)
2288 try:
-> 2289 tarinfo = self.tarinfo.fromtarfile(self)
2290 except EOFHeaderError as e:
/opt/conda/lib/python3.7/tarfile.py in fromtarfile(cls, tarfile)
1094 buf = tarfile.fileobj.read(BLOCKSIZE)
-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE
/opt/conda/lib/python3.7/tarfile.py in frombuf(cls, buf, encoding, errors)
1036
-> 1037 chksum = nti(buf[148:156])
1038 if chksum not in calc_chksums(buf):
/opt/conda/lib/python3.7/tarfile.py in nti(s)
188 except ValueError:
--> 189 raise InvalidHeaderError("invalid header")
190 return n
InvalidHeaderError: invalid header
During handling of the above exception, another exception occurred:
ReadError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
555 try:
--> 556 return legacy_load(f)
557 except tarfile.TarError:
/opt/conda/lib/python3.7/site-packages/torch/serialization.py in legacy_load(f)
466
--> 467 with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
468 mkdtemp() as tmpdir:
/opt/conda/lib/python3.7/tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)
1590 raise CompressionError("unknown compression type %r" % comptype)
-> 1591 return func(name, filemode, fileobj, **kwargs)
1592
/opt/conda/lib/python3.7/tarfile.py in taropen(cls, name, mode, fileobj, **kwargs)
1620 raise ValueError("mode must be 'r', 'a', 'w' or 'x'")
-> 1621 return cls(name, mode, fileobj, **kwargs)
1622
/opt/conda/lib/python3.7/tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)
1483 self.firstmember = None
-> 1484 self.firstmember = self.next()
1485
/opt/conda/lib/python3.7/tarfile.py in next(self)
2300 elif self.offset == 0:
-> 2301 raise ReadError(str(e))
2302 except EmptyHeaderError:
ReadError: invalid header
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-15-2abbf3aab3ae> in <module>
----> 1 torch.load('multi_task/checkpoints/latest_pet/latest_net_G.pth.tar', map_location='cpu')
/opt/conda/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
385 f = f.open('rb')
386 try:
--> 387 return _load(f, map_location, pickle_module, **pickle_load_args)
388 finally:
389 if new_fd:
/opt/conda/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
558 if zipfile.is_zipfile(f):
559 # .zip is used for torch.jit.save and will throw an un-pickling error here
--> 560 raise RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name))
561 # if not a tarfile, reset file offset and proceed
562 f.seek(0)
RuntimeError: multi_task/checkpoints/latest_pet/latest_net_G.pth.tar is a zip archive (did you mean to use torch.jit.load()?)
And here's how I save the model:
def save_networks(self, epoch):
"""Save all the networks to the disk.
Parameters:
epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name)
"""
for name in self.model_names:
if isinstance(name, str):
save_filename = '%s_net_%s.pth' % (epoch, name)
save_path = os.path.join(self.save_dir, save_filename)
net = getattr(self, 'net' + name)
if len(self.gpu_ids) > 0 and torch.cuda.is_available():
if name == 'Rgr':
torch.save(net.state_dict(), save_path)
else:
torch.save(net.module.cpu().state_dict(), save_path)
net.cuda(self.gpu_ids[0])
else:
if name == 'Rgr':
torch.save(net.state_dict(), save_path)
else:
torch.save(net.cpu().state_dict(), save_path)
I don't know what's wrong with my checkpoint file. Because I can actually load my other checkpoint files successfully. Plus, my pytorch version is 1.1.0. Could you help me with this problem?
Thank you.
 | I have found the solution. Because I use different clusters for training and debugging, the torch version differs between them: the model was saved with torch 1.6.0, which writes the new zip-based serialization format by default, but loaded with 1.1.0, which cannot read that format.
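If you cannot upgrade the loading environment, torch >= 1.6 can also write the legacy format; a sketch of the change on the saving side (the keyword argument exists since 1.6.0, and net/save_path are the names from the question's save_networks):
torch.save(net.state_dict(), save_path, _use_new_zipfile_serialization=False)  # pre-1.6 format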
| https://stackoverflow.com/questions/63406621/ |
How to right shift each row of a matrix? | I have a matrix of shape TxK, with K << T. I want to extend it to shape TxT, right-shifting the i-th row by i steps.
For an example:
inputs: T= 5, and K = 3
1 2 3
1 2 3
1 2 3
1 2 3
1 2 3
expected outputs:
1 2 3 0 0
0 1 2 3 0
0 0 1 2 3
0 0 0 1 2
0 0 0 0 1
My solutions:
right_pad = T - K + 1
output = F.pad(input, (0, right_pad), 'constant', value=0)
output = output.view(-1)[:-T].view(T, T)
My solution will cause the error -- gradient computation has been modified by an in-place operation. Is there an efficient and feasible way to achieve my purpose?
 | Your function is fine and is not the cause of your error (tested on PyTorch 1.6.0; if you are using another version, please update your dependencies).
Code below works fine:
import torch
import torch.nn as nn
import torch.nn.functional as F
T = 5
K = 3
inputs = torch.tensor(
[[1, 2, 3,], [1, 2, 3,], [1, 2, 3,], [1, 2, 3,], [1, 2, 3,],],
requires_grad=True,
dtype=torch.float,
)
right_pad = T - K + 1
output = F.pad(inputs, (0, right_pad), "constant", value=0)
output = output.flatten()[:-T].reshape(T, T)
output.sum().backward()
print(inputs.grad)
Please notice I have explicitly specified dtype as torch.float as you can't backprop integers.
view and slice never break backpropagation: each gradient stays connected to its single value no matter whether the tensor is viewed as 1D, unsqueezed to 2D, or anything else, and nothing is modified in place. An in-place modification that would break the gradient looks like:
output[0, 3] = 15.
Also, your solution returns this:
tensor([[1., 2., 3., 0., 0.],
[0., 1., 2., 3., 0.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[3., 0., 0., 0., 1.]], grad_fn=<ViewBackward>)
so there is a stray 3 in the bottom-left corner. If that's not what you expect, add this line (element-wise multiplication by an upper-triangular matrix of ones) after output = output.flatten()[:-T].reshape(T, T):
output *= torch.triu(torch.ones_like(output))
which gives:
tensor([[1., 2., 3., 0., 0.],
[0., 1., 2., 3., 0.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[0., 0., 0., 0., 1.]], grad_fn=<AsStridedBackward>)
And inputs.grad:
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 0.],
[1., 0., 0.]])
| https://stackoverflow.com/questions/63410108/ |