instruction | input | output | source
---|---|---|---|
Evaluate BERT Model param.requires_grad | I have a doubt regarding the evaluation on the test set of my BERT model. During the eval part, is param.requires_grad supposed to be True or False, independently of whether I did a full fine-tuning during training or not? My model is in model.eval() mode, but I want to be sure not to force anything wrong in the Model() class when I call it for evaluation. Thanks!
if freeze_bert == 'True':
for param in self.bert.parameters():
param.requires_grad = False
#logging.info('freeze_bert: {}'.format(freeze_bert))
#logging.info('param.requires_grad: {}'.format(param.requires_grad))
if freeze_bert == 'False':
for param in self.bert.parameters():
param.requires_grad = True
| If you freeze your model then the parameters of the corresponding modules must not be updated, i.e. they should not require gradient computation: requires_grad=False.
Note nn.Module also has a requires_grad_ method:
if freeze_bert == 'True':
    self.bert.requires_grad_(False)
elif freeze_bert == 'False':
    self.bert.requires_grad_(True)
Ideally freeze_bert would be a boolean and you would simply do:
self.bert.requires_grad_(not freeze_bert)
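As for the evaluation itself, here is a minimal sketch (assuming a standard setup; input_ids and attention_mask are placeholder inputs, not from the question). Wrapping the forward pass in torch.no_grad() disables gradient tracking regardless of requires_grad, so the predictions are the same either way:
model.eval()                  # switches dropout/batchnorm layers to eval behaviour
with torch.no_grad():         # no gradients are tracked, whatever requires_grad is set to
    logits = model(input_ids, attention_mask=attention_mask)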
| https://stackoverflow.com/questions/72687276/ |
Pytorch: How exactly dataloader get a batch from dataset? | I am trying to use pytorch to implement self-supervised contrastive learning. There is a phenomenon that I can't understand.
Here is my code of transformation to get two augmented views from original data:
class ContrastiveTransformations:
def __init__(self, base_transforms, n_views=2):
self.base_transforms = base_transforms
self.n_views = n_views
def __call__(self, x):
return [self.base_transforms(x) for i in range(self.n_views)]
contrast_transforms = transforms.Compose(
[
transforms.RandomResizedCrop(size=96),
transforms.ToTensor(),
]
)
data_set = CIFAR10(
root='/home1/data',
download=True,
transform=ContrastiveTransformations(contrast_transforms, n_views=2),
)
Given the definition of ContrastiveTransformations, each item of data in my dataset is a list containing two tensors [x_1, x_2]. In my understanding, the batch from the dataloader should have the form [data_batch, label_batch], and each item in data_batch is [x_1, x_2]. However, in fact, the batch has this form: [[batch_x1, batch_x2], label_batch], which is much more convenient for calculating the InfoNCE loss. I wonder how the DataLoader implements the fetching of the batch.
I have checked the code of DataLoader in pytorch, it seems that dataloader fetches the data in this way:
class _MapDatasetFetcher(_BaseDatasetFetcher):
def __init__(self, dataset, auto_collation, collate_fn, drop_last):
super(_MapDatasetFetcher, self).__init__(dataset, auto_collation, collate_fn, drop_last)
def fetch(self, possibly_batched_index):
if self.auto_collation:
data = [self.dataset[idx] for idx in possibly_batched_index]
else:
data = self.dataset[possibly_batched_index]
return self.collate_fn(data)
However, I still haven't figured out how the dataloader generates the batches of x1 and x2 separately.
I would be very thankful if someone could give me an explanation.
| In order to convert the separate dataset batch elements to an assembled batch, PyTorch's data loaders use a collate function. This defines how the dataloader should assemble the different elements together to form a minibatch
You can define your own collate function and pass it to your data.DataLoader with the collate_fn argument. By default, the collate function used by dataloaders is default_collate defined in torch/utils/data/_utils/collate.py.
This is the behaviour of the default collate function as described in the header of the function:
# Example with a batch of `int`s:
>>> default_collate([0, 1, 2, 3])
tensor([0, 1, 2, 3])
# Example with a batch of `str`s:
>>> default_collate(['a', 'b', 'c'])
['a', 'b', 'c']
# Example with `Map` inside the batch:
>>> default_collate([{'A': 0, 'B': 1}, {'A': 100, 'B': 100}])
{'A': tensor([ 0, 100]), 'B': tensor([ 1, 100])}
# Example with `NamedTuple` inside the batch:
>>> Point = namedtuple('Point', ['x', 'y'])
>>> default_collate([Point(0, 0), Point(1, 1)])
Point(x=tensor([0, 1]), y=tensor([0, 1]))
# Example with `Tuple` inside the batch:
>>> default_collate([(0, 1), (2, 3)])
[tensor([0, 2]), tensor([1, 3])]
# Example with `List` inside the batch:
>>> default_collate([[0, 1], [2, 3]])
[tensor([0, 2]), tensor([1, 3])]
| https://stackoverflow.com/questions/72688503/ |
generate 1D tensor as unique index of rows of an 2D tensor (keeping the order and the original index) | This question is an updated version of generate 1D tensor as unique index of rows of an 2D tensor
Let's say we transform a 2D tensor to a 1D tensor by giving each different row a different index, from 0 to the number of rows - 1.
[[1,4],[1,3],[1,2]] -> [0,1,2]
But if there are identical rows, we repeat the index, like below; the "original" index is k-1 for the k-th row:
[[1,4],[1,2],[1,2]] -> [0,1,1]
Also if there is no repeat for the row (like the third row below), its index should be its original index, which is k-1 for the k-th row (for example 2 for [1,4]).
[[1,3],[1,3],[1,4]] -> [0,0,2]
A longer example:
[[1,2],[4,3],[1,4],[1,4],[4,3],[1,2],[5,6],[7,8]] -> [0,1,2,2,1,0,6,7]
How to implement this on PyTorch?
| See the non-vectorized solution from @Michael
d = {}; torch.tensor([d.setdefault(tuple(i.tolist()), e) for e, i in enumerate(t4)])
Another non-vectorized solution is
t4_list = t4.tolist(); torch.tensor(list(map(lambda x: t4_list.index(x), t4)))
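A vectorized sketch is also possible (this is an addition on my part, not from the original answers; it assumes a PyTorch version that provides scatter_reduce_, i.e. >= 1.12):
uniq, inverse = torch.unique(t4, dim=0, return_inverse=True)
first_idx = torch.full((uniq.size(0),), t4.size(0), dtype=torch.long)
first_idx.scatter_reduce_(0, inverse, torch.arange(t4.size(0)), reduce="amin")  # first occurrence of each unique row
result = first_idx[inverse]  # e.g. tensor([0, 1, 2, 2, 1, 0, 6, 7]) for the longer example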
| https://stackoverflow.com/questions/72689843/ |
PIL Image library load and save changing pixel values | I am currently working on an image segmentation problem. As part of preprocessing, I'm trying to create mask values for 2 classes [0, 1]. However, saving the processed tensor and loading it back produces different mask values. My current guess is that under the hood PIL is normalizing pixel values.
If so, how do I stop it from doing that?
I have created below a simple example explaining the same.
tensor_img = torch.where(torch.Tensor(250,250,3) > 0, 1, 0)
img_arr = tensor_img.numpy().astype(np.uint8)
np.unique(img_arr, return_counts=True)
(array([0, 1], dtype=uint8), array([148148, 39352]))
img = Image.fromarray(img_arr)
img.save("tmp.jpg")
#read saved image
img = PIL.create("tmp.jpg")
tensor(img).unique(return_counts=True)
(tensor([0, 1], dtype=torch.uint8), tensor([62288, 212]))
| For this simple case (only 2 classes), you need to work with PNG and not JPEG, since JPEG is a lossy compression and PNG is lossless.
tensor_img = torch.where(torch.Tensor(250,250,3) > 0, 1, 0)
img_arr = tensor_img.numpy().astype(np.uint8)
np.unique(img_arr, return_counts=True)
(array([0, 1], dtype=uint8), array([159189, 28311]))
img = Image.fromarray(img_arr)
img.save("tmp.png")
#read saved image
img = np.array(Image.open("tmp.png"))
torch.tensor(img).unique(return_counts=True)
(tensor([0, 1], dtype=torch.uint8), tensor([159189, 28311]))
For more classes it is preferable to work with a color map (palette).
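A minimal sketch of that (my assumption of what "color map" means here: a palette/"P"-mode PNG; mask_arr is assumed to be a 2D uint8 array of class ids):
img = Image.fromarray(mask_arr, mode="P")
img.putpalette([0, 0, 0,        # class 0 -> black
                255, 0, 0,      # class 1 -> red
                0, 255, 0])     # class 2 -> green
img.save("mask.png")            # PNG keeps the class ids intact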
| https://stackoverflow.com/questions/72696017/ |
How to create an "islands" style pytorch matrix | Probably a simple question, hopefully with a simple solution:
I am given a (sparse) 1D boolean tensor of size [1,N].
I would like to produce a 2D tensor out of it of size [N,N], containing islands which are induced by the 1D tensor. It is easiest to see in the following image example, where the upper part is the 1D boolean tensor and the matrix below represents the resulting matrix:
| Given a mask input:
>>> x = torch.tensor([0,0,0,1,0,0,0,0,1,0,0])
You can retrieve the indices with torch.diff:
>>> index = x.nonzero()[:,0].diff(prepend=torch.zeros(1), append=torch.ones(1)*len(x))
tensor([3., 5., 3.])
Then use torch.block_diag to create the diagonal block matrix:
>>> torch.block_diag(*[torch.ones(i,i) for i in index.int()])
tensor([[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.]])
| https://stackoverflow.com/questions/72697804/ |
How to solve TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first | I am modifying the 'train_model' function below so as to plot loss and accuracy graphs at every epoch during training
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
losses=[]
accuracies=[]
y_loss = {} # loss history
y_loss['aug1_train'] = []
y_loss['valid'] = []
y_acc = {}
y_acc['aug1_train'] = []
y_acc['valid'] = []
x_epoch = []
fig = plt.figure()
ax0 = fig.add_subplot(121, title="loss")
ax1 = fig.add_subplot(122, title="accuracy")
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['aug1_train', 'valid']:
if phase == 'aug1_train':
scheduler.step()
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels,paths in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'aug1_train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'aug1_train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f} '.format(
phase, epoch_loss, epoch_acc))
y_loss[phase].append(epoch_loss)
y_acc[phase].append(epoch_acc)
# deep copy the model
if phase == 'valid' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
def draw_curve(current_epoch):
x_epoch.append(current_epoch)
ax0.plot(x_epoch, y_loss['aug1_train'], 'bo-', label='train')
ax0.plot(x_epoch, y_loss['valid'], 'ro-', label='val')
ax1.plot(x_epoch, y_acc['aug1_train'], 'bo-', label='train')
ax1.plot(x_epoch, y_acc['valid'], 'ro-', label='val')
if current_epoch == 0:
ax0.legend()
ax1.legend()
fig.savefig(os.path.join('/content/drive/My Drive/Stanford40/Graphs', 'train.jpg'))
draw_curve(epoch)
if phase=='aug1_train':
losses.append(epoch_loss)
accuracies.append(epoch_acc)
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model,losses,accuracies
and I load the Densenet161 for traning as below
#Load Pretrained Densenet161 model
model_ft = models.densenet161(pretrained=True)
model_ft.classifier=nn.Linear(2208,11)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
opt = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
sched = lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)
Finally I run the code below to start training:
model_ft,losses,accuracies = train_model(model_ft, criterion,opt ,sched,num_epochs=30)
and got this error as in the picture below:
How can I modify the code to get away from this error by using tensor.cpu() ?
| What if you try to get item() here:
running_corrects += torch.sum(preds == labels.data).item()
and remove double() when dividing?
epoch_acc = running_corrects / dataset_sizes[phase]
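With that change, epoch_acc is a plain Python float, so the values stored in the history can be plotted directly. If some already-stored entries are still CUDA tensors, a small conversion before plotting (a sketch, using the names from the question) also works:
y_acc[phase] = [a.cpu().item() if torch.is_tensor(a) else a for a in y_acc[phase]]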
| https://stackoverflow.com/questions/72699547/ |
What would be the returned values from torch.Tensor.size() apart from height and width? | I'm using a module (SwinIR) for image super-resolution based on PyTorch
One of the operations that is done is:
b, c, h, w = img_lq.size()
E = torch.zeros(b, c, h*scale_factor, w*scale_factor).type_as(img_lq)
W = torch.zeros_like(E)
I.e., getting the tensor shape and creating a new tensor of zeros of the same shape. I'm trying to understand what would be the two assigned variables b and c returned by the method size() apart from h and w that are height and width respectively.
Looking in the PyTorch documentation, I haven't found anything that could give me a clue about what those 2 variables are.
| The torch.Tensor.size function returns the shape of the tensor. As would accessing torch.Tensor.shape directly.
In other words the first line is simply unpacking the dimension sizes of img_lq as: b ("batch size"), c ("channels"), h ("height"), and w ("width"). Of course, this code makes the assumption that img_lq has exactly four dimensions.
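For example (a made-up tensor, just to illustrate the unpacking):
>>> img_lq = torch.rand(16, 3, 64, 48)
>>> b, c, h, w = img_lq.size()
>>> b, c, h, w
(16, 3, 64, 48)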
| https://stackoverflow.com/questions/72700171/ |
How can I remove "for loop iteration" by using torch tensor operators? | I want to get rid of the "for loop iteration" by using PyTorch functions in my code. But the formula is complicated and I can't find a clue. Can the "for loop iteration" below be replaced with a Torch operation?
B=10
L=20
H=5
mat_A=torch.randn(B,L,L,H)
mat_B=torch.randn(L,B,B,H)
tmp_B=torch.zeros_like(mat_B)
for x in range(L):
for y in range(B):
for z in range(B):
tmp_B[:,y,z,:]+=mat_B[x,y,z,:]*mat_A[z,x,:,:]
| This looks like a good setup for applying torch.einsum. However, we first need to explicit the : placeholders by defining each individual accumulation term.
In order to do so, consider the shape of your intermediate tensor results. The first, mat_B[x,y,z], is shaped (H,), while the second, mat_A[z,x], is shaped (L, H).
In pseudo-code your initial operation is as follows:
for x, y, z in LxBxB:
    tmp_B[:,y,z,:] += mat_B[x,y,z,:]*mat_A[z,x,:,:]
Knowing this, we can reformulate your initial loop in pseudo-code as:
for x, y, z, l, h in LxBxBxLxH:
    tmp_B[l,y,z,h] += mat_B[x,y,z,h]*mat_A[z,x,l,h]
Therefore, we can apply torch.einsum by using the same notation as above:
>>> torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
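A quick sanity check against the original loop, using the tensors defined in the question (floating-point tolerance applies):
out = torch.einsum('xyzh,zxlh->lyzh', mat_B, mat_A)
print(torch.allclose(out, tmp_B, atol=1e-5))  # True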
| https://stackoverflow.com/questions/72700909/ |
Why are there extra parameters in my simple CNN? | Here is my problem: when I look for the number of parameters in my first block, I see 36928 parameters (which is what I expect). But when I use this block to construct a model in another nn.Module class, there are 1792 extra parameters and I can't figure out where they come from.
I put some code below to illustrate.
class Conv2dBlock(torch.nn.Module):
def __init__(self, in_filters, out_filters, kernel_size=3):
super(Conv2dBlock, self).__init__()
self.conv2d_seq = torch.nn.Sequential()
for k in range(2):
self.conv2d_seq.append(torch.nn.Conv2d(in_channels=in_filters, out_channels=out_filters, kernel_size=kernel_size, padding='same'))
self.conv2d_seq.append(torch.nn.ReLU())
in_filters = out_filters
def forward(self, input):
out = self.conv2d_seq(input)
return out
And then, I use this block in another nn.Module:
class EncoderBlock(torch.nn.Module):
def __init__(self):
super(EncoderBlock, self).__init__()
self.conv2d = Conv2dBlock(3, 64)
self.maxpool = torch.nn.MaxPool2d(kernel_size=2)
def forward(self, input):
x = self.conv2d(input)
p = self.maxpool(x)
out = torch.nn.functional.dropout(p, 0.3)
return x, out
And finally:
class UNet_model(torch.nn.Module):
def __init__(self):
super(UNet_model, self).__init__()
self.encoder_block1 = EncoderBlock()
def forward(self, input):
p1 = self.encoder_block1(input)
# I removed useless code
return p1
model = UNet_model()
summary(model, (3,128,128))
This last class constructs a model with 38 720 parameters, instead of 36 928. It seems there is an extra convolutional layer ((3,64, (3,3)) = 1792 params) applied twice to the input... I don't understand.
Can somebody take a look ?
Thanks !
| First of all, torch.nn.Sequential() doesn't support the append method; it should be changed to add_module, like this:
for k in range(2):
    self.conv2d_seq.add_module(f"conv_{k}", torch.nn.Conv2d(in_channels=in_filters, out_channels=out_filters, kernel_size=kernel_size, padding='same'))
    self.conv2d_seq.add_module(f"relu_{k}", torch.nn.ReLU())
    in_filters = out_filters
Second, if you run torchinfo summary on the initial block you will see:
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
Conv2dBlock [1, 64, 64, 64] --
├─Sequential: 1-1 [1, 64, 64, 64] --
│ └─Conv2d: 2-1 [1, 64, 64, 64] 1,792
│ └─ReLU: 2-2 [1, 64, 64, 64] --
│ └─Conv2d: 2-3 [1, 64, 64, 64] 36,928
│ └─ReLU: 2-4 [1, 64, 64, 64] --
==========================================================================================
Total params: 38,720
Trainable params: 38,720
Non-trainable params: 0
Total mult-adds (M): 158.60
==========================================================================================
Input size (MB): 0.05
Forward/backward pass size (MB): 4.19
Params size (MB): 0.15
Estimated Total Size (MB): 4.40
==========================================================================================
So you can see that you have two conv layers (1,792 + 36,928) as you specified 2 layers in your for loop: for k in range(2).
| https://stackoverflow.com/questions/72701110/ |
unsupported layer type "ScatterNDUpdate" | This issue relates to IR inferencing using OpenVINO on NCS2 (MYRIAD). The ONNX representation of the model I am trying to convert comprises 8 ScatterNDupdate layers, which OpenVINO doesn't support on NCS2.
Is there any way to get rid of those layers without affecting the model's functionality?
Does OpenVINO provide any alternative layer which will not affect the inferences quantitatively or qualitatively?
For future reference, is there any way to avoid ScatterNDUpdate layers (i.e., what is the reason for the creation of these layers)?
Link to IR and model
Details:
OpenVINO version: 2022.1
ONNX Opset: 11 (tried on 13, not useful for resolving the issue)
| The ScatterNDUpdate layer is indeed unsupported for NCS2, since it is not listed in the Supported Layers list here.
Your available option is to create a custom layer for VPU that could replace the ScatterNDUpdate functionality. To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for your target device.
You may refer to this guide.
| https://stackoverflow.com/questions/72701301/ |
Getting different results after converting a model from PyTorch to ONNX | I'm converting a GoogLeNet model from PyTorch to ONNX using the following code:
torch.onnx.export(model, # model being run
input_batch, # model input (or a tuple for multiple inputs)
"google-net-onnx-test.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
When I run the model on pytorch for this image:
I get the right results:
Samoyed 0.9378381967544556
Pomeranian 0.00828344002366066
Great Pyrenees 0.005603068508207798
Arctic fox 0.005527767818421125
white wolf 0.004741032607853413
But when I do it with ONNX I get this:
The pre- and post-processing code is different for each case, but it should be equivalent.
This is the complete code in Pytorch:
import torch
from PIL import Image
from torchvision import transforms
model = torch.hub.load('pytorch/vision:v0.10.0', 'googlenet', pretrained=True)
model.eval()
input_image = Image.open(filename)
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
with torch.no_grad():
output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
#print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
print(probabilities[:2])
# Read the categories
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
print(categories[top5_catid[i]], top5_prob[i].item())
And this the code for ONNX
from PIL import Image
import imageio
import onnxruntime as ort
import numpy as np
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
import os
import time
def get_image(path):
'''
Using path to image, return the RGB load image
'''
img = imageio.imread(path, pilmode='RGB')
return img
# Pre-processing function for ImageNet models using numpy
def preprocess(img):
'''
Preprocessing required on the images for inference with mxnet gluon
The function takes loaded image and returns processed tensor
'''
img = np.array(Image.fromarray(img).resize((224, 224))).astype(np.float32)
img[:, :, 0] -= 123.68
img[:, :, 1] -= 116.779
img[:, :, 2] -= 103.939
img[:,:,[0,1,2]] = img[:,:,[2,1,0]]
img = img.transpose((2, 0, 1))
img = np.expand_dims(img, axis=0)
return img
def predict(path):
img_batch = preprocess(get_image(path))
outputs = ort_session.run(
None,
{"input": img_batch.astype(np.float32)},
)
a = np.argsort(-outputs[0].flatten())
results = {}
for i in a[0:5]:
results[labels[i]]=float(outputs[0][0][i])
return results
ort_session = ort.InferenceSession("/content/google-net-onnx-test.onnx")
with open('synset.txt', 'r') as f:
labels = [l.rstrip() for l in f]
image_path = "/content/dog.jpg"
predict(image_path)
I took the code of Pytorch from this tutorial
And the code for ONNX from the GitHub ONNX Zoo.
Edit:
From the comments of @jhso, I think the normalisation step:
mean=[0.485, 0.456, 0.406]
seems to me to be equivalent to:
img[:, :, 0] -= 123.68
img[:, :, 1] -= 116.779
img[:, :, 2] -= 103.939
because:
constant = 256
a,b,c = 123.68/constant, 116.779/constant, 103.939/constant
print (f'{a:.3f} {b:.3f} {c:.3f}')
0.483 0.456 0.406
Regarding the std part, I'm not sure where it happens or if it is equivalent to:
img[:,:,[0,1,2]] = img[:,:,[2,1,0]]
img = img.transpose((2, 0, 1))
Also I ran the code again today and got a closer result:
| Your preprocessing is wrong. Note that you have a center crop (less important) and a std deviation normalisation step you're not using. You're also seemingly converting from BGR which isn't required when using PIL (it's more of an opencv thing) - happy to be corrected if I'm wrong as I'm going from memory.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
Your preprocessing stage should look something (ymmv) like this:
# Pre-processing function for ImageNet models using numpy
def preprocess(img):
    '''
    Preprocessing required on the images for inference with mxnet gluon
    The function takes loaded image and returns processed tensor
    '''
    img = np.array(Image.fromarray(img).resize((256, 256))).astype(np.float32)
    # center crop
    rm_pad = (256-224)//2
    img = img[rm_pad:-rm_pad, rm_pad:-rm_pad]
    # normalize to 0-1
    img /= 255.
    # normalize by mean + std
    img = (img - np.array([0.485, 0.456, 0.406]))/np.array([0.229, 0.224, 0.225])
    # img[:,:,[0,1,2]] = img[:,:,[2,1,0]] # don't think this is needed?
    img = img.transpose((2, 0, 1))
    img = np.expand_dims(img, axis=0)
    return img
| https://stackoverflow.com/questions/72705873/ |
Why I'm getting error: "Boolean value of Tensor with more than one value is ambiguous" | I'm trying to run the following code:
l = torch.tensor([0, 1, 1, 1], requires_grad=False)
r = torch.rand(4, 2)
torch.nn.CrossEntropyLoss(r, l)
And I'm getting error:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
I looked here: Bool value of Tensor with more than one value is ambiguous in Pytorch but didn't understand the answers.
What do I need to change in order to run the code ?
| The object you are manipulating, torch.nn.CrossEntropyLoss, is a PyTorch module class, not a function.
Therefore, you should either initialize it beforehand:
>>> ce_loss = nn.CrossEntropyLoss()
>>> ce_loss(r, l)
Or use the functional interface, i.e. torch.nn.functional.cross_entropy:
>>> F.cross_entropy(r, l)
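For completeness, the imports assumed by the two snippets above are:
import torch.nn as nn
import torch.nn.functional as F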
| https://stackoverflow.com/questions/72706218/ |
How to create an iterable DataPipe with PyTorch using txt files | I have two text files to train a transformer model. However, instead of using PyTorch's own datasets, I'm using something I downloaded from the internet.
source = open('./train_de.de', encoding='utf-8').read().split('\n')
target = open('./train_en.en', encoding='utf-8').read().split('\n')
With the code above, I have some Danish sentences in a list named "source", and their translation in English sentences in another list named "target".
My question is, how can I make an iterable DataPipe with PyTorch such that when I write something like:
source, target = next(iter(train_iter))
this will give me the Danish sentence with its corresponding English translation in separate strings?
| You can use the Dataset and DataLoader class for that.
import torch

class YourDataset(torch.utils.data.Dataset):
    def __init__(self) -> None:
        self.source = open('./train_de.de', encoding='utf-8').read().split('\n')
        self.target = open('./train_en.en', encoding='utf-8').read().split('\n')

    def __getitem__(self, idx) -> torch.Tensor:
        # load one sample by index, e.g like this:
        source_sample = self.source[idx]
        target_sample = self.target[idx]
        # do some preprocessing, convert to tensor and what not
        return source_sample, target_sample

    def __len__(self):
        return len(self.source)
Now you can create a DataLoader from your custom dataset:
yourDataset = YourDataset()
dataloader = torch.utils.data.DataLoader(
    yourDataset,
    batch_size=8,
    num_workers=0,
    shuffle=True
)
Now you can iterate through the dataloader (using a loop or next(iter(...))) and one iteration returns as many samples as your batchsize (in this case 8 stacked samples).
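For example (with string samples, the default collate keeps them grouped in lists):
source_batch, target_batch = next(iter(dataloader))  # two lists of 8 strings each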
| https://stackoverflow.com/questions/72707540/ |
Are several subsequent `reshape` operations equivalent to only one? | In Pytorch,
I recently stumbled onto code that looks like this.
# initially, my_tensor is one_dimensional, of length b*x*y
my_tensor = my_tensor.reshape(b, x, y)
my_tensor = my_tensor.reshape(b, x*y)
Is it equivalent to only writing the second line?
my_tensor = my_tensor.reshape(b, x*y)
And in general, is doing several reshape operations always equivalent to only doing the last one?
Intuitively, I think so, but the documentation for reshape doesn't really mention any invariant, and I couldn't find information on the inner representation of tensors and how reshape changes it.
| The reshape operation does not (need to) touch the underlying data. It simply adjusts the "meta-data" about the dimensions. So a series of reshape operations (without any operations in between!!) is equal to a single reshape operation.
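A quick, self-contained check:
import torch
b, x, y = 2, 3, 4
t = torch.arange(b * x * y)
assert torch.equal(t.reshape(b, x, y).reshape(b, x * y), t.reshape(b, x * y))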
| https://stackoverflow.com/questions/72708646/ |
Inheriting the PyTorch DataLoader class | Can we inherit the DataLoader class? If so, are there any specific restrictions to it?
I know we can do so for the Dataset class, but I need to know specifically about the DataLoader.
I have some requirements for a specific use case which involves overriding some of the DataLoader methods and adding some new ones.
Would really appreciate the help!
Thanks....
| Thanks @Jan and @TheodorPeifer for your useful inputs..! :)
I was able to resolve this with a combination of NamedTuple returns from the dataset __getitem__ calls, together with the worker-specific info (torch.utils.data.get_worker_info()) and a custom collate_fn implementation.
| https://stackoverflow.com/questions/72709946/ |
Updating pytorch tensor by matching multiple boolean conditions on the original values | I currently am updating values in a Pytorch tensor using multiple OR conditions:
>>> import torch
>>> my_tensor = torch.tensor([0, 1, 2, 3, 4, 5])
>>> condition = ((my_tensor==1) | (my_tensor==4) | (my_tensor==5))
>>> my_tensor[condition] = 0
>>> my_tensor
[0, 0, 2, 3, 0, 0]
My list of conditions is much longer than the toy example above. Can the condition operator match a list? If not, what is the best solution?
| How about using a combination of torch.where and torch.isin like below:
>>> torch.where(torch.isin(my_tensor, torch.tensor([1,4,5])), 0, my_tensor)
tensor([0, 0, 2, 3, 0, 0])
Update, Second approach: We can use torch.reshape.
(torch.isin does not exist in pytorch==1.9.1, as you say in the comment.)
>>> mask = (my_tensor == torch.reshape(torch.tensor([1,4,5]), (-1,1))).any(0)
>>> torch.where(mask, 0, my_tensor)
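If you prefer the in-place assignment style from the question, the same mask can also be used directly (sketch):
>>> my_tensor[mask] = 0
>>> my_tensor
tensor([0, 0, 2, 3, 0, 0])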
| https://stackoverflow.com/questions/72716497/ |
Scipy Optimize Minimize: Optimization terminated successfully but not iterating at all | I am trying to code an optimizer finding the optimal constant parameters so as to minimize the MSE between an array y and a generic function over X. The generic function is given in pre-order, so for example if the function over X is x1 + c*x2 the function would be [+, x1, *, c, x2]. The objective in the previous example, would be minimizing:
sum_for_all_x (y - (x1 + c*x2))^2
I show next what I have done to solve the problem. Some things that should be known are:
X and y are torch tensors.
constants is the list of values to be optimized.
def loss(self, constants, X, y):
stack = [] # Stack to save the partial results
const = 0 # Index of constant to be used
for idx in self.traversal[::-1]: # Reverse the prefix notation
if idx > Language.max_variables: # If we are dealing with an operator
function = Language.idx_to_token[idx] # Get its associated function
first_operand = stack.pop() # Get first operand
if function.arity == 1: # If the arity of the operator is one (e.g sin)
stack.append(function.function(first_operand)) # Append result
else: # Same but if arity is 2
second_operand = stack.pop() # Need a second operand
stack.append(function.function(first_operand, second_operand))
elif idx == 0: # If it is a constant -> idx 0 indicates a constant
stack.append(constants[const]*torch.ones(X.shape[0])) # Append constant
const += 1 # Update
else:
stack.append(X[:, idx - 1]) # Else append the associated column of X
prediction = stack[0]
return (y - prediction).pow(2).mean().cpu().numpy()
def optimize_constants(self, X, y):
'''
# This function optimizes the constants of the expression tree.
'''
if 0 not in self.traversal: # If there are no constants to be optimized return
return self.traversal
x0 = [0 for i in range(len(self.constants))] # Initial guess
ini = time.time()
res = minimize(self.loss, x0, args=(X, y), method='BFGS', options={'disp': True})
print(res)
print('Time:', time.time() - ini)
The problem is that the optimizer theoretically terminates successfully but does not iterate at all. The output res would be something like that:
Optimization terminated successfully.
Current function value: 2.920725
Iterations: 0
Function evaluations: 2
Gradient evaluations: 1
fun: 2.9207253456115723
hess_inv: array([[1]])
jac: array([0.])
message: 'Optimization terminated successfully.'
nfev: 2
nit: 0
njev: 1
status: 0
success: True
x: array([0.])
So far I have tried to:
Change the method in the minimizer (e.g Nelder-Mead, SLSQP,...) but it happens the same with all of them.
Change the way I return the result (e.g (y - prediction).pow(2).mean().item())
| It seems that scipy optimize minimize does not work well with Pytorch. Changing the code to use numpy ndarrays solved the problem.
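A minimal sketch of that change (my wording, not the original poster's code; it assumes X and y live on the CPU and that the loss is rewritten with numpy operations throughout). Converting to float64 also helps, because BFGS estimates gradients by finite differences, and float32 round-off can make those estimates vanish, which matches the zero-iteration behaviour above:
X_np = X.detach().cpu().double().numpy()
y_np = y.detach().cpu().double().numpy()
res = minimize(self.loss, x0, args=(X_np, y_np), method='BFGS', options={'disp': True})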
| https://stackoverflow.com/questions/72717049/ |
Why does my PyTorch model summary show no parameters? | I'm a beginner in PyTorch.
I want to build an autoencoder model similar to U-Net,
so I wrote the code below and checked the summary using pytorch_model_summary, but the result tells me the model has no parameters...
Why does my model have no parameters?
class unet_like(nn.Module):
def __init__(self):
super(unet_like, self).__init__()
def conv2d_block(self, in_channels, out_channels, x):
x = nn.Conv2d(in_channels=in_channels, out_channels = out_channels, kernel_size = 3, padding = "same")(x)
x = nn.BatchNorm2d(out_channels)(x)
x = nn.ReLU()(x)
x = nn.Conv2d(in_channels = out_channels, out_channels = out_channels, kernel_size = 3, padding = "same")(x)
x = nn.BatchNorm2d(out_channels)(x)
x = nn.ReLU()(x)
return x
def forward(self, x):
c1 = self.conv2d_block(3, 16, x)
p1 = nn.MaxPool2d(kernel_size = 2)(c1)
p1 = nn.Dropout2d(0.1)(p1)
c2 = self.conv2d_block(16, 32, p1)
p2 = nn.MaxPool2d(kernel_size = 2)(c2)
p2 = nn.Dropout(0.1)(p2)
c3 = self.conv2d_block(32, 64, p2)
p3 = nn.MaxPool2d(kernel_size = 2)(c3)
p3 = nn.Dropout(0.1)(p3)
c4 = self.conv2d_block(64, 128, p3)
p4 = nn.MaxPool2d(kernel_size = 2)(c4)
p4 = nn.Dropout(0.1)(p4)
c5 = self.conv2d_block(128, 256, p4)
# nn.ConvTranspose2d(in_channels = 16, out_channels = 64, kernel_size = 3, stride = 1, padding = (1, 1)),
# nn.ReLU(),
u6 = nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size = 2, stride = 2, output_padding = (0,1))(c5)
print(u6.shape)
print(c4.shape)
u6 = torch.cat([u6, c4], 1) # u6: 128, c4: 128
print(u6.shape)
u6 = nn.Dropout(0.1)(u6)
c6 = self.conv2d_block(256, 128, u6)
u7 = nn.ConvTranspose2d(in_channels = 128, out_channels = 64, kernel_size = 2, stride = 2, output_padding = (1,0))(c6)
u7 = torch.cat([u7, c3], 1)
u7 = nn.Dropout(0.1)(u7)
c7 = self.conv2d_block(128, 64, u7)
u8 = nn.ConvTranspose2d(in_channels = 64, out_channels = 32, kernel_size = 2, stride = 2, output_padding = (0,1))(c7)
u8 = torch.cat([u8, c2], 1)
u8 = nn.Dropout(0.1)(u8)
c8 = self.conv2d_block(64, 32, u8)
u9 = nn.ConvTranspose2d(in_channels = 32, out_channels = 16, kernel_size = 2, stride = 2, output_padding = (0,1))(c8)
u9 = torch.cat([u9, c1], 1)
u9 = nn.Dropout(0.1)(u9)
c9 = self.conv2d_block(32, 16, u9)
# in_channels, kernel_size,
# outputs = Conv2D(1, (1, 1), activation = "sigmoid")(c9)
c9 = nn.Conv2d(in_channels=16, out_channels = 1, kernel_size = 3, padding = (1,1))(c9)
outputs = nn.Sigmoid()(c9)
return outputs
model = unet_like().to("cpu")
print(pytorch_model_summary.summary(model, torch.tensor(train_images[:1], dtype = torch.float32).to("cpu"), show_input=True))
torch.Size([1, 128, 12, 9])
torch.Size([1, 128, 12, 9])
torch.Size([1, 256, 12, 9])
-----------------------------------------------------------------------
Layer (type) Input Shape Param # Tr. Param #
=======================================================================
unet_like-1 [1, 3, 100, 75] 0 0
=======================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
-----------------------------------------------------------------------
| Your model definition must go in the initializer of your module class, __init__, while forward handles the inference logic of that module. The PyTorch layers you are using are module classes; they need to be initialized before being used for inference. Unlike frameworks such as Keras, PyTorch doesn't need a compilation step: the forward logic is written as regular imperative Python code.
Here's something to get you started:
class UNetlike(nn.Module):
    def __init__(self):
        super().__init__()
        self.c1 = nn.Sequential(self.conv2d_block(3, 16),
                                nn.MaxPool2d(kernel_size=2),
                                nn.Dropout2d(0.1))
        self.c2 = nn.Sequential(self.conv2d_block(16, 32),
                                nn.MaxPool2d(kernel_size=2),
                                nn.Dropout2d(0.1))
        self.c3 = nn.Sequential(self.conv2d_block(32, 64),
                                nn.MaxPool2d(kernel_size=2),
                                nn.Dropout2d(0.1))
        self.c4 = nn.Sequential(self.conv2d_block(64, 128),
                                nn.MaxPool2d(kernel_size=2),
                                nn.Dropout2d(0.1))

    def conv2d_block(self, in_channels, out_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                      kernel_size=3, padding="same"),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            nn.Conv2d(in_channels=out_channels, out_channels=out_channels,
                      kernel_size=3, padding="same"),
            nn.BatchNorm2d(out_channels),
            nn.ReLU())

    def forward(self, x):
        p1 = self.c1(x)
        p2 = self.c2(p1)
        p3 = self.c3(p2)
        p4 = self.c4(p3)
        # so on and so forth
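A quick way to confirm the layers are now registered (a sketch):
model = UNetlike()
print(sum(p.numel() for p in model.parameters()))  # non-zero now that the layers live in __init__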
| https://stackoverflow.com/questions/72717778/ |
Am getting the following TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first | I have the error being displayed whilst trying to plot the graph...
I am sharing the code in the following link:
https://colab.research.google.com/drive/1nILxtGSSCmOKrcHg3-_SL0l2bsIwUM1p?usp=sharing
I think I'm missing 'tensor.cpu()' somewhere but I can't really pinpoint it.. Everything else works :/ Can anyone help please?
| The thing is that the result of torch.sum(...) is a tensor. Try changing it in the following way in the lines where you add to running_corrects/val_running_corrects: torch.sum(...).item(), and then update the code accordingly. There are probably other ways to do it, but this should work. Here is a very similar question, by the way: https://stackoverflow.com/a/72704295/14787618
| https://stackoverflow.com/questions/72718881/ |
mat1 and mat2 shapes cannot be multiplied (128x4 and 128x64) | Could not find out why the mat1 from the convolutional network is 128x4 and not 4x128. The following is the convolutional network used:
model = torch.nn.Sequential(
torch.nn.Conv2d(2,32,kernel_size=3,padding=1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(2,2),
torch.nn.Conv2d(32,64,kernel_size=3,padding=1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(2,2),
torch.nn.Conv2d(64,128,kernel_size=3,padding=1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(2,2,padding=1),
torch.nn.Flatten(),
torch.nn.Linear(128, 64),
torch.nn.ReLU(),
torch.nn.Linear(64,4)
)
The model training code is as follows:
epochs = 1000
losses = [] #A
for i in range(epochs): #B
game = Gridworld(size=size, mode='static') #C
# state_ = game.board.render_np().reshape(1,l1) + np.random.rand(1,l1)/10.0 #D
state_ = game.board.render_np() + np.random.rand(size,size)/10.0 #D
state1 = torch.from_numpy(state_).float() #E
print(state1.shape)
status = 1 #F
while(status == 1): #G
qval = model(state1) #H
qval_ = qval.data.numpy()
if (random.random() < epsilon): #I
action_ = np.random.randint(0,4)
else:
action_ = np.argmax(qval_)
action = action_set[action_] #J
game.makeMove(action) #K
state2_ = game.board.render_np().reshape(1,l1) + np.random.rand(1,l1)/10.0
state2 = torch.from_numpy(state2_).float() #L
reward = game.reward()
with torch.no_grad():
newQ = model(state2.reshape(1,l1))
maxQ = torch.max(newQ) #M
if reward == -1: #N
Y = reward + (gamma * maxQ)
else:
Y = reward
Y = torch.Tensor([Y]).detach()
X = qval.squeeze()[action_] #O
loss = loss_fn(X, Y) #P
print(i, loss.item())
clear_output(wait=True)
optimizer.zero_grad()
loss.backward()
losses.append(loss.item())
optimizer.step()
state1 = state2
if reward != -1: #Q
status = 0
if epsilon > 0.1: #R
epsilon -= (1/epochs)
The error log shown is:
torch.Size([2, 12, 12])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-22-d2f43f09fd01> in <module>()
74 status = 1 #F
75 while(status == 1): #G
---> 76 qval = model(state1) #H
77 qval_ = qval.data.numpy()
78 if (random.random() < epsilon): #I
3 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)
101
102 def forward(self, input: Tensor) -> Tensor:
--> 103 return F.linear(input, self.weight, self.bias)
104
105 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x4 and 128x64)
mat1 should be the output of the convolutional network after it is flattened, and mat2 is the linear network following it.
Appreciate any help. Thanks!
| Here are the output shapes for each layer
Conv2d(2,32,kernel_size=3,padding=1) # 32x12x12
MaxPool2d(2,2) # 32x6x6
Conv2d(32,64,kernel_size=3,padding=1) # 64x6x6
MaxPool2d(2,2) # 64x3x3
Conv2d(64,128,kernel_size=3,padding=1) # 128x3x3
MaxPool2d(2,2,padding=1) # 128x2x2
Flatten() # 128x4
You'll need to change the kernel parameters and padding sizes if you wish to obtain an output of a given shape. This link might help in calculating the output shapes after each layer.
Another approach is that you could take a transpose of the flattened array and pass it into the Linear layers. You'll need to add a line in your forward function, like below:
import torch
import torch.nn as nn

class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        self.layer1 = nn.Sequential(
            torch.nn.Conv2d(2, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2, 2))
        self.layer2 = nn.Sequential(
            torch.nn.Conv2d(32, 64, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2, 2))
        self.layer3 = nn.Sequential(
            torch.nn.Conv2d(64, 128, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2, 2, padding=1))
        self.flattened_tensor = nn.Flatten()
        self.linear_layer = nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 4)
        )

    def forward(self, inp):
        conv_output = self.layer3(self.layer2(self.layer1(inp)))
        flattened_output = self.flattened_tensor(conv_output)
        transposed_matrix = torch.transpose(flattened_output, 0, 1)
        linear_output = self.linear_layer(transposed_matrix)
        return linear_output

model = NN()
output = model(arr)
| https://stackoverflow.com/questions/72724452/ |
KL Divergence of two torch.distribution.Distribution objects | I'm trying to determine how to compute KL Divergence of two torch.distribution.Distribution objects. I couldn't find a function to do that so far. Here is what I've tried:
import torch as t
from torch import distributions as tdist
import torch.nn.functional as F
def kl_divergence(x: t.distributions.Distribution, y: t.distributions.Distribution):
"""Compute the KL divergence between two distributions."""
return F.kl_div(x, y)
a = tdist.Normal(0, 1)
b = tdist.Normal(1, 1)
print(kl_divergence(a, b)) # TypeError: kl_div(): argument 'input' (position 1) must be Tensor, not Normal
| torch.nn.functional.kl_div is computing the KL-divergence loss. The KL-divergence between two distributions can be computed using torch.distributions.kl.kl_divergence.
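For the two distributions in the question, that looks like this (the KL divergence between N(0,1) and N(1,1) is 0.5):
>>> torch.distributions.kl.kl_divergence(a, b)
tensor(0.5000)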
| https://stackoverflow.com/questions/72726304/ |
evaluation about K-fold cross validation | After K-fold cross validation, which evaluation metric was averaged?
Precision and recall, or F-measure?
import pandas as pd
import numpy as np
from sklearn.model_selection import KFold
KFold(n_splits=2, random_state=None, shuffle=False)
| The sklearn.model_selection.KFold function is a utility that provides the folds but does not actually perform k-fold validation. You have to implement this yourself!
See documentation description:
Provides train/test indices to split data in train/test sets. Split
dataset into k consecutive folds (without shuffling by default).
Each fold is then used once as a validation while the k - 1 remaining
folds form the training set.
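A minimal sketch of doing that yourself (X, y and model are placeholders for your own data and estimator, not from the question; average whichever metric you care about, e.g. precision, recall or the F-measure):
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

kf = KFold(n_splits=2, shuffle=False)
scores = []
for train_idx, test_idx in kf.split(X):
    model.fit(X[train_idx], y[train_idx])          # train on k-1 folds
    preds = model.predict(X[test_idx])             # evaluate on the held-out fold
    scores.append(f1_score(y[test_idx], preds))    # or precision_score / recall_score
print(np.mean(scores))                             # the averaged metric over all folds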
| https://stackoverflow.com/questions/72727655/ |
IsADirectoryError when loading my pytorch model with load_from_checkpoint | Could someone please explain to me why this function:
def train_graph_classifier(model_name, **model_kwargs):
pl.seed_everything(42)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join('/home/predictor2', "GraphLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=500,
progress_bar_refresh_rate=0)
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join('/home/predictor2', f"GraphLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = GraphLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = GraphLevelGNN(c_in=dataset.num_node_features,
c_out=1 if dataset.num_classes==2 else dataset.num_classes, #change
**model_kwargs)
trainer.fit(model, graph_train_loader, graph_val_loader)
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on validation and test set
train_result = trainer.test(model, graph_train_loader, verbose=False)
test_result = trainer.test(model, graph_test_loader, verbose=False)
result = {"test": test_result[0]['test_acc'], "train": train_result[0]['test_acc']}
return model, result
Returns the error:
Traceback (most recent call last):
File "stability_v3_alternative_net.py", line 604, in <module>
dp_rate=0.2)
File "stability_v3_alternative_net.py", line 591, in train_graph_classifier
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
File "/root/miniconda3/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 139, in load_from_checkpoint
checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/root/miniconda3/lib/python3.7/site-packages/pytorch_lightning/utilities/cloud_io.py", line 46, in load
with fs.open(path_or_url, "rb") as f:
File "/root/miniconda3/lib/python3.7/site-packages/fsspec/spec.py", line 1043, in open
**kwargs,
File "/root/miniconda3/lib/python3.7/site-packages/fsspec/implementations/local.py", line 159, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/fsspec/implementations/local.py", line 254, in __init__
self._open()
File "/root/miniconda3/lib/python3.7/site-packages/fsspec/implementations/local.py", line 259, in _open
self.f = open(self.path, mode=self.mode)
IsADirectoryError: [Errno 21] Is a directory: '/home/predictor'
where /home/predictor is the current directory I'm working in? (I made the predictor2 directory because I get the same error when I replace predictor2 with predictor in the above code.)
I understand that it's telling me that it's trying to write a file or something but is finding that the location is a directory; I can get that from seeing other people's answers. But I can't see specifically what the issue is here, because I don't name my working directory anywhere?
| This:
model=GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
is failing because you are trying to open a directory and not a file. Check that trainer.checkpoint_callback.best_model_path actually is the path to your .ckpt. As it looks like it is not, you need to figure out why your callback isn't storing the right path. You can of course hardcode it yourself as an ugly workaround.
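A quick way to check (sketch):
print(trainer.checkpoint_callback.best_model_path)  # should end in .ckpt, not point at a directory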
| https://stackoverflow.com/questions/72732177/ |
How does `torch.gather()` construct new tensors? | I'm struggling to understand what the function torch.gather is doing.
If I have:
a = torch.tensor([[4,5,6],[7,8,9],[10,11,12]])
b = torch.tensor([[1,1],[1,2]])
Then
>>> torch.gather(input=a, dim=1, index=b)
tensor([[5, 5],
[8, 9]])
While
>>> torch.gather(input=a, dim=0, index=b)
tensor([[ 7, 8],
[ 7, 11]])
Can somebody explain very simply how these output tensors are actually constructed?
| On the first call you index a with b along dim=1 (2nd dimension). The performed operation is:
out[i,j] = a[i, b[i,j]]
Which returns:
[[ a[0, b[0,0]], a[0, b[0,1]] ],
[ a[1, b[1,0]], a[1, b[1,1]] ]]
While on the second call, you index a with b along dim=0 (1st dimension). This time around:
out[i,j] = a[b[i,j], j]
Which returns:
[[ a[b[0,0], 0], a[b[0,1], 1] ],
[ a[b[1,0], 0], a[b[1,1], 1] ]]
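Spelled out as an explicit loop for the dim=1 case (equivalent to the first call, just slower):
out = torch.empty_like(b)
for i in range(b.size(0)):
    for j in range(b.size(1)):
        out[i, j] = a[i, b[i, j]]   # for dim=0 it would be a[b[i, j], j]
print(out)  # tensor([[5, 5], [8, 9]])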
For more explanations, read on with this thread.
| https://stackoverflow.com/questions/72732786/ |
Reshaping with asterisk operator `*` in PyTorch | While reading this annotated implementation of Diffusion Probabilistic models in PyTorch, I got stuck at understanding this function
def extract(a, t, x_shape):
batch_size = t.shape[0]
out = a.gather(-1, t.cpu())
return out.reshape(batch_size, *((1,) * (len(x_shape) - 1))).to(t.device)
What's not clear to me is the final return statement: what does the *((1,) ... ) mean inside the reshape function? Does that asterisk correspond to the unpacking operator? And if yes, how is it used here?
| (1,) * (len(x_shape) - 1))
means to create a tuple with length len(x_shape) - 1 filled with just 1s
*(...)
means to spread the tuple into arguments
So it ends up being (say len(x_shape) == 5)
return out.reshape(batch_size, 1, 1, 1, 1).to(t.device)
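For example, with a hypothetical x_shape:
>>> x_shape = (8, 3, 64, 64)
>>> (1,) * (len(x_shape) - 1)
(1, 1, 1)
so the call becomes out.reshape(batch_size, 1, 1, 1).to(t.device).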
| https://stackoverflow.com/questions/72733726/ |
Get min value and index (of min value) from torch.tensor shape=(x,1) | I want to get a tensor of the min value and its index from a tensor. I have tried torch.min but am unsure how the return works to get BOTH value and index of minimum.
>>> a = torch.tensor([[10], [5], [8], [2], [8]])
# Ideal output
>>> min_value
[2]
>>> min_index
[3]
How can I achieve this using torch.min?
| Here is code example of torch.min usage which returns named tuple with both values and indices of min values. It may have multiple value/indices depending on input tensor shape and dim parameter.
>>> result = torch.min(a, dim=0, keepdim=False)
>>> result.values
tensor([2])
>>> result.indices
tensor([3])
or simply a.min(dim=0)
| https://stackoverflow.com/questions/72739476/ |
PyTorch giving positional argument error while calling a method | PyTorch code is giving an error about a missing positional argument, while I have already given x as an input argument.
Code:
import torch.nn as nn
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
self.hidden = nn.Linear(8, 5)
self.output = nn.Linear(5, 1)
def forward(self, x):
x = 2*F.sigmoid(self.hidden(x))
x = F.softmax(self.output(x), dim= 0)
return x
x = torch.tensor([1.0, 2.0, 3.0, 4.0,5.0,6.0,7.0,8.0] , dtype = torch.float32)
f = Network()
print(f(x))
tensor([1.], grad_fn=)
Network.forward(x)
--------------------------------------------------------------------------- TypeError Traceback (most recent call
last) Input In [98], in <cell line: 1>()
----> 1 Network.forward(x)
TypeError: forward() missing 1 required positional argument: 'x'
| Network.forward(x) - in this line you are calling the method on the class itself, not on an instance. It therefore requires 2 parameters in this case: self and x.
There is no need to call forward method directly.
The following lines perform forward call implicitly and it is a proper way of using it.
f = Network()
print(f(x))
UPD
f = Network()
network_output = f(x) # <-- this will perform `forward` method indirectly
It is described here: torch.nn.Module
Although the recipe for forward pass needs to be defined within this
function, one should call the Module instance afterwards instead of
this since the former takes care of running the registered hooks while
the latter silently ignores them.
Instead of f.forward(x) you have to do just f(x). Otherwise module functionality will be incomplete because you will not get any hooks registered for the given module. It is so for all modules in PyTorch. For example, you used self.hidden(x) in your code, not self.hidden.forward(x).
| https://stackoverflow.com/questions/72740496/ |
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first;' | I have the error being displayed whilst trying to plot the graph...
I am sharing the code in the following link:
https://colab.research.google.com/drive/1PooWIPVhm67iZquqZvxz3mdfmd6rv-3d#scrollTo=qSM7mNrKhBOt
I think I'm missing 'tensor.cpu()' somewhere but I can't really pinpoint it.. Everything else works :/ Can anyone help please?
def train_epoch(
model,
data_loader,
loss_fn,
optimizer,
device,
scheduler,
n_examples
):
model = model.train()
losses = []
correct_predictions = 0
for d in data_loader:
input_ids = d["input_ids"].to(device)
attention_mask = d["attention_mask"].to(device)
targets = d["targets"].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, targets)
correct_predictions += torch.sum(preds == targets)
losses.append(loss.item())
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
return correct_predictions.double() / n_examples, np.mean(losses)
def eval_model(model, data_loader, loss_fn, device, n_examples):
model = model.eval()
losses = []
correct_predictions = 0
with torch.no_grad():
for d in data_loader:
input_ids = d["input_ids"].to(device)
attention_mask = d["attention_mask"].to(device)
targets = d["targets"].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, preds = torch.max(outputs, dim=1)
loss = loss_fn(outputs, targets)
correct_predictions += torch.sum(preds == targets)
losses.append(loss.item())
return correct_predictions.double() / n_examples, np.mean(losses)
%%time
history = defaultdict(list)
best_accuracy = 0
for epoch in range(EPOCHS):
print(f'Epoch {epoch + 1}/{EPOCHS}')
print('-' * 10)
train_acc, train_loss = train_epoch(
model,
train_data_loader,
loss_fn,
optimizer,
device,
scheduler,
len(df_train)
)
print(f'Train loss {train_loss} accuracy {train_acc}')
val_acc, val_loss = eval_model(
model,
val_data_loader,
loss_fn,
device,
len(df_val)
)
print(f'Val loss {val_loss} accuracy {val_acc}')
print()
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['val_acc'].append(val_acc)
history['val_loss'].append(val_loss)
if val_acc > best_accuracy:
torch.save(model.state_dict(), 'best_model_state.bin')
best_accuracy = val_acc
plt.plot(history['train_acc'], label='train accuracy')
plt.plot(history['val_acc'], label='validation accuracy')
plt.title('Training history')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.ylim([0, 1]);
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/matplotlib/cbook/__init__.py in index_of(y)
1626 try:
-> 1627 return y.index.values, y.values
1628 except AttributeError:
AttributeError: 'builtin_function_or_method' object has no attribute 'values'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
8 frames
<__array_function__ internals> in atleast_1d(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __array__(self, dtype)
730 return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
731 if dtype is None:
--> 732 return self.numpy()
733 else:
734 return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
| When you are computing the number of correct predictions: correct_predictions += torch.sum(preds == targets), both preds and targets are CUDA tensors, which matplotlib knows nothing about.
In this case, we should detach the tensor (to stop Autograd tracking it), push the data from GPU to CPU, and convert it to numpy elements, like so: torch.sum(preds == targets).detach().cpu().numpy().
Further, since the number of correct predictions is just a single number, we can just do torch.sum(preds == targets).item() which is a shorthand for the above, but only if the tensor is a singleton.
This way, correct_predictions is a Python integer, & you can return float(correct_predictions) / n_examples from your methods and pass them onto matplotlib.
For further reading:
https://discuss.pytorch.org/t/what-is-the-difference-between-loss-and-loss-item/126083
https://discuss.pytorch.org/t/does-item-automatically-move-the-data-to-the-cpu/69629/6
https://discuss.pytorch.org/t/difference-between-loss-item-and-loss-detach-cpu-numpy/127773
| https://stackoverflow.com/questions/72741647/ |
Slice 3d-tensor-based dataset into smaller tensor lengths | I have a dataset for training networks, formed out of two tensors, my features and my labels. The shape of my demonstration set is [351, 4, 34] for features, and [351] for labels.
Now, I would like to re-shape the dataset into chunks of size k (ideally while loading data with DataLoader), to obtain a new demonstration set for features of shape [351 * n, 4, k] and the corresponding label shape [351 * n], with n = floor(34 / k). The main aim is to reduce the length of each feature, to decrease the size of my network afterwards.
As written example: Starting from
t = [[1, 2, 3, 4],
[5, 6, 7, 8]]
i.e. a [2, 4]-tensor, with
l = [1, 0]
as labels, I would like to be able to go to (with k = 2)
t = [[1, 2],
[3, 4],
[5, 6],
[7, 8]]
l = [1, 1, 0, 0]
or to (with k = 3)
t = [[1, 2, 3],
[5, 6, 7]]
l = [1, 0]
I found some solutions for reshaping one of the tensors (by using variations of split()), but then I would have to transfer that to my other tensor, too, and therefore I'd prefer solutions inside my DataLoader instead.
Is that possible?
| You can reshape the input to the desired shape (first dimension is n times longer) while the label can be repeated with torch.repeat_interleave.
from math import floor

def split(x, y, k=2):
n = floor(x.size(1) / k)
x_ = x.reshape(len(x)*n, -1)[:,:k]
y_ = y.repeat_interleave(len(x_)//len(y))
return x_, y_
You can test it like so:
>>> split(t, l, k=2)
(tensor([[1, 2],
[3, 4],
[5, 6],
[7, 8]]), tensor([1, 1, 0, 0]))
>>> split(t, l, k=3)
(tensor([[1, 2, 3],
[5, 6, 7]]), tensor([1, 0]))
I recommend doing this kind of processing in your dataset class.
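For example, here is a minimal sketch of such a dataset wrapper (the class and variable names are made up for illustration); it slices length-k chunks lazily in __getitem__, so features and labels always stay in sync:
import torch
from torch.utils.data import Dataset, DataLoader

class ChunkedDataset(Dataset):
    """Yields length-k slices along the last feature dimension, repeating the label per chunk."""
    def __init__(self, features, labels, k):
        self.features = features               # e.g. shape [351, 4, 34]
        self.labels = labels                   # e.g. shape [351]
        self.k = k
        self.n = features.size(-1) // k        # chunks per sample

    def __len__(self):
        return len(self.features) * self.n

    def __getitem__(self, idx):
        sample, chunk = divmod(idx, self.n)
        x = self.features[sample, ..., chunk * self.k:(chunk + 1) * self.k]
        return x, self.labels[sample]

ds = ChunkedDataset(torch.randn(351, 4, 34), torch.randint(0, 2, (351,)), k=17)
loader = DataLoader(ds, batch_size=8, shuffle=True)   # feature batches of shape [8, 4, 17]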
| https://stackoverflow.com/questions/72744419/ |
Finetuning LayoutLM on FUNSD-like dataset - index out of range in self | I'm experimenting with huggingface transformers to finetune microsoft/layoutlmv2-base-uncased through AutoModelForTokenClassification on my custom dataset that is similar to FUNSD (pre-processed and normalized). After a few iterations of training I get this error :
Traceback (most recent call last):
File "layoutlmV2/train.py", line 137, in <module>
trainer.train()
File "..../lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "..../lib/python3.8/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "..../lib/python3.8/site-packages/transformers/trainer.py", line 2345, in training_step
loss = self.compute_loss(model, inputs)
File "..../lib/python3.8/site-packages/transformers/trainer.py", line 2377, in compute_loss
outputs = model(**inputs)
File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
return forward_call(*input, **kwargs)
File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1228, in forward
outputs = self.layoutlmv2(
File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
return forward_call(*input, **kwargs)
File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 902, in forward
text_layout_emb = self._calc_text_embeddings(
File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 753, in _calc_text_embeddings
spatial_position_embeddings = self.embeddings._calc_spatial_position_embeddings(bbox)
File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 93, in _calc_spatial_position_embeddings
h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
return forward_call(*input, **kwargs)
File "..../lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "..../lib/python3.8/site-packages/torch/nn/functional.py", line 2203, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
After further inspection (vocab size, bboxes, dimensions, classes...) I noticed that there are negative values inside the input tensor causing the error, while the input tensors of successful previous iterations contain non-negative integers only. These negative numbers are returned by _calc_spatial_position_embeddings(self, bbox) in modeling_layoutlmv2.py
line 92 :
h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
What may cause the returned input values to be negative?
What could I do to prevent this error from happening?
Example of the input tensor that triggers the error in torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) :
tensor([[ 0, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 10, 10, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12,
12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
8, 5, 5, 5, 5, 5, 5, -6, -6, -6, -6, -6, -6, 1, 1, 1, 1, 1,
5, 5, 5, 5, 5, 5, 7, 5, 7, 7, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]])
| After double-checking the dataset, and specifically the coordinates of the labels, I've found that some rows' bbox coordinates lead to a zero width or height. Here's a simplified example:
x1, y1, x2, y2 = dataset_row["bbox"]
print((x2-x1 < 1) or (y2-y1 < 1)) #output is sometimes True
After removing these labels from the dataset, the issue was resolved.
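A small, hypothetical filtering pass based on that check (dataset_rows and the dict layout are just illustrative, not the real FUNSD structure):
dataset_rows = [
    {"bbox": [10, 10, 50, 20], "label": "B-HEADER"},
    {"bbox": [30, 40, 30, 60], "label": "O"},          # zero width -> dropped
]

def has_valid_bbox(row):
    x1, y1, x2, y2 = row["bbox"]
    return (x2 - x1 >= 1) and (y2 - y1 >= 1)

clean_rows = [row for row in dataset_rows if has_valid_bbox(row)]
print(len(clean_rows))   # 1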
| https://stackoverflow.com/questions/72747399/ |
How to have a CCC Loss Function in PyTorch? | How would one write a custom loss function (in this case for CCC) for PyTorch that behaves similarly to other losses, i.e. supports backward()?
| As long as you use PyTorch operators that are differentiable, your function will be too.
So you can use torch.mean, torch.var, and torch.cov. Isn't that what you're looking for?
import torch

def CCCLoss(x, y):
    cov = torch.cov(torch.stack([x, y]))[0, 1]   # torch.cov expects a single 2-D tensor whose rows are the variables
    ccc = 2 * cov / (x.var() + y.var() + (x.mean() - y.mean())**2)
    return ccc
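Since CCC is a correlation (higher is better), you would typically minimise 1 - CCC; a quick usage sketch:
x = torch.randn(16, requires_grad=True)
y = torch.randn(16)

loss = 1 - CCCLoss(x, y)
loss.backward()          # gradients flow into x just like with the built-in losses
print(x.grad.shape)      # torch.Size([16])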
| https://stackoverflow.com/questions/72750834/ |
PyTorch ImageFolder Dataset conversion to 1d | I have a dataset with three labels.
First, I'm loading my data into a Dataset with the ImageFolder class and the CenterCrop transform.
So every picture now has three channels r, g, b with 224x224 values from 0..1. I thought my NN has 224x224x3 input nodes.
I get this error:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (21504x224 and 150528x200)
Assuming that the first mat1 is the Image (I have no clue where the 21504 is coming from) and the second one is the first nn.Linear(224*224*3, 200) Layer, I can see why they cannot be multiplied. So, I changed the nn.Linear(224*224*3) to nn.Linear(224). Now it works, but..
transform = transforms.Compose([transforms.Resize((300, 200)),
transforms.CenterCrop(224),
transforms.ToTensor()])
dataset = datasets.ImageFolder('/content/drive/MyDrive/Colab Notebooks/data', transform=transform)
# ... BatchSize = 32
model = nn.Sequential(
nn.Linear(224, 200), # <- The Size of one Row!
nn.Sigmoid(),
nn.Linear(200, 40),
nn.Sigmoid(),
nn.Dropout(p=0.2),
nn.Linear(40, 40),
nn.Sigmoid(),
nn.Linear(40, 20),
nn.Sigmoid(),
nn.Linear(20, 3)
).to(device)
# ...
prediction = model(x)
For some Reaseon my prediction has now this from.
prediction[32][3][224]
I would expect 32 items in a list for a batch. Every item from the batch should contain 3 values with the probabilities of the labels. But why does the height/width come up here?
I think I have to change the Format of the dataset from 224x224x3 (3d) to 150528 1d, but operations like view() did not work, probably because of the lazy loading from ImageFolder.
The NN works fine with the MINST dataset. (28x28) In and (10) Out. So my guess is: The dataset has to be transformed, but I can't figure out how.
| It can work if you apply transforms.Lambda and change transforms in the following way:
transform = transforms.Compose([transforms.Resize((300, 200)),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Lambda(lambda x: torch.flatten(x))])
As a result you will have a (batch_size, 150528) shape for each batch, which means that you need to specify an input size of 150528 for the first linear layer. I'm not sure it's reasonable to classify images with fully-connected linear layers in such a case: it will be slow and is unlikely to converge to good predictions. You'd be better off using convolution layers instead.
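For reference, here is a sketch of your model with only the first layer adjusted to the flattened input:
from torch import nn

flat_dim = 3 * 224 * 224   # 150528 values per image after the Lambda(flatten) transform
model = nn.Sequential(
    nn.Linear(flat_dim, 200),
    nn.Sigmoid(),
    nn.Linear(200, 40),
    nn.Sigmoid(),
    nn.Dropout(p=0.2),
    nn.Linear(40, 40),
    nn.Sigmoid(),
    nn.Linear(40, 20),
    nn.Sigmoid(),
    nn.Linear(20, 3),
)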
| https://stackoverflow.com/questions/72753208/ |
Is there a PyTorch lightning equivalent for Tensorflow? | I saw PyTorch Lightning advertised as PyTorch but for people who don't want to worry so much about the underlying methodology. This narrative is on the PyTorch lightning website but also here for example.
For hardware reasons, does something similar exist for TensorFlow? I have a code example for neural nets here written in PyTorch and PyTorch Lightning but am not sure how to rewrite it in TensorFlow.
| Probably the closest equivalent would be Keras (formerly a separate project, but for some time now integrated into TF - you can use Keras as a high-level API).
Note that you can also use the tensorflow_addons package (I personally enjoy working with it) and other libraries and wrappers that come to the aid of TensorFlow; since Keras is integrated into TF, you are very likely to use them in your Keras code as well.
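As a rough illustration (not a translation of your exact network), the Keras high-level API covers the training loop, checkpointing and logging much like Lightning's Trainer does. Assumptions here: a simple dense classifier and toy in-memory numpy arrays standing in for your data:
import numpy as np
import tensorflow as tf

x_train = np.random.rand(256, 20).astype("float32")    # toy stand-in features
y_train = np.random.randint(0, 10, size=(256,))        # toy stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit plays the role of Lightning's Trainer.fit: batching, epochs, callbacks
model.fit(x_train, y_train, epochs=2, batch_size=32,
          callbacks=[tf.keras.callbacks.ModelCheckpoint("best_model.h5")])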
| https://stackoverflow.com/questions/72754232/ |
RuntimeError: the derivative for 'target' is not implemented for Auto Encoder | I am getting the following error - RuntimeError: the derivative for 'target' is not implemented
I did have a look at similar posts however they are different from my problem. I'm trying to code an Auto Encoder from scratch. Here's my code -
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torch import nn
from torchviz import make_dot
trainset = torchvision.datasets.FashionMNIST(root='./data', train=True,
download=True)
data = trainset.data.float()
# print(trainset.data.shape)
# plt.imshow(trainset.data[9])
# plt.show()
device = "cuda" if torch.cuda.is_available() else "cpu"
data = data.to(device)
print(f"Using device = {device}")
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.encode = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 30),
nn.ReLU()
)
self.decode = nn.Sequential(
nn.Linear(30, 512),
nn.ReLU(),
nn.Linear(512, 28*28),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
encoded = self.encode(x)
decoded = self.decode(encoded)
return decoded
model = NeuralNetwork().to(device)
# print(model)
lossFn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 1e-3)
for epoch in range(1000):
optimizer.zero_grad()
outputs = model(data)
loss = lossFn(outputs, outputs)
loss.backward()
optimizer.step()
model.forward(data)
| The reason you're getting this error message is that you are calling a function which expects its argument not to require gradient computation. More specifically, nn.BCELoss expects the target (2nd argument) to not require gradient.
You can fix this by detaching the argument from the graph:
lossFn(outputs, outputs.detach())
| https://stackoverflow.com/questions/72758008/ |
TypeError: 'tuple' object is not callable when using flatten layer | I have the following model:
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
def __init__(self, input_size, num_classes):
super(MyModel, self).__init__()
self.layer_1 = nn.Conv1d(1, 16, 3, bias=False, stride=2)
self.activation_1 = F.relu
self.adap = nn.AdaptiveAvgPool1d(1)
self.flatten = nn.Flatten (),
self.layer_2 = torch.nn.Linear(2249, 500)
self.activation_2 = F.relu
self.layer_3 = torch.nn.Linear(500, 2)
pass
def forward(self, x, labels=None):
x = x.reshape(256, 1, -1)
x = self.layer_1(x)
x = self.activation_1(x)
x = self.flatten(x)
return x
When running torchinfo
model = MyModel(input_size=4500, num_classes=2)
torchinfo.summary(model, (256, 4500))
I'm getting this error:
Input In [101], in MyModel.forward(self, x, labels)
30 x = self.activation_1(x)
β> 31 x = self.flatten(x)
32 return x
TypeError: 'tuple' object is not callable
What is wrong ?
What do I need to change ?
| You have a trailing , at the end of the flatten line, which turns that attribute into a tuple.
Please remove it:
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
def __init__(self, input_size, num_classes):
super(MyModel, self).__init__()
self.layer_1 = nn.Conv1d(1, 16, 3, bias=False, stride=2)
self.activation_1 = F.relu
self.adap = nn.AdaptiveAvgPool1d(1)
self.flatten = nn.Flatten()
self.layer_2 = torch.nn.Linear(2249, 500)
self.activation_2 = F.relu
self.layer_3 = torch.nn.Linear(500, 2)
pass
def forward(self, x, labels=None):
x = x.reshape(256, 1, -1)
x = self.layer_1(x)
x = self.activation_1(x)
x = self.flatten(x)
return x
| https://stackoverflow.com/questions/72760152/ |
A torch version compatibility error occurs when using Conv2d | I want to use Python 2.7 and torch; I am using torch 1.4.0. When I started implementing a Conv2d example such as this example:
c = nn.Conv2d(1, 1, (2, 1), stride=1)
x = torch.rand(1, 4).unsqueeze(-1)
I can execute these two lines in Python 2.7 and Python 3. However, when I call the c layer for a forward propagation it only works in Python 3.
y = c(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/larc2/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/larc2/.local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 343, in forward
return self.conv2d_forward(input, self.weight)
File "/home/larc2/.local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 1 1 2, but got 3-dimensional input of size [1, 4, 1] instead
Whereas, the python 3 result for this exact same code is as follows:
tensor([[[-0.9926],
[-0.6937],
[-0.6704]]], grad_fn=<SqueezeBackward1>)
| As the error writes you need a 4-D input for a 2-D Conv: (N,C,H,W)
You are giving a 3-D input.
Please use 4-D tensor as the Docs ask: https://pytorch.org/docs/1.4.0/nn.html#torch.nn.Conv2d
The reason it works in Python 3 is probably that PyTorch added support for unbatched 3-D input (C,H,W) to 2-D Conv in a newer version, and you may have installed a newer PyTorch version in the Python 3 env
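To make the snippet above work on both interpreters, you can add the batch dimension explicitly; a small sketch:
import torch
import torch.nn as nn

c = nn.Conv2d(1, 1, (2, 1), stride=1)
x = torch.rand(1, 4).unsqueeze(-1)      # shape (1, 4, 1) = (C, H, W)
y = c(x.unsqueeze(0))                   # shape (1, 1, 4, 1) = (N, C, H, W)
print(y.shape)                          # torch.Size([1, 1, 3, 1])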
| https://stackoverflow.com/questions/72760353/ |
Formatting our data into PyTorch Dataset object for fine-tuning BERT | I'm using an already existing code from Towards Data Science for fine-tuning a BERT Model.
The problem I'm facing is in this part of the code, where we try to format our data into a PyTorch data.Dataset object:
class MeditationsDataset(torch.utils.data.Dataset):
def _init_(self, encodings, *args, **kwargs):
self.encodings = encodings
def _getitem_(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def _len_(self):
return len(self.encodings.input_ids)
dataset = MeditationsDataset(inputs)
When I run the code, I face this error:
TypeError Traceback (most recent call last)
<ipython-input-144-41fc3213bc25> in <module>()
----> 1 dataset = MeditationsDataset(inputs)
/usr/lib/python3.7/typing.py in __new__(cls, *args, **kwds)
819 obj = super().__new__(cls)
820 else:
--> 821 obj = super().__new__(cls, *args, **kwds)
822 return obj
823
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
I already searched for this error but the problem here is that sadly I'm not familiar with either PyTorch or OOP so I couldn't fix this problem. Could you please let me know what should I add or remove from this code so I can run it? Thanks a lot in advance.
Also if needed, our data is as below:
{'input_ids': tensor([[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 1021, 1005, ..., 0, 0, 0],
...,
[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 103, 1005, ..., 0, 0, 0],
[ 2, 4, 0, ..., 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 0, ..., 0, 0, 0]]),
'labels': tensor([[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 1021, 1005, ..., 0, 0, 0],
...,
[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 1021, 1005, ..., 0, 0, 0],
[ 2, 4, 0, ..., 0, 0, 0]])}
| Special functions in Python use a double-underscore prefix and suffix. In your case, to implement a data.Dataset, you must have __init__, __getitem__, and __len__:
class MeditationsDataset(torch.utils.data.Dataset):
def __init__(self, encodings, *args, **kwargs):
self.encodings = encodings
def __getitem__(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def __len__(self):
return len(self.encodings.input_ids)
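With the double underscores in place, you can instantiate it and hand it to a DataLoader as usual, for example:
dataset = MeditationsDataset(inputs)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

batch = next(iter(loader))
print(batch["input_ids"].shape)   # e.g. torch.Size([16, seq_len])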
| https://stackoverflow.com/questions/72761858/ |
Create a PyTorch tensor of sequences which excludes specified value | I have a 1d PyTorch tensor containing integers between 0 and n-1. Now I need to create a 2d PyTorch tensor with n-1 columns, where each row is a sequence from 0 to n-1 excluding the value in the first tensor. How can I achieve this efficiently?
Ex:
n = 3
a = torch.Tensor([0, 1, 2, 1, 2, 0])
# desired output
b = [
[1, 2],
[0, 2],
[0, 1],
[0, 2],
[0, 1],
[1, 2]
]
Typically, the a.numel() >> n.
Detailed Explanation:
The first element of a is 0, hence it has to map to the sequence [0, 1, 2] excluding 0, which is [1, 2].
Similarly, the second element of a is 1, hence it has to map to [0, 2] and so on.
PS: I actually have an additional batch dimension, which I've excluded here for simplicity. Hence, I need the solution to be easily extendable to one additional dimension.
| We can construct a tensor with the desired sequences and index with tensor a.
import torch
n = 3
a = torch.Tensor([0, 1, 2, 1, 2, 0]) # using torch.tensor is recommended
def exclude_gather(a, n):
sequences = torch.nonzero(torch.arange(n) != torch.arange(n)[:,None], as_tuple=True)[1].reshape(-1, n-1)
return sequences[a.long()]
exclude_gather(a, n)
Output
tensor([[1, 2],
[0, 2],
[0, 1],
[0, 2],
[0, 1],
[1, 2]])
We can add a batch dimension with functorch.vmap
from functorch import vmap
n = 4
b = torch.Tensor([[0, 1, 2, 1, 3, 0],[0, 3, 1, 0, 2, 1]])
vmap(exclude_gather, in_dims=(0, None))(b, n)
Output
tensor([[[1, 2, 3],
[0, 2, 3],
[0, 1, 3],
[0, 2, 3],
[0, 1, 2],
[1, 2, 3]],
[[1, 2, 3],
[0, 1, 2],
[0, 2, 3],
[1, 2, 3],
[0, 1, 3],
[0, 2, 3]]])
| https://stackoverflow.com/questions/72762251/ |
AttributeError: 'Tensor' object has no attribute 'close' | This is the code I am working with:
I searched for a similar error. Do I need to change the .close()?
Can anyone help please?
| You should remove the close() function call on the parameter tensor image in line -
image = tensor.cpu().close().detach().numpy(). This should be replaced with - image = tensor.cpu().detach().numpy()
| https://stackoverflow.com/questions/72762742/ |
How to exclude loss computation on certain tensors in PyTorch? | I am trying to do pose-conditioned face generation. The idea is to use an auto-encoding network that takes an image as an input and a conditional vector containing the pose information, such that the generated image is conditioned on the conditional vector.
For computing the loss using MSE, I am generating the face landmarks of the predicted image using a face-landmark detection library. Unfortunately, during the early epochs, the network produces garble and the face-landmark detection library returns None instead of an expected tensor of the form 256 x 256 x 3 where a pixel value indicates the presence of a landmark.
What I would like to do is to ignore computing the loss when no face has been detected.
Example -- assume that my batch is of the form -> 10 x 256 x 256 x 3, where 10 is the batch_size and 256x256 is the dimension of the image with 3 channels. For the predictions, let's assume that no landmarks could be generated for 3 images in the batch. I could set the prediction tensor for which no face landmarks could be generated to NaN values and the predicted landmarks would have the form - 10 x 256 x 256 x 3. In my MSE loss function, I would like to ignore gradient computation originating from tensor containing the NaN values. To make life simple, I want to ignore those individual 3 tensors that had all NaN values.
Any help would be appreciated. I have a backup with a for loop but that is sub-optimal.
This is some sample code -
import numpy as np
import torch
from torchvision import transforms
import random
transform = transforms.Compose(
[
transforms.ToPILImage(),
transforms.CenterCrop(size),
transforms.ToTensor(),
]
)
image_tensors = torch.randn(10, 3, 256, 256)
tensor_meshes = list()
for image_tensor in image_tensors:
image_array = image_tensor.detach().cpu().numpy()
image_landmarks = generate_mesh_from_image(image_array) # returns either None or landmark image of dimension -> 3 x 256 x 256
if image_landmarks is None:
        image_landmarks = np.full((3, 256, 256), np.nan) # creates an array filled with NaN values
# convert to tensor
landmark_tensor = transform(image_landmarks)
tensor_meshes.append(landmark_tensor)
# convert tensor_meshes to a torch tensor of dimension -> 10 x 3 x 256 x 256
## SAMPLE CODE ##
# some dummy code to simulate generating landmark images
def generate_mesh_from_image(image_array):
rand = random.randint(0, 1)
if rand == 0:
return None
else:
return np.random.randn(3, 256, 256)
The loss now needs to be computed between the prediction tensor (10 x 3 x 256 x 256) and the ground truth tensor (10 x 3 x 256 x 256). However, the prediction tensor contains some tensor elements that have all NaN values which I would like ignore during the loss computation.
| You can create a mask containing ones for images with landmarks and zeros for those without. Then simply compute the loss on the whole batch and apply the mask afterwards, before performing back propagation.
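A minimal sketch of that idea (my own illustration, assuming preds and targets of shape [B, 3, 256, 256] and that a sample is invalid when its landmark tensor is all NaN):
import torch

def masked_mse(preds, targets):
    # valid[i] is False when every value of sample i is NaN (no landmarks detected)
    valid = ~torch.isnan(targets).flatten(1).all(dim=1)        # shape [B]
    if valid.sum() == 0:
        return preds.sum() * 0.0                               # keeps the graph, contributes zero loss
    diff = preds[valid] - targets[valid]
    return (diff ** 2).mean()

preds = torch.randn(10, 3, 256, 256, requires_grad=True)
targets = torch.randn(10, 3, 256, 256)
targets[2:5] = float("nan")                                     # 3 samples without landmarks

loss = masked_mse(preds, targets)
loss.backward()                                                 # gradients come only from the 7 valid samples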
| https://stackoverflow.com/questions/72763661/ |
Why is my autoencoder not learning the FMNIST dataset? | I am using a simple autoencoder to learn images from the FashionMnist dataset. I have preprocessed the dataset by grayscaling and normalizing it. I did not make the network too deep, to prevent it from creating a direct mapping.
Here's my PyTorch code -
import torch
import torchvision as tv
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torch import nn
import os
from torchviz import make_dot
transforms = tv.transforms.Compose([tv.transforms.Grayscale(num_output_channels=1)])
trainset = tv.datasets.FashionMNIST(root='./data', train=True,
download=True, transform=transforms)
PATH = './ae.pth'
data = trainset.data.float()
data = data/255
# print(trainset.data.shape)
plt.imshow(trainset.data[0], cmap = 'gray')
plt.show()
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.encode = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 30),
nn.ReLU()
)
self.decode = nn.Sequential(
nn.Linear(30, 512),
nn.ReLU(),
nn.Linear(512, 28*28),
nn.Sigmoid()
)
def forward(self, x):
x = self.flatten(x)
encoded = self.encode(x)
decoded = self.decode(encoded)
return decoded
if(os.path.exists(PATH)):
print("Loading data on cpu")
device = torch.device('cpu')
model = NeuralNetwork()
model.load_state_dict(torch.load(PATH, map_location=device))
else:
device = "cuda" if torch.cuda.is_available() else "cpu"
data = data.to(device)
print(f"Using device = {device}")
model = NeuralNetwork().to(device)
# print(model)
lossFn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 1e-3)
for epoch in range(1000):
print("Epoch = ", epoch)
optimizer.zero_grad()
outputs = model(data)
loss = lossFn(outputs, data.reshape(-1, 784))
loss.backward()
optimizer.step()
torch.save(model.state_dict(), PATH)
data = data.to("cpu")
model = model.to("cpu")
pred = model(data)
pred = pred.reshape(-1, 28, 28)
# print(pred.shape)
plt.imshow(pred.detach().numpy()[0], cmap = 'gray')
plt.show()
For testing, I am inputting the following image -
However, I get this as output -
| I had an intuition that there was an issue with your loss function. When working with images, distance-based losses such as L1 or L2 losses work really well, as you are essentially measuring how far-away your predictions are from the ground-truth images. This was what I had observed as well, as the loss wasn't converging with BCE and it was rather oscillating.
I rewrote the entire thing and replaced BCE loss with MSE Loss and in just 50 epochs, the loss has gone down considerably, and it is still going down.
Here is the prediction after just 50 epochs -
The ground-truth image is -
I believe that you can get the loss down much more if you train for longer.
Here is the full code. I used a dataloader for batchifying and processing the data.
I also changed the transformations so that the resulting data is a torch tensor.
import torch
import torchvision as tv
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torch import nn
from torch.utils.data import DataLoader
transforms = tv.transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.ToTensor()
])
trainset = tv.datasets.FashionMNIST(root='./data', train=True,
download=True, transform=transforms)
loader = DataLoader(trainset, batch_size=32, num_workers=1, shuffle=True)
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.encode = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 30),
nn.ReLU()
)
self.decode = nn.Sequential(
nn.Linear(30, 512),
nn.ReLU(),
nn.Linear(512, 28*28),
nn.Sigmoid()
)
def forward(self, x):
x = self.flatten(x)
encoded = self.encode(x)
decoded = self.decode(encoded)
return decoded
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = NeuralNetwork().to(device)
lossFn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 1e-2)
epochs = 50
for epoch in range(epochs):
for images, labels in loader:
optimizer.zero_grad()
images, labels = images.to(device), labels.to(device)
outputs = model(images)
loss = lossFn(outputs, images.reshape(-1, 28*28))
loss.backward()
optimizer.step()
print(f'Loss : {loss.item()}')
print(f'Epochs done : {epoch}')
Here is some inference code -
# infer on some test data
testset = tv.datasets.FashionMNIST(root='./data', train=False,
download=False, transform=transforms)
testloader = DataLoader(testset, shuffle=False, batch_size=32, num_workers=1)
test_images, test_labels = next(iter(testloader))
test_images = test_images.to(device)
predictions = model(test_images)
prediction = predictions[0]
prediction = prediction.view(1, 28, 28)
prediction = prediction.detach().cpu().numpy()
prediction = prediction.transpose(1, 2, 0)
# plot the prediction
plt.imshow(prediction, cmap = 'gray')
plt.show()
# plot the actual image
test_image = test_images[0]
test_image = test_image.detach().cpu().numpy()
test_image = test_image.transpose(1, 2, 0)
plt.imshow(test_image, cmap='gray')
plt.show()
This is the loss going down --
Epochs done : 39
Loss : 0.04641226679086685
Epochs done : 40
Loss : 0.04445071145892143
Epochs done : 41
Loss : 0.05033266171813011
Epochs done : 42
Loss : 0.04813298210501671
Epochs done : 43
Loss : 0.0474831722676754
Epochs done : 44
Loss : 0.044186390936374664
Epochs done : 45
Loss : 0.049083154648542404
Epochs done : 46
Loss : 0.04645842686295509
Epochs done : 47
Loss : 0.04586248844861984
Epochs done : 48
Loss : 0.0467853844165802
Epochs done : 49
| https://stackoverflow.com/questions/72763884/ |
How do I merge image back after splitting into blocks | I splitted an image into blocks with the code below
block = []
for x in range(0, 224,16):
for y in range(0, 224,16):
block.append(im[y:y+16, x:x+16])
np_block = np.array(block)
The image shape is 224,224,3 and each block is 16,16,3. How do I combine these blocks together to form the complete image?.
When I reshaped it (np_block.reshape(224, 224,3)), I got the below image which is completely different from the real image
Edit:
This has been answered in the comment section as thus:
reshaped_img = np_block.reshape(14,14,16,16,3).transpose(0,3,1,2,4).reshape(14*16,14*16,3)
The resulting image is rotated and also flipped compared to the original image. Here is the original image
And here is the resulting image from the provided method to merge the blocks together
Doing restored_img = np.flip(np.rot90(reshaped_img, k=1, axes=(0, 1)), axis=0) will correctly restore the resulting image to its original form.
However, converting restored_img into a tensor in pytorch, tensor_img = torch.from_numpy(restored_img)throws the following error:
ValueError: At least one stride in the given numpy array is negative, and tensors with negative strides are not currently supported. (You can probably work around this by making a copy of your array with array.copy().)
I worked around this as suggested in the error message.
tensor_img = torch.from_numpy(restored_img.copy()) got rid of the error but the transformations (rotation and flipping) made on restored_img is not preserved. tensor_img stil remains the same as the last image above.
I noticed that the splitted blocks are not appended in the correct order when splitting the original image. This is evident in np_block when I visualized it (the first image above looks flipped and rotated). So, I changed the line block.append(im[y:y+16, x:x+16]) to block.append(im[x:x+16, y:y+16])
Revisualizing np_block consequently gives the following image
This seems closer to the original image and the blocks also look like they are in the correct order (this is what I've observed). But the blocks do not overlap with each other.
How do I work around this, please?
| Hi there, the problem with the originally proposed solution is that you have to rearrange the block dimensions 0,3 and 1,2; if you swap those dimensions it will be solved. Check:
block = [] #create blocks
for x in range(0, 224,16):
for y in range(0, 224,16):
block.append(im[y:y+16, x:x+16])
np_block = np.array(block)
reshaped_img = np_block.reshape(14,14,16,16,3).transpose(1,2,0,3,4).reshape(14*16,14*16,3)
#check indices are now (1,2,0,3,4) instead of (0,3,1,2,4)
plt.imshow(np.hstack([im,reshaped_img]))
Edit
Also, compared with the original comment this just transposes the output dimensions,
reshaped_img.transpose(1,0); but if you can correct it in the original reshape it's a better solution
Edit 2
using the corrected stacking (with x in first dimension) the code would be:
block = []
for x in range(0, 224,16):
for y in range(0, 224,16):
block.append(im[x:x+16, y:y+16])
np_block = np.array(block)
reshaped_img = np_block.reshape(14,14,16,16,3).transpose(0,2,1,3,4).reshape(14*16,14*16,3)
Here it makes more sense: the first dimension of the block reshape (dim 0) is grouped with the first dimension of the sub-block (dim 2), then the same is done for the second dimensions of the block and sub-block (1,3).
In any case you can check the good solution by:
plt.imshow(np.hstack([im,reshaped_img]))
Edit 3
Last edit, here a code to check the original block reshape (with code of edit 2). Here you can see every block has some sense of getting a cut in the image:
b = np_block.reshape(14,14,16,16,3)
plt.figure()
for i in range(14):
for j in range(14):
plt.subplot(14,14,14*i+j+1)
plt.imshow(b[i,j])
plt.axis('off')
| https://stackoverflow.com/questions/72764481/ |
Implement multiprocessing to test two videos simultaneously in opencv for object detection | I am implementing an object detection model using a YOLO algorithm with PyTorch and OpenCV. Running my model on a single video works fine. But whenever I am trying to use multiprocessing for testing more videos at once it is freezing. Can you please explain what is wrong with this code ??
import torch
import cv2
import time
from multiprocessing import Process
model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/best.pt', force_reload=True)
def detectObject(video,name):
cap = cv2.VideoCapture(video)
while cap.isOpened():
pTime = time.time()
ret, img = cap.read()
cTime = time.time()
fps = str(int(1 / (cTime - pTime)))
if img is None:
break
else:
results = model(img)
labels = results.xyxyn[0][:, -1].cpu().numpy()
cord = results.xyxyn[0][:, :-1].cpu().numpy()
n = len(labels)
x_shape, y_shape = img.shape[1], img.shape[0]
for i in range(n):
row = cord[i]
# If score is less than 0.3 we avoid making a prediction.
if row[4] < 0.3:
continue
x1 = int(row[0] * x_shape)
y1 = int(row[1] * y_shape)
x2 = int(row[2] * x_shape)
y2 = int(row[3] * y_shape)
bgr = (0, 255, 0) # color of the box
classes = model.names # Get the name of label index
label_font = cv2.FONT_HERSHEY_COMPLEX # Font for the label.
cv2.rectangle(img, (x1, y1), (x2, y2), bgr, 2) # Plot the boxes
cv2.putText(img, classes[int(labels[i])], (x1, y1), label_font, 2, bgr, 2)
cv2.putText(img, f'FPS={fps}', (8, 70), label_font, 3, (100, 255, 0), 3, cv2.LINE_AA)
img = cv2.resize(img, (700, 700))
cv2.imshow(name, img)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
Videos = ['../Dataset/Test1.mp4','../Dataset/Test2.mp4']
for i in Videos:
process = Process(target=detectObject, args=(i, str(i)))
process.start()
Every time I run that code it freezes.
Here is the output :
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/com/.cache/torch/hub/master.zip
YOLOv5 2022-6-27 Python-3.9.9 torch-1.11.0+cu102 CPU
Fusing layers...
YOLOv5s summary: 213 layers, 7023610 parameters, 0 gradients
Adding AutoShape...
| I got it to work by adding torch multiprocessing code.
from torch.multiprocessing import Pool, Process, set_start_method
try:
set_start_method('spawn', force=True)
except RuntimeError:
pass
videos = ['videos/video1.mp4', 'videos/video2.mp4']
for i in videos:
process = Process(target=detectObject, args=(i, str(i)))
process.start()
I was able to run on multiple videos at once this way.
| https://stackoverflow.com/questions/72764788/ |
Pytorch nn.MSELoss without specifying target | I was having difficulty with my loss getting stuck at a particular value. It would always decrease to a certain value, then stop decreasing. The code regarding the loss was:
criterion = nn.MSELoss()
loss = criterion(y_pred, y_batch.unsqueeze(1))
When I changed it to:
criterion = nn.MSELoss()
loss = criterion(y_pred, target=y_batch)
the issue was fixed.
What was happening before when the target was not specified? Does the target need to be specified for every Pytorch loss function? I found nothing in the documentation about target specifications.
| It looks like target is simply the name of the second positional argument, that's all. The only real difference between the two lines is the unsqueezing of dim=1 in the first one.
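A quick check that illustrates both points (toy shapes; I'm assuming y_pred of shape (N, 1) and y_batch of shape (N,) as in your snippet):
import torch
import torch.nn as nn

criterion = nn.MSELoss()
y_pred = torch.randn(8, 1)
y_batch = torch.randn(8)

# positional vs. keyword target: identical
a = criterion(y_pred, y_batch.unsqueeze(1))
b = criterion(input=y_pred, target=y_batch.unsqueeze(1))
print(torch.allclose(a, b))       # True

# dropping the unsqueeze changes the shapes being compared:
# (8, 1) vs (8,) broadcasts to (8, 8) and PyTorch emits a UserWarning
c = criterion(y_pred, y_batch)
print(a.item(), c.item())         # generally different values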
| https://stackoverflow.com/questions/72766283/ |
how to increase a size of image in DCGAN in python? | I am trying to generate artificial images in DCGANs of size 128x128. but in all of the examples on the internet, the generated image size is 64x64. and I don't know where I am making a mistake.
Here is the code. I have taken this code from the internet; when I run it for 64x64 images it runs perfectly fine, but for 128x128 images I am getting an error about this line of code:
---> 90 disc_fake = disc(fake.detach()).reshape(-1)
Discriminator and Generator implementation from DCGAN paper
import torch
import torch.nn as nn
class Discriminator(nn.Module):
def __init__(self, channels_img, features_d):
super(Discriminator, self).__init__()
self.disc = nn.Sequential(
# input: N x channels_img x 64 x 64
nn.Conv2d(
channels_img, features_d, kernel_size=4, stride=2, padding=1
),
nn.LeakyReLU(0.2),
# _block(in_channels, out_channels, kernel_size, stride, padding)
self._block(features_d, features_d * 2, 4, 2, 1),
self._block(features_d * 2, features_d * 4, 4, 2, 1),
self._block(features_d * 4, features_d * 8, 4, 2, 1),
self._block(features_d * 8, features_d * 16, 4, 2, 1),
# After all _block img output is 4x4 (Conv2d below makes into 1x1)
nn.Conv2d(features_d * 16, 1, kernel_size=4, stride=2, padding=0),
nn.Sigmoid(),
)
def _block(self, in_channels, out_channels, kernel_size, stride, padding):
return nn.Sequential(
nn.Conv2d(
in_channels,
out_channels,
kernel_size,
stride,
padding,
bias=False,
),
#nn.BatchNorm2d(out_channels),
nn.LeakyReLU(0.2),
)
def forward(self, x):
return self.disc(x)
class Generator(nn.Module):
def __init__(self, channels_noise, channels_img, features_g):
super(Generator, self).__init__()
self.net = nn.Sequential(
# Input: N x channels_noise x 1 x 1
self._block(channels_noise, features_g * 32, 4, 1, 0),
self._block(features_g * 32, features_g * 16, 4, 1, 1), # img: 4x4
self._block(features_g * 16, features_g * 8, 4, 2, 1), # img: 8x8
self._block(features_g * 8, features_g * 4, 4, 2, 1), # img: 16x16
self._block(features_g * 4, features_g * 2, 4, 2, 1), # img: 32x32
nn.ConvTranspose2d(
features_g * 2, channels_img, kernel_size=4, stride=2, padding=1
),
# Output: N x channels_img x 64 x 64
nn.Tanh(),
)
def _block(self, in_channels, out_channels, kernel_size, stride, padding):
return nn.Sequential(
nn.ConvTranspose2d(
in_channels,
out_channels,
kernel_size,
stride,
padding,
bias=False,
),
#nn.BatchNorm2d(out_channels),
nn.ReLU(),
)
def forward(self, x):
return self.net(x)
def initialize_weights(model):
# Initializes weights according to the DCGAN paper
for m in model.modules():
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.BatchNorm2d)):
nn.init.normal_(m.weight.data, 0.0, 0.02)
Training of DCGAN network on a covid dataset with Discriminator
and Generator imported from models.py
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
from torch.utils.tensorboard import SummaryWriter
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
LEARNING_RATE = 0.00025 # could also use two lrs, one for gen and one for disc
BATCH_SIZE = 128
IMAGE_SIZE = 128
CHANNELS_IMG = 3
NOISE_DIM = 128
NUM_EPOCHS = 7500
FEATURES_DISC = 128
FEATURES_GEN = 128
img_list = []
G_loss = []
D_loss = []
transforms = transforms.Compose(
[
transforms.Resize(IMAGE_SIZE),
transforms.ToTensor(),
transforms.Normalize(
[0.5 for _ in range(CHANNELS_IMG)], [0.5 for _ in range(CHANNELS_IMG)]
),
]
)
dataset = datasets.ImageFolder(root="/content/subset", transform=transforms)
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
gen = Generator(NOISE_DIM, CHANNELS_IMG, FEATURES_GEN).to(device)
disc = Discriminator(CHANNELS_IMG, FEATURES_DISC).to(device)
initialize_weights(gen)
initialize_weights(disc)
opt_gen = optim.Adam(gen.parameters(), lr=LEARNING_RATE, betas=(0.5, 0.999))
opt_disc = optim.Adam(disc.parameters(), lr=LEARNING_RATE, betas=(0.5, 0.999))
criterion = nn.BCELoss()
fixed_noise = torch.randn(128, NOISE_DIM, 1, 1).to(device)
writer_real = SummaryWriter(f"generated/real11")
writer_fake = SummaryWriter(f"generated/fake11")
step = 0
gen.train()
disc.train()
for epoch in range(NUM_EPOCHS):
# Target labels not needed! <3 unsupervised
for batch_idx, (real, _) in enumerate(dataloader):
real = real.to(device)
noise = torch.randn(BATCH_SIZE, NOISE_DIM, 1, 1).to(device)
fake = gen(noise)
### Train Discriminator: max log(D(x)) + log(1 - D(G(z)))
disc_real = disc(real).reshape(-1)
loss_disc_real = criterion(disc_real, torch.ones_like(disc_real))
disc_fake = disc(fake.detach()).reshape(-1)
loss_disc_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
loss_disc = (loss_disc_real + loss_disc_fake) / 2
disc.zero_grad()
loss_disc.backward()
opt_disc.step()
### Train Generator: min log(1 - D(G(z))) <-> max log(D(G(z))
output = disc(fake).reshape(-1)
loss_gen = criterion(output, torch.ones_like(output))
gen.zero_grad()
loss_gen.backward()
opt_gen.step()
G_loss.append(loss_gen.item())
D_loss.append(loss_disc.item())
# Print losses occasionally and print to tensorboard
if batch_idx in range(BATCH_SIZE):
print(
f"Epoch [{epoch}/{NUM_EPOCHS}] Batch {batch_idx}/{len(dataloader)} \
Loss D: {loss_disc:.4f}, loss G: {loss_gen:.4f}"
)
with torch.no_grad():
fake = gen(fixed_noise)
# take out (up to) 32 examples
img_grid_real = torchvision.utils.make_grid(
real[:32], normalize=True
)
img_grid_fake = torchvision.utils.make_grid(
fake[:1], normalize=True
)
writer_real.add_image("Real1", img_grid_real, global_step=step)
for batch_idx in range(BATCH_SIZE):
torchvision.utils.save_image(img_grid_fake, f"/content/generated/generated_image/Fake_image-{batch_idx}.png", global_step=step)
writer_fake.add_image("Fake1", img_grid_fake, global_step=step)
step += 1
this is an error I am getting when I run this code.
RuntimeError Traceback (most recent call last)
<ipython-input-5-8a323fce319d> in <module>()
88
89
---> 90 disc_fake = disc(fake.detach()).reshape(-1)
91 loss_disc_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
92 loss_disc = (loss_disc_real + loss_disc_fake) / 2
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
442 _pair(0), self.dilation, self.groups)
443 return F.conv2d(input, weight, bias, self.stride,
--> 444 self.padding, self.dilation, self.groups)
445
446 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (4 x 4). Kernel size can't be greater than actual input size
| I checked the code and the issue stems from the fact that you are expecting your Generator to generate images of dimension - 128x3x128x128 (batch_size x channels x image_dim x image_dim). However, the way that you have written the ConvTranspose2d operations, that is not the case.
I checked the output from the intermediate layers and your generator is generating output images of dimension - 128x3x80x80, which is a size mismatch, as your Discriminator expects input images of dimension - 128x3x128x128.
Here are the shapes of the intermediate outputs from your Generator's ConvTranpose2d operations -
torch.Size([128, 4096, 4, 4])
torch.Size([128, 2048, 5, 5])
torch.Size([128, 1024, 10, 10])
torch.Size([128, 512, 20, 20])
torch.Size([128, 256, 40, 40])
torch.Size([128, 3, 80, 80])
I'd suggest, you modify your Generator's ConvTranspose2d parameters as follows -
class Generator(nn.Module):
def __init__(self, channels_noise, channels_img, features_g):
super(Generator, self).__init__()
self.net = nn.Sequential(
self._block(channels_noise, features_g*32, 4, 1, 0),
self._block(features_g*32, features_g*16, 4, 2, 1),
self._block(features_g*16, features_g*8, 4, 2, 1),
self._block(features_g*8, features_g*4, 4, 2, 1),
self._block(features_g*4, features_g*2, 4, 2, 1),
nn.ConvTranspose2d(features_g*2, channels_img, kernel_size=4, stride=2, padding=1),
# Output: N x channels_img x 64 x 64
nn.Tanh(),
)
def _block(self, in_channels, out_channels, kernel_size, stride, padding):
return nn.Sequential(
nn.ConvTranspose2d(
in_channels,
out_channels,
kernel_size,
stride,
padding,
bias=False,
),
#nn.BatchNorm2d(out_channels),
nn.ReLU(),
)
def forward(self, x):
return self.net(x)
This produces the required dimension of 128x3x128x128. The intermediate dimensions are as follows -
torch.Size([128, 4096, 4, 4])
torch.Size([128, 2048, 8, 8])
torch.Size([128, 1024, 16, 16])
torch.Size([128, 512, 32, 32])
torch.Size([128, 256, 64, 64])
torch.Size([128, 3, 128, 128])
Just replace the Generator with this one and your code should work for images of dimension 3x128x128.
| https://stackoverflow.com/questions/72767476/ |
Shifting an image with bilinear interpolation in pytorch | Suppose that I have an input x of size [H,W] and also a mu_x and mu_y (which may be fractional) representing the number of pixels to shift in the x and y directions. Is there any efficient way in PyTorch, without using C++, to shift the tensor x by mu_x and mu_y units with bilinear interpolation?
To be more precise, let's say we have an image. mu_x = 5 and mu_y = 3, we may want to shift the image so that the image moves rightward 5 pixels and downward 3 pixels, with the pixels out of boundary of [H,W] removed and new pixels introduced at the other end of the boundary to be 0. However, with fractional mu_x and mu_y, we need to use bilinear interpolation to estimate the resulting image.
Is it possible to be implemented with pure pytorch tensor operations? Or do I need to use c++.
| I believe you can achieve this by applying grid sampling on your original input and using a grid to guide the sampling process. If you take a coordinate grid of your image and sample using that the resulting image will be equal to the original image. However you can apply a shift on this grid and therefore sample with the given shift. Grid sampling works with floating-point grids of course, which means you can apply an arbitrary non-round shift to your image and choose a sampling mode (bilinear is the default).
This can be implemented out of the box with F.grid_sample. Given an image tensor img, we first construct a pixel grid of that image using torch.meshgrid. Keep in mind the grid used by the sampler must be normalized to [-1, 1]. Therefore pixel x=0,y=0 should be mapped to (-1,-1), pixel x=w,y=h mapped to (1,1), and the center pixel will end up at around (0,0).
Use two torch.arange with a [0,1]-normalization followed by a remapping to [-1,1]:
>>> c,h,w = img.shape
>>> x, y = torch.arange(h)/(h-1), torch.arange(w)/(w-1)
>>> grid = torch.dstack(torch.meshgrid(x, y))*2-1
So the resulting grid has a shape of (h, w, 2), and the sampled output image will have dimensions (c, h, w).
Since we are not working with batched elements, we need to unsqueeze singleton dimensions on both img and grid. Then we can apply F.grid_sample:
>>> sampled = F.grid_sample(img[None], grid[None])
Following this you can apply your arbitrary mu_x, mu_y shift and even easily use this to batches of images and shifts. The way you would define your sampling is by defining a shifted grid:
>>> x_s, y_s = (torch.arange(h)+mu_y)/(h-1), (torch.arange(w)+mu_x)/(w-1)
Where mu_x and mu_y are the values in pixels (floating point) by which the image is shifted on the horizontal and vertical axes respectively. To acquire the sampled image, apply F.grid_sample on a grid made up of x_s and y_s:
>>> grid_shifted = torch.dstack(torch.meshgrid(x_s, y_s))*2-1
>>> sampled = F.grid_sample(img[None], grid_shifted[None])
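Putting it all together, here is my own consolidated sketch of such a shift function (not the exact code above: I assume a recent PyTorch with meshgrid's indexing argument, use align_corners=True, rely on grid_sample expecting the last grid dimension ordered as (x, y), and pick signs so that positive mu_x / mu_y move the content right / down, with zeros appearing at the opposite border):
import torch
import torch.nn.functional as F

def shift_bilinear(img, mu_x, mu_y):
    # img: (C, H, W); mu_x, mu_y: shift in pixels, possibly fractional
    c, h, w = img.shape
    ys = torch.linspace(-1, 1, h, dtype=img.dtype)
    xs = torch.linspace(-1, 1, w, dtype=img.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")          # each of shape (h, w)
    gx = gx - 2 * mu_x / (w - 1)    # sample further left -> content moves right
    gy = gy - 2 * mu_y / (h - 1)    # sample further up   -> content moves down
    grid = torch.stack((gx, gy), dim=-1)[None]              # (1, h, w, 2), last dim = (x, y)
    out = F.grid_sample(img[None], grid, mode="bilinear",
                        padding_mode="zeros", align_corners=True)
    return out[0]

img = torch.arange(25.).reshape(1, 5, 5)
print(shift_bilinear(img, mu_x=1.5, mu_y=0.0)[0])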
| https://stackoverflow.com/questions/72769563/ |
resize_token_embeddings on the a pertrained model with different embedding size | I would like to ask about the way to change the embedding size of the trained model.
I have a trained model models/BERT-pretrain-1-step-5000.pkl.
Now I am adding a new token [TRA] to the tokeniser and trying to use resize_token_embeddings on the pretrained one.
from pytorch_pretrained_bert_inset import BertModel #BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import tqdm
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))
#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids) #--> [100, 102, 0, 101, 103]
num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer)) # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]')) # --> 30522
But I encountered the error:
AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'
I assume that this is because the model_bert (BERT-pretrain-1-step-5000.pkl) I have has a different embedding size.
I would like to know if there is any way to make the embedding size of the model I want to use as initial weights match my modified tokeniser.
Thanks a lot!!
| resize_token_embeddings is a huggingface transformer method. You are using the BERTModel class from pytorch_pretrained_bert_inset which does not provide such a method. Looking at the code, it seems like they have copied the BERT code from huggingface some time ago.
You can either wait for an update from INSET (maybe create a github issue) or write your own code to extend the word_embedding layer:
from torch import nn
embedding_layer = model.embeddings.word_embeddings
old_num_tokens, old_embedding_dim = embedding_layer.weight.shape
num_new_tokens = 1
# Creating new embedding layer with more entries
new_embeddings = nn.Embedding(
old_num_tokens + num_new_tokens, old_embedding_dim
)
# Setting device and type accordingly
new_embeddings.to(
embedding_layer.weight.device,
dtype=embedding_layer.weight.dtype,
)
# Copying the old entries
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[
:old_num_tokens, :
]
model.embeddings.word_embeddings = new_embeddings
| https://stackoverflow.com/questions/72775559/ |
URL fetch failure on resnet50_weights_tf | I have to train a model in an environment where I don't have access to the Internet.
base_cnn = resnet.ResNet50(
weights="imagenet", input_shape=target_shape + (3,), include_top=False
)
Subsequently, training is failing with:
Traceback (most recent call last):
base_cnn = resnet.ResNet50(
return ResNet(stack_fn, False, True, 'resnet50', include_top, weights,
weights_path = data_utils.get_file(
raise Exception(error_msg.format(origin, e.errno, e.reason))
Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5: None -- [Errno 101] Network is unreachable
Is there a way I can load the weights from drive instead of fetching an URL?
| Weights could be downloaded as:
from tensorflow.keras.applications import resnet
base_cnn = resnet.ResNet50(
weights="imagenet", input_shape=target_shape + (3,), include_top=False
)
base_cnn.save("weights.h5")
Then load the saved weights:
from tensorflow.keras.models import load_model
base_cnn=load_model('weights.h5')
| https://stackoverflow.com/questions/72777121/ |
pytorch gives me an error when I don't run it in the ~/ directory | When I run the following Python script in a subdirectory, for example ~/test_dir, Python results in an error:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120) # 5*5 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square, you can specify with a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
print(net)
Error msg:
Traceback (most recent call last):
File "/Users/xxxxxxx/torch/torch.py", line 1, in <module>
import torch
File "/Users/xxxxxxx/torch/torch.py", line 2, in <module>
import torch.nn as nn
ModuleNotFoundError: No module named 'torch.nn'; 'torch' is not a package
If I run the same file in the home directory ~/ it does not result in the error message shown above.
β ~ python3 test.py
Net(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
β ~ which python3
/Library/Frameworks/Python.framework/Versions/3.9/bin/python3
I did not encounter any similar errors with other packages, so I assume it is caused by pytorch.
I would appreciate any help or hint on how to solve this.
| It seems that you named both your working directory and a script torch. This causes a conflict with the installed PyTorch library, so you're importing your own torch.py, not the installed package.
Try again after changing the names of your directory and script.
| https://stackoverflow.com/questions/72779228/ |
Using unlabelled custom images instead of Mnist and CIFAR for a simple GAN with Pytorch | I am trying to replace standardized data from pytorch such as MNIST and CIFAR with unlabeled custom images in png format in a simple GAN. Unfortunately most examples always use such datasets and dont show the process of preparing and implementing custom data into GANs. I have stored my png-images (336*336, RGB) in the working directory of VS Code. Could you please provide me with a suggestion on how to go forward? Below you find the current code where I would like to replace mnist with my own images to generate new images (from #Preparing Training Data to #Plotting Samples:
import torch
from torch import nn
import math
import matplotlib.pyplot as plt
import torchvision
import torchvision.transforms as transforms
torch.manual_seed(111)
# DEVICE
device = ""
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
print(device)
# PREPARING TRAINING DATA
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]
)
# LOADING DATA
train_set = torchvision.datasets.MNIST(
root=".", train=True, download=True, transform=transform
)
# CREATE DATALOADER
batch_size = 32
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=batch_size, shuffle=True
)
# PLOTTING SAMPLES
real_samples, mnist_labels = next(iter(train_loader))
for i in range(16):
ax = plt.subplot(4, 4, i + 1)
plt.imshow(real_samples[i].reshape(28, 28), cmap="gray_r")
plt.xticks([])
plt.yticks([])
plt.show()
# IMPLEMENTING DISCRIMINATOR AND GENERATOR
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(784, 1024),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(1024, 512),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, x):
x = x.view(x.size(0), 784)
output = self.model(x)
return output
discriminator = Discriminator().to(device=device)
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(100, 256),
nn.ReLU(),
nn.Linear(256, 512),
nn.ReLU(),
nn.Linear(512, 1024),
nn.ReLU(),
nn.Linear(1024, 784),
nn.Tanh(),
)
def forward(self, x):
output = self.model(x)
output = output.view(x.size(0), 1, 28, 28)
return output
generator = Generator().to(device=device)
# TRAINING PARAMS
lr = 0.0001
num_epochs = 100
loss_function = nn.BCELoss()
optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)
# TRAINING LOOP
for epoch in range(num_epochs):
for n, (real_samples, mnist_labels) in enumerate(train_loader):
# Data for training the discriminator
real_samples = real_samples.to(device=device)
real_samples_labels = torch.ones((batch_size, 1)).to(
device=device
)
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
generated_samples = generator(latent_space_samples)
generated_samples_labels = torch.zeros((batch_size, 1)).to(
device=device
)
all_samples = torch.cat((real_samples, generated_samples))
all_samples_labels = torch.cat(
(real_samples_labels, generated_samples_labels)
)
# Training the discriminator
discriminator.zero_grad()
output_discriminator = discriminator(all_samples)
loss_discriminator = loss_function(
output_discriminator, all_samples_labels
)
loss_discriminator.backward()
optimizer_discriminator.step()
# Data for training the generator
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
# Training the generator
generator.zero_grad()
generated_samples = generator(latent_space_samples)
output_discriminator_generated = discriminator(generated_samples)
loss_generator = loss_function(
output_discriminator_generated, real_samples_labels
)
loss_generator.backward()
optimizer_generator.step()
# Show loss
if n == batch_size - 1:
print(f"Epoch: {epoch} Loss D.: {loss_discriminator}")
print(f"Epoch: {epoch} Loss G.: {loss_generator}")
# SAMPLES
latent_space_samples = torch.randn(batch_size, 100).to(device=device)
generated_samples = generator(latent_space_samples)
generated_samples = generated_samples.cpu().detach()
for i in range(16):
ax = plt.subplot(4, 4, i + 1)
plt.imshow(generated_samples[i].reshape(28, 28), cmap="gray_r")
plt.xticks([])
plt.yticks([])
plt.show()
| In the example that you shared above, you are trying to train your generator on single-channel images. Specifically, your Generator and Discriminator layers are written to handle images of dimension 1x28x28 which are the dimensions of MNIST or Fashion-MNIST datasets.
I am supposing that you are trying to train on color images (3 channels) of a different dimension, in your case 3x336x336. In the example below, I have added a tensor transform that first converts an input image of any dimension to an image of dimension 3x28x28.
Here are the code examples for creating the custom dataset and custom dataloader.
from glob import glob
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from skimage import io
path = 'your/image/path'
image_paths = glob(path + '/*.jpg')
img_size = 28
batch_size = 32
transform = transforms.Compose(
[
transforms.ToPILImage(),
transforms.Resize(img_size),
transforms.CenterCrop(img_size),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
class ImageDataset(Dataset):
def __init__(self, paths, transform):
self.paths = paths
self.transform = transform
def __len__(self):
return len(self.paths)
def __getitem__(self, index):
image_path = self.paths[index]
image = io.imread(image_path)
if self.transform:
image_tensor = self.transform(image)
return image_tensor
dataset = ImageDataset(image_paths, transform)
train_loader = DataLoader(dataset, batch_size=batch_size, num_workers=1, shuffle=True)
The dataloader generates image tensors of dimension - batch_size x img_channels x img_dim x img_dim which in this case would be - 32x3x28x28.
import torch
import torch.nn as nn
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(784*3, 2048),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(2048, 1024),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(1024, 512),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, x):
x = x.view(x.size(0), 784*3) # change required for 3 channel image
output = self.model(x)
return output
discriminator = Discriminator().to(device=device)
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(100, 256),
nn.ReLU(),
nn.Linear(256, 512),
nn.ReLU(),
nn.Linear(512, 1024),
nn.ReLU(),
nn.Linear(1024, 2048),
nn.ReLU(),
nn.Linear(2048, 784*3),
nn.Tanh(),
)
def forward(self, x):
output = self.model(x)
output = output.view(x.size(0), 3, 28, 28)
return output
generator = Generator().to(device=device)
# TRAINING PARAMS
lr = 0.0001
num_epochs = 100
loss_function = nn.BCELoss()
optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)
This is the code for Generator and Discriminator. I have made slight modifications to the Generator and Discriminator. Notice the addition of the following layers in the Discriminator
nn.Linear(784*3, 2048),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(2048, 1024),
and these in the Generator
nn.Linear(1024, 2048),
nn.ReLU(),
nn.Linear(2048, 784*3)
This is required to generate and discriminate images of the correct dimension.
Finally, this is your training loop -
for epoch in range(num_epochs):
for n, real_samples in enumerate(train_loader):
# Data for training the discriminator
real_samples = real_samples.to(device=device)
real_samples_labels = torch.ones((batch_size, 1)).to(
device=device
)
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
print(f'Latent space samples : {latent_space_samples.shape}')
generated_samples = generator(latent_space_samples)
generated_samples_labels = torch.zeros((batch_size, 1)).to(
device=device
)
all_samples = torch.cat((real_samples, generated_samples))
print(f'Real samples : {real_samples.shape}, generated samples : {generated_samples.shape}')
all_samples_labels = torch.cat(
(real_samples_labels, generated_samples_labels)
)
# Training the discriminator
discriminator.zero_grad()
output_discriminator = discriminator(all_samples)
loss_discriminator = loss_function(
output_discriminator, all_samples_labels
)
loss_discriminator.backward()
optimizer_discriminator.step()
# Data for training the generator
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
# Training the generator
generator.zero_grad()
generated_samples = generator(latent_space_samples)
output_discriminator_generated = discriminator(generated_samples)
loss_generator = loss_function(
output_discriminator_generated, real_samples_labels
)
loss_generator.backward()
optimizer_generator.step()
# Show loss
if n == batch_size - 1:
print(f"Epoch: {epoch} Loss D.: {loss_discriminator}")
print(f"Epoch: {epoch} Loss G.: {loss_generator}")
This works because the images are reshaped from the 784*3 to the 3*28*28 dimension.
This would work, but for 3-channel images you would ideally write ConvTranspose2d and Conv2d operations in your Generator and Discriminator for upsampling and downsampling the images respectively.
If you are interested in an example that uses ConvTranspose2d and Conv2d for processing multidimensional images, here it is - https://drive.google.com/file/d/1gYiBHPu-r3kialO0klsTdE2RjBR50rMs/view?usp=sharing. To handle images of different dimensions, you would have to modify the layers in the Generator and Discriminator classes.
| https://stackoverflow.com/questions/72779794/ |
How to perform torch.meshgrid over multiple tensors in parallel? | Let's say we have a tensor x of size [60,9] and a tensor y of size [60,9]
Is it possible to do an operation like xx,yy = torch.meshgrid(x,y) such that xx and yy are of size [60,9,9] and xx[i,:,:], yy[i,:,:] are basically torch.meshgrid(x[i],y[i])?
The built-in torch.meshgrid operation only accepts 1d tensors; is it possible to do the above operation without using for loops (which are inefficient as they do not make use of the GPU's parallelism)?
| I don't believe you will gain anything since the initialization of the tensors is not done on the GPU. So a proposed approach would indeed be to loop over x and y, or to use map, which gives you a lazy iterator:
grids = map(torch.meshgrid, zip(x,y))
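If you then want the stacked [60, 9, 9] tensors described in the question, an explicit version of the same idea is (a sketch, assuming x and y are both shaped [60, 9]; the indexing='ij' argument just silences the newer deprecation warning):
grids = [torch.meshgrid(xi, yi, indexing='ij') for xi, yi in zip(x, y)]
xx = torch.stack([g[0] for g in grids])  # [60, 9, 9]
yy = torch.stack([g[1] for g in grids])  # [60, 9, 9]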
| https://stackoverflow.com/questions/72782751/ |
Multiple parallel image-encoders in pytorch | I have an image encoder which can generate output features. I want to split my image into several patches (around 16) and feed each patch to a separate image encoder (the parameters for each encoder are different).
Initially, it's like:
Input -> Encoder -> Output
I want to change it like this:
patch-input1 -> Encoder1 -> output1
patch-input2 -> Encoder2 -> output2
...
patch-inputN -> EncoderN -> outputN
I'm using nn.Module to create the model class. The number of patches N is not fixed and is determined during the initialisation of the model so I wanted to use the list of objects (in __init__() function) to create multiple instances of the same encoder and call them over each part-input in a loop in the forward function.
When I run inference on a sample input, it doesn't show any error and I print out the number of trainable parameters using torchsummary module.
The problem is that I do not see the parameters corresponding to the encoders reflected in the parameter count. More specifically, in the single-encoder pipeline there are around one million parameters, while in the multiple-encoder pipeline there are only around 200 thousand. The latter's summary doesn't even show the description of the various layers in the encoder.
A code snippet of my implementation would look like this:
class Patch(nn.Module):
'''
takes image tensor input
returns a list of patch tensors
'''
class Encoder(nn.Module):
'''
definition of the encoder
'''
class Model(nn.Module):
def __init__(self, patches=10, *kwargs):
super().__init__()
self.patch = Patch()
self.enc = []
for i in range(patches):
self.enc.append(Encoder())
def forward(self, x):
'''
patches is a list of tensors formed using an image
tensor
'''
patches = self.patch(x)
output = []
for i in range(len(patches)):
output.append(self.enc[i](patches[i]))
output_feats = torch.cat(output, dim=0)
Is there a better way to implement parallel encoders? What am I doing wrong?
| You need to use a nn.ModuleList in order to properly register the list of Encoder modules inside the Model nn.Module:
class Model(nn.Module):
def __init__(self, patches=10, *kwargs):
super().__init__()
self.patch = Patch()
enc = []
for i in range(patches):
enc.append(Encoder())
self.enc = nn.ModuleList(enc)
In a slightly refactored form:
class Model(nn.Module):
def __init__(self, patches=10, *kwargs):
super().__init__()
self.patch = Patch()
self.enc = nn.ModuleList(Encoder() for _ in range(patches))
def forward(self, x):
patches = self.patch(x)
output_feats = torch.cat([enc(patch) for enc, patch in zip(self.enc, patches)])
return output_feats
| https://stackoverflow.com/questions/72783690/ |
Gradient Descent Reassignment in Pytorch | I was following a series of tutorials on YouTube about deep learning, and I encountered a problem which really confuses me.
X = torch.tensor([1,2,3,4], dtype = torch.float32)
Y = torch.tensor([2,4,6,8], dtype = torch.float32)
w = torch.tensor(0.0, dtype = torch.float32, requires_grad=True)
def forward(x):
return w*x;
def loss(y, y_predicted):
return ((y-y_predicted)**2).mean()
print(f'Prediction before training: f(5) = {forward(5):.3f}')
learning_rate= 0.01
epoch = 20
for i in range(epoch):
y_pred = forward(X)
l = loss(Y, y_pred)
l.backward()
with torch.no_grad():
w = w - learning_rate * w.grad
# (w -= learning_rate * w.grad) # would not cause error in the following line
w.grad.zero_() #error : 'NoneType' object has no attribute 'zero_'
if i % 1 ==0:
print(f'weight : {w}, loss : {l}')
I really wonder about the difference between "w = w - learning_rate * w.grad" and "w -= learning_rate * w.grad", because these two are the same in my experience. Thanks!
| As pointed out in the comment, the problem is in how Pytorch computes/stores gradients. In fact,
w-= learning_rate * w.grad
is an in-place operation, which will make w keep its initial properties (the requires_grad=True). Usually in Pytorch, we avoid in-place operations as it may break the computational graph used by Autograd (see Pytorch Forum Post).
But for you, this:
w = w - learning_rate * w.grad
is not in-place. Thus, w is assigned to a new copy, and because of the torch.no_grad() statement, this copy won't have a .grad attribute.
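In practice, the fix is therefore to keep the in-place form of the update inside the no_grad block, for example:
with torch.no_grad():
    w -= learning_rate * w.grad  # in-place: w stays the same leaf tensor with requires_grad=True
    w.grad.zero_()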
| https://stackoverflow.com/questions/72786737/ |
Runtime error: mat1 dim 1 must match mat2 dim 0 | I am running one classification program using GaborNet. Part of my code is
class module(nn.Module):
def __init__(self):
super(module, self).__init__()
self.g0 = modConv2d(in_channels=3, out_channels=32, kernel_size=(11, 11), stride=1)
self.c1 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size=(2, 2),stride=1)
self.c2 = nn.Conv2d(in_channels=64,out_channels=128,kernel_size=(2, 2),stride=1)
#x = x.view(x.size(0), -1)
#x = x.view(1, *x.shape)
#x=x.view(-1,512*12*12)
x = F.relu(self.fc1(x))
print(x.shape)
x = F.relu(self.fc2(x))
print(x.shape)
x = self.fc3(x)
return x
I am getting this error at this position :
x = F.relu(self.fc1(x)
and the error is : RuntimeError: mat1 dim 1 must match mat2 dim 0
However, the shapes of the input as it passes through the layers up to fc1 are:
torch.Size([64, 3, 150, 150])
torch.Size([64, 32, 140, 140])
torch.Size([64, 32, 70, 70])
| You were on the right track, you indeed need to reshape your data just after the convolution layers, and before proceeding with the fully connected layers.
The best approach for flattening your tensor is to use nn.Flatten otherwise you might end up disrupting the batch size. The last spatial layer outputs a shape of (64, 128, 3, 3) and once flattened this tensor has a shape of (64, 1152) where 1152 = 128*3*3. Therefore your first fully connected layer should have 1152 neurons.
Something like this should work:
class GaborNN(nn.Module):
def __init__(self):
super().__init__()
...
self.fc1 = nn.Linear(in_features=1152, out_features=128)
self.fc2 = nn.Linear(in_features=128, out_features=128)
self.fc3 = nn.Linear(in_features=128, out_features=7)
self.flatten = nn.Flatten()
def forward(self, x):
...
x = self.flatten(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| https://stackoverflow.com/questions/72790147/ |
Value Error: Torch target size and torch input size in GAN do not match | Hi I am working on a GAN with custom images. I got the following error, which doesn't add up for me:
ValueError: Using a target size (torch.Size([64, 1])) that is different to the input size (torch.Size([47, 1])) is deprecated. Please ensure they have the same size.
I do not see where either of these sizes comes from. Could someone please help me out? The error occurs at loss_discriminator during training (marked with an arrow below) after epoch 0. Below you find the related code. I am using VS Code on Windows.
Also is it normal that epoch 0 works and then the problem appears?
[Screenshot of terminal - Epoch 0 Loss Discriminator and Generator][1]
import torch
from glob import glob
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from skimage import io
import matplotlib.pyplot as plt
path = 'Punks'
image_paths = glob(path + '/*.png')
img_size = 28
batch_size = 32
transform = transforms.Compose(
[
transforms.ToPILImage(),
transforms.Resize(img_size),
transforms.CenterCrop(img_size),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
class ImageDataset(Dataset):
def __init__(self, paths, transform):
self.paths = paths
self.transform = transform
def __len__(self):
return len(self.paths)
def __getitem__(self, index):
image_path = self.paths[index]
image = io.imread(image_path)
if self.transform:
image_tensor = self.transform(image)
return image_tensor
if __name__ == '__main__':
dataset = ImageDataset(image_paths, transform)
train_loader = DataLoader(
dataset, batch_size=batch_size, num_workers=1, shuffle=True)
# PLOTTING SAMPLES
real_samples = next(iter(train_loader))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(real_samples[i].reshape(28, 28, 3))
plt.xticks([])
plt.yticks([])
plt.show()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class Discriminator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(784*3, 2048),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(2048, 1024),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(1024, 512),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.3),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, x):
x = x.view(x.size(0), 784*3) # change required for 3 channel image
output = self.model(x)
return output
discriminator = Discriminator().to(device=device)
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(100, 256),
nn.ReLU(),
nn.Linear(256, 512),
nn.ReLU(),
nn.Linear(512, 1024),
nn.ReLU(),
nn.Linear(1024, 2048),
nn.ReLU(),
nn.Linear(2048, 784*3),
nn.Tanh(),
)
def forward(self, x):
output = self.model(x)
output = output.view(x.size(0), 3, 28, 28)
return output
generator = Generator().to(device=device)
# TRAINING PARAMS
lr = 0.0001
num_epochs = 10
loss_function = nn.BCELoss()
optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)
for epoch in range(num_epochs):
for n, real_samples in enumerate(train_loader):
# Data for training the discriminator
real_samples = real_samples.to(device=device)
real_samples_labels = torch.ones((batch_size, 1)).to(
device=device
)
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
print(f'Latent space samples : {latent_space_samples.shape}')
generated_samples = generator(latent_space_samples)
generated_samples_labels = torch.zeros((batch_size, 1)).to(
device=device
)
all_samples = torch.cat((real_samples, generated_samples))
print(f'Real samples : {real_samples.shape}, generated samples : {generated_samples.shape}')
all_samples_labels = torch.cat(
(real_samples_labels, generated_samples_labels)
)
# Training the discriminator
discriminator.zero_grad()
output_discriminator = discriminator(all_samples)
loss_discriminator = loss_function(
output_discriminator, all_samples_labels
)
-------> loss_discriminator.backward()
optimizer_discriminator.step()
# Data for training the generator
latent_space_samples = torch.randn((batch_size, 100)).to(
device=device
)
# Training the generator
generator.zero_grad()
generated_samples = generator(latent_space_samples)
output_discriminator_generated = discriminator(generated_samples)
loss_generator = loss_function(
output_discriminator_generated, real_samples_labels
)
loss_generator.backward()
optimizer_generator.step()
# Show loss
if n == batch_size - 1:
print(f"Epoch: {epoch} Loss D.: {loss_discriminator}")
print(f"Epoch: {epoch} Loss G.: {loss_generator}")
latent_space_samples = torch.randn(batch_size, 100).to(device=device)
generated_samples = generator(latent_space_samples)
generated_samples = generated_samples.cpu().detach()
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(generated_samples[i].reshape(28, 28, 3))
plt.xticks([])
plt.yticks([])
plt.show()
| Since this is happening at the end of the first epoch, what's essentially happening is that you have specified a batch size of 64 but the number of images in your dataset is some_integer_number * 64 + 47. This is because when you read the data in batches, the number of samples equal to your batch_size is read. However, when you reach the end of the epoch, there is a possibility that fewer than batch_size examples are left to load.
In your code, the number of real images loaded in the last step of the 0th epoch is 47, whereas the number of fake images (and labels) that you create is 64, since you use the constant batch_size to sample the fake images and build the label tensors.
A simple solution would be to use len(real_samples) in place of batch_size at all the places. You can do this by first setting batch_size=len(real_samples) as the first line in the for loop.
for epoch in range(num_epochs):
for n, real_samples in enumerate(train_loader):
# Data for training the discriminator
batch_size = len(real_samples)
real_samples = real_samples.to(device=device)
real_samples_labels = torch.ones((batch_size, 1)).to(device=device)
# rest of the code continues
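Another option, if you would rather have every batch contain exactly batch_size samples, is to let the DataLoader drop the last incomplete batch:
train_loader = DataLoader(dataset, batch_size=batch_size, num_workers=1, shuffle=True, drop_last=True)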
I hope this solves your issue.
| https://stackoverflow.com/questions/72791888/ |
pytorch.info summary for the Generator of a GAN equivalent to the discriminator summary, or a summary for the whole GAN | Is it possible to generate a summary for the generator network of a GAN, equivalent to the summary for the discriminator network, using pytorch.info (containing inputs and outputs)? Or is there even a standard summary for the whole GAN including both networks?
For the Discriminator I used the following:
model = Discriminator()
batch_size = 32
summary(model, input_size=(batch_size, 3, 28, 28))
and received the following summary, which I would also like for the generator (see below summary):
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
Discriminator [32, 1] --
ββSequential: 1-1 [32, 1] --
β ββLinear: 2-1 [32, 2048] 4,818,944
β ββReLU: 2-2 [32, 2048] --
β ββDropout: 2-3 [32, 2048] --
β ββLinear: 2-4 [32, 1024] 2,098,176
β ββReLU: 2-5 [32, 1024] --
β ββDropout: 2-6 [32, 1024] --
β ββLinear: 2-7 [32, 512] 524,800
β ββReLU: 2-8 [32, 512] --
β ββDropout: 2-9 [32, 512] --
β ββLinear: 2-10 [32, 256] 131,328
β ββReLU: 2-11 [32, 256] --
β ββDropout: 2-12 [32, 256] --
β ββLinear: 2-13 [32, 1] 257
β ββSigmoid: 2-14 [32, 1] --
==========================================================================================
Total params: 7,573,505
Trainable params: 7,573,505
Non-trainable params: 0
Total mult-adds (M): 242.35
==========================================================================================
Input size (MB): 0.30
Forward/backward pass size (MB): 0.98
Params size (MB): 30.29
Estimated Total Size (MB): 31.58
==========================================================================================
For the generator I used the following to create a summary; unfortunately I wasn't able to include the column for output shape, as well as everything under and including the input-size row (as above):
model = Generator()
batch_size = 32
summary(model, output_size=(batch_size, 3, 28, 28))
and received the following shorter summary:
=================================================================
Layer (type:depth-idx) Param #
=================================================================
Generator --
ββSequential: 1-1 --
β ββLinear: 2-1 25,856
β ββReLU: 2-2 --
β ββLinear: 2-3 131,584
β ββReLU: 2-4 --
β ββLinear: 2-5 525,312
β ββReLU: 2-6 --
β ββLinear: 2-7 2,099,200
β ββReLU: 2-8 --
β ββLinear: 2-9 4,819,248
β ββTanh: 2-10 --
=================================================================
Total params: 7,601,200
Trainable params: 7,601,200
Non-trainable params: 0
=================================================================
| This package is not recommended for debugging your code; you should therefore always make sure your code runs on random data before testing the summary.
In the second group of commands, you are using output_size instead of input_size (cf. src). Looking at your code for Generator, the input shape should be (batch_size, 100). Additionally your final linear layer should output a total of 3*28*28 values in order for you to reshape to an image of shape (3, 28, 28).
class Generator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(100, 256),
nn.ReLU(),
nn.Linear(256, 512),
nn.ReLU(),
nn.Linear(512, 1024),
nn.ReLU(),
nn.Linear(1024, 2048),
nn.ReLU(),
nn.Linear(2048, 28*28*3),
nn.Tanh(),
)
def forward(self, x):
output = self.model(x)
output = output.view(x.size(0), 3, 28, 28)
return output
Which you can summarize with:
>>> summary(model, input_size=(10,100))
========================================================================================
Layer (type:depth-idx) Output Shape Param #
========================================================================================
Generator [10, 3, 28, 28] --
ββSequential: 1-1 [10, 2352] --
β ββLinear: 2-1 [10, 256] 25,856
β ββReLU: 2-2 [10, 256] --
β ββLinear: 2-3 [10, 512] 131,584
β ββReLU: 2-4 [10, 512] --
β ββLinear: 2-5 [10, 1024] 525,312
β ββReLU: 2-6 [10, 1024] --
β ββLinear: 2-7 [10, 2048] 2,099,200
β ββReLU: 2-8 [10, 2048] --
β ββLinear: 2-9 [10, 2352] 4,819,248
β ββTanh: 2-10 [10, 2352] --
========================================================================================
Total params: 7,601,200
Trainable params: 7,601,200
Non-trainable params: 0
Total mult-adds (M): 76.01
========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.50
Params size (MB): 30.40
Estimated Total Size (MB): 30.90
========================================================================================
| https://stackoverflow.com/questions/72794356/ |
How can I access the next step data using DataLoader in PyTorch? | I am using a code that trains neural networks. The code uses the DataLoader of PyTorch to load the data for every iteration. The code looks as follows
for step, data in enumerate(dataloader, 0):
............................................................
output = neuralnetwork_model(data)
.............................................................
Here the step is an integer that gives values 0, 1, 2, 3, ....... and data gives a batch of samples at each step. The code passes corresponding batches to the neural network at each step.
I need to just access the data of step n+1 at step n. I need something like this
for step, data in enumerate(dataloader, 0):
............................................................
output = neuralnetwork_model(data)
access = data_of_next_step
.............................................................
How can I achieve this?
| It seems to be handier to perform such manipulation at the iteration level rather than having to change the data loader's implementation. Looking at Iterate over n successive elements with overlap you can achieve this using itertools.tee:
from itertools import tee
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
Therefore you simply have to iterate over your wrapped data loader with:
>>> for batch1, batch2 in pairwise(dataloader):
... # batch1 is current batch
... # batch2 is batch of following step
| https://stackoverflow.com/questions/72797833/ |
PyG: Remove existing edges from prediction matrix | I'm currently working on a recommender system using PyG.
The edges are defined as follows:
edge_index = tensor([[ 0, 0, 0, ..., 9315, 9317, 9317],
[ 100, 448, 452, ..., 452, 1, 307]], device='cuda:0')}
edge_index[0] containing the indexes for a student and edge_index[1] containing the index of connected modules (both have the same length). Therefore edge_index[0][i] is the source node of edge i and edge_index[1][i] is the destination of edge i.
After model training, I'm generating a 2D-tensor recs with the shape of # of Students x # of Modules, with values from 0-1. 0 = not recommended and 1 = recommended. recs could look like this:
recs = tensor([0.54, 0.23, 0.98, ..., 0.12, 0.43, 0.87],
...,
[0.43, 0.53, 0.12, ..., 0.92, 0.12, 0.53])
Of course, I don't want to recommend a module if the student has already taken it. Is there a way to set all edges from the original graph to zero, by using the edge_index from PyG as coordinates or something?
Basically I want to set specific values in recs to 0 like this:
for i in range(0, len(edge_index[0])):
recs[edge_index[0][i]][edge_index[1][i]] = 0
Is there a way using the tensor functions to achieve this?
| Since you want to index recs on both axes simultaneously a straight implementation is to to vectorize your for loop as:
>>> recs[edge_index[0], edge_index[1]] = 0
Which you can improve by splitting edge_index with tuple:
>>> recs[tuple(edge_index)] = 0
| https://stackoverflow.com/questions/72799494/ |
Are those Keras and PyTorch snippets equivalent? | I am wondering if I succeeded in translating the following definition in PyTorch to Keras?
In PyTorch, the following multi-layer perceptron was defined:
from torch import nn
hidden = 128
def mlp(size_in, size_out, act=nn.ReLU):
return nn.Sequential(
nn.Linear(size_in, hidden),
act(),
nn.Linear(hidden, hidden),
act(),
nn.Linear(hidden, hidden),
act(),
nn.Linear(hidden, size_out),
)
My translation is
from tensorflow import keras
from keras import layers
hidden = 128
def mlp(size_in, size_out, act=keras.layers.ReLU):
return keras.Sequential(
[
layers.Dense(hidden, activation=None, name="layer1", input_shape=(size_in, 1)),
act(),
layers.Dense(hidden, activation=None, name="layer2", input_shape=(hidden, 1)),
act(),
layers.Dense(hidden, activation=None, name="layer3", input_shape=(hidden, 1)),
act(),
layers.Dense(size_out, activation=None, name="layer4", input_shape=(hidden, 1))
])
I am particularly confused about the input/output arguments, because that seems to be where tensorflow and PyTorch differ.
From the documentation:
When a popular kwarg input_shape is passed, then keras will create an
input layer to insert before the current layer. This can be treated
equivalent to explicitly defining an InputLayer.
So, did I get it right?
| In Keras, you can provide an input_shape for the first layer or alternatively use the tf.keras.layers.Input layer. If you do not provide either of these details, the model gets built the first time you call fit, eval, or predict, or the first time you call the model on some input data. So the input shape will actually be inferred if you do not provide it. See the docs for more details. PyTorch generally infers the input shape at runtime.
def keras_mlp(size_in, size_out, act=layers.ReLU):
return keras.Sequential([layers.Input(shape=(size_in,)),
layers.Dense(hidden, name='layer1'),
act(),
layers.Dense(hidden, name='layer2'),
act(),
layers.Dense(hidden, name='layer3'),
act(),
layers.Dense(size_out, name='layer4')])
def pytorch_mlp(size_in, size_out, act=nn.ReLU):
return nn.Sequential(nn.Linear(size_in, hidden),
act(),
nn.Linear(hidden, hidden),
act(),
nn.Linear(hidden, hidden),
act(),
nn.Linear(hidden, size_out))
You can compare their summary.
For Keras:
>>> keras_mlp(10, 5).summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
layer1 (Dense) (None, 128) 1408
re_lu_6 (ReLU) (None, 128) 0
layer2 (Dense) (None, 128) 16512
re_lu_7 (ReLU) (None, 128) 0
layer3 (Dense) (None, 128) 16512
re_lu_8 (ReLU) (None, 128) 0
layer4 (Dense) (None, 5) 645
=================================================================
Total params: 35,077
Trainable params: 35,077
Non-trainable params: 0
_________________________________________________________________
For PyTorch:
>>> summary(pytorch_mlp(10, 5), (1,10))
============================================================================
Layer (type:depth-idx) Output Shape Param #
============================================================================
Sequential [1, 5] --
ββLinear: 1-1 [1, 128] 1,408
ββReLU: 1-2 [1, 128] --
ββLinear: 1-3 [1, 128] 16,512
ββReLU: 1-4 [1, 128] --
ββLinear: 1-5 [1, 128] 16,512
ββReLU: 1-6 [1, 128] --
ββLinear: 1-7 [1, 5] 645
============================================================================
Total params: 35,077
Trainable params: 35,077
Non-trainable params: 0
Total mult-adds (M): 0.04
============================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.14
Estimated Total Size (MB): 0.14
============================================================================
| https://stackoverflow.com/questions/72803869/ |
I am new to pytorch, why am I getting the attribute error even after adding super(ClassName,self).__init__() | I am working on a chatbot based on PyTorch.
I am unable to figure out the reason behind the attribute error even after adding super to the NeuralNet class.
Using jupyter notebook for this project.
HERE IS MY CODE SO FAR , parts from both model.ipynb and train.ipynb
class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
self.l1 = nn.Linear(input_size, hidden_size)
self.l2 = nn.Linear(hidden_size, hidden_size)
self.l3 = nn.Linear(hidden_size, num_classes)
self.relu = nn.ReLU()
#Train
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from ipynb.fs.full.model import NeuralNet
class ChatDataset(Dataset):
def __init__(self):
self.n_samples = len(x_train)
self.x_data = x_train
self.y_data = y_train
#hyperParameters
batch_size = 8
hidden_size = 8
output_size = len(tags)
input_size = len(all_words)
# print(input_size, len(all_words))
# print(output_size, tags)
learning_rate = 0.001
num_epochs = 1000
dataset = ChatDataset()
train_loader = DataLoader(dataset = dataset , batch_size=batch_size, shuffle=True, num_workers=2)
model = NeuralNet(input_size, hidden_size, output_size)
ERROR:
~\Untitled Folder\model.ipynb in __init__(self, input_size, hidden_size, num_classes)
9 "source": [
10 "import torch\n",
---> 11 "import torch.nn as nn"
12 ]
13 },
~\anaconda3a\lib\site-packages\torch\nn\modules\module.py in __setattr__(self, name, value)
1234 if isinstance(value, Module):
1235 if modules is None:
-> 1236 raise AttributeError(
1237 "cannot assign module before Module.__init__() call")
1238 remove_from(self.__dict__, self._parameters, self._buffers, self._non_persistent_buffers_set)
AttributeError: cannot assign module before Module.__init__() call
| This is most probably because you haven't called super().__init__ in your __init__ function of NeuralNet before registering sub-modules to it. See here for additional details.
The only missing component is a function __len__ on ChatDataset. Other than that the provided code runs fine.
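For reference, a minimal sketch of what that could look like on ChatDataset (together with the __getitem__ that the DataLoader will also call), reusing your own attribute names:
class ChatDataset(Dataset):
    def __init__(self):
        self.n_samples = len(x_train)
        self.x_data = x_train
        self.y_data = y_train
    def __len__(self):
        return self.n_samples
    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]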
| https://stackoverflow.com/questions/72804538/ |
Do Layer Normalization in Pytorch without learnable parameters? | We can add layer normalization in Pytorch by doing: torch.nn.LayerNorm(shape). However, this is layer normalization with learnable parameters, i.e. the following equation: y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta, with learnable gamma and beta.
Does Pytorch have builtin layer normalization without learnable parameters?
| You can use nn.LayerNorm, setting the elementwise_affine flag to False. This way the layer won't have learnt parameters. See source code for additional reference.
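For example, with an arbitrary feature size of 64:
import torch
from torch import nn
norm = nn.LayerNorm(normalized_shape=64, elementwise_affine=False)
x = torch.randn(8, 64)
out = norm(x)                    # normalized, no learnable gamma/beta involved
print(list(norm.parameters()))   # [] - the layer has no learnable parameters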
| https://stackoverflow.com/questions/72806582/ |
Pytorch identifying batch size as number of channels in Conv2d layer | I am a total newbie to neural networks using Pytorch to create a VAE model. I've used a bit of tensorflow before, but I have no idea what "in_channels" and "out_channels" are, as arguments to nn.Conv2d/nn.Conv1d.
Disclaimers aside, currently, my model takes in a dataloader with batch size 128 and where each input is a 248 by 46 tensor (so, a 128 x 248 x 46 tensor).
My encoder looks like this right now -- I chopped it down so I could focus on where the error was coming from.
class Encoder(nn.Module):
def __init__(self, latent_dim):
super(Encoder, self).__init__()
self.latent_dim = latent_dim
self.conv1 = nn.Conv2d(in_channels=248, out_channels=46, kernel_size=(9, 9), stride=(5, 1), padding=(5, 4))
def forward(self, x):
print(x.size())
x = F.relu(self.conv1(x))
return x
The Conv2d layer was meant to reduce the 248 by 46 input into a 50 by 46 tensor. However, I get this error:
RuntimeError: Given groups=1, weight of size [46, 248, 9, 9], expected input[1, 128, 248, 46] to have 248 channels, but got 128 channels instead
...even though I print x.size() and it displays as torch.Size([128, 248, 46]).
I am unsure a) why the error shows that the layer is adding on an extra dimension to x, and b) whether I am even understanding channels correctly. Should 46 be the real number of channels? Why doesn't Pytorch simply request my input size as a tuple or something, like in=(248, 46)?
Or c) if this is an issue with the way I loaded in my data to the model. I have a numpy array data of shape (-1, 248, 46) and then started training my model as follows.
tensor_data = torch.from_numpy(data)
dataset = TensorDataset(tensor_data, tensor_data)
train_dl = DataLoader(dataset, batch_size=128, shuffle=True)
...
for epoch in range(20):
for x_train, y_train in train_loader:
x_train = x_train.to(device).float()
optimizer.zero_grad()
x_pred, mu, log_var = vae(x_train)
bce_loss = train.BCE(y_train, x_pred)
kl_loss = train.KL(mu, log_var)
loss = bce_loss + kl_loss
loss.backward()
optimizer.step()
Any thoughts appreciated!
| In pytorch, nn.Conv2d assumes the input (mostly image data) is shaped like: [B, C_in, H, W], where B is the batch size, C_in is the number of channels, H and W are the height and width of the image. The output has a similar shape [B, C_out, H_out, W_out]. Here, C_in and C_out are in_channels and out_channels, respectively. (H_out, W_out) is the output image size, which may or may not equal (H, W), depending on the kernel size, the stride and the padding.
However, it is confusing to apply conv2d to reduce [128, 248, 46] inputs to [128, 50, 46]. Are they image data with height 248 and width 46? If so you can reshape the inputs to [128, 1, 248, 46] and use in_channels = 1 and out_channels = 1 in conv2d.
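For example, a quick sketch of that reshape with the layer hyper-parameters from your code:
x = x.unsqueeze(1)   # [128, 248, 46] -> [128, 1, 248, 46]
conv1 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(9, 9), stride=(5, 1), padding=(5, 4))
out = conv1(x)       # [128, 1, 50, 46]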
| https://stackoverflow.com/questions/72808402/ |
Overwriting vs mutating pytorch weights | I'm trying to understand why I cannot directly overwrite the weights of a torch layer.
Consider the following example:
import torch
from torch import nn
net = nn.Linear(3, 1)
weights = torch.zeros(1,3)
# Overwriting does not work
net.state_dict()["weight"] = weights # nothing happens
print(f"{net.state_dict()['weight']=}")
# But mutating does work
net.state_dict()["weight"][0] = weights # indexing works
print(f"{net.state_dict()['weight']=}")
#########
# output
: net.state_dict()['weight']=tensor([[ 0.5464, -0.4110, -0.1063]])
: net.state_dict()['weight']=tensor([[0., 0., 0.]])
I'm confused since state_dict()["weight"] is just a torch tensor, so I feel I'm missing something really obvious here.
| This is because net.state_dict() first creates a collections.OrderedDict object, then stores the weight tensor(s) of this module to it, and returns the dict:
state_dict = net.state_dict()
print(type(state_dict)) # <class 'collections.OrderedDict'>
When you "overwrite" (it's in fact not an overwrite; it's assignment in python) this ordered dict, you reassign an int 0 to the key 'weights' of this ordered dict. The data in that tensor is not modified, it's just not referred to by the ordered dict.
When you check whether the tensor is modified by:
print(f"{net.state_dict()['weight']}")
a new ordered dict different from the one you have modified is created, so you see the unchanged tensor.
However, when you use indexing like this:
net.state_dict()["weight"][0] = weights # indexing works
then it's not assignment to the ordered dict anymore. Instead, the __setitem__ method of the tensor is called, which allows you to access and modify the underlying memory inplace. Other tensor APIs such as copy_ can also achieve desired results.
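For example, either of these will actually change the parameter values:
with torch.no_grad():
    net.weight.copy_(weights)   # in-place copy into the parameter's storage
# or, going through the state dict explicitly:
sd = net.state_dict()
sd["weight"] = weights
net.load_state_dict(sd)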
A clear explanation on the difference of a = b and a[:] = b when a is a tensor/array can be found here: https://stackoverflow.com/a/68978622/11790637
| https://stackoverflow.com/questions/72809505/ |
PyTorch Dataset / Dataloader from random source | I have a source of random (non-deterministic, non-repeatable) data, that I'd like to wrap in Dataset and Dataloader for PyTorch training. How can I do this?
__len__ is not defined, as the source is infinite (with possible repition).
__getitem__ is not defined, as the source is non-deterministic.
| When defining a custom dataset class, you'd ordinarily subclass torch.utils.data.Dataset and define __len__() and __getitem__().
However, for cases where you want sequential but not random access, you can use an iterable-style dataset. To do this, you instead subclass torch.utils.data.IterableDataset and define __iter__(). Whatever is returned by __iter__() should be a proper iterator; it should maintain state (if necessary) and define __next__() to obtain the next item in the sequence. __next__() should raise StopIteration when there's nothing left to read. In your case with an infinite dataset, it never needs to do this.
Here's an example:
import torch
class MyInfiniteIterator:
def __next__(self):
return torch.randn(10)
class MyInfiniteDataset(torch.utils.data.IterableDataset):
def __iter__(self):
return MyInfiniteIterator()
dataset = MyInfiniteDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size = 32)
for batch in dataloader:
# ... Do some stuff here ...
# ...
# if some_condition:
# break
| https://stackoverflow.com/questions/72809774/ |
convert from 3 channels to 1 channel pytorch tensor from list of tensors | Say I have a list of tensors, volumes, which I can iterate over:
for volume in volumes:
print(volume.shape)
print(type(volume))
torch.Size([3, 512, 512, 222])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 185])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 271])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 261])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 215])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 284])
<class 'torch.Tensor'>
torch.Size([3, 512, 512, 191])
<class 'torch.Tensor'>
How can I change the channel from 3 to 1, for all volumes?
Thanks
| If you just want to keep the first channel of each volume, you can create a new list like this:
new_volumes = [volume[0,...] for volume in volumes]
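If you prefer to keep an explicit channel dimension of size 1 (i.e. shape [1, 512, 512, D] rather than [512, 512, D]), slice instead of indexing:
new_volumes = [volume[:1, ...] for volume in volumes]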
| https://stackoverflow.com/questions/72812602/ |
Pytorch/Numpy: Subtract each of N elements from a single matrix, resulting in N matrices? | Question in the title. Is there an operation or way to broadcast to do this without looping? Here's a simple example with list comprehension:
image = torch.tensor([[6, 9], [8.7, 5.5]])
c = torch.tensor([5.7675, 8.8325])
# with list comprehension
desired_result = torch.stack([image - c_i for c_i in c])
# output:
tensor([[[ 0.2325, 3.2325],
[ 2.9325, -0.2675]],
[[-2.8325, 0.1675],
[-0.1325, -3.3325]]])
I've tried reshaping the "scalar array" every which way to get the desired results with no luck.
| Not sure if torch has outer:
- np.subtract.outer(c.numpy(), image.numpy() )
Output:
array([[[ 0.23250008, 3.2325 ],
[ 2.9325 , -0.26749992]],
[[-2.8325005 , 0.16749954],
[-0.13250065, -3.3325005 ]]], dtype=float32)
In torch, you can flatten the two tensors and reshape:
-(c[:,None] - image.ravel()).reshape(*c.shape, *image.shape)
Output:
tensor([[[ 0.2325, 3.2325],
[ 2.9325, -0.2675]],
[[-2.8325, 0.1675],
[-0.1325, -3.3325]]])
| https://stackoverflow.com/questions/72816643/ |
Detectron2 code not showing anything in object detection | I am using the code described in this article for running inference (object detection) on an image using a trained model.
# import some common detectron2 utilities
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog
import cv2
from detectron2.data.detection_utils import read_image
WINDOW_NAME = "COCO detections"
#im = cv2.imread("sample.jpg") # this also shows the same result
im = read_image("sample.jpg", format="BGR")
# Create config
cfg = get_cfg()
cfg.merge_from_file("C:/Users/preet/detectron_repo/configs/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
cfg.MODEL.WEIGHTS = "detectron2://COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x/139173657/model_final_68b088.pkl"
# Create predictor
predictor = DefaultPredictor(cfg)
# Make prediction
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.imshow(WINDOW_NAME, v.get_image()[:, :, ::-1])
But the window that pops out shows "not responding" and I don't see anything in that window. Moreover, I get the following warning:
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
How to resolve this?
| After digging around, I found out that I missed this line at the end:
cv2.waitKey(0)
That's why everything was working perfectly but the window wasn't responding.
| https://stackoverflow.com/questions/72818987/ |
PyTorch index in a batch | Given tensor IN of shape (A, B, C, D) and index tensor IDX of shape [A, B, C] with torch.long values in [0, C), how can I get a tensor OUT of shape (A, B, C, D) such that:
OUT[a, b, c, :] == IN[a, b, IDX[a, b, c], :]
This is trivial without dimensions A and B:
# C = 2, D = 3
IN = torch.arange(6).view(2, 3)
IDX = torch.tensor([0,0])
print(IN[IDX])
# tensor([[0, 1, 2],
# [0, 1, 2]])
Obviously, I can write a nested for loop over A and B. But surely there must be a vectorized way to do it?
| This is the perfect use case for torch.gather. Given two 4d tensors, input the input tensor and index the tensor containing the indices for input, calling torch.gather on dim=2 will return a tensor out shaped like input such that:
out[i][j][k][l] = input[i][j][index[i][j][k][l]][l]
In other words, index indexes dimension nΒ°3 of input.
Before applying such function though, notice all tensors must have the same number of dimensions. Since index is only 3d, we need to insert and expand an additional 4th dimension on it. We can do so with the following lines:
>>> idx_ = idx[...,None].expand_as(x)
Then call the torch.gather function
>>> x.gather(dim=2, index=idx_)
You can try out the solution with this code:
>>> A = 1; B = 2; C=3; D=2
>>> x = torch.rand(A,B,C,D)
tensor([[[[0.6490, 0.7670],
[0.7847, 0.9058],
[0.3606, 0.7843]],
[[0.0666, 0.7306],
[0.1923, 0.3513],
[0.5287, 0.3680]]]])
>>> idx = torch.randint(0, C, (A,B,C))
tensor([[[1, 2, 2],
[0, 0, 1]]])
>>> x.gather(dim=2, index=idx[...,None].expand_as(x))
tensor([[[[0.7847, 0.9058],
[0.3606, 0.7843],
[0.3606, 0.7843]],
[[0.0666, 0.7306],
[0.0666, 0.7306],
[0.1923, 0.3513]]]])
| https://stackoverflow.com/questions/72825479/ |
how to convert .pth.tar file to a .pt? | I have a .pth.tar file how can I read in that file and then save it to the same directory in a .pt file?
I am using the model zoo mlnf file, for which I cannot find a .pt equivalent.
https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html
| It turns out you can load your .pth.tar file directly with torch.load:
state_dict = torch.load(model_name)
See this thread for reference.
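To then write it back out as a .pt file, you can simply save the loaded object again, e.g.:
torch.save(state_dict, "model.pt")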
| https://stackoverflow.com/questions/72826470/ |
Weird Python Lambda() syntax | As below, I understand lambda y:... .
But is the first Lambda(...) a function?
ds = datasets.FashionMNIST(
...
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
| It's just a function in torchvision for wrapping an arbitrary function as a transform. It's nothing to do with Python syntax, and is spelled Lambda with a capital L instead of lambda to not conflict with the Python keyword.
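For example, the wrapper just applies the function you give it, so these behave the same when called:
from torchvision.transforms import Lambda
t1 = Lambda(lambda y: y * 2)
t2 = lambda y: y * 2
t1(3), t2(3)   # both return 6
The wrapper mainly exists so the function plays nicely with the rest of the torchvision transforms API (e.g. inside Compose).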
| https://stackoverflow.com/questions/72828510/ |
How does dim argument of "Tensor.scatter_" method in PyTorch work? | Could anyone teach me why the below code uses dim=1 in the scatter_ method? The meaning of the attached codes is for one-hot encoding. I tried to read the PyTorch document example and thought I should use dim=0 for the desired result. However, the result has shown that dim=1 is correct instead.
>>> target = torch.tensor([3, 5, 0, 2, 7, 5])
>>> target
tensor([3, 5, 0, 2, 7, 5])
>>> onehot = torch.zeros(target.shape[0], 8)
>>> onehot.scatter_(1, target.unsqueeze(1), 1.0)
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 0., 0.]])
| You are applying scatter on a zero tensor onehot shaped (len(target), 8) on dim=1 using target as input and 1. as value. This will have the following effect on onehot:
onehot[i][target[i][j]] = 1.
This means that for every row of target it will look at its single value, since j is always equal to 0 (target.unsqueeze(1) has only one column), and use it to index the 2nd axis of onehot. In other words, for every row, it takes the value from target to position the 1. among the columns of onehot.
Step by step illustration would be:
>>> for i in range(len(target)):
... k = target[i] # k, depends on values of target i.e. dim=1
... onehot[i, k] = 1
... print(onehot)
tensor([[0., 0., 0., 1., 0., 0., 0., 0.], # i=0; k=3
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.], # i=1; k=5
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.], # i=2; k=0
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.], # i=3; k=2
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1.], # i=4; k=7
[0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1., 0., 0.]]) # i=5; k=5
Notice that onehot.scatter_(0, target.unsqueeze(1), 1.0) would have produced:
onehot[target[i][j]][j] = 1.
Which is a valid operation only if you initialize onehot the other way around:
>>> onehot = torch.zeros(8, len(target))
>>> onehot.scatter_(0, target.unsqueeze(1), 1.)
tensor([[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0.]])
And you get the transpose of the other matrix.
| https://stackoverflow.com/questions/72829631/ |
Why is my generator and discriminator loss converging at higher values in WGAN-GP? | This is the loss plot of WGAN-GP after training for 14000 iterations. My image size is 128 by 128. Though the loss plot seems to be converging, the generator loss at iteration 14000 is -26646 and critic loss is -249909.
Loss plot
| Batch Normalization in the discriminator breaks Wasserstein GANs with gradient penalty. The authors themselves advocate the usage of layer normalization instead; this is clearly written in bold in their paper (https://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf). It is hard to say if there are other bugs in your code, but I urge you to thoroughly read the DCGAN and the Wasserstein GAN papers and really take notes on the hyperparameters. Getting them wrong really destroys the performance of the GAN, and doing a hyperparameter search gets expensive quite quickly.
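To make that concrete, here is a minimal sketch of what the swap can look like in a convolutional critic block; the block layout, kernel size and feature-map size are assumptions for illustration, not taken from your code:
import torch.nn as nn
def critic_block(in_ch, out_ch, feat_hw):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.LayerNorm([out_ch, feat_hw, feat_hw]),  # instead of nn.BatchNorm2d(out_ch)
        nn.LeakyReLU(0.2),
    )
Here feat_hw is the spatial size of the feature map after the convolution, which layer norm needs to know up front.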
By the way, transposed convolutions produce stairway (checkerboard) artifacts in your output images. Use image resizing instead. For an in-depth explanation of that phenomenon I can recommend the following resource (https://distill.pub/2016/deconv-checkerboard/).
This is an interesting find as well, which may help you: Accelerated WGAN update strategy with loss change rate balancing.
| https://stackoverflow.com/questions/72831448/ |
pytorch conv2d vs numpy results are different | I'm working on implementing pytorch conv2d with numpy. But pytorch conv2d vs numpy results are different for the same input and conv weight. How to fix it? Thanks for any help.
Code sample below:
Note:
The code contains 4 parts:
Fixed random seed to generate fixed input and conv weight.
Implement conv2d with pytorch.
Implement conv2d with numpy.
Run conv2d and verify the results are different.
import random
import numpy as np
import torch
import torch.nn as nn
fixed_seed = 5179
np.random.seed(fixed_seed)
random.seed(fixed_seed)
torch.manual_seed(fixed_seed)
np.set_printoptions(precision=8, floatmode="fixed")
torch.set_printoptions(precision=8)
def conv_forward_torch(input_image_tensor, weight, stride, pad):
# input_image_tensor B Hi Wi Ci
# weight Hk Wk Ci Co
input_image_tensor = input_image_tensor.permute(0, 3, 1, 2) # B Ci Hi Wi
weight = weight.permute(3, 2, 0, 1) # Co Ci Hk Wk
output = torch.nn.functional.conv2d(input_image_tensor, weight, stride=stride, padding=pad) # B Co Ho Wo
output = output.permute(0, 2, 3, 1).cpu().detach().numpy() # B Ho Wo Co
return output
def conv_forward_naive(x, w, stride, pad, bias = None):
# x B Hi Wi Ci
# w Hk Wk Ci Co
x = np.transpose(x, [0, 3, 1, 2]) # B Ci Hi Wi
w = np.transpose(w, [3, 2, 0, 1]) # Co Ci Kh Kw
if pad != 0:
x = np.pad(x, ((0, 0), (0 ,0), (pad, pad), (pad, pad)),'constant')
b, ci, hi, wi = x.shape
co, ci, hk, wk = w.shape
ho = np.floor(1 + (hi - hk) / stride).astype(int)
wo = np.floor(1 + (wi - wk) / stride).astype(int)
out = np.zeros((b, co, ho, wo), dtype=np.float32) # B Co Ho Wo
x = np.expand_dims(x, axis=1) # B 1 Ci Hi Wi
w = np.expand_dims(w, axis=0) # 1 Co Ci Hk Wk
for i in range(ho):
for j in range(wo):
x_windows = x[:, :, :, i * stride:i * stride + hk, j * stride: j * stride + wk] # B 1 Ci Hk Wk
out[:, :, i, j] = np.sum(x_windows * w, axis=(2, 3, 4)) # B Co
out = np.transpose(out, [0, 2, 3, 1]) # B Ho Wo Co
return out
B = 1 # Batch size
Hi = 2 # Input height
Wi = 2 # Input width
Ci = 1 # Input channel
Co = 1 # Ouput channel
P = 0 # Padding size
Hk = 2 # Kernel height
Wk = 2 # Kernel width
S = 1 # Stride
input_image_tensor = torch.randn(B, Hi, Wi, Ci)
conv_weight_tensor = torch.randn(Hk, Wk, Ci, Co)
input_image = input_image_tensor.detach().numpy()
conv_weight = conv_weight_tensor.detach().numpy()
y_torch = conv_forward_torch(input_image_tensor, conv_weight_tensor, S, P)
y_np = conv_forward_naive(input_image, conv_weight, S, P)
is_same = y_torch == y_np
print(is_same, y_torch, y_np)
Expected output:
is_same should be True.
Actual output for y_torch, y_np:
[[[[False]]]] [[[[-3.62229419]]]] [[[[-3.62229395]]]]
torch.version = '1.11.0'
np.version = '1.20.1'
| Your code works as expected. The result you observed is because of a difference in floating-point precisions between NumPy and PyTorch. To compare floating points you should not use a direct equal check, but instead something like np.allclose.
In this case your snippet indeed returns True:
>>> np.allclose(y_torch, y_np)
True
| https://stackoverflow.com/questions/72840140/ |
Create 3D kernels with Pytorch: Is there a simple way for large kernels? | I would like to test multiple kernels for convolution in 3D space. Here k is a 7x7x7 cross kernel, where the voxel in the middle of the kernel equals 1 and all other voxels in the wings equal -1. This way of assigning k is messy and time-consuming. Any ideas to make it more clear and simple?
tensor([[[[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., -1., 0., 0., 0.],
[ 0., 0., 0., -1., 0., 0., 0.],
[ 0., 0., 0., -1., 0., 0., 0.],
[-1., -1., -1., 1., -1., -1., -1.],
[ 0., 0., 0., -1., 0., 0., 0.],
[ 0., 0., 0., -1., 0., 0., 0.],
[ 0., 0., 0., -1., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0.]]]]])
| You can do so with column and row assignments, given kernel_size, in your case 7. Keep in mind that the shape of a nn.Conv3d layer weight is (out_channels=n_filters, in_channels, kernel_size_depth, kernel_size_height, kernel_size_width).
>>> n = kernel_size // 2
>>> k = torch.zeros(1,1,*(kernel_size,)*3) # define zero tensor
>>> k[...,n,n,:] = -1 # middle row at middle depth
>>> k[...,n,:,n] = -1 # middle column at middle depth
>>> k[...,n,n] = 1 # middle point at all depths
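If it helps, here is a minimal sketch of how such a kernel could be applied with the functional API (the input volume below is just a made-up example):
import torch
import torch.nn.functional as F

kernel_size = 7
n = kernel_size // 2
k = torch.zeros(1, 1, kernel_size, kernel_size, kernel_size)  # (Co, Ci, Kd, Kh, Kw)
k[..., n, n, :] = -1  # middle row of the middle depth slice
k[..., n, :, n] = -1  # middle column of the middle depth slice
k[..., n, n] = 1      # centre voxel at every depth

x = torch.randn(1, 1, 32, 32, 32)  # hypothetical (B, C, D, H, W) volume
out = F.conv3d(x, k, padding=n)    # padding=n keeps the spatial size unchanged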
| https://stackoverflow.com/questions/72840249/ |
How do you use Pytorch model's function in Onnx to get output instead of model.forward() function | TL;DR: How can I use model.whatever_function(input) instead of model.forward(input) for the onnxruntime?
I use CLIP embeddings to create embeddings for my images and texts as follows:
Code is from the official git merge
! pip install ftfy regex tqdm
! pip install git+https://github.com/openai/CLIP.git
import clip
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model, preprocess = clip.load("RN50", device=device) # Load any model
model = model.eval() # Inference Only
img_size = model.visual.input_resolution
dummy_image = torch.randn(10, 3, img_size, img_size).to(device)
image_embedding = model.encode_image(dummy_image).to(device)
dummy_texts = clip.tokenize(["quick brown fox", "lorem ipsum"]).to(device)
model.encode_text(dummy_texts)
and it works fine, giving me [Batch, 1024] tensors for both with the loaded model.
Now I have quantized my model in Onnx as:
model.forward(dummy_image,dummy_texts) # Original CLIP result (1)
torch.onnx.export(model, (dummy_image, dummy_texts), "model.onnx", export_params=True,
input_names=["IMAGE", "TEXT"],
output_names=["LOGITS_PER_IMAGE", "LOGITS_PER_TEXT"],
opset_version=14,
dynamic_axes={
"IMAGE": {
0: "image_batch_size",
},
"TEXT": {
0: "text_batch_size",
},
"LOGITS_PER_IMAGE": {
0: "image_batch_size",
1: "text_batch_size",
},
"LOGITS_PER_TEXT": {
0: "text_batch_size",
1: "image_batch_size",
},
}
)
and the model is saved.
When I test the model as :
# Now run onnxruntime to verify
import onnxruntime as ort
ort_sess = ort.InferenceSession("model.onnx")
result=ort_sess.run(["LOGITS_PER_IMAGE", "LOGITS_PER_TEXT"],
{"IMAGE": dummy_image.numpy(), "TEXT": dummy_texts.numpy()})
It gives me a list of length 2, one for each image and text and the result[0] has shape of [Batch,2].
| If your encode_image on your module isn't calling forward then nothing is stopping you from overriding forward before exporting to Onnx:
>>> model.forward = model.encode_image
>>> torch.onnx.export(model, (dummy_image, dummy_texts), "model.onnx", ...)
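If you additionally want a separate ONNX graph per encoder (image and text), another option is to export thin wrapper modules; the class below is a sketch of my own, not part of the CLIP API:
class ImageEncoder(torch.nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, image):
        # delegate to CLIP's image encoder
        return self.clip_model.encode_image(image)

torch.onnx.export(ImageEncoder(model), (dummy_image,), "clip_image.onnx",
                  input_names=["IMAGE"], output_names=["IMAGE_EMBEDDING"],
                  opset_version=14,
                  dynamic_axes={"IMAGE": {0: "image_batch_size"}})
An analogous wrapper around model.encode_text gives you the text encoder graph.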
| https://stackoverflow.com/questions/72841141/ |
Vectorizing torch tensor instead of using for loop | I am looking to make this calculation without using any for loops (vectorized) but can't really seem to find a good solution. Maybe someone can help?
edge_in = torch.ones(len(edge_embeds), len(edge_embeds[0]), len(edge_embeds[0][0]) + 2*len(nodes_a_embeds[0]))
for i in range(0, len(nodes_a_embeds)): # A
for u in range(0, len(nodes_b_embeds)): # B
edge_in[i][u] = torch.cat([nodes_a_embeds[i], nodes_b_embeds[u], edge_embeds[i][u]], dim=0)
# OUT: edge_in: torch.Tensor with shape (|A|, |B|, 2*node_dim + 2*edge_dim)
# IN: edge_embeds: torch.Tensor with shape (|A|, |B|, 2 x edge_dim)
# IN: nodes_a_embeds: torch.Tensor with shape (|A|, node_dim)
# IN: nodes_b_embeds: torch.Tensor with shape (|B|, node_dim)
| You can expand nodes_a_embed and nodes_b_embeds to the same shape as edge_embeds and concatenate them directly:
nodes_a_embed = nodes_a_embeds[:, None].expand(-1, n_B, -1): [n_A, node_dim] => [n_A, n_B, node_dim]
nodes_b_embed = nodes_b_embeds[None].expand(n_A, -1, -1): [n_B, node_dim] => [n_A, n_B, node_dim]
Verification:
import torch
n_A = 100
n_B = 200
node_dim = 32
edge_dim = 32
edge_in = torch.randn(n_A, n_B, 2*node_dim + 2*edge_dim)
edge_embeds = torch.randn(n_A, n_B, 2*edge_dim)
nodes_a_embeds = torch.randn(n_A, node_dim)
nodes_b_embeds = torch.randn(n_B, node_dim)
edge_in = torch.ones(len(edge_embeds), len(edge_embeds[0]), len(edge_embeds[0][0]) + 2*len(nodes_a_embeds[0]))
for i in range(0, len(nodes_a_embeds)): # A
for u in range(0, len(nodes_b_embeds)): # B
edge_in[i][u] = torch.cat([nodes_a_embeds[i], nodes_b_embeds[u], edge_embeds[i][u]], dim=0)
# vectorized version
edge_in_vectorized = torch.cat([
nodes_a_embeds[:, None].expand(-1, n_B, -1),
nodes_b_embeds[None].expand(n_A, -1, -1),
edge_embeds], dim=-1)
print((edge_in_vectorized == edge_in).all()) # True
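(Note that expand only creates views without copying data, so the only new memory allocated in the vectorized version is for the final concatenated tensor.)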
| https://stackoverflow.com/questions/72841511/ |
What happens in a convolution when the stride is larger than the kernel? | I was recently experimenting with convolutions and transposed convolutions in PyTorch. I noticed that with the nn.ConvTranspose2d API (I haven't tried with the normal convolution API yet), you can specify a stride that is larger than the kernel size and the convolution will still work.
What is happening in this case? I'm confused because if the stride is larger than the kernel, that means some pixels in the input image will not be convolved. So what happens to them?
I have the following snippet where I manually set the weights for a nn.ConvTranspose2d layer:
IN = 1
OUT = 1
KERNEL_SIZE = 2
proof_conv = nn.ConvTranspose2d(IN, OUT, kernel_size=KERNEL_SIZE, stride=4)
assert proof_conv.weight.shape == (IN, OUT, KERNEL_SIZE, KERNEL_SIZE)
FILTER = [
[1., 2.],
[0., 1.]
]
weights = [
[FILTER]
]
weights_as_tensor = torch.from_numpy(np.asarray(weights)).float()
assert weights_as_tensor.shape == proof_conv.weight.shape
proof_conv.weight = nn.Parameter(weights_as_tensor)
img = [[
[1., 2.],
[3., 4.]
]]
img_as_tensor = torch.from_numpy(np.asarray(img)).float()
out_img = proof_conv(img_as_tensor)
assert out_img.shape == (OUT, 6, 6)
The stride is larger than the KERNEL_SIZE of 2. Yet, the transposed convolution still occurs and we get an output of 6x6. What is happening underneath the hood?
This post: Understanding the PyTorch implementation of Conv2DTranspose is helpful but does not answer the edge-case of when the stride is greater than the kernel.
| As you already guessed - when the stride is larger than the kernel size, there are input pixels that do not participate in the convolution operation.
It's up to you - the designer of the architecture to decide whether this property is a bug or a feature. In some cases, I took advantage of this property to ignore portions of the inputs.
Update:
I think you are being confused by the bias term in proof_conv. Try to eliminate it:
proof_conv = nn.ConvTranspose2d(IN, OUT, kernel_size=KERNEL_SIZE, stride=4, bias=False)
Now you'll get out_img to be:
[[[[1., 2., 0., 0., 2., 4.],
[0., 1., 0., 0., 0., 2.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[3., 6., 0., 0., 4., 8.],
[0., 3., 0., 0., 0., 4.]]]]
Which represent 4 copies of the kernel, weighted by the input image, spaced 4 pixels apart according to stride=4.
The rest of the output image is filled with zeros - representing pixels that do not contribute to the transposed convolution.
ConvTranspose follows the same "logic" as the regular conv, only in a "transposed" fashion. If you look at the formula for computing output shape you'll see that the behavior you get is consistent.
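Concretely, the output size of nn.ConvTranspose2d is H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1, so with H_in = 2, stride = 4, padding = 0, dilation = 1, kernel_size = 2 and output_padding = 0 you get (2 - 1) * 4 + 1 + 1 = 6, hence the 6x6 output.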
| https://stackoverflow.com/questions/72843240/ |
How to load the whole dataset to GPU | I have a dataset of 1550 images of size 3x112x112. When I am training my model I create the dataset via ImageFolder and then use a DataLoader. It takes so much time because the images are read from disk every time. I have enough GPU memory to load the whole dataset at once. It will be much faster. What is the best way to do it?
| You can store the images as an attribute of the dataset, put it on the GPU at initialization, and let __getitem__ return images from this directly.
import torch
from torch.utils.data import Dataset, DataLoader
class GPUDataset(Dataset):
def __init__(self):
self.len = 1550
self.data = torch.randn(self.len, 3, 112, 112).cuda() # ~ 223 MB
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
dataset = GPUDataset()
loader = DataLoader(dataset, batch_size=4)
batch = next(iter(loader))
print(batch.device, batch.shape) # cuda:0 torch.Size([4, 3, 112, 112])
This is meant to be a quick fix for small datasets. For larger datasets, consider more advanced solutions such as lmdb.
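One caveat: because the samples are already CUDA tensors, keep num_workers=0 (the default) in the DataLoader and leave pin_memory off, since CUDA tensors generally don't play well with worker subprocesses or memory pinning.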
| https://stackoverflow.com/questions/72843847/ |
`torch.gather` without unbroadcasting | I have some batched input x of shape [batch, time, feature], and some batched indices i of shape [batch, new_time] which I want to gather into the time dim of x. As output of this operation I want a tensor y of shape [batch, new_time, feature] with values like this:
y[b, t', f] = x[b, i[b, t'], f]
In Tensorflow, I can accomplish this by using the batch_dims: int argument of tf.gather: y = tf.gather(x, i, axis=1, batch_dims=1).
In PyTorch, I can think of some functions which do similar things:
torch.gather of course, but this does not have an argument similar to Tensorflow's batch_dims. The output of torch.gather will always have the same shape as the indices. So I would need to unbroadcast the feature dim into i before passing it to torch.gather.
torch.index_select, but here, the indices must be one-dimensional. So to make it work I would need to unbroadcast x to add a "batch * new_time" dim, and then after torch.index_select reshape the output.
torch.nn.functional.embedding. Here, the embedding matrices would correspond to x. But this embedding function does not support the weights to be batched, so I run into the same issue as for torch.index_select (looking at the code, tf.embedding uses torch.index_select under the hood).
Is it possible to accomplish such gather operation without relying on unbroadcasting which is inefficient for large dims?
| This is actually the most frequent case: when input and index tensors don't perfectly match the number of dimensions. You can still utilize torch.gather though since you can rewrite your expression:
y[b, t, f] = x[b, i[b, t], f]
as:
y[b, t, f] = x[b, i[b, t, f], f]
which ensures all three tensors have an equal number of dimensions. This reveals a third dimension on i, which we can easily create for free by unsqueezing a dimension and expanding it to the shape of x. You can do so with i[:,None].expand_as(x).
Here is a minimal example:
>>> b = 2; t = 3; f = 1
>>> x = torch.rand(b, t, f)
>>> i = torch.randint(0, t, (b, f))
>>> x.gather(1, i[:,None].expand_as(x))
| https://stackoverflow.com/questions/72845808/ |
bert-base-uncased: TypeError: tuple indices must be integers or slices, not tuple | I want to see the embeddings for the input text I give to the model, and then feed them to the rest of BERT. To do so, I partitioned the model into two sequential models, but I must have done something wrong because the rest_of_bert model raises a TypeError. The original model does not raise any error with the input_ids produced by the text_to_input function.
Input[0]:
import torch
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
cls_token_id = tokenizer.cls_token_id
sep_token_id = tokenizer.sep_token_id
pad_token_id = tokenizer.pad_token_id
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()
Output[0]:
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
Input[1]:
def text_to_input(text):
x = tokenizer.encode(text, add_special_tokens=False) # returns python list
x = [cls_token_id] + x + [sep_token_id]
token_count = len(x)
pad_count = 512 - token_count
x = x + [pad_token_id for i in range(pad_count)]
return torch.tensor([x])
extract_embeddings = torch.nn.Sequential(list(model.children())[0])
rest_of_bert = torch.nn.Sequential(*list(model.children())[1:])
input_ids = text_to_input('A sentence.')
x_embedding = extract_embeddings(input_ids)
output = rest_of_bert(x_embedding)
Output[1]:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-d371d8a2fb3c> in <module>()
12 input_ids = text_to_input('A sentence.')
13 x_embedding = extract_embeddings(input_ids)
---> 14 output = rest_of_bert(x_embedding)
4 frames
/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py in __getitem__(self, k)
220 return inner_dict[k]
221 else:
--> 222 return self.to_tuple()[k]
223
224 def __setattr__(self, name, value):
TypeError: tuple indices must be integers or slices, not tuple
| Each BertLayer returns a tuple that contains at least one tensor (depending on what output you requested). The first element of the tuple is the tensor you want to feed to the next BertLayer.
A more huggingface-like approach would be calling the model with output_hidden_states:
o = model(input_ids, output_hidden_states=True)
print(len(o.hidden_states))
Output:
13
The first tensor of the hidden_states tuple is the output of your extract_embeddings object (token embeddings). The other 12 tensors are the contextualized embeddings that are the output of each BertLayer.
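A quick sanity check, assuming the model is in eval mode: torch.equal(o.hidden_states[-1], o.last_hidden_state) returns True, and o.hidden_states[0] matches what your extract_embeddings module returns.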
You should, by the way, provide an attention mask, because otherwise, your padding tokens will affect your output. The tokenizer is able to do that for you and you can replace your whole text_to_input method with:
tokenizer('A sentence.', return_tensors='pt', padding='max_length', max_length=512)
| https://stackoverflow.com/questions/72845812/ |
PyTorch: running Neural Network in identical venv-configuration - one working fine, one keeps throwing 'list index out of range' error | I set up a virtual environment via 'venv' to build and train a neural network using PyTorch and JupyterLab. However, when working on my computer 'PC 1' everything works fine, but running the same code with identical Python settings on my second computer ('PC 2') constantly throws an error ('list index out of range') while training.
Any suggestions as to what might cause this behavior? I'm running out of ideas ...
To be more specific:
On both computers Python 3.7.9 is installed via the Microsoft Store.
To be clear: both computers access the same *.ipynb and the same data / datasets, which are synced via a cloud service.
I tried:
I synced the created venv (via the cloud), activated the venv and ran the *.ipynb via jupyter-lab (on 'PC 2') -> error
I got my venv configuration from the working 'PC 1' via pip freeze > requirements.txt and set up a fresh venv on 'PC 2' using the requirements.txt -> error
Whatever I try, the error thrown is always the same.
This is the error thrown:
('julab' is my venv)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_9608\2329219183.py in <module>
10 val_dl = test_loader,
11 epochs=num_epochs,
---> 12 device='cpu')
~\AppData\Local\Temp\ipykernel_9608\132402798.py in train(model, optimizer, loss_fn, train_dl, val_dl, epochs, device)
27 num_train_examples = 0
28
---> 29 for batch in train_dl:
30
31 optimizer.zero_grad()
d:\<CLOUD>\<SUBFOLDER>\julab\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
650 # TODO(https://github.com/pytorch/pytorch/issues/76750)
651 self._reset() # type: ignore[call-arg]
--> 652 data = self._next_data()
653 self._num_yielded += 1
654 if self._dataset_kind == _DatasetKind.Iterable and \
d:\<CLOUD>\<SUBFOLDER>\julab\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
690 def _next_data(self):
691 index = self._next_index() # may raise StopIteration
--> 692 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
693 if self._pin_memory:
694 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
d:\<CLOUD>\<SUBFOLDER>\julab\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
47 def fetch(self, possibly_batched_index):
48 if self.auto_collation:
---> 49 data = [self.dataset[idx] for idx in possibly_batched_index]
50 else:
51 data = self.dataset[possibly_batched_index]
d:\<CLOUD>\<SUBFOLDER>\julab\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0)
47 def fetch(self, possibly_batched_index):
48 if self.auto_collation:
---> 49 data = [self.dataset[idx] for idx in possibly_batched_index]
50 else:
51 data = self.dataset[possibly_batched_index]
d:\<CLOUD>\<SUBFOLDER>\julab\lib\site-packages\torch\utils\data\dataset.py in __getitem__(self, idx)
288 if isinstance(idx, list):
289 return self.dataset[[self.indices[i] for i in idx]]
--> 290 return self.dataset[self.indices[idx]]
291
292 def __len__(self):
~\AppData\Local\Temp\ipykernel_9608\2122586536.py in __getitem__(self, index)
32
33 def __getitem__(self, index):
---> 34 image_name = os.path.join(self.image_dir, self.image_files[index])
35 image = PIL.Image.open(image_name)
36 label = self.data[index]
IndexError: list index out of range
And as this might help, this is my requirements.txt:
anyio==3.6.1
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
attrs==21.4.0
Babel==2.10.3
backcall==0.2.0
beautifulsoup4==4.11.1
bleach==5.0.1
certifi==2022.6.15
cffi==1.15.1
charset-normalizer==2.1.0
colorama==0.4.5
cycler==0.11.0
debugpy==1.6.0
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.5.1
entrypoints==0.4
fastjsonschema==2.15.3
fonttools==4.33.3
idna==3.3
importlib-metadata==4.12.0
importlib-resources==5.8.0
ipykernel==6.15.0
ipython==7.34.0
ipython-genutils==0.2.0
jedi==0.18.1
Jinja2==3.1.2
joblib==1.1.0
json5==0.9.8
jsonschema==4.6.1
jupyter-client==7.3.4
jupyter-core==4.10.0
jupyter-server==1.18.0
jupyterlab==3.4.3
jupyterlab-pygments==0.2.2
jupyterlab-server==2.14.0
kiwisolver==1.4.3
MarkupSafe==2.1.1
matplotlib==3.5.2
matplotlib-inline==0.1.3
mistune==0.8.4
nbclassic==0.4.0
nbclient==0.6.6
nbconvert==6.5.0
nbformat==5.4.0
nest-asyncio==1.5.5
notebook-shim==0.1.0
numpy==1.21.6
packaging==21.3
pandas==1.3.5
pandocfilters==1.5.0
parso==0.8.3
pickleshare==0.7.5
Pillow==9.2.0
prometheus-client==0.14.1
prompt-toolkit==3.0.30
psutil==5.9.1
pycparser==2.21
Pygments==2.12.0
pyparsing==3.0.9
pyrsistent==0.18.1
python-dateutil==2.8.2
pytz==2022.1
pywin32==304
pywinpty==2.0.5
pyzmq==23.2.0
requests==2.28.1
scikit-learn==1.0.2
scipy==1.7.3
Send2Trash==1.8.0
six==1.16.0
sklearn==0.0
sniffio==1.2.0
soupsieve==2.3.2.post1
terminado==0.15.0
threadpoolctl==3.1.0
tinycss2==1.1.1
torch==1.12.0
torchsummary==1.5.1
torchvision==0.13.0
tornado==6.1
traitlets==5.3.0
typing_extensions==4.3.0
urllib3==1.26.9
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.3.3
zipp==3.8.0
Does anyone have an idea what the problem is and how to solve it? Many thanks in advance!
| [solved] see comments above...
| https://stackoverflow.com/questions/72847523/ |
Merge images by mask | I'm trying to merge two images according to the values in a mask: at all points where the mask is 1, the resulting image takes the values of the first image, and otherwise it takes the values of the second image. Does anyone know how it can be achieved in pytorch? Using numpy, it can be achieved using
>>> import numpy as np
>>> img1 = np.random.rand(100,100,3)
>>> img2 = np.random.rand(100,100,3)
>>> mask = np.random.rand(100,100)>.5
>>> res = img2.copy()
>>> res[mask] = img1[mask]
| You are looking for np.where:
res = np.where(mask[..., None], img1, img2)  # the trailing axis lets the (100, 100) mask broadcast over the 3 channels
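The PyTorch equivalent is torch.where; a minimal sketch, assuming (H, W, C) image tensors and a boolean (H, W) mask:
import torch

img1 = torch.rand(100, 100, 3)
img2 = torch.rand(100, 100, 3)
mask = torch.rand(100, 100) > .5

res = torch.where(mask[..., None], img1, img2)  # broadcasts the mask over the channel axis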
| https://stackoverflow.com/questions/72849119/ |
Are PyTorch activation functions best stored as fields? | An example of a simple neural network in PyTorch can be found at https://visualstudiomagazine.com/articles/2020/10/14/pytorch-define-network.aspx
class Net(T.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.hid1 = T.nn.Linear(4, 8) # 4-(8-8)-1
self.hid2 = T.nn.Linear(8, 8)
self.oupt = T.nn.Linear(8, 1)
T.nn.init.xavier_uniform_(self.hid1.weight)
T.nn.init.zeros_(self.hid1.bias)
T.nn.init.xavier_uniform_(self.hid2.weight)
T.nn.init.zeros_(self.hid2.bias)
T.nn.init.xavier_uniform_(self.oupt.weight)
T.nn.init.zeros_(self.oupt.bias)
def forward(self, x):
z = T.tanh(self.hid1(x))
z = T.tanh(self.hid2(z))
z = T.sigmoid(self.oupt(z))
return z
A distinctive feature of the above is that the layers are stored as fields within the Net object (as they need to be, in the sense that they contain the weights, which need to be remembered across training epochs), but the activation functors such as tanh are re-created on every call to forward. The author says:
The most common structure for a binary classification network is to define the network layers and their associated weights and biases in the __init__() method, and the input-output computations in the forward() method.
Fair enough. On the other hand, perhaps it would be marginally faster to store the functors rather than re-create them on every call to forward. On the third hand, it's unlikely to make any measurable difference, which means it might end up being a matter of code style.
Is the above, indeed the most common way to do it? Does either way have any technical advantage, or is it just a matter of style?
| On "storing" functors
The snippet is not "re-creating" anything -- calling torch.tanh(x) is literally just calling the function tanh exported by the torch package with arguments x.
Other ways of doing it
I think the snippet is a fair example for small neural blocks that are use-and-forget or are just not meant to be parameterizable.
Depending on your intentions, there are of course alternatives, but you'd have to weigh yourself whether the added complexity offers any value.
activation functions as strings
allow a selection of an activation function from a fixed set
class Model(torch.nn.Module):
def __init__(..., activation_function: Literal['tanh'] | Literal['relu']):
...
if activation_function == 'tanh':
self.activation_function = torch.tanh
elif activation_function == 'relu':
self.activation_function = torch.relu
else:
raise ValueError(f'activation function {activation_function} not allowed, use tanh or relu.')
def forward(...) -> Tensor:
output = ...
return self.activation_function(output)
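which could then be constructed as, e.g., model = Model(..., activation_function='relu').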
activation functions as callables
use arbitrary modules or functions as activations
class Model(torch.nn.Module):
def __init__(..., activation_function: torch.nn.Module | Callable[[Tensor], Tensor]):
self.activation_function = activation_function
def forward(...) -> Tensor:
output = ...
return self.activation_function(output)
which would for instance work like
def cube(x: Tensor) -> Tensor: return x**3
cubic_model = Model(..., activation_function=cube)
The key difference between the above examples and your snippet is the fact that the latter are transparent and adjustable wrt. to the activation used; you can inspect the activation function (i.e. model.activation_function), and change it (before or after initialization), whereas in the case of the original snippet it is invisible and baked into the model's functionality (to replicate the model with a different function, you'd need to define it from scratch).
Overall, I think the best way to go is to create small, locally tunable blocks that are as parametric as you need them to be, and wrap them into bigger blocks that make generalizations over the contained parameters. i.e. if your big model consists of 5 linear layers, you could make a single, activation-parametric wrapper for 1 layer (including dropouts, layer norms, whatever), and then another wrapper for a flow of N layers, which asks once for which activation function to initialize its children with. In other words, generalize and parameterize when you anticipate this to save you from extra effort and copy-pasting code in the future, but don't overdo it or you'll end up far away from your original specifications and needs.
ps: I don't know whether calling activation functions functors is justifiable.
| https://stackoverflow.com/questions/72850229/ |
Fast computation for changing the leftmost different bit | Given the two number in 8-bit:
x = 0b11110111
y = 0b11001010
What I want to do is to compare x and y and change x only the first different leftmost bit based on y. For example:
z = 0b11010111 (Because the leftmost different bit between x and y is in the third place, change the third bit in x based on y and leave the others the same.)
And my code is:
flag = True
for i in range(8):
if flag and x[i] != y[i]: # Change only the left-most different bit.
flag = False
else:
y[i] = x[i] # Otherwise, remain the same.
This works fine.
Buit the problem is if I have many pairs like:
for (x, y) in nums:
flag = True
for i in range(8):
if flag and x[i] != y[i]: # Change only the left-most different bit.
flag = False
else:
y[i] = x[i] # Otherwise, remain the same.
When nums is large, this process becomes really slow.
So how can I speed this process up?
BTW, this is part of a deep learning project, so it can run on the GPU, but I don't know whether this computation can be parallelized on the GPU or not.
| The function you're after:
from math import floor, log2
def my_fun(x, y):
return x ^ (2 ** floor(log2(x ^ y)))
z = my_fun(0b11110111, 0b11001010)
print(f'{z:b}')
Output:
11010111
The function does the following:
compute the XOR result of x and y, which will include the most significant bit where they differ as the most significant bit to be 1
compute the floor of the log2 of that value, and raising 2 to that power, to get a number that only has that bit set to 1
return the XOR of x and that number, flipping the relevant bit
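Since you mention the GPU, here is a vectorised sketch of the same idea in PyTorch; it assumes the pairs are stored as two integer tensors xs and ys and that every pair actually differs (otherwise log2(0) is undefined). For 8-bit values the float log2 is safe; for much wider integers you would want an integer-only variant:
import torch

xs = torch.tensor([0b11110111, 0b10101010])
ys = torch.tensor([0b11001010, 0b00101010])

diff = xs ^ ys                                        # XOR marks every differing bit
msb = (2 ** torch.log2(diff.float()).floor()).long()  # isolate the leftmost differing bit
zs = xs ^ msb                                         # flip that bit in x

print([f'{z:08b}' for z in zs.tolist()])              # ['11010111', '00101010']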
| https://stackoverflow.com/questions/72851439/ |
Are the pre-trained layers of the Huggingface BERT models frozen? | I use the following classification model from Huggingface:
model = AutoModelForSequenceClassification.from_pretrained("dbmdz/bert-base-german-cased", num_labels=2).to(device)
As I understand, this adds a dense layer at the end of the pre-trained model which has 2 output nodes. But are all the pre-trained layers before that frozen? Or are they also updated when fine-tuning? I can't find information about that in the docs...
So do I still have to do something like this?:
for param in model.bert.parameters():
param.requires_grad = False
| They are not frozen. All parameters are trainable by default. You can also check that with:
for name, param in model.named_parameters():
print(name, param.requires_grad)
Output:
bert.embeddings.word_embeddings.weight True
bert.embeddings.position_embeddings.weight True
bert.embeddings.token_type_embeddings.weight True
bert.embeddings.LayerNorm.weight True
bert.embeddings.LayerNorm.bias True
bert.encoder.layer.0.attention.self.query.weight True
bert.encoder.layer.0.attention.self.query.bias True
bert.encoder.layer.0.attention.self.key.weight True
bert.encoder.layer.0.attention.self.key.bias True
bert.encoder.layer.0.attention.self.value.weight True
bert.encoder.layer.0.attention.self.value.bias True
bert.encoder.layer.0.attention.output.dense.weight True
bert.encoder.layer.0.attention.output.dense.bias True
bert.encoder.layer.0.attention.output.LayerNorm.weight True
bert.encoder.layer.0.attention.output.LayerNorm.bias True
bert.encoder.layer.0.intermediate.dense.weight True
bert.encoder.layer.0.intermediate.dense.bias True
bert.encoder.layer.0.output.dense.weight True
bert.encoder.layer.0.output.dense.bias True
bert.encoder.layer.0.output.LayerNorm.weight True
bert.encoder.layer.0.output.LayerNorm.bias True
bert.encoder.layer.1.attention.self.query.weight True
bert.encoder.layer.1.attention.self.query.bias True
bert.encoder.layer.1.attention.self.key.weight True
bert.encoder.layer.1.attention.self.key.bias True
bert.encoder.layer.1.attention.self.value.weight True
bert.encoder.layer.1.attention.self.value.bias True
bert.encoder.layer.1.attention.output.dense.weight True
bert.encoder.layer.1.attention.output.dense.bias True
bert.encoder.layer.1.attention.output.LayerNorm.weight True
bert.encoder.layer.1.attention.output.LayerNorm.bias True
bert.encoder.layer.1.intermediate.dense.weight True
bert.encoder.layer.1.intermediate.dense.bias True
bert.encoder.layer.1.output.dense.weight True
bert.encoder.layer.1.output.dense.bias True
bert.encoder.layer.1.output.LayerNorm.weight True
bert.encoder.layer.1.output.LayerNorm.bias True
bert.encoder.layer.2.attention.self.query.weight True
bert.encoder.layer.2.attention.self.query.bias True
bert.encoder.layer.2.attention.self.key.weight True
bert.encoder.layer.2.attention.self.key.bias True
bert.encoder.layer.2.attention.self.value.weight True
bert.encoder.layer.2.attention.self.value.bias True
bert.encoder.layer.2.attention.output.dense.weight True
bert.encoder.layer.2.attention.output.dense.bias True
bert.encoder.layer.2.attention.output.LayerNorm.weight True
bert.encoder.layer.2.attention.output.LayerNorm.bias True
bert.encoder.layer.2.intermediate.dense.weight True
bert.encoder.layer.2.intermediate.dense.bias True
bert.encoder.layer.2.output.dense.weight True
bert.encoder.layer.2.output.dense.bias True
bert.encoder.layer.2.output.LayerNorm.weight True
bert.encoder.layer.2.output.LayerNorm.bias True
bert.encoder.layer.3.attention.self.query.weight True
bert.encoder.layer.3.attention.self.query.bias True
bert.encoder.layer.3.attention.self.key.weight True
bert.encoder.layer.3.attention.self.key.bias True
bert.encoder.layer.3.attention.self.value.weight True
bert.encoder.layer.3.attention.self.value.bias True
bert.encoder.layer.3.attention.output.dense.weight True
bert.encoder.layer.3.attention.output.dense.bias True
bert.encoder.layer.3.attention.output.LayerNorm.weight True
bert.encoder.layer.3.attention.output.LayerNorm.bias True
bert.encoder.layer.3.intermediate.dense.weight True
bert.encoder.layer.3.intermediate.dense.bias True
bert.encoder.layer.3.output.dense.weight True
bert.encoder.layer.3.output.dense.bias True
bert.encoder.layer.3.output.LayerNorm.weight True
bert.encoder.layer.3.output.LayerNorm.bias True
bert.encoder.layer.4.attention.self.query.weight True
bert.encoder.layer.4.attention.self.query.bias True
bert.encoder.layer.4.attention.self.key.weight True
bert.encoder.layer.4.attention.self.key.bias True
bert.encoder.layer.4.attention.self.value.weight True
bert.encoder.layer.4.attention.self.value.bias True
bert.encoder.layer.4.attention.output.dense.weight True
bert.encoder.layer.4.attention.output.dense.bias True
bert.encoder.layer.4.attention.output.LayerNorm.weight True
bert.encoder.layer.4.attention.output.LayerNorm.bias True
bert.encoder.layer.4.intermediate.dense.weight True
bert.encoder.layer.4.intermediate.dense.bias True
bert.encoder.layer.4.output.dense.weight True
bert.encoder.layer.4.output.dense.bias True
bert.encoder.layer.4.output.LayerNorm.weight True
bert.encoder.layer.4.output.LayerNorm.bias True
bert.encoder.layer.5.attention.self.query.weight True
bert.encoder.layer.5.attention.self.query.bias True
bert.encoder.layer.5.attention.self.key.weight True
bert.encoder.layer.5.attention.self.key.bias True
bert.encoder.layer.5.attention.self.value.weight True
bert.encoder.layer.5.attention.self.value.bias True
bert.encoder.layer.5.attention.output.dense.weight True
bert.encoder.layer.5.attention.output.dense.bias True
bert.encoder.layer.5.attention.output.LayerNorm.weight True
bert.encoder.layer.5.attention.output.LayerNorm.bias True
bert.encoder.layer.5.intermediate.dense.weight True
bert.encoder.layer.5.intermediate.dense.bias True
bert.encoder.layer.5.output.dense.weight True
bert.encoder.layer.5.output.dense.bias True
bert.encoder.layer.5.output.LayerNorm.weight True
bert.encoder.layer.5.output.LayerNorm.bias True
bert.encoder.layer.6.attention.self.query.weight True
bert.encoder.layer.6.attention.self.query.bias True
bert.encoder.layer.6.attention.self.key.weight True
bert.encoder.layer.6.attention.self.key.bias True
bert.encoder.layer.6.attention.self.value.weight True
bert.encoder.layer.6.attention.self.value.bias True
bert.encoder.layer.6.attention.output.dense.weight True
bert.encoder.layer.6.attention.output.dense.bias True
bert.encoder.layer.6.attention.output.LayerNorm.weight True
bert.encoder.layer.6.attention.output.LayerNorm.bias True
bert.encoder.layer.6.intermediate.dense.weight True
bert.encoder.layer.6.intermediate.dense.bias True
bert.encoder.layer.6.output.dense.weight True
bert.encoder.layer.6.output.dense.bias True
bert.encoder.layer.6.output.LayerNorm.weight True
bert.encoder.layer.6.output.LayerNorm.bias True
bert.encoder.layer.7.attention.self.query.weight True
bert.encoder.layer.7.attention.self.query.bias True
bert.encoder.layer.7.attention.self.key.weight True
bert.encoder.layer.7.attention.self.key.bias True
bert.encoder.layer.7.attention.self.value.weight True
bert.encoder.layer.7.attention.self.value.bias True
bert.encoder.layer.7.attention.output.dense.weight True
bert.encoder.layer.7.attention.output.dense.bias True
bert.encoder.layer.7.attention.output.LayerNorm.weight True
bert.encoder.layer.7.attention.output.LayerNorm.bias True
bert.encoder.layer.7.intermediate.dense.weight True
bert.encoder.layer.7.intermediate.dense.bias True
bert.encoder.layer.7.output.dense.weight True
bert.encoder.layer.7.output.dense.bias True
bert.encoder.layer.7.output.LayerNorm.weight True
bert.encoder.layer.7.output.LayerNorm.bias True
bert.encoder.layer.8.attention.self.query.weight True
bert.encoder.layer.8.attention.self.query.bias True
bert.encoder.layer.8.attention.self.key.weight True
bert.encoder.layer.8.attention.self.key.bias True
bert.encoder.layer.8.attention.self.value.weight True
bert.encoder.layer.8.attention.self.value.bias True
bert.encoder.layer.8.attention.output.dense.weight True
bert.encoder.layer.8.attention.output.dense.bias True
bert.encoder.layer.8.attention.output.LayerNorm.weight True
bert.encoder.layer.8.attention.output.LayerNorm.bias True
bert.encoder.layer.8.intermediate.dense.weight True
bert.encoder.layer.8.intermediate.dense.bias True
bert.encoder.layer.8.output.dense.weight True
bert.encoder.layer.8.output.dense.bias True
bert.encoder.layer.8.output.LayerNorm.weight True
bert.encoder.layer.8.output.LayerNorm.bias True
bert.encoder.layer.9.attention.self.query.weight True
bert.encoder.layer.9.attention.self.query.bias True
bert.encoder.layer.9.attention.self.key.weight True
bert.encoder.layer.9.attention.self.key.bias True
bert.encoder.layer.9.attention.self.value.weight True
bert.encoder.layer.9.attention.self.value.bias True
bert.encoder.layer.9.attention.output.dense.weight True
bert.encoder.layer.9.attention.output.dense.bias True
bert.encoder.layer.9.attention.output.LayerNorm.weight True
bert.encoder.layer.9.attention.output.LayerNorm.bias True
bert.encoder.layer.9.intermediate.dense.weight True
bert.encoder.layer.9.intermediate.dense.bias True
bert.encoder.layer.9.output.dense.weight True
bert.encoder.layer.9.output.dense.bias True
bert.encoder.layer.9.output.LayerNorm.weight True
bert.encoder.layer.9.output.LayerNorm.bias True
bert.encoder.layer.10.attention.self.query.weight True
bert.encoder.layer.10.attention.self.query.bias True
bert.encoder.layer.10.attention.self.key.weight True
bert.encoder.layer.10.attention.self.key.bias True
bert.encoder.layer.10.attention.self.value.weight True
bert.encoder.layer.10.attention.self.value.bias True
bert.encoder.layer.10.attention.output.dense.weight True
bert.encoder.layer.10.attention.output.dense.bias True
bert.encoder.layer.10.attention.output.LayerNorm.weight True
bert.encoder.layer.10.attention.output.LayerNorm.bias True
bert.encoder.layer.10.intermediate.dense.weight True
bert.encoder.layer.10.intermediate.dense.bias True
bert.encoder.layer.10.output.dense.weight True
bert.encoder.layer.10.output.dense.bias True
bert.encoder.layer.10.output.LayerNorm.weight True
bert.encoder.layer.10.output.LayerNorm.bias True
bert.encoder.layer.11.attention.self.query.weight True
bert.encoder.layer.11.attention.self.query.bias True
bert.encoder.layer.11.attention.self.key.weight True
bert.encoder.layer.11.attention.self.key.bias True
bert.encoder.layer.11.attention.self.value.weight True
bert.encoder.layer.11.attention.self.value.bias True
bert.encoder.layer.11.attention.output.dense.weight True
bert.encoder.layer.11.attention.output.dense.bias True
bert.encoder.layer.11.attention.output.LayerNorm.weight True
bert.encoder.layer.11.attention.output.LayerNorm.bias True
bert.encoder.layer.11.intermediate.dense.weight True
bert.encoder.layer.11.intermediate.dense.bias True
bert.encoder.layer.11.output.dense.weight True
bert.encoder.layer.11.output.dense.bias True
bert.encoder.layer.11.output.LayerNorm.weight True
bert.encoder.layer.11.output.LayerNorm.bias True
bert.pooler.dense.weight True
bert.pooler.dense.bias True
classifier.weight True
classifier.bias True
| https://stackoverflow.com/questions/72854302/ |
How to turn tensor type to original text (before tokenized) in Pytorch | For example, the tensor below contains data tokenized by an English tokenizer.
tensor([[ 2992, 1852, 9439, ..., 2610, 1704, 29189],
[ 1852, 9439, 7, ..., 1704, 29189, 23223],
[ 9439, 7, 2367, ..., 29189, 23223, 838],
...,
[ 12, 7469, 28844, ..., 2973, 16, 73],
[ 7469, 28844, 28469, ..., 16, 73, 735],
[28844, 28469, 191, ..., 73, 735, 4482]])
How can I transform it back to the original English text (using PyTorch)?
| The method you're looking for is tokenizer.decode, which is applied to sequences of numbers to yield the original source text. In your case, you have a batch of sentences (i.e. sequence of sequences) so you'll need to iterate the function over your tensor, i.e.
decoded = [tokenizer.decode(x) for x in xs]
where tokenizer is your tokenization model and xs is the tensor you want to decode.
maybe also useful:
tokenizer also provides methods convert_ids_to_tokens which does what the name suggests, and convert_tokens_to_string which merges subword tokens into words to recover the original input.
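If tokenizer is a HuggingFace tokenizer, it also exposes batch_decode, which handles the iteration over the batch for you, e.g. decoded = tokenizer.batch_decode(xs, skip_special_tokens=True).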
| https://stackoverflow.com/questions/72854762/ |
PyTorch: Computing the norm of batched tensors | I have tensor t with shape (Batch_Size x Dims) and another tensor v with shape (Vocab_Size x Dims). I'd like to produce a tensor d with shape (Batch_Size x Vocab_Size), such that d[i,j] = norm(t[i] - v[j]).
Doing this for a single tensor (no batches) is trivial: d = torch.norm(v - t), since t would be broadcast. How can I do this when the tensors have batches?
| Insert unitary dimensions into v and t to make them (1 x Vocab_Size x Dims) and (Batch_Size x 1 x Dims) respectively. Next, take the broadcasted difference to get a tensor of shape (Batch_Size x Vocab_Size x Dims). Pass that to torch.norm along with the optional dim=2 argument so that the norm is taken along the last dimension. This will result in the desired (Batch_Size x Vocab_Size) tensor of norms.
d = torch.norm(v.unsqueeze(0) - t.unsqueeze(1), dim=2)
Edit: As pointed out by @KonstantinosKokos in the comments, due to the broadcasting rules used by numpy and pytorch, the leading unitary dimension on v does not need to be explicit. I.e. you can use
d = torch.norm(v - t.unsqueeze(1), dim=2)
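As an aside, torch.cdist(t, v) computes exactly this (Batch_Size x Vocab_Size) matrix of pairwise Euclidean distances (p=2 is the default), so it can serve as a drop-in one-liner here.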
| https://stackoverflow.com/questions/72860548/ |
How to define PyTorch3D plane geometry? | I'm new to PyTorch3D and I'm trying to define a (subdivided) plane.
What I've tried to do is mirror the structure I found in PyTorch3D's ico_sphere.py.
For reference these are the contents:
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import torch
from pytorch3d.ops.subdivide_meshes import SubdivideMeshes
from pytorch3d.structures.meshes import Meshes
# Vertex coordinates for a level 0 ico-sphere.
_ico_verts0 = [
[-0.5257, 0.8507, 0.0000],
[0.5257, 0.8507, 0.0000],
[-0.5257, -0.8507, 0.0000],
[0.5257, -0.8507, 0.0000],
[0.0000, -0.5257, 0.8507],
[0.0000, 0.5257, 0.8507],
[0.0000, -0.5257, -0.8507],
[0.0000, 0.5257, -0.8507],
[0.8507, 0.0000, -0.5257],
[0.8507, 0.0000, 0.5257],
[-0.8507, 0.0000, -0.5257],
[-0.8507, 0.0000, 0.5257],
]
# Faces for level 0 ico-sphere
_ico_faces0 = [
[0, 11, 5],
[0, 5, 1],
[0, 1, 7],
[0, 7, 10],
[0, 10, 11],
[1, 5, 9],
[5, 11, 4],
[11, 10, 2],
[10, 7, 6],
[7, 1, 8],
[3, 9, 4],
[3, 4, 2],
[3, 2, 6],
[3, 6, 8],
[3, 8, 9],
[4, 9, 5],
[2, 4, 11],
[6, 2, 10],
[8, 6, 7],
[9, 8, 1],
]
def ico_sphere(level: int = 0, device=None):
"""
Create verts and faces for a unit ico-sphere, with all faces oriented
consistently.
Args:
level: integer specifying the number of iterations for subdivision
of the mesh faces. Each additional level will result in four new
faces per face.
device: A torch.device object on which the outputs will be allocated.
Returns:
Meshes object with verts and faces.
"""
if device is None:
device = torch.device("cpu")
if level < 0:
raise ValueError("level must be >= 0.")
if level == 0:
verts = torch.tensor(_ico_verts0, dtype=torch.float32, device=device)
faces = torch.tensor(_ico_faces0, dtype=torch.int64, device=device)
else:
mesh = ico_sphere(level - 1, device)
subdivide = SubdivideMeshes()
mesh = subdivide(mesh)
verts = mesh.verts_list()[0]
verts /= verts.norm(p=2, dim=1, keepdim=True)
faces = mesh.faces_list()[0]
return Meshes(verts=[verts], faces=[faces])
This is my attempt of tweaking the above for a basic quad plane from 2 triangles:
import torch
from pytorch3d.ops.subdivide_meshes import SubdivideMeshes
from pytorch3d.structures.meshes import Meshes
# Vertex coordinates for a level 0 plane.
_plane_verts0 = [
[-0.5000,-0.5000, 0.0000], # TL
[+0.5000,-0.5000, 0.0000], # TR
[+0.5000,+0.5000, 0.0000], # BR
[-0.5000,+0.5000, 0.0000] # BL
]
# Faces for level 0 plane
_plane_faces0 = [
[2, 1, 0],
[0, 3, 2]
]
def plane(level: int = 0, device=None):
"""
Create verts and faces for a unit plane, with all faces oriented
consistently.
Args:
level: integer specifying the number of iterations for subdivision
of the mesh faces. Each additional level will result in four new
faces per face.
device: A torch.device object on which the outputs will be allocated.
Returns:
Meshes object with verts and faces.
"""
if device is None:
device = torch.device("cpu")
if level < 0:
raise ValueError("level must be >= 0.")
if level == 0:
verts = torch.tensor(_plane_verts0, dtype=torch.float32, device=device)
faces = torch.tensor(_plane_faces0, dtype=torch.int64, device=device)
else:
mesh = plane(level - 1, device)
subdivide = SubdivideMeshes()
mesh = subdivide(mesh)
verts = mesh.verts_list()[0]
verts /= verts.norm(p=2, dim=1, keepdim=True)
faces = mesh.faces_list()[0]
return Meshes(verts=[verts], faces=[faces])
The plane with no subdivision (level=0) seems fine.
The issue I'm having is when I subdivide the plane, the result shows holes / flipped normals by the looks of it:
I've tried to change the face indices a bit (offsetting the starting point, using CW vs CCW winding order, etc.), but the result is the same, so I'm not sure if the issue with the plane geometry itself, the subdivision, or both.
(I haven't found a good way to visualise face normals. I've tried plotly's Mesh3D option, but as far as I can tell it does double-sided rendering and I couldn't figure out how to set its renderer to render triangles single-sided only. Any tips on visually debugging the geometry are also welcome.)
Update
I've tried something slightly different: copying the vertex and face from a (triangulated) plane drawn in Blender:
'''
# Blender v2.82 (sub 7) OBJ File: ''
# www.blender.org
mtllib plane.mtl
o Plane
v -1.000000 0.000000 1.000000
v 1.000000 0.000000 1.000000
v -1.000000 0.000000 -1.000000
v 1.000000 0.000000 -1.000000
vt 1.000000 0.000000
vt 0.000000 1.000000
vt 0.000000 0.000000
vt 1.000000 1.000000
vn 0.0000 1.0000 0.0000
usemtl None
s off
f 2/1/1 3/2/1 1/3/1
f 2/1/1 4/4/1 3/2/1
'''
# Vertex coordinates for a level 0 plane.
_plane_verts0 = [
[-1.000000, 0.000000, 1.000000],
[1.000000, 0.000000, 1.000000],
[-1.000000, 0.000000, -1.000000],
[1.000000, 0.000000, -1.000000],
]
# Faces for level 0 plane
_plane_faces0 = [
[1, 2, 0],
[1, 3, 2]
]
This didn't work either.
My current hacky workaround is to load the plane from Blender:
blender_plane = load_objs_as_meshes(['plane.obj'], device=device)
(Once it's in PyTorch3D's Meshes format I can use SubdivideMeshes as needed.)
I would like to understand what the correct face index winding is for PyTorch3D (so I can potentially define other procedural meshes).
| The solution is to remove this line: verts /= verts.norm(p=2, dim=1, keepdim=True)
That line projects every vertex onto the unit sphere after each subdivision step, which is exactly what an ico-sphere needs but makes no sense for a flat plane: the newly created midpoint vertices get pushed off the plane, which produces the artifacts you saw.
In more detail:
import torch
from pytorch3d.ops.subdivide_meshes import SubdivideMeshes
from pytorch3d.structures.meshes import Meshes
# Vertex coordinates for a level 0 plane.
_plane_verts0 = [
[-0.5000, -0.5000, 0.0000], # TL
[+0.5000, -0.5000, 0.0000], # TR
[+0.5000, +0.5000, 0.0000], # BR
[-0.5000, +0.5000, 0.0000], # BL
]
# Faces for level 0 plane
_plane_faces0 = [[2, 1, 0], [0, 3, 2]]
def plane(level: int = 0, device=None):
"""
Create verts and faces for a unit plane, with all faces oriented
consistently.
Args:
level: integer specifying the number of iterations for subdivision
of the mesh faces. Each additional level will result in four new
faces per face.
device: A torch.device object on which the outputs will be allocated.
Returns:
Meshes object with verts and faces.
"""
if device is None:
device = torch.device("cpu")
if level < 0:
raise ValueError("level must be >= 0.")
if level == 0:
verts = torch.tensor(_plane_verts0, dtype=torch.float32, device=device)
faces = torch.tensor(_plane_faces0, dtype=torch.int64, device=device)
else:
mesh = plane(level - 1, device)
subdivide = SubdivideMeshes()
mesh = subdivide(mesh)
verts = mesh.verts_list()[0]
faces = mesh.faces_list()[0]
return Meshes(verts=[verts], faces=[faces])
| https://stackoverflow.com/questions/72862446/ |
Subdivide values in a tensor | I have a PyTorch tensor that contains the labels of some samples.
I want to split each label into n_groups groups, introducing new virtual labels.
For example, for the labels:
labels = torch.as_tensor([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=torch.long)
One possible solution to subdivide each label into n_groups=2 is the following:
subdivided_labels = [0, 3, 0, 1, 4, 1, 2, 5, 2]
The constraints are the following:
There is no assumption about the order of the initial labels
The labels should be well distributed among the groups
The label ranking should be consistent among different groups, i.e., the first label 0 in the first group should be the first label also in any other group.
The following should always be true torch.equal(labels, subdivided_labels % num_classes)
It is possible that the number of groups is greater than the number of samples for a given label
The following tests should pass for the desired algorithm:
@pytest.mark.parametrize(
"labels",
(
torch.randint(100, size=(50,)),
torch.arange(100),
torch.ones(100),
torch.randint(100, size=(50,)).repeat(4),
torch.arange(100).repeat(4),
torch.ones(100).repeat(4),
torch.randint(100, size=(50,)).repeat_interleave(4),
torch.arange(100).repeat_interleave(4),
torch.ones(100).repeat_interleave(4),
),
)
@pytest.mark.parametrize("n_groups", (1, 2, 3, 4, 5, 50, 150))
def test_subdivide_labels(labels, n_groups):
subdivided_labels = subdivide_labels(labels, n_groups=n_groups, num_classes=100)
assert torch.equal(labels, subdivided_labels % 100)
@pytest.mark.parametrize(
"labels, n_groups, n_classes, expected_result",
(
(
torch.tensor([0, 0, 1, 1, 2, 2]),
2,
3,
torch.tensor([0, 3, 1, 4, 2, 5]),
),
(
torch.tensor([0, 0, 1, 1, 2, 2]),
2,
10,
torch.tensor([0, 10, 1, 11, 2, 12]),
),
(
torch.tensor([0, 0, 1, 1, 2, 2]),
1,
10,
torch.tensor([0, 0, 1, 1, 2, 2]),
),
(
torch.tensor([0, 0, 2, 2, 1, 1]),
2,
3,
torch.tensor([0, 3, 2, 5, 1, 4]),
),
(
torch.tensor([0, 0, 2, 2, 1, 1]),
30,
3,
torch.tensor([0, 3, 2, 5, 1, 4]),
),
),
)
def test_subdivide_labels_with_gt(labels, n_groups, n_classes, expected_result):
subdivided_labels = subdivide_labels(labels, n_groups=n_groups, num_classes=n_classes)
assert torch.equal(expected_result, subdivided_labels)
assert torch.equal(labels, subdivided_labels % n_classes)
I have a non-vectorized solution:
import torch
def subdivide_labels(labels: torch.Tensor, n_groups: int, num_classes: int) -> torch.Tensor:
"""Divide each label in groups introducing virtual labels.
Args:
labels: the tensor containing the labels, each label should be in [0, num_classes)
n_groups: the number of groups to create for each label
num_classes: the number of classes
Returns:
a tensor with the same shape of labels, but with each label partitioned in n_groups virtual labels
"""
unique, counts = labels.unique(
sorted=True,
return_counts=True,
return_inverse=False,
)
virtual_labels = labels.clone().detach()
max_range = num_classes * (torch.arange(counts.max()) % n_groups)
for value, count in zip(unique, counts):
virtual_labels[labels == value] = max_range[:count] + value
return virtual_labels
labels = torch.as_tensor([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=torch.long)
subdivide_labels(labels, n_groups=2, num_classes=3)
tensor([0, 3, 0, 1, 4, 1, 2, 5, 2])
Is it possible to vectorize this algorithm?
Alternatively, are there any faster algorithms to perform the same operation?
| A variation of OP's approach can be vectorized with a grouped cumcount (numpy implementation by @divakar). All tests pass, but the output is slightly different since argsort has no 'stable' option in pytorch, AFAIK.
def vector_labels(labels, n_groups, num_classes):
counts = torch.unique(labels, return_counts=True)[1]
idx = counts.cumsum(0)
id_arr = torch.ones(idx[-1], dtype=torch.long)
id_arr[0] = 0
id_arr[idx[:-1]] = -counts[:-1] + 1
rng = id_arr.cumsum(0)[labels.argsort().argsort()] % n_groups
maxr = torch.arange(n_groups) * num_classes
return maxr[rng] + labels
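As a quick sanity check, this reproduces the example from the question (the exact group assignment may differ slightly, as noted above):
labels = torch.as_tensor([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=torch.long)
print(vector_labels(labels, n_groups=2, num_classes=3))
# e.g. tensor([0, 3, 0, 1, 4, 1, 2, 5, 2])
print(torch.equal(labels, vector_labels(labels, n_groups=2, num_classes=3) % 3))  # True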
labels = torch.arange(100).repeat_interleave(4)
%timeit vector_labels(labels, 2, 100)
%timeit subdivide_labels(labels, 2, 100)
Output
10000 loops, best of 5: 117 Β΅s per loop
1000 loops, best of 5: 1.6 ms per loop
This is still far from the fastest algorithm. For example, here is a trivial O(n) approach; it is CPU-only and needs numba to be fast in Python.
import numpy as np
import numba as nb
@nb.njit
def numpy_labels(labels, n_groups, num_classes):
lookup = np.zeros(labels.max() + 1, np.intp)
res = np.empty_like(labels)
for i in range(len(labels)):
res[i] = num_classes * lookup[labels[i]] + labels[i]
lookup[labels[i]] = lookup[labels[i]] + 1 if lookup[labels[i]] < n_groups-1 else 0
return res
numpy_labels(labels.numpy(), 20, 100) # compile run
%timeit torch.from_numpy(numpy_labels(labels.numpy(), 20, 100))
Output
100000 loops, best of 5: 3.63 Β΅s per loop
| https://stackoverflow.com/questions/72862624/ |
Understanding model training and evaluation in Pytorch | I am following a Pytorch code on deep learning. Where I saw model evaluation taking place within the training epoch!
Q) Should the torch.no_grad and model.eval() be outside the training epoch loop?
Q) And how do I determine which parameters (weights) are being optimised by the optimiser during back-propagation?
...
for l in range(1):
model = GTN(num_edge=A.shape[-1],
num_channels=num_channels,w_in = node_features.shape[1],w_out = node_dim,
num_class=num_classes,num_layers=num_layers,norm=norm)
if adaptive_lr == 'false':
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.001)
else:
optimizer = torch.optim.Adam([{'params':model.weight},{'params':model.linear1.parameters()},{'params':model.linear2.parameters()},
{"params":model.layers.parameters(), "lr":0.5}], lr=0.005, weight_decay=0.001)
loss = nn.CrossEntropyLoss()
# Train & Valid & Test
best_val_loss = 10000
best_train_loss = 10000
best_train_f1 = 0
best_val_f1 = 0
for i in range(epochs):
print('Epoch: ',i+1)
model.zero_grad()
model.train()
loss,y_train,Ws = model(A, node_features, train_node, train_target)
train_f1 = torch.mean(f1_score(torch.argmax(y_train.detach(),dim=1), train_target, num_classes=num_classes)).cpu().numpy()
print('Train - Loss: {}, Macro_F1: {}'.format(loss.detach().cpu().numpy(), train_f1))
loss.backward()
optimizer.step()
model.eval()
# Valid
with torch.no_grad():
val_loss, y_valid,_ = model.forward(A, node_features, valid_node, valid_target)
val_f1 = torch.mean(f1_score(torch.argmax(y_valid,dim=1), valid_target, num_classes=num_classes)).cpu().numpy()
if val_f1 > best_val_f1:
best_val_loss = val_loss.detach().cpu().numpy()
best_train_loss = loss.detach().cpu().numpy()
best_train_f1 = train_f1
best_val_f1 = val_f1
print('---------------Best Results--------------------')
print('Train - Loss: {}, Macro_F1: {}'.format(best_train_loss, best_train_f1))
print('Valid - Loss: {}, Macro_F1: {}'.format(best_val_loss, best_val_f1))
final_f1 += best_test_f1
|
For each epoch, you are doing training, followed by validation/test. For validation/test you are moving the model to evaluation mode using model.eval() and then doing forward propagation with torch.no_grad(), which is correct. You then move the model back to train mode using model.train() at the start of training. There is no issue with the code and you are using the model modes correctly, so they do not need to be outside the epoch loop.
In your code, if adaptive_lr is False then you are optimizing the parameters given by model.parameters(), and when adaptive_lr is True then you are optimizing the following (see the snippet after this list for a way to verify this at runtime):
model.weight
model.linear1.parameters()
model.linear2.parameters()
model.layers.parameters()
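A minimal check (a hedged illustration, not part of the original code) is to inspect the optimizer's parameter groups:
for i, group in enumerate(optimizer.param_groups):
    n_params = sum(p.numel() for p in group["params"] if p.requires_grad)
    print(f"param group {i}: lr={group['lr']}, trainable parameters={n_params}")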
| https://stackoverflow.com/questions/72865629/ |
How do I create a model from a state dict? | I am trying to load a checkpoint pth file from the faster_rcnn_resnet101 model which is not currently in the PyTorch model zoo.
This causes PyTorch to throw a KeyError saying that the layers in the state dict do not match the model architecture of faster_rcnn_fpn_resnet50 that I've loaded from the model zoo.
Note: I tried posting the architecture of the faster_rcnn_fpn_resnet50 model, but I went over the character limit of 30,000.
print(state_dict[model].keys()):
backbone.stem.conv1.weight
backbone.stem.conv1.norm.weight
backbone.stem.conv1.norm.bias
backbone.stem.conv1.norm.running_mean
backbone.stem.conv1.norm.running_var
backbone.res2.0.shortcut.weight
backbone.res2.0.shortcut.norm.weight
backbone.res2.0.shortcut.norm.bias
backbone.res2.0.shortcut.norm.running_mean
backbone.res2.0.shortcut.norm.running_var
backbone.res2.0.conv1.weight
backbone.res2.0.conv1.norm.weight
backbone.res2.0.conv1.norm.bias
backbone.res2.0.conv1.norm.running_mean
backbone.res2.0.conv1.norm.running_var
backbone.res2.0.conv2.weight
backbone.res2.0.conv2.norm.weight
backbone.res2.0.conv2.norm.bias
backbone.res2.0.conv2.norm.running_mean
backbone.res2.0.conv2.norm.running_var
backbone.res2.0.conv3.weight
backbone.res2.0.conv3.norm.weight
backbone.res2.0.conv3.norm.bias
backbone.res2.0.conv3.norm.running_mean
backbone.res2.0.conv3.norm.running_var
backbone.res2.1.conv1.weight
backbone.res2.1.conv1.norm.weight
backbone.res2.1.conv1.norm.bias
backbone.res2.1.conv1.norm.running_mean
backbone.res2.1.conv1.norm.running_var
backbone.res2.1.conv2.weight
backbone.res2.1.conv2.norm.weight
backbone.res2.1.conv2.norm.bias
backbone.res2.1.conv2.norm.running_mean
backbone.res2.1.conv2.norm.running_var
backbone.res2.1.conv3.weight
backbone.res2.1.conv3.norm.weight
backbone.res2.1.conv3.norm.bias
backbone.res2.1.conv3.norm.running_mean
backbone.res2.1.conv3.norm.running_var
backbone.res2.2.conv1.weight
backbone.res2.2.conv1.norm.weight
backbone.res2.2.conv1.norm.bias
backbone.res2.2.conv1.norm.running_mean
backbone.res2.2.conv1.norm.running_var
backbone.res2.2.conv2.weight
backbone.res2.2.conv2.norm.weight
backbone.res2.2.conv2.norm.bias
backbone.res2.2.conv2.norm.running_mean
backbone.res2.2.conv2.norm.running_var
backbone.res2.2.conv3.weight
backbone.res2.2.conv3.norm.weight
backbone.res2.2.conv3.norm.bias
backbone.res2.2.conv3.norm.running_mean
backbone.res2.2.conv3.norm.running_var
backbone.res3.0.shortcut.weight
backbone.res3.0.shortcut.norm.weight
backbone.res3.0.shortcut.norm.bias
backbone.res3.0.shortcut.norm.running_mean
backbone.res3.0.shortcut.norm.running_var
backbone.res3.0.conv1.weight
backbone.res3.0.conv1.norm.weight
backbone.res3.0.conv1.norm.bias
backbone.res3.0.conv1.norm.running_mean
backbone.res3.0.conv1.norm.running_var
backbone.res3.0.conv2.weight
backbone.res3.0.conv2.norm.weight
backbone.res3.0.conv2.norm.bias
backbone.res3.0.conv2.norm.running_mean
backbone.res3.0.conv2.norm.running_var
backbone.res3.0.conv3.weight
backbone.res3.0.conv3.norm.weight
backbone.res3.0.conv3.norm.bias
backbone.res3.0.conv3.norm.running_mean
backbone.res3.0.conv3.norm.running_var
backbone.res3.1.conv1.weight
backbone.res3.1.conv1.norm.weight
backbone.res3.1.conv1.norm.bias
backbone.res3.1.conv1.norm.running_mean
backbone.res3.1.conv1.norm.running_var
backbone.res3.1.conv2.weight
backbone.res3.1.conv2.norm.weight
backbone.res3.1.conv2.norm.bias
backbone.res3.1.conv2.norm.running_mean
backbone.res3.1.conv2.norm.running_var
backbone.res3.1.conv3.weight
backbone.res3.1.conv3.norm.weight
backbone.res3.1.conv3.norm.bias
backbone.res3.1.conv3.norm.running_mean
backbone.res3.1.conv3.norm.running_var
backbone.res3.2.conv1.weight
backbone.res3.2.conv1.norm.weight
backbone.res3.2.conv1.norm.bias
backbone.res3.2.conv1.norm.running_mean
backbone.res3.2.conv1.norm.running_var
backbone.res3.2.conv2.weight
backbone.res3.2.conv2.norm.weight
backbone.res3.2.conv2.norm.bias
backbone.res3.2.conv2.norm.running_mean
backbone.res3.2.conv2.norm.running_var
backbone.res3.2.conv3.weight
backbone.res3.2.conv3.norm.weight
backbone.res3.2.conv3.norm.bias
backbone.res3.2.conv3.norm.running_mean
backbone.res3.2.conv3.norm.running_var
backbone.res3.3.conv1.weight
backbone.res3.3.conv1.norm.weight
backbone.res3.3.conv1.norm.bias
backbone.res3.3.conv1.norm.running_mean
backbone.res3.3.conv1.norm.running_var
backbone.res3.3.conv2.weight
backbone.res3.3.conv2.norm.weight
backbone.res3.3.conv2.norm.bias
backbone.res3.3.conv2.norm.running_mean
backbone.res3.3.conv2.norm.running_var
backbone.res3.3.conv3.weight
backbone.res3.3.conv3.norm.weight
backbone.res3.3.conv3.norm.bias
backbone.res3.3.conv3.norm.running_mean
backbone.res3.3.conv3.norm.running_var
backbone.res4.0.shortcut.weight
backbone.res4.0.shortcut.norm.weight
backbone.res4.0.shortcut.norm.bias
backbone.res4.0.shortcut.norm.running_mean
backbone.res4.0.shortcut.norm.running_var
backbone.res4.0.conv1.weight
backbone.res4.0.conv1.norm.weight
backbone.res4.0.conv1.norm.bias
backbone.res4.0.conv1.norm.running_mean
backbone.res4.0.conv1.norm.running_var
backbone.res4.0.conv2.weight
backbone.res4.0.conv2.norm.weight
backbone.res4.0.conv2.norm.bias
backbone.res4.0.conv2.norm.running_mean
backbone.res4.0.conv2.norm.running_var
backbone.res4.0.conv3.weight
backbone.res4.0.conv3.norm.weight
backbone.res4.0.conv3.norm.bias
backbone.res4.0.conv3.norm.running_mean
backbone.res4.0.conv3.norm.running_var
backbone.res4.1.conv1.weight
backbone.res4.1.conv1.norm.weight
backbone.res4.1.conv1.norm.bias
backbone.res4.1.conv1.norm.running_mean
backbone.res4.1.conv1.norm.running_var
backbone.res4.1.conv2.weight
backbone.res4.1.conv2.norm.weight
backbone.res4.1.conv2.norm.bias
backbone.res4.1.conv2.norm.running_mean
backbone.res4.1.conv2.norm.running_var
backbone.res4.1.conv3.weight
backbone.res4.1.conv3.norm.weight
backbone.res4.1.conv3.norm.bias
backbone.res4.1.conv3.norm.running_mean
backbone.res4.1.conv3.norm.running_var
backbone.res4.2.conv1.weight
backbone.res4.2.conv1.norm.weight
backbone.res4.2.conv1.norm.bias
backbone.res4.2.conv1.norm.running_mean
backbone.res4.2.conv1.norm.running_var
backbone.res4.2.conv2.weight
backbone.res4.2.conv2.norm.weight
backbone.res4.2.conv2.norm.bias
backbone.res4.2.conv2.norm.running_mean
backbone.res4.2.conv2.norm.running_var
backbone.res4.2.conv3.weight
backbone.res4.2.conv3.norm.weight
backbone.res4.2.conv3.norm.bias
backbone.res4.2.conv3.norm.running_mean
backbone.res4.2.conv3.norm.running_var
backbone.res4.3.conv1.weight
backbone.res4.3.conv1.norm.weight
backbone.res4.3.conv1.norm.bias
backbone.res4.3.conv1.norm.running_mean
backbone.res4.3.conv1.norm.running_var
backbone.res4.3.conv2.weight
backbone.res4.3.conv2.norm.weight
backbone.res4.3.conv2.norm.bias
backbone.res4.3.conv2.norm.running_mean
backbone.res4.3.conv2.norm.running_var
backbone.res4.3.conv3.weight
backbone.res4.3.conv3.norm.weight
backbone.res4.3.conv3.norm.bias
backbone.res4.3.conv3.norm.running_mean
backbone.res4.3.conv3.norm.running_var
backbone.res4.4.conv1.weight
backbone.res4.4.conv1.norm.weight
backbone.res4.4.conv1.norm.bias
backbone.res4.4.conv1.norm.running_mean
backbone.res4.4.conv1.norm.running_var
backbone.res4.4.conv2.weight
backbone.res4.4.conv2.norm.weight
backbone.res4.4.conv2.norm.bias
backbone.res4.4.conv2.norm.running_mean
backbone.res4.4.conv2.norm.running_var
backbone.res4.4.conv3.weight
backbone.res4.4.conv3.norm.weight
backbone.res4.4.conv3.norm.bias
backbone.res4.4.conv3.norm.running_mean
backbone.res4.4.conv3.norm.running_var
backbone.res4.5.conv1.weight
backbone.res4.5.conv1.norm.weight
backbone.res4.5.conv1.norm.bias
backbone.res4.5.conv1.norm.running_mean
backbone.res4.5.conv1.norm.running_var
backbone.res4.5.conv2.weight
backbone.res4.5.conv2.norm.weight
backbone.res4.5.conv2.norm.bias
backbone.res4.5.conv2.norm.running_mean
backbone.res4.5.conv2.norm.running_var
backbone.res4.5.conv3.weight
backbone.res4.5.conv3.norm.weight
backbone.res4.5.conv3.norm.bias
backbone.res4.5.conv3.norm.running_mean
backbone.res4.5.conv3.norm.running_var
backbone.res4.6.conv1.weight
backbone.res4.6.conv1.norm.weight
backbone.res4.6.conv1.norm.bias
backbone.res4.6.conv1.norm.running_mean
backbone.res4.6.conv1.norm.running_var
backbone.res4.6.conv2.weight
backbone.res4.6.conv2.norm.weight
backbone.res4.6.conv2.norm.bias
backbone.res4.6.conv2.norm.running_mean
backbone.res4.6.conv2.norm.running_var
backbone.res4.6.conv3.weight
backbone.res4.6.conv3.norm.weight
backbone.res4.6.conv3.norm.bias
backbone.res4.6.conv3.norm.running_mean
backbone.res4.6.conv3.norm.running_var
backbone.res4.7.conv1.weight
backbone.res4.7.conv1.norm.weight
backbone.res4.7.conv1.norm.bias
backbone.res4.7.conv1.norm.running_mean
backbone.res4.7.conv1.norm.running_var
backbone.res4.7.conv2.weight
backbone.res4.7.conv2.norm.weight
backbone.res4.7.conv2.norm.bias
backbone.res4.7.conv2.norm.running_mean
backbone.res4.7.conv2.norm.running_var
backbone.res4.7.conv3.weight
backbone.res4.7.conv3.norm.weight
backbone.res4.7.conv3.norm.bias
backbone.res4.7.conv3.norm.running_mean
backbone.res4.7.conv3.norm.running_var
backbone.res4.8.conv1.weight
backbone.res4.8.conv1.norm.weight
backbone.res4.8.conv1.norm.bias
backbone.res4.8.conv1.norm.running_mean
backbone.res4.8.conv1.norm.running_var
backbone.res4.8.conv2.weight
backbone.res4.8.conv2.norm.weight
backbone.res4.8.conv2.norm.bias
backbone.res4.8.conv2.norm.running_mean
backbone.res4.8.conv2.norm.running_var
backbone.res4.8.conv3.weight
backbone.res4.8.conv3.norm.weight
backbone.res4.8.conv3.norm.bias
backbone.res4.8.conv3.norm.running_mean
backbone.res4.8.conv3.norm.running_var
backbone.res4.9.conv1.weight
backbone.res4.9.conv1.norm.weight
backbone.res4.9.conv1.norm.bias
backbone.res4.9.conv1.norm.running_mean
backbone.res4.9.conv1.norm.running_var
backbone.res4.9.conv2.weight
backbone.res4.9.conv2.norm.weight
backbone.res4.9.conv2.norm.bias
backbone.res4.9.conv2.norm.running_mean
backbone.res4.9.conv2.norm.running_var
backbone.res4.9.conv3.weight
backbone.res4.9.conv3.norm.weight
backbone.res4.9.conv3.norm.bias
backbone.res4.9.conv3.norm.running_mean
backbone.res4.9.conv3.norm.running_var
backbone.res4.10.conv1.weight
backbone.res4.10.conv1.norm.weight
backbone.res4.10.conv1.norm.bias
backbone.res4.10.conv1.norm.running_mean
backbone.res4.10.conv1.norm.running_var
backbone.res4.10.conv2.weight
backbone.res4.10.conv2.norm.weight
backbone.res4.10.conv2.norm.bias
backbone.res4.10.conv2.norm.running_mean
backbone.res4.10.conv2.norm.running_var
backbone.res4.10.conv3.weight
backbone.res4.10.conv3.norm.weight
backbone.res4.10.conv3.norm.bias
backbone.res4.10.conv3.norm.running_mean
backbone.res4.10.conv3.norm.running_var
backbone.res4.11.conv1.weight
backbone.res4.11.conv1.norm.weight
backbone.res4.11.conv1.norm.bias
backbone.res4.11.conv1.norm.running_mean
backbone.res4.11.conv1.norm.running_var
backbone.res4.11.conv2.weight
backbone.res4.11.conv2.norm.weight
backbone.res4.11.conv2.norm.bias
backbone.res4.11.conv2.norm.running_mean
backbone.res4.11.conv2.norm.running_var
backbone.res4.11.conv3.weight
backbone.res4.11.conv3.norm.weight
backbone.res4.11.conv3.norm.bias
backbone.res4.11.conv3.norm.running_mean
backbone.res4.11.conv3.norm.running_var
backbone.res4.12.conv1.weight
backbone.res4.12.conv1.norm.weight
backbone.res4.12.conv1.norm.bias
backbone.res4.12.conv1.norm.running_mean
backbone.res4.12.conv1.norm.running_var
backbone.res4.12.conv2.weight
backbone.res4.12.conv2.norm.weight
backbone.res4.12.conv2.norm.bias
backbone.res4.12.conv2.norm.running_mean
backbone.res4.12.conv2.norm.running_var
backbone.res4.12.conv3.weight
backbone.res4.12.conv3.norm.weight
backbone.res4.12.conv3.norm.bias
backbone.res4.12.conv3.norm.running_mean
backbone.res4.12.conv3.norm.running_var
backbone.res4.13.conv1.weight
backbone.res4.13.conv1.norm.weight
backbone.res4.13.conv1.norm.bias
backbone.res4.13.conv1.norm.running_mean
backbone.res4.13.conv1.norm.running_var
backbone.res4.13.conv2.weight
backbone.res4.13.conv2.norm.weight
backbone.res4.13.conv2.norm.bias
backbone.res4.13.conv2.norm.running_mean
backbone.res4.13.conv2.norm.running_var
backbone.res4.13.conv3.weight
backbone.res4.13.conv3.norm.weight
backbone.res4.13.conv3.norm.bias
backbone.res4.13.conv3.norm.running_mean
backbone.res4.13.conv3.norm.running_var
backbone.res4.14.conv1.weight
backbone.res4.14.conv1.norm.weight
backbone.res4.14.conv1.norm.bias
backbone.res4.14.conv1.norm.running_mean
backbone.res4.14.conv1.norm.running_var
backbone.res4.14.conv2.weight
backbone.res4.14.conv2.norm.weight
backbone.res4.14.conv2.norm.bias
backbone.res4.14.conv2.norm.running_mean
backbone.res4.14.conv2.norm.running_var
backbone.res4.14.conv3.weight
backbone.res4.14.conv3.norm.weight
backbone.res4.14.conv3.norm.bias
backbone.res4.14.conv3.norm.running_mean
backbone.res4.14.conv3.norm.running_var
backbone.res4.15.conv1.weight
backbone.res4.15.conv1.norm.weight
backbone.res4.15.conv1.norm.bias
backbone.res4.15.conv1.norm.running_mean
backbone.res4.15.conv1.norm.running_var
backbone.res4.15.conv2.weight
backbone.res4.15.conv2.norm.weight
backbone.res4.15.conv2.norm.bias
backbone.res4.15.conv2.norm.running_mean
backbone.res4.15.conv2.norm.running_var
backbone.res4.15.conv3.weight
backbone.res4.15.conv3.norm.weight
backbone.res4.15.conv3.norm.bias
backbone.res4.15.conv3.norm.running_mean
backbone.res4.15.conv3.norm.running_var
backbone.res4.16.conv1.weight
backbone.res4.16.conv1.norm.weight
backbone.res4.16.conv1.norm.bias
backbone.res4.16.conv1.norm.running_mean
backbone.res4.16.conv1.norm.running_var
backbone.res4.16.conv2.weight
backbone.res4.16.conv2.norm.weight
backbone.res4.16.conv2.norm.bias
backbone.res4.16.conv2.norm.running_mean
backbone.res4.16.conv2.norm.running_var
backbone.res4.16.conv3.weight
backbone.res4.16.conv3.norm.weight
backbone.res4.16.conv3.norm.bias
backbone.res4.16.conv3.norm.running_mean
backbone.res4.16.conv3.norm.running_var
backbone.res4.17.conv1.weight
backbone.res4.17.conv1.norm.weight
backbone.res4.17.conv1.norm.bias
backbone.res4.17.conv1.norm.running_mean
backbone.res4.17.conv1.norm.running_var
backbone.res4.17.conv2.weight
backbone.res4.17.conv2.norm.weight
backbone.res4.17.conv2.norm.bias
backbone.res4.17.conv2.norm.running_mean
backbone.res4.17.conv2.norm.running_var
backbone.res4.17.conv3.weight
backbone.res4.17.conv3.norm.weight
backbone.res4.17.conv3.norm.bias
backbone.res4.17.conv3.norm.running_mean
backbone.res4.17.conv3.norm.running_var
backbone.res4.18.conv1.weight
backbone.res4.18.conv1.norm.weight
backbone.res4.18.conv1.norm.bias
backbone.res4.18.conv1.norm.running_mean
backbone.res4.18.conv1.norm.running_var
backbone.res4.18.conv2.weight
backbone.res4.18.conv2.norm.weight
backbone.res4.18.conv2.norm.bias
backbone.res4.18.conv2.norm.running_mean
backbone.res4.18.conv2.norm.running_var
backbone.res4.18.conv3.weight
backbone.res4.18.conv3.norm.weight
backbone.res4.18.conv3.norm.bias
backbone.res4.18.conv3.norm.running_mean
backbone.res4.18.conv3.norm.running_var
backbone.res4.19.conv1.weight
backbone.res4.19.conv1.norm.weight
backbone.res4.19.conv1.norm.bias
backbone.res4.19.conv1.norm.running_mean
backbone.res4.19.conv1.norm.running_var
backbone.res4.19.conv2.weight
backbone.res4.19.conv2.norm.weight
backbone.res4.19.conv2.norm.bias
backbone.res4.19.conv2.norm.running_mean
backbone.res4.19.conv2.norm.running_var
backbone.res4.19.conv3.weight
backbone.res4.19.conv3.norm.weight
backbone.res4.19.conv3.norm.bias
backbone.res4.19.conv3.norm.running_mean
backbone.res4.19.conv3.norm.running_var
backbone.res4.20.conv1.weight
backbone.res4.20.conv1.norm.weight
backbone.res4.20.conv1.norm.bias
backbone.res4.20.conv1.norm.running_mean
backbone.res4.20.conv1.norm.running_var
backbone.res4.20.conv2.weight
backbone.res4.20.conv2.norm.weight
backbone.res4.20.conv2.norm.bias
backbone.res4.20.conv2.norm.running_mean
backbone.res4.20.conv2.norm.running_var
backbone.res4.20.conv3.weight
backbone.res4.20.conv3.norm.weight
backbone.res4.20.conv3.norm.bias
backbone.res4.20.conv3.norm.running_mean
backbone.res4.20.conv3.norm.running_var
backbone.res4.21.conv1.weight
backbone.res4.21.conv1.norm.weight
backbone.res4.21.conv1.norm.bias
backbone.res4.21.conv1.norm.running_mean
backbone.res4.21.conv1.norm.running_var
backbone.res4.21.conv2.weight
backbone.res4.21.conv2.norm.weight
backbone.res4.21.conv2.norm.bias
backbone.res4.21.conv2.norm.running_mean
backbone.res4.21.conv2.norm.running_var
backbone.res4.21.conv3.weight
backbone.res4.21.conv3.norm.weight
backbone.res4.21.conv3.norm.bias
backbone.res4.21.conv3.norm.running_mean
backbone.res4.21.conv3.norm.running_var
backbone.res4.22.conv1.weight
backbone.res4.22.conv1.norm.weight
backbone.res4.22.conv1.norm.bias
backbone.res4.22.conv1.norm.running_mean
backbone.res4.22.conv1.norm.running_var
backbone.res4.22.conv2.weight
backbone.res4.22.conv2.norm.weight
backbone.res4.22.conv2.norm.bias
backbone.res4.22.conv2.norm.running_mean
backbone.res4.22.conv2.norm.running_var
backbone.res4.22.conv3.weight
backbone.res4.22.conv3.norm.weight
backbone.res4.22.conv3.norm.bias
backbone.res4.22.conv3.norm.running_mean
backbone.res4.22.conv3.norm.running_var
proposal_generator.rpn_head.conv.weight
proposal_generator.rpn_head.conv.bias
proposal_generator.rpn_head.objectness_logits.weight
proposal_generator.rpn_head.objectness_logits.bias
proposal_generator.rpn_head.anchor_deltas.weight
proposal_generator.rpn_head.anchor_deltas.bias
roi_heads.res5.0.shortcut.weight
roi_heads.res5.0.shortcut.norm.weight
roi_heads.res5.0.shortcut.norm.bias
roi_heads.res5.0.shortcut.norm.running_mean
roi_heads.res5.0.shortcut.norm.running_var
roi_heads.res5.0.conv1.weight
roi_heads.res5.0.conv1.norm.weight
roi_heads.res5.0.conv1.norm.bias
roi_heads.res5.0.conv1.norm.running_mean
roi_heads.res5.0.conv1.norm.running_var
roi_heads.res5.0.conv2.weight
roi_heads.res5.0.conv2.norm.weight
roi_heads.res5.0.conv2.norm.bias
roi_heads.res5.0.conv2.norm.running_mean
roi_heads.res5.0.conv2.norm.running_var
roi_heads.res5.0.conv3.weight
roi_heads.res5.0.conv3.norm.weight
roi_heads.res5.0.conv3.norm.bias
roi_heads.res5.0.conv3.norm.running_mean
roi_heads.res5.0.conv3.norm.running_var
roi_heads.res5.1.conv1.weight
roi_heads.res5.1.conv1.norm.weight
roi_heads.res5.1.conv1.norm.bias
roi_heads.res5.1.conv1.norm.running_mean
roi_heads.res5.1.conv1.norm.running_var
roi_heads.res5.1.conv2.weight
roi_heads.res5.1.conv2.norm.weight
roi_heads.res5.1.conv2.norm.bias
roi_heads.res5.1.conv2.norm.running_mean
roi_heads.res5.1.conv2.norm.running_var
roi_heads.res5.1.conv3.weight
roi_heads.res5.1.conv3.norm.weight
roi_heads.res5.1.conv3.norm.bias
roi_heads.res5.1.conv3.norm.running_mean
roi_heads.res5.1.conv3.norm.running_var
roi_heads.res5.2.conv1.weight
roi_heads.res5.2.conv1.norm.weight
roi_heads.res5.2.conv1.norm.bias
roi_heads.res5.2.conv1.norm.running_mean
roi_heads.res5.2.conv1.norm.running_var
roi_heads.res5.2.conv2.weight
roi_heads.res5.2.conv2.norm.weight
roi_heads.res5.2.conv2.norm.bias
roi_heads.res5.2.conv2.norm.running_mean
roi_heads.res5.2.conv2.norm.running_var
roi_heads.res5.2.conv3.weight
roi_heads.res5.2.conv3.norm.weight
roi_heads.res5.2.conv3.norm.bias
roi_heads.res5.2.conv3.norm.running_mean
roi_heads.res5.2.conv3.norm.running_var
roi_heads.box_predictor.cls_score.weight
roi_heads.box_predictor.cls_score.bias
roi_heads.box_predictor.bbox_pred.weight
roi_heads.box_predictor.bbox_pred.bias
Is there a way to dynamically create the model architecture of the faster_rcnn_resnet101 model in torch by iterating over the model dict or is there another way?
I have tried searching for a GitHub repository with a hubconf.py for torch.hub, but to no avail.
| TLDR; the short answer is you can't.
The state_dict of a nn.Module contains the module's state, but not its function. It is not possible to instantiate a model with its state dictionary alone.
You can manage to initialize some of the sub-modules based on the state dict keys and the weight shapes, but this is not guaranteed. In any case you won't be able to know the model's behaviour because this information is simply not contained in its state dictionary. Therefore you are required to have access to the forward definition of the model in order to know the forward logic, i.e. which functions are applied on the input and intermediate outputs and in which order those submodules are used.
Have a look at this minimal example showing two models with identical state dictionaries. However, their forward is different from one another:
class A(nn.Linear):
def forward(self, x):
return super().forward(x)
class B(nn.Linear):
def forward(self, x):
return super().forward(x)**2
Here, a and b are initialized and the state dict is copied from one to the other:
>>> a = A(2,1)
>>> b = B(2,1)
>>> a.load_state_dict(b.state_dict())
Both a and b have the exact same state... but actually implement two different functions!
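A quick way to see this (continuing the hedged example above): feed the same input through both modules; the parameters match, but in general the outputs do not:
>>> x = torch.randn(4, 2)
>>> torch.allclose(a(x), b(x))
False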
| https://stackoverflow.com/questions/72866756/ |
What is PyTorch Dataset supposed to return? | I'm trying to get PyTorch to work with DataLoader, which is said to be the easiest way to handle mini-batches, which are in some cases necessary for best performance.
DataLoader wants a Dataset as input.
Most of the documentation on Dataset assumes you are working with an off-the-shelf standard data set e.g. MNIST, or at least with images, and can use existing machinery as a black box. I'm working with non-image data I'm generating myself. My best current attempt to distill the documentation about how to do that, down to a minimal test case, is:
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
class Dataset1(Dataset):
def __init__(self):
pass
def __len__(self):
return 80
def __getitem__(self, i):
# actual data is blank, just to test the mechanics of Dataset
return [0.0, 0.0, 0.0], 1.0
train_dataloader = DataLoader(Dataset1(), batch_size=8)
for X, y in train_dataloader:
print(f"X: {X}")
print(f"y: {y.shape} {y.dtype} {y}")
break
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layers = nn.Sequential(
nn.Linear(3, 10),
nn.ReLU(),
nn.Linear(10, 1),
nn.Sigmoid(),
)
def forward(self, x):
return self.layers(x)
device = torch.device("cpu")
model = Net().to(device)
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(10):
for X, y in train_dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
The output of the above program is:
X: [tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64), tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64), tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64)]
y: torch.Size([8]) torch.float64 tensor([1., 1., 1., 1., 1., 1., 1., 1.], dtype=torch.float64)
Traceback (most recent call last):
File "C:\ml\test_dataloader.py", line 47, in <module>
X, y = X.to(device), y.to(device)
AttributeError: 'list' object has no attribute 'to'
In all the example code I can find, X, y = X.to(device), y.to(device) succeeds, because X is indeed a tensor (whereas it is not in my version). Now I'm trying to find out what exactly converts X to a tensor, because either the example code e.g. https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html does not do so, or I am failing to understand how and where it does.
Does Dataset itself convert things to tensors? The answer seems to be 'sort of'.
It has converted y to a tensor, a column of the y value for every example in the batch. That much makes sense, though it has used type float64, whereas in machine learning, we usually prefer float32. I am used to the idea that Python always represents scalars in double precision, so the conversion from double to single precision happens at the time of forming a tensor, and that this can be ensured by specifying the dtype parameter. But in this case Dataset seems to have formed the tensor implicitly. Is there a place or way to specify the dtype parameter?
X is not a tensor, but a list thereof. It would make intuitive sense if it were a list of the examples in the batch, but instead of a list of 8 elements each containing 3 elements, it's the other way around. So Dataset has transposed the input data, which would make sense if it is forming a tensor to match the shape of y, but instead of making a single 2d tensor, it has made a list of 1d tensors. (And, again, in double precision.) Why? Is there a way to change this behavior?
The answer posted so far to Does pytorch Dataset.__getitem__ have to return a dict? says __getitem__ can return anything. Okay, but then how does the anything get converted to the form the training procedure requires?
| The dataset instance is only tasked with returning a single element of the dataset, which can take many forms: a dict, a list, an int, a float, a tensor, etc...
But the behaviour you are seeing is actually handled by your PyTorch data loader and not by the underlying dataset. This mechanism is called collating and its implementation is done by collate_fn. You can actually provide your own as an argument to a data.DataLoader. The default collate function is provided by PyTorch as default_collate and will handle the vast majority of cases. Please have a look at its documentation, as it gives insights on what possible use cases it can handle.
With this default collate the returned batch will take the same types as the item you returned in your dataset.
You should therefore return tensors instead of a list as @dx2-66 explained.
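For the toy dataset in the question, a minimal fix (the dtypes here are my assumption, not something prescribed by the collate function) is to return tensors from __getitem__:
def __getitem__(self, i):
    # default_collate will stack these into an (8, 3) float32 X and an (8, 1) float32 y
    x = torch.zeros(3, dtype=torch.float32)
    y = torch.tensor([1.0], dtype=torch.float32)
    return x, y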
| https://stackoverflow.com/questions/72867109/ |
How can I solve this error with PyTorch summary? | I am trying to load a DNN PyTorch model using:
from torch import nn
import torch.nn.functional as F
class myDNN(nn.Module):
def __init__(self):
super(myDNN, self).__init__()
# layers definition
# first convolutional block
self.conv1 = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=7)
self.pool1 = nn.MaxPool1d(kernel_size = 2, stride=2)
# second convolutional block
self.conv2 = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
self.pool2 = nn.MaxPool1d(kernel_size = 2, stride=2)
# third convolutional block
self.conv3 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
self.pool3 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fourth convolutional block
self.conv4 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
self.pool4 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fifth convolutional block
self.conv5 = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3)
self.pool5 = nn.MaxPool1d(kernel_size = 2, stride=2)
self.flatten = nn.Flatten()
self.drop1 = nn.Dropout(p=0.5)
self.fc1 = nn.Linear(in_features=3200, out_features=50)  # 3200 is a random number, probably wrong
self.drop2 = nn.Dropout(p=0.5) #dropout
self.fc2 = nn.Linear(in_features=50, out_features=25)
self.fc3 = nn.Linear(in_features=25, out_features=2)
def forward(self, x, y):
x = F.relu(self.conv1(x))
x = self.pool1(x)
x = F.relu(self.conv2(x))
x = self.pool2(x)
x = F.relu(self.conv3(x))
x = self.pool3(x)
x = F.relu(self.conv4(x))
x = self.pool3(x)
x = F.relu(self.conv5(x))
x = self.pool5(x)
y = F.relu(self.conv1(y))
y = self.pool1(y)
y = F.relu(self.conv2(y))
y = self.pool2(y)
y = F.relu(self.conv3(y))
y = self.pool3(y)
y = F.relu(self.conv4(y))
y = self.pool3(y)
y = F.relu(self.conv5(y))
y = self.pool5(y)
#flatten
x = self.flatten(x)
y = self.flatten(y)
w = torch.cat(x,y,1)
w = self.drop1(w) #dropout layer
w = F.relu(self.fc1(w)) #layer fully connected with re lu
w = self.drop2(w)
w = F.relu(self.fc2(w)) #layer fully connected with re lu
w = self.fc3(w) #layer fully connected
out = F.log_softmax(w, dim=1)
return out
The DNN that I am trying to reproduce is the one mentioned in this paper:
https://iopscience.iop.org/article/10.1088/2399-6528/aa83fa/pdf
I am not sure that it's correct. Anyway, when I run PyTorch's summary function:
model = myDNN()
print(model)
from torchsummary import summary
if torch.cuda.is_available():
summary(model.cuda(), input_size = [(1,892),(1,492)])
else:
summary(model, input_size = [(1,892),(1,492)])
but i get this error:
RuntimeError Traceback (most recent call last)
<ipython-input-123-6846b44c144c> in <module>()
6 from torchsummary import summary
7 if torch.cuda.is_available():
----> 8 summary(model.cuda(), input_size = [(1,892),(1,492)])
9 else:
10 summary(model, input_size = [(1,892),(1,492)])
5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
297 _single(0), self.dilation, self.groups)
298 return F.conv1d(input, weight, bias, self.stride,
--> 299 self.padding, self.dilation, self.groups)
300
301 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Given groups=1, weight of size [8, 2, 7], expected input[2, 1, 892] to have 2 channels, but got 1 channels instead
Any help would be appreciated, thank you
| The first layer of the model expects two channels rather than one.
Simply pass the correct input shape to "summary" as follows:
summary(model, input_size=[(2, dim1), (2, dim2)])
Edit: In the forward function I would do the concatenation as follows (if both model's inputs have the same shape):
w = torch.cat([x,y], dim=1)
w = self.flatten(w)
Edit:
Here is a working code using the correct implementation
from torch import nn
import torch.nn.functional as F
import torch
class myDNN(nn.Module):
def __init__(self):
super(myDNN, self).__init__()
# layers definition
# first convolutional block
self.path1_conv1 = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=7)
self.path1_pool1 = nn.MaxPool1d(kernel_size = 2, stride=2)
# second convolutional block
self.path1_conv2 = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
self.path1_pool2 = nn.MaxPool1d(kernel_size = 2, stride=2)
# third convolutional block
self.path1_conv3 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
self.path1_pool3 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fourth convolutional block
self.path1_conv4 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
self.path1_pool4 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fifth convolutional block
self.path1_conv5 = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3)
self.path1_pool5 = nn.MaxPool1d(kernel_size = 2, stride=2)
self.path2_conv1 = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=7)
self.path2_pool1 = nn.MaxPool1d(kernel_size = 2, stride=2)
# second convolutional block
self.path2_conv2 = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
self.path2_pool2 = nn.MaxPool1d(kernel_size = 2, stride=2)
# third convolutional block
self.path2_conv3 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
self.path2_pool3 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fourth convolutional block
self.path2_conv4 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
self.path2_pool4 = nn.MaxPool1d(kernel_size = 2, stride=2)
# fifth convolutional block
self.path2_conv5 = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3)
self.path2_pool5 = nn.MaxPool1d(kernel_size = 2, stride=2)
self.flatten = nn.Flatten()
self.drop1 = nn.Dropout(p=0.5)
self.fc1 = nn.Linear(in_features=2048, out_features=50) #3200 is a random number,probably wrong
self.drop2 = nn.Dropout(p=0.5) #dropout
self.fc2 = nn.Linear(in_features=50, out_features=25)
self.fc3 = nn.Linear(in_features=25, out_features=2)
def forward(self, x, y):
x = F.relu(self.path1_conv1(x))
x = self.path1_pool1(x)
x = F.relu(self.path1_conv2(x))
x = self.path1_pool2(x)
x = F.relu(self.path1_conv3(x))
x = self.path1_pool3(x)
x = F.relu(self.path1_conv4(x))
x = self.path1_pool3(x)
x = F.relu(self.path1_conv5(x))
x = self.path1_pool5(x)
y = F.relu(self.path2_conv1(y))
y = self.path2_pool1(y)
y = F.relu(self.path2_conv2(y))
y = self.path2_pool2(y)
y = F.relu(self.path2_conv3(y))
y = self.path2_pool3(y)
y = F.relu(self.path2_conv4(y))
y = self.path2_pool3(y)
y = F.relu(self.path2_conv5(y))
y = self.path2_pool5(y)
#flatten
x = self.flatten(x)
y = self.flatten(y)
w = torch.cat([x,y],dim=1)
print(w.shape)
w = self.drop1(w) #dropout layer
w = F.relu(self.fc1(w)) #layer fully connected with re lu
w = self.drop2(w)
w = F.relu(self.fc2(w)) #layer fully connected with re lu
w = self.fc3(w) #layer fully connected
out = F.log_softmax(w, dim=1)
return out
def main():
model = myDNN()
print(model)
from torchsummary import summary
if torch.cuda.is_available():
summary(model.cuda(), input_size = [(2,246),(2,447)])
else:
summary(model, input_size = [(2,246),(2,447)])
if __name__ == '__main__':
main()
| https://stackoverflow.com/questions/72872690/ |
I can't load a small AI model with an NVIDIA 3090 in Windows (CUDA: Out Of Memory) | I come here because I am having problems loading any size of model with an NVIDIA 3090 (24 GB RAM)
I just followed this video (and thousands more xD) and installed everything that Jeff said here:
Jeffs instruction to install tensorflow, cuda, etc in windows
Then I installed pytorch for cuda 11.6:
pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu116
I have tensorflow 2.9.1 and cuda 11.5.
When I load any model of microsoft/DialoGPT-*, the VRAM of the 3090 goes directly to 24 GB, so I run out of memory.
The code works correctly in Colab with 16 GB of VRAM.
I tried it on Windows without success, so I installed Ubuntu today to check, and I have the same problem!
Except one time it correctly loaded about 10 GB (I thought that was because of the PyTorch version), but the app wasn't doing predictions because it needed libs, and then it stopped working.
btw this is the code:
gpt2bot
This is what happens with my VRAM when loading DialoGPT-small:
Nvidia 3090 VRAM
How can I solve this?
Edited:
Error:
Loading the pipeline 'microsoft/DialoGPT-medium'...
2022-07-06 03:38:34.293286: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-06 03:38:34.709282: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21670 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:0b:00.0, compute capability: 8.6
Loading the pipeline 'microsoft/DialogRPT-updown'...
Traceback (most recent call last):
File "C:\Users\barbi\PycharmProjects\gpt2bot\run_bot.py", line 33, in <module>
run_telegram_bot(**config)
File "C:\Users\barbi\PycharmProjects\gpt2bot\gpt2bot\telegram_bot.py", line 286, in run
TelegramBot(**kwargs).run()
File "C:\Users\barbi\PycharmProjects\gpt2bot\gpt2bot\telegram_bot.py", line 240, in __init__
self.ranker_dict = build_ranker_dict(device=device, **prior_ranker_weights, **cond_ranker_weights)
File "C:\Users\barbi\PycharmProjects\gpt2bot\gpt2bot\utils.py", line 210, in build_ranker_dict
pipeline=load_pipeline('sentiment-analysis', model='microsoft/DialogRPT-updown', **kwargs),
File "C:\Users\barbi\PycharmProjects\gpt2bot\gpt2bot\utils.py", line 166, in load_pipeline
return transformers.pipeline(task, **kwargs)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\transformers\pipelines\__init__.py", line 684, in pipeline
return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\transformers\pipelines\text_classification.py", line 68, in __init__
super().__init__(**kwargs)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\transformers\pipelines\base.py", line 770, in __init__
self.model = self.model.to(self.device)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "C:\Users\barbi\anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 24.00 GiB total capacity; 2.07 GiB already allocated; 0 bytes free; 2.07 GiB reserved in tota
l by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_
CUDA_ALLOC_CONF
nvidia-smi in Colab
VRAM in Colab after loading all the models:
VRAM
| SOLVED!
The problem was that I was using TensorFlow GPU + PyTorch GPU in the same process, so together they exhausted the VRAM. As soon as I began using TensorFlow CPU + PyTorch GPU, everything worked correctly.
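If someone does need TensorFlow on the GPU in the same process, a common mitigation (a hedged suggestion, not what I did above) is to stop TF from pre-allocating almost all of the VRAM at startup:
import tensorflow as tf

# allocate GPU memory on demand instead of reserving nearly the whole card
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)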
| https://stackoverflow.com/questions/72877754/ |
How do you make the reported step a multiple of the logging frequency in PyTorch-Lightning, not the logging frequency minus 1? | [Warning!! pedantry inside]
I'm using PyTorch Lightning to wrap my PyTorch model, but because I'm pedantic, I am finding the logger to be frustrating in the way it reports the steps at the frequency I've asked for, minus 1:
When I set log_every_n_steps=100 in Trainer, my Tensorboard output shows my metrics at step 99, 199, 299, etc. Why not at 100, 200, 300?
When I set check_val_every_n_epoch=30 in Trainer, my console output shows progress bar goes up to epoch 29, then does a validate, leaving a trail of console outputs that report metrics after epochs 29, 59, 89, etc. Like this:
Epoch 29: 100%|βββββββββββββββββββββββββββββ| 449/449 [00:26<00:00, 17.01it/s, loss=0.642, v_num=logs]
[validation] {'roc_auc': 0.663, 'bacc': 0.662, 'f1': 0.568, 'loss': 0.633}
Epoch 59: 100%|βββββββββββββββββββββββββββββ| 449/449 [00:26<00:00, 16.94it/s, loss=0.626, v_num=logs]
[validation] {'roc_auc': 0.665, 'bacc': 0.652, 'f1': 0.548, 'loss': 0.630}
Epoch 89: 100%|βββββββββββββββββββββββββββββ| 449/449 [00:27<00:00, 16.29it/s, loss=0.624, v_num=logs]
[validation] {'roc_auc': 0.665, 'bacc': 0.652, 'f1': 0.548, 'loss': 0.627}
Am I doing something wrong? Should I simply submit a PR to PL to fix this?
| You are not doing anything wrong. Python uses zero-based indexing so epoch counting starts at zero as well. If you want to change the behavior of what is being displayed you will need to override the default TQDMProgressBar and modify on_train_epoch_start to display an offset value. You can achieve this by:
from pytorch_lightning.callbacks.progress.tqdm_progress import TQDMProgressBar, convert_inf  # TQDMProgressBar import added so the snippet is self-contained
class LitProgressBar(TQDMProgressBar):
def init_validation_tqdm(self):
bar = super().init_validation_tqdm()
bar.set_description("running validation...")
return bar
def on_train_epoch_start(self, trainer, *_) -> None:
total_train_batches = self.total_train_batches
total_val_batches = self.total_val_batches
if total_train_batches != float("inf") and total_val_batches != float("inf"):
# val can be checked multiple times per epoch
val_checks_per_epoch = total_train_batches // trainer.val_check_batch
total_val_batches = total_val_batches * val_checks_per_epoch
total_batches = total_train_batches + total_val_batches
self.main_progress_bar.reset(convert_inf(total_batches))
self.main_progress_bar.set_description(f"Epoch {trainer.current_epoch + 1}")
Notice the +1 in the last line of code. This will offset the epoch displayed in the progress bar. Then pass your custom bar to your trainer:
# Initialize a trainer
trainer = Trainer(
accelerator="auto",
devices=1 if torch.cuda.is_available() else None, # limiting got iPython runs
max_epochs=3,
callbacks=[LitProgressBar()],
log_every_n_steps=100
)
Finally:
trainer.fit(mnist_model, train_loader)
For the first epoch this will display:
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 7.9 K
--------------------------------
7.9 K Trainable params
0 Non-trainable params
7.9 K Total params
0.031 Total estimated model params size (MB)
Epoch 1: 17% 160/938 [00:02<00:11, 68.93it/s, loss=1.05, v_num=4]
and not the default
Epoch 0: 17% 160/938 [00:02<00:11, 68.93it/s, loss=1.05, v_num=4]
| https://stackoverflow.com/questions/72880993/ |
How to concat resnet output and original size of input image? | I have an image dataset with 17 classes. There is a significant difference in the sizes of some classes, e.g. images in one class are on average 250x200 and in another class 25x25.
My idea was to concat the output of a pretrained resnet18 and the original image size, because I think it's valuable information for classification.
To be more specific - I would like to use resnet18 but to the last layer which is
(fc): Linear(in_features=512, out_features=17, bias=True)
I would like to add also Image.Shape which might be important for better classification.
Is this a reasonable solution for this kind of problem and is there a way to do it in PyTorch?
| I guess you want to use the embeddings that are produced by the adaptive pooling layer to have a fixed output size. First you need to get rid of the last linear layer (see this post):
model = models.resnet152(pretrained=True)
newmodel = torch.nn.Sequential(*(list(model.children())[:-1]))
Then you can get the embeddings with the truncated model and use torch.cat for concatenation (the image size has to be turned into a tensor of matching dtype first):
out = newmodel(X).flatten(1)  # (N, 2048) for resnet152, (N, 512) for resnet18
size_feat = torch.tensor(X.shape[-2:], dtype=out.dtype).repeat(out.shape[0], 1)  # (N, 2)
emb = torch.cat([out, size_feat], dim=1)
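If it helps, here is a minimal sketch of how the whole idea could be wired up for the 17-class problem (a hypothetical wrapper, assuming resnet18 and that the original (H, W) of each image is passed in as a separate float tensor):
import torch
from torch import nn
from torchvision import models

class ResNetWithSize(nn.Module):
    def __init__(self, n_classes=17):
        super().__init__()
        backbone = models.resnet18(pretrained=True)
        # keep everything up to (and including) the adaptive pooling layer
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(512 + 2, n_classes)  # 512-d embedding + (height, width)

    def forward(self, x, orig_sizes):
        # orig_sizes: float tensor of shape (N, 2) holding the pre-resize (H, W)
        emb = self.features(x).flatten(1)  # (N, 512)
        return self.fc(torch.cat([emb, orig_sizes], dim=1))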
| https://stackoverflow.com/questions/72881421/ |
What is the difference between the following optimization methods? | When I am studying RNN while running the examples on the following site,
I would like to ask one question.
https://tutorials.pytorch.kr/intermediate/char_rnn_classification_tutorial
According to the site:
1. Model
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
learning_rate = 0.005
criterion = nn.NLLLoss()
The RNN model in PyTorch is as shown in the code above.
2. Training
The problem was with this part!
According to the site, learning proceeds as follows (in this case it worked fine):
def train(category_tensor, name_tensor):
hidden = rnn.initHidden()
rnn.zero_grad()
for i in range(name_tensor.size()[0]):
output, hidden = rnn(name_tensor[i], hidden)
loss = criterion(output, category_tensor)
loss.backward()
for p in rnn.parameters():
p.data.add_(p.grad.data, alpha=-learning_rate)
return output, loss.item()
But in the PyTorch models I learned recently, the learning process was carried out by optim.step().
So, I also tried the following method (in this case, it didn't work well):
optimizer = optim.Adam(rnn.parameters(), lr = learning_rate)
def train(category_tensor, name_tensor):
hidden = rnn.initHidden()
rnn.zero_grad()
for i in range(name_tensor.size()[0]):
output, hidden = rnn(name_tensor[i], hidden)
loss = criterion(output, category_tensor)
optimizer.zero_grad()
loss.backward()
optimizer.step()
return output, loss.item()
But in this case the training didn't work properly
3. The result window
for epoch in range(epochs):
...
loss = train(category_tensor, name_tensor)
...
(Result window when this method is used: it seemed learning was not progressing)
So what is the difference between the above optimization methods?
And what should I do if I want to use optim.step()?
Thanks for reading this long post and I hope you have a good day!!
| You are using two different optimization methods. The first one is plain SGD (that is exactly what the manual p.data.add_(p.grad.data, alpha=-learning_rate) update does), while the second one is Adam.
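So if you want to use optim.step() with the same behaviour as the working version, a minimal sketch (assuming the same rnn, criterion and learning_rate as above) would be:
optimizer = optim.SGD(rnn.parameters(), lr=learning_rate)

def train(category_tensor, name_tensor):
    hidden = rnn.initHidden()
    optimizer.zero_grad()
    for i in range(name_tensor.size()[0]):
        output, hidden = rnn(name_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    optimizer.step()
    return output, loss.item()
Adam with the same learning rate adapts its step sizes very differently from this plain-SGD update, which is the most likely reason the loss curve looked different in your second attempt.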
| https://stackoverflow.com/questions/72881661/ |
Why can't a CNN model fit well for a pair of random tensors? | I built a toy CNN model to fit a pair of random tensors (input_tensor & truth).
batch_size = 1
channel = 3
input_size = 128
input_tensor = torch.rand((batch_size, channel, input_size, input_size))
truth = torch.rand((batch_size, channel, input_size, input_size))
device = torch.device("cuda")
class ConvModel(nn.Module):
def __init__(self):
super(ConvModel, self).__init__()
self.conv1 = nn.Conv2d(3, 57344, (3, 3), (1, 1), padding=1)
self.conv2 = nn.Conv2d(57344, 3, (3, 3), (1, 1), padding=1)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, input_):
x = self.conv1(input_)
x = self.relu(x)
x = self.conv2(x)
x = self.sigmoid(x)
return x
model = ConvModel().to(device)
loss_func = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
for epoch in range(100):
output = model(input_tensor.to(device))
loss = loss_func(output, truth.to(device))
loss.backward()
optimizer.step()
optimizer.zero_grad()
if (1 + epoch) % 10 == 0:
print(loss.detach().item())
I used the above code to generate the input/output pair and trained the model, and I got loss values as follows:
0.08877705037593842
0.08524381369352341
0.08396070450544357
0.0834180936217308
0.08318136632442474
0.08298520743846893
0.08282201737165451
0.08265350759029388
0.08248833566904068
0.08231770992279053
I'm confused that my model almost cannot fit ONE pair of data in 100 EPOCHS. Is there any problem?
Thanks for any feedback.
| Note that the convolution kernel is shared spatially. Your network is essentially trying to map a random 7*7 patch to a random value (7 is the size of the receptive field of the output layer), and you have 128*128 such pairs (even though you have only one pair of tensors). So your network fails to overfit your dataset. Reducing the input_size may help you reduce the loss.
| https://stackoverflow.com/questions/72881993/ |
Imported package searches for modules in my code | Can someone explain to me what is going on here and how to prevent it?
I have a main.py with the following code:
import utils
import torch
if __name__ == "__main__":
# Foo
print("Foo")
# Bar
utils.bar()
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
I outsourced some functions into a module named utils.py:
def bar():
print("Bar")
When I run this I get the following output:
(venv) jan@xxxxx test % python main.py
Foo
Bar
Using cache found in /Users/jan/.cache/torch/hub/ultralytics_yolov5_master
Traceback (most recent call last):
File "/Users/jan/PycharmProjects/test/main.py", line 12, in <module>
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
File "/Users/jan/PycharmProjects/test/venv/lib/python3.10/site-packages/torch/hub.py", line 540, in load
model = _load_local(repo_or_dir, model, *args, **kwargs)
File "/Users/jan/PycharmProjects/test/venv/lib/python3.10/site-packages/torch/hub.py", line 569, in _load_local
model = entry(*args, **kwargs)
File "/Users/jan/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 81, in yolov5s
return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device)
File "/Users/jan/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 31, in _create
from models.common import AutoShape, DetectMultiBackend
File "/Users/jan/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 24, in <module>
from utils.dataloaders import exif_transpose, letterbox
ModuleNotFoundError: No module named 'utils.dataloaders'; 'utils' is not a package
So it seems like the torch package I imported also has a utils resource (package) and searches for a module named "utils.dataloaders". Okay. But why is it searching in my utils module? And why doesn't it continue searching in its own package if it doesn't find a matching resource in my code? And last but not least: how can I prevent this situation?
I changed import utils to import utils as ut and called my function with ut.bar(), but it doesn't make any difference.
The only thing that worked was to rename my utils.py to something else, but this cannot be the solution...
Thanks for your help. Cheers,
Jan
| The other utils package does not belong to torch, it belongs to the yolov5 repository: /Users/jan/.cache/torch/hub/ultralytics_yolov5_master/utils.
Now, to explain the error: it seems that Python searches sys.modules first when you import utils. If you import your own utils first, it is registered in sys.modules, and Python will look in your own utils for things needed by the yolov5 model. The same thing would happen if you create the model first and import your own utils later (at that point, the utils of the yolov5 repo is registered in sys.modules, so you won't be able to import your own utils).
The easiest solution would be to rename your utils.py, or put it under a folder, e.g.,
your_project/
    main.py
    some_name_other_than_utils/
        utils.py
and import it as
import some_name_other_than_utils.utils as utils
Then, some_name_other_than_utils.utils is registered in sys.modules so it won't affect importing in yolov5.
Another solution is to copy the yolov5 folder as a subfolder of your project. So everything in the yolov5 folder is under another namespace, without conflicting with your files. But you may need to change some import statements, and replace torch.hub.load with your own model loading function.
In most cases, having utils in your own project shouldn't conflict with third-party software that also has a utils, because libraries normally keep it inside their own package namespace. If you import torch and then check sys.modules, you can see torch.utils, torch.nn.utils, etc., rather than just utils. The following is an example of how this can be done with relative imports:
your_project/
    main.py
    utils.py
    some_3rdparty_lib/
        __init__.py
        utils.py
In some_3rdparty_lib/__init__.py:
# some_3rdparty_lib/__init__.py
from . import utils
Then in main.py:
# main.py
import utils # your own utils
import some_3rdparty_lib.utils # the one under some_3rdparty_lib
Note that some_3rdparty_lib does not have to be under your project directory. You can put it anywhere as long as the path is in sys.path. The imported utils would still belong to the namespace some_3rdparty_lib.
| https://stackoverflow.com/questions/72882082/ |
Gathering entries in a matrix based on a matrix of column indices (tensorflow/numpy) | A little example to demonstrate what I need
I have a question about gathering in tensorflow. Let's say I have a tensor of values (that I care about for some reason):
test1 = tf.round(5*tf.random.uniform(shape=(2,3)))
which gives me this output:
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[1., 1., 2.],
       [4., 5., 0.]], dtype=float32)>
and I also have a tensor of column indices that I want to pick out on every row:
test_ind = tf.constant([[0,1,0,0,1],
                        [0,1,1,1,0]], dtype=tf.int64)
I want to gather this so that from the first row (0th row), I pick out items in column 0, 1, 0, 0, 1, and same for the second row.
So the output for this example should be:
<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[1., 1., 1., 1., 1.],
       [4., 5., 5., 5., 4.]], dtype=float32)>
My attempt
So I figured out a way to do this in general, I wrote the following function gather_matrix_indices() that will take in a tensor of values and a tensor of indices and do exactly what I specified above.
def gather_matrix_indices(input_arr, index_arr):
    row, _ = input_arr.shape
    li = []
    for i in range(row):
        li.append(tf.expand_dims(tf.gather(params=input_arr[i], indices=index_arr[i]), axis=0))
    return tf.concat(li, axis=0)
My Question
I'm just wondering, is there a way to do this using ONLY tensorflow or numpy methods? The only solution I could come up with is writing my own function that iterates through every row and gathers indices for all columns in that row. I have not had runtime issues yet but I would much rather utilize built-in tensorflow or numpy methods when possible. I've tried tf.gather before too, but I don't know if this particular case is possible with any combination of tf.gather and tf.gather_nd. If anyone has a suggestion, I would greatly appreciate it.
Edit (08/18/22)
I would like to add an edit that in PyTorch, calling torch.gather() and setting dim=1 in the arguments will do EXACTLY what I wanted in this question. So if you're familiar with both libraries, and you really need this functionality, torch.gather() can do this out of the box.
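For concreteness, a minimal PyTorch sketch of that behavior (using the example tensors from above; not part of the original question):
import torch

test1 = torch.tensor([[1., 1., 2.],
                      [4., 5., 0.]])
test_ind = torch.tensor([[0, 1, 0, 0, 1],
                         [0, 1, 1, 1, 0]])
out = torch.gather(test1, dim=1, index=test_ind)
# tensor([[1., 1., 1., 1., 1.],
#         [4., 5., 5., 5., 4.]])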
| You can use gather_nd() for this. It can look a bit tricky to get working, so let me try to explain it with shapes.
We have test1 -> [2, 3] and test_ind_col_ind -> [2, 5]. test_ind_col_ind holds only column indices, but gather_nd() also needs row indices. To use gather_nd() with a [2,3] tensor, we need to build a test_ind -> [2, 5, 2] tensor. The innermost dimension of this new test_ind corresponds to one individual index into test1. Here the innermost dimension has size 2, in the format (<row index>, <col index>). In other words, looking at the shape of test_ind,
[  2  ,  5  ,  2  ]
   \_____/     |
      |        |
      v        v
    (2,5)     (2,)
      |         |
      |         +-- the full (row, col) index to a scalar in your input tensor
      +------------ the size of the final tensor
import tensorflow as tf
test1 = tf.round(5*tf.random.uniform(shape=(2,3)))
print(test1)
# Column indices, expanded to shape (2, 5, 1)
test_ind_col_ind = tf.constant([[0,1,0,0,1],
                                [0,1,1,1,0]], dtype=tf.int64)[:, :, tf.newaxis]
# Matching row indices, also shape (2, 5, 1)
test_ind_row_ind = tf.repeat(tf.range(2, dtype=tf.int64)[:, tf.newaxis, tf.newaxis], 5, axis=1)
# Full (row, col) pairs, shape (2, 5, 2)
test_ind = tf.concat([test_ind_row_ind, test_ind_col_ind], axis=-1)
res = tf.gather_nd(indices=test_ind, params=test1)  # shape (2, 5)
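As a side note (my addition, not part of the original answer): tf.gather also accepts a batch_dims argument, which handles the per-row indexing directly, assuming a TensorFlow version recent enough to support it:
# col_ind is the plain (2, 5) column-index tensor, without the tf.newaxis expansion
col_ind = tf.constant([[0, 1, 0, 0, 1],
                       [0, 1, 1, 1, 0]], dtype=tf.int64)
res = tf.gather(test1, col_ind, axis=1, batch_dims=1)  # shape (2, 5)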
| https://stackoverflow.com/questions/72889843/ |