Understanding torch.nn.Flatten
I understand that flattening removes all of the dimensions except for one. For example, I understand flatten(): > t = torch.ones(4, 3) > t tensor([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]) > flatten(t) tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) However, I don't get Flatten, especially the meaning of this snippet from the docs: >>> input = torch.randn(32, 1, 5, 5) >>> m = nn.Sequential( >>> nn.Conv2d(1, 32, 5, 1, 1), >>> nn.Flatten() >>> ) >>> output = m(input) >>> output.size() torch.Size([32, 288]) I felt the output should have size [160], because 32*5=160. Q1. So why did it output size [32, 288]? Q2. I also don't get the meaning of the shape information given in the docs. Q3. And also the meaning of the parameters.
It is a difference in the default behaviour. torch.flatten flattens all dimensions by default, while torch.nn.Flatten flattens all dimensions starting from the second dimension (index 1) by default. You can see this behaviour in the default values of the start_dim and end_dim arguments. The start_dim argument denotes the first dimension to be flattened (zero-indexed), and the end_dim argument denotes the last dimension to be flattened. So, when start_dim=1, which is the default for torch.nn.Flatten, the first dimension (index 0) is not flattened, but it is included when start_dim=0, which is the default for torch.flatten. The reason behind this difference is probably because torch.nn.Flatten is intended to be used with torch.nn.Sequential, where typically a series of operations are performed on a batch of inputs, where each input is treated independently of the others. For example, if you have a batch of images and you call torch.nn.Flatten, the typical use case would be to flatten each image separately, and not flatten the whole batch. If you do want to flatten all dimensions using torch.nn.Flatten, you can simply create the object as torch.nn.Flatten(start_dim=0). Finally, the shape information in the docs just covers how the shape of the tensor will be affected, illustrating that the first (index 0) dimension is left as it is. So, if you have an input tensor of shape (N, *dims), where *dims is an arbitrary sequence of dimensions, the output tensor will have the shape (N, product of *dims), since all dimensions except the batch dimension are flattened. For example, an input of shape (3,10,10) will have an output of shape (3, 10 x 10) = (3, 100).
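To make the doc snippet concrete: Conv2d(1, 32, 5, 1, 1) maps the (32, 1, 5, 5) input to (32, 32, 3, 3) (kernel 5, padding 1, so each 5x5 image becomes 3x3), and nn.Flatten() then collapses everything after the batch dimension, giving 32 x 3 x 3 = 288. A minimal sketch of the two defaults, using that conv output shape:

import torch
import torch.nn as nn

x = torch.randn(32, 32, 3, 3)  # e.g. the conv output from the doc snippet: (batch, C, H, W)

print(torch.flatten(x).shape)            # torch.Size([9216])    - start_dim=0 by default
print(nn.Flatten()(x).shape)             # torch.Size([32, 288]) - start_dim=1 by default
print(nn.Flatten(start_dim=0)(x).shape)  # torch.Size([9216])    - now matches torch.flatten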
https://stackoverflow.com/questions/67460123/
What is the relation between a learning rate scheduler and an optimizer?
If I have a model: import torch import torch.nn as nn import torch.optim as optim class net_x(nn.Module): def __init__(self): super(net_x, self).__init__() self.fc1=nn.Linear(2, 20) self.fc2=nn.Linear(20, 20) self.out=nn.Linear(20, 4) def forward(self, x): x=self.fc1(x) x=self.fc2(x) x=self.out(x) return x nx = net_x() And then I'm defining my inputs, optimizer (with lr=0.1), scheduler (with base_lr=1e-3), and training: r = torch.tensor([1.0,2.0]) optimizer = optim.Adam(nx.parameters(), lr = 0.1) scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-3, max_lr=0.1, step_size_up=1, mode="triangular2", cycle_momentum=False) path = 'opt.pt' for epoch in range(10): optimizer.zero_grad() net_predictions = nx(r) loss = torch.sum(torch.randint(0,10,(4,)) - net_predictions) loss.backward() optimizer.step() scheduler.step() print('loss:' , loss) #save state dict torch.save({ 'epoch': epoch, 'net_x_state_dict': nx.state_dict(), 'optimizer_state_dict': optimizer.state_dict(), 'scheduler': scheduler.state_dict(), }, path) #loading state dict checkpoint = torch.load(path) nx.load_state_dict(checkpoint['net_x_state_dict']) optimizer.load_state_dict(checkpoint['optimizer_state_dict']) scheduler.load_state_dict(checkpoint['scheduler']) The optimizer seems to take the learning rate of the scheduler for g in optimizer.param_groups: print(g) >>> {'lr': 0.001, 'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False, 'initial_lr': 0.001, 'params': [Parameter containing: Does the learning rate scheduler overwrite the optimizer? How does it connect to it? Trying to understand the relation between them (i.e how they interact, etc.)
TL;DR: The LR scheduler holds the optimizer as a member and alters its parameter groups' learning rates explicitly. As mentioned in the official PyTorch documentation, the learning rate scheduler receives the optimizer as a parameter in its constructor, and thus has access to its parameters. The common use is to update the LR after every epoch: scheduler = ... # initialize some LR scheduler for epoch in range(100): train(...) # here optimizer.step() is called numerous times. validate(...) scheduler.step() All optimizers inherit from a common parent class torch.optim.Optimizer and are updated using the step method implemented for each of them. Similarly, all LR schedulers (besides ReduceLROnPlateau) inherit from a common parent class named _LRScheduler. Observing its source code uncovers that in the step method the class indeed changes the LR of the optimizer's parameter groups: ... for i, data in enumerate(zip(self.optimizer.param_groups, values)): param_group, lr = data param_group['lr'] = lr ...
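A minimal sketch of this interaction (using ExponentialLR for brevity; the same pattern holds for CyclicLR): each scheduler step mutates the lr entries of the optimizer's param_groups in place.

import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)

for epoch in range(3):
    optimizer.step()  # normally preceded by a forward/backward pass
    scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])
# 0 0.05
# 1 0.025
# 2 0.0125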
https://stackoverflow.com/questions/67461425/
Pytorch min and max of tensor
I have the following PyTorch tensor: >>> mean_actions tensor([[-5.7547e-04, 1.4318e-02, 1.9328e-04, -2.5660e-03, 3.5269e-03, -1.3797e-02, -6.1871e-04, -2.7425e-03, 1.1661e-03, 1.6873e-03, 3.9045e-03, 1.8047e-03, 4.8656e-03, 5.7182e-03, -4.8501e-03, -5.5913e-03, -4.4574e-03, -3.3154e-03, -4.9826e-03, -1.0071e-02, -2.3483e-03, -1.1413e-02, -4.9613e-03, -1.1648e-02, 2.4752e-03, -1.9764e-03, 3.1063e-03, -6.3481e-05, 7.6710e-03, 5.7503e-04]]) I am getting the following min and max values out of the tensor: >>> th.min(mean_actions) tensor(-0.0138) >>> th.max(mean_actions) tensor(0.0143) However, I don't see -0.0138 and 0.0143 present in the tensor. What am I missing? Here are the screenshots from the debug session:
1.4318e-02 is scientific notation for 0.014318, and -1.3797e-02 is scientific notation for -0.013797. Both values are present in your tensor; the min/max printout simply rounds them to four decimal places, giving -0.0138 and 0.0143. See Wikipedia - E Notation.
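If the notation trips you up, you can disable scientific notation in printouts or format the raw float yourself. A quick sketch with a truncated version of your tensor:

import torch

mean_actions = torch.tensor([[-5.7547e-04, 1.4318e-02, -1.3797e-02]])

torch.set_printoptions(sci_mode=False)
print(mean_actions.min())                  # tensor(-0.0138)
print(f'{mean_actions.max().item():.6f}')  # 0.014318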
https://stackoverflow.com/questions/67462476/
PyTorch: two binary masks union?
I have two binary masks of shape (batch_size, width, height), and I want to create a binary mask which indicates the union of elements between the two. To find the intersection, I can use torch.where(A == B, 1, 0), but how can I find the union?
When working with binary masks, you should use logical operations such as: logical_or(), logical_and(). The intersection is then the binary mask: intersection = A.logical_and(B) and the union is: union = A.logical_or(B) BTW, I'll leave it to you as an exercise to check why the intersection you computed (A == B) is not correct.
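A small sketch that spells this out: A == B is equality, so it is also True wherever both masks are 0, which is not part of the intersection.

import torch

A = torch.tensor([0, 1, 0, 1], dtype=torch.bool)
B = torch.tensor([0, 0, 1, 1], dtype=torch.bool)

print(A.logical_and(B))  # tensor([False, False, False,  True]) - intersection
print(A.logical_or(B))   # tensor([False,  True,  True,  True]) - union
print(A == B)            # tensor([ True, False, False,  True]) - 0 == 0 counts as a "match"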
https://stackoverflow.com/questions/67465227/
CNN Pytorch only batches of spatial targets supported error
So I've architected the following model which I will use to classify the MNIST Fashion data. class CNN(nn.Module): def __init__(self, **kwargs): super().__init__() self.conv1 = nn.Conv2d(784, 64, 2, 1, padding=5) self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2) self.conv2 = nn.Conv2d(64, 128, 2, 2, padding = 0) self.conv2_bn = nn.BatchNorm2d(128) self.relu = nn.ReLU() self.dense = nn.Linear(1, 128) self.softmax = nn.Softmax() def forward(self, x): # you can add any additional parameters you want x = self.conv1(x) x = F.max_pool2d(F.relu(x), kernel_size=2) x = self.conv2(x) x = self.conv2_bn(x) x = F.max_pool2d(F.relu(x), kernel_size=2) print(x.shape) x = self.dense(x) x = F.relu(x) return F.log_softmax(x) And this is where I run my code: for epoch in range(max_epoch): print('EPOCH='+str(epoch)) correct = 0 total = 0 running_loss = 0 for data, label in tzip(TRAX, TRAY): #train = data.view(64,1,2,2) DAAA = data.view(1,784,1,1) #zeroing the parameter optimizer.zero_grad() label = torch.tensor([label]).type(torch.LongTensor) #forwards prop outputs = model2(DAAA) loss = criterion(outputs, label) loss.backward() optimizer.step() running_loss += loss.item() '========================================' _, predicted = torch.max(outputs.data, 1) total += label.size(0) correct += (predicted == label).sum().item() '========================================' print('\n') print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) print('\n') print(str(epoch)+'loss= '+str(running_loss)) lossjournal.append(running_loss) accjournal.append(100 * correct / total) print('Finished Training') <ipython-input-378-27ce013b2c10> in <module> 55 #forwards prop 56 outputs = model2(DAAA) ---> 57 loss = criterion(outputs, label) 58 loss.backward() 59 optimizer.step() /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 1045 def forward(self, input: Tensor, target: Tensor) -> Tensor: 1046 assert self.weight is None or isinstance(self.weight, Tensor) -> 1047 return F.cross_entropy(input, target, weight=self.weight, 1048 ignore_index=self.ignore_index, reduction=self.reduction) 1049 /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2691 if size_average is not None or reduce is not None: 2692 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2693 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2694 2695 /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2388 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2389 elif dim == 4: -> 2390 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2391 else: 2392 # dim == 3 or dim > 4 RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 1 When I run my model I got this error but I don't know what 
to do from now on? What adjustments should I make for this model to work? I know the problem is with the criterion but is it because of the output of the model, which is of shape [1, 128, 1, 128]?
Fashion-MNIST has 10 classes, so your output should be of size [batch_size, 10]. Change the last linear layer to self.dense = nn.Linear(128, 10). Then, since your labels are integer class indices, you should use torch.nn.CrossEntropyLoss as the criterion. Additionally, you need not include the last softmax layer during training, as the aforementioned loss function applies log-softmax internally during computation. You can instead use softmax or argmax for inference only.
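A minimal sketch of the shapes torch.nn.CrossEntropyLoss expects: raw logits in, integer class indices as targets, and no softmax inside the model.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 10)          # [batch_size, num_classes], raw (un-softmaxed) scores
labels = torch.randint(0, 10, (8,))  # [batch_size], integer class indices
loss = criterion(logits, labels)

preds = logits.argmax(dim=1)         # argmax only at inference time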
https://stackoverflow.com/questions/67466531/
How to implement Flatten layer with batch size > 1 in Pytorch (Pytorch_Geometric)
I am new to Pytorch and am trying to transfer my previous code from Tensorflow to Pytorch due to memory issues. However, when trying to reproduce Flatten layer, some issues kept coming out. In my DataLoader object, batch_size is mixed with the first dimension of input (in my GNN, the input unpacked from DataLoader object is of size [batch_size*node_num, attribute_num], e.g. [4*896, 32] after the GCNConv layers). Basically, if I implement torch.flatten() after GCNConv, samples are mixed together (to [4*896*32]) and there would be only 1 output from this network, while I expect #batch_size outputs. And if I use nn.Flatten() instead, nothing seems to happen (still [4*896, 32]). Should I set batch_size as the first dim of the input at the very beginning, or should I directly use view() function? I tried directly using view() and it (seemed to have) worked, although I am not sure if this is the same as Flatten. Please refer to my code below. I am currently using global_max_pool because it works (it can separate batch_size directly). By the way, I am not sure why training is so slow in Pytorch... When node_num is raised to 13000, I need an hour to go through an epoch, and I have 100 epoch per test fold and 10 test folds. In tensorflow the whole training process only takes several hours. Same network architecture and raw input data, as shown here in another post of mine, which also described the memory issues I met when using TF. Have been quite frustrated for a while. I checked this and this post, but it seems their problems somewhat differ from mine. Would greatly appreciate any help! Code: # Generate dataset class STDataset(InMemoryDataset): def __init__(self, root, transform=None, pre_transform=None): super(STDataset, self).__init__(root, transform, pre_transform) self.data, self.slices = torch.load(self.processed_paths[0]) @property def raw_file_names(self): return [] @property def processed_file_names(self): return ['pygdata.pt'] def download(self): pass def process(self): data_list= [] for i in range(sample_size): data = Data(x=torch.tensor(X_all[i],dtype=torch.float),edge_index=edge_index,y=torch.FloatTensor(y_all[i])) data_list.append(data) data, slices = self.collate(data_list) torch.save((data, slices), self.processed_paths[0]) dataset = STDataset(root=save_dir) train_dataset = dataset[:len(X_train)] val_dataset = dataset[len(X_train):(len(X_train)+len(X_val))] test_dataset = dataset[(len(X_train)+len(X_val)):] # Build network from torch_geometric.nn import GCNConv, GATConv, TopKPooling, global_max_pool, global_mean_pool from torch.nn import Flatten, Linear, ELU import torch.nn.functional as F class GCN(torch.nn.Module): def __init__(self): super(GCN, self).__init__() self.conv1 = GCNConv(in_channels = feature_num, out_channels = 32) self.conv2 = GCNConv(in_channels = 32, out_channels = 32) self.fc1 = Flatten() # self.ln1 = Linear(in_features = batch_size*N*32, out_features = 512) self.ln1 = Linear(in_features = 32, out_features = 32) self.ln2 = Linear(in_features = 32, out_features = 1) def forward(self,x,edge_index,batch): # x, edge_index, batch = data.x, data.edge_index, data.batch # print(np.shape(x),np.shape(edge_index),np.shape(batch)) x = F.elu(self.conv1(x,edge_index)) # x = x.squeeze(1) x = F.elu(self.conv2(x,edge_index)) print(np.shape(x)) x = self.fc1(x) # x = torch.flatten(x,0) # x = torch.cat([global_max_pool(x,batch),global_mean_pool(x,batch)],dim=1) print(np.shape(x)) x = self.ln1(x) x = F.relu(x) ## Dropout? 
print("o") x = torch.sigmoid(self.ln2(x)) return x # training def train(): model.train() loss_all=0 correct = 0 for i, data in enumerate(train_loader, 0): data = data.to(device) optimizer.zero_grad() output = model(data.x, data.edge_index,data.batch) label = data.y.to(device) loss = loss_func(output, label) loss.backward() loss_all += loss.item() output = output.detach().cpu().numpy().squeeze() label = label.detach().cpu().numpy().squeeze() correct += (abs(output-label)<0.5).sum() optimizer.step() return loss_all / len(train_dataset), correct / len(train_dataset) device = torch.device('cuda') model = GCN().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) loss_func = torch.nn.BCELoss() # binary cross-entropy train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle = True) val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle = True) for epoch in range(num_epochs): gc.collect() train_loss, train_acc = train() Error message for using torch.nn.Flatten(start_dim = 1) (code above): ValueError Traceback (most recent call last) <ipython-input-42-c96e8b058742> in <module> 65 for epoch in range(num_epochs): 66 gc.collect() ---> 67 train_loss, train_acc = train() <ipython-input-42-c96e8b058742> in train() 10 output = model(data.x, data.edge_index,data.batch) 11 label = data.y.to(device) ---> 12 loss = loss_func(output, label) 13 loss.backward() 14 loss_all += loss.item() ~/miniconda3/envs/ST-Torch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/miniconda3/envs/ST-Torch/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 496 497 def forward(self, input, target): --> 498 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) 499 500 ~/miniconda3/envs/ST-Torch/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2068 if input.numel() != target.numel(): 2069 raise ValueError("Target and input must have the same number of elements. target nelement ({}) " -> 2070 "!= input nelement ({})".format(target.numel(), input.numel())) 2071 2072 if weight is not None: ValueError: Target and input must have the same number of elements. target nelement (4) != input nelement (3584)
The shape you are asking for, batch_size*node_num, attribute_num, is unusual. Usually it should be batch_size, node_num*attribute_num, as you need to match the input to the output. And Flatten in PyTorch does exactly that. If what you want really is batch_size*node_num, attribute_num, then you are left with only reshaping the tensor using view or reshape. And actually Flatten itself just calls .reshape. tensor.view: this will reshape the existing tensor to a new shape; if you edit this new tensor the old one will change too. tensor.reshape: this will create a new tensor using the data from the old tensor but with the new shape. def forward(self,x,edge_index,batch): x = F.elu(self.conv1(x,edge_index)) x = F.elu(self.conv2(x,edge_index)) # print(np.shape(x)) # don't use this print(x.size()) # use this # x = self.fc1(x) # this is the old one ## choose one of these x = x.view(4*896, 32) x = x.reshape(4*896, 32) # print(np.shape(x)) # don't use this print(x.size()) # use this x = self.ln1(x) x = F.relu(x) ## Dropout? print("o") x = torch.sigmoid(self.ln2(x)) return x Edit 2 reshape Let's say we have an array of [[[1, 1, 1], [2, 2, 2]]], which has shape (1, 2, 3) and represents (batch, length, channel) in TensorFlow. If you want to use this data properly in PyTorch you need to make it (batch, channel, length), which is (1, 3, 2). Here's the difference between permute and reshape: >>> x = torch.tensor([[[1, 1, 1], [2, 2, 2]]]) >>> x.size() torch.Size([1, 2, 3]) >>> x[0, 0, :] tensor([1, 1, 1]) >>> y = x.reshape((1, 3, 2)) >>> y tensor([[[1, 1], [1, 2], [2, 2]]]) >>> y[0, :, 0] tensor([1, 1, 2]) >>> z = x.permute(0, 2, 1) >>> z tensor([[[1, 2], [1, 2], [1, 2]]]) >>> z[0, :, 0] tensor([1, 1, 1]) As you can see, the first channel of both x and z is [1, 1, 1], which is what we want, while y's is [1, 1, 2].
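For completeness, a sketch of going from the PyG-style stacked layout [batch_size*node_num, attribute_num] to one row per graph, assuming every graph has the same fixed node_num (896 here, as in the question); the Linear layer that follows would then need in_features = node_num * attribute_num:

import torch

batch_size, node_num, attribute_num = 4, 896, 32
x = torch.randn(batch_size * node_num, attribute_num)  # stacked nodes, PyG-style

x = x.view(batch_size, node_num * attribute_num)       # one flat row per graph
print(x.size())  # torch.Size([4, 28672])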
https://stackoverflow.com/questions/67469355/
Pytorch: a similar process to reverse pooling and replicate padding?
I have a tensor A that has shape (batch_size, width, height). Assume that it has these values: A = torch.tensor([[[0, 1], [1, 0]]]) I am also given a number K that is a positive integer. Let K=2 in this case. I want to do a process that is similar to reverse pooling and replicate padding. This is the expected output: B = torch.tensor([[[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]]) Explanation: for each element in A, we expand it to the matrix of shape (K, K), and put it in the result tensor. We continue to do this with other elements, and let the stride between them equals to the kernel size (that is, K). How can I do this in PyTorch? Currently, A is a binary mask, but it could be better if I can expand it to non-binary case.
Square expansion You can get your desired output by expanding twice: def dilate(t, k): x = t.squeeze() x = x.unsqueeze(-1).expand([*x.shape,k]) x = x.unsqueeze(-1).expand([*x.shape,k]) x = torch.cat([*x], dim=1) x = torch.cat([*x], dim=1) x = x.unsqueeze(0) return x B = dilate(A, K) Resizing / interpolating nearest If you don't mind corners potentially 'bleeding' in larger expansions (since it uses Euclidean as opposed to Manhattan distance when determining 'nearest' points to interpolate), a simpler method is to just resize: import torchvision.transforms.functional as F B = F.resize(A, A.shape[-1]*K) For completeness: MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
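A compact alternative, assuming A has shape (batch, H, W): torch.Tensor.repeat_interleave duplicates each row and then each column K times, reproducing the expected B exactly without mixing batch elements.

import torch

A = torch.tensor([[[0, 1], [1, 0]]])
K = 2

B = A.repeat_interleave(K, dim=1).repeat_interleave(K, dim=2)
print(B)
# tensor([[[0, 0, 1, 1],
#          [0, 0, 1, 1],
#          [1, 1, 0, 0],
#          [1, 1, 0, 0]]])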
https://stackoverflow.com/questions/67472107/
Using PyTorch's autograd efficiently with tensors by calculating the Jacobian
In my previous question I found how to use PyTorch's autograd with tensors: import torch from torch.autograd import grad import torch.nn as nn import torch.optim as optim class net_x(nn.Module): def __init__(self): super(net_x, self).__init__() self.fc1=nn.Linear(1, 20) self.fc2=nn.Linear(20, 20) self.out=nn.Linear(20, 4) #a,b,c,d def forward(self, x): x=torch.tanh(self.fc1(x)) x=torch.tanh(self.fc2(x)) x=self.out(x) return x nx = net_x() #input t = torch.tensor([1.0, 2.0, 3.2], requires_grad = True) #input vector t = torch.reshape(t, (3,1)) #reshape for batch #method dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t) dx = torch.diagonal(torch.diagonal(dx, 0, -1), 0)[0] #first vector #dx = torch.diagonal(torch.diagonal(dx, 1, -1), 0)[0] #2nd vector #dx = torch.diagonal(torch.diagonal(dx, 2, -1), 0)[0] #3rd vector #dx = torch.diagonal(torch.diagonal(dx, 3, -1), 0)[0] #4th vector dx >>> tensor([-0.0142, -0.0517, -0.0634]) The issue is that grad only knows how to propagate gradients from a scalar tensor (which my network's output is not), which is why I had to calculate the Jacobian. However, this is not very efficient and a bit slow as my matrix is large and calculating the entire Jacobian takes a while (and I'm also not using the entire Jacobian matrix). Is there a way to calculate only the diagonals of the Jacobian (to get the 4 vectors in this example)? There appears to be an open feature request but it doesn't appear to have gotten much attention. Update 1: I tried what @iacob said about setting torch.autograd.functional.jacobian(vectorize=True). However, this seems to be slower. To test this I changed my network output from 4 to 400, and my input t to be: val = 100 t = torch.rand(val, requires_grad = True) #input vector t = torch.reshape(t, (val,1)) #reshape for batch Without vectorized = True: Wall time: 10.4 s With: Wall time: 14.6 s
OK, results first: The performance (My laptop has an RTX-2070 and PyTorch is using it): # Method 1: Use the jacobian function CPU times: user 34.6 s, sys: 165 ms, total: 34.7 s Wall time: 5.8 s # Method 2: Sample with appropriate vectors CPU times: user 1.11 ms, sys: 0 ns, total: 1.11 ms Wall time: 191 µs It's about 30000x faster. Why should you use backward instead of jacobian (in your case) I'm not a pro with PyTorch. But, according to my experience, it's pretty inefficient to calculate the jacobi-matrix if you do not need all the elements in it. If you only need the diagonal elements, you can use backward function to calculate vector-jacobian multiplication with some specific vectors. If you set the vectors correctly, you can sample/extract specific elements from the Jacobi matrix. A little linear algebra: j = np.array([[1,2],[3,4]]) # 2x2 jacobi you want sv = np.array([[1],[0]]) # 2x1 sampling vector first_diagonal_element = sv.T.dot(j).dot(sv) # it's j[0, 0] It's not that powerful for this simple case. But if PyTorch needs to calculate all jacobians along the way (j could be the result of a long sequence of matrix-matrix multiplications), it would be way too slow. In contrast, if we calculate a sequence of vector-jacobian multiplications, the computation would be super fast. Solution Sample elements from jacobian: import torch from torch.autograd import grad import torch.nn as nn import torch.optim as optim class net_x(nn.Module): def __init__(self): super(net_x, self).__init__() self.fc1=nn.Linear(1, 20) self.fc2=nn.Linear(20, 20) self.out=nn.Linear(20, 400) #a,b,c,d def forward(self, x): x=torch.tanh(self.fc1(x)) x=torch.tanh(self.fc2(x)) x=self.out(x) return x nx = net_x() #input val = 100 a = torch.rand(val, requires_grad = True) #input vector t = torch.reshape(a, (val,1)) #reshape for batch #method %time dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t) dx = torch.diagonal(torch.diagonal(dx, 0, -1), 0)[0] #first vector #dx = torch.diagonal(torch.diagonal(dx, 1, -1), 0)[0] #2nd vector #dx = torch.diagonal(torch.diagonal(dx, 2, -1), 0)[0] #3rd vector #dx = torch.diagonal(torch.diagonal(dx, 3, -1), 0)[0] #4th vector print(dx) out = nx(t) m = torch.zeros((val,400)) m[:, 0] = 1 %time out.backward(m) print(a.grad) a.grad should be equal to the first tensor dx. And, m is the sampling vector that corresponds to what is called the "first vector" in your code. but if I run it again the answer will change. Yeah, you've already figured it out. The gradients accumulate every time you call backward. So you have to set a.grad to zero first if you have to run that cell multiple times. can you explain the idea behind the m method? Both using the torch.zeros and setting the column to 1. Also, how come the grad is on a rather than on t? The idea behind the m method is: what the function backward calculates is actually a vector-jacobian multiplication, where the vector represents the so-called "upstream gradient" and the Jacobi-matrix is the "local gradient" (and this jacobian is also the one you get with the jacobian function, since your lambda could be viewed as a single "local" operation). If you need some elements from the jacobian, you can fake (or, more precisely, construct) some "upstream gradient", with which you can extract specific entries from the jacobian. However, sometimes these upstream gradients might be hard to find (for me at least) if complicated tensor operations are involved. PyTorch accumulates gradients on leaf nodes of the computational graph. 
And, your original line of code t = torch.reshape(t, (3,1)) loses handle to the leaf node, and t refers now to an intermediate node instead of a leaf node. In order to have access to the leaf node, I created the tensor a.
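Continuing the variables from the snippet above, a short sketch of sampling the second row of the Jacobian after clearing the accumulated leaf gradient:

a.grad = None        # or a.grad.zero_(); otherwise gradients accumulate across backward calls

out = nx(t)
m = torch.zeros((val, 400))
m[:, 1] = 1          # upstream gradient selecting the "2nd vector"
out.backward(m)
print(a.grad)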
https://stackoverflow.com/questions/67472361/
Training Stops After a While in the GRU Layer (PyTorch)
I use my custom dataset class to convert audio files to mel-Spectrogram images. the shape will be padded to (128,1024). I have 10 classes. after a while of training in the first epoch, my network will be crashed inside the hidden layer in GRU shapes due to this error: Current run is terminating due to exception: Expected hidden size (1, 7, 32), got [1, 16, 32] Engine run is terminating due to exception: Expected hidden size (1, 7, 32), got [1, 16, 32] Traceback (most recent call last): File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3418, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-b8f3a45f8e35>", line 1, in <module> runfile('/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py', wdir='/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools') File "/home/omid/OMID/program/pycharm-professional-2020.2.4/pycharm-2020.2.4/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/home/omid/OMID/program/pycharm-professional-2020.2.4/pycharm-2020.2.4/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 60, in <module> main() File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 56, in main train(cfg) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 35, in train do_train( File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/engine/trainer.py", line 79, in do_train trainer.run(train_loader, max_epochs=epochs) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 702, in run return self._internal_run() File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 775, in _internal_run self._handle_exception(e) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception raise e File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 745, in _internal_run time_taken = self._run_once_on_dataset() File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 850, in _run_once_on_dataset self._handle_exception(e) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception raise e File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 833, in _run_once_on_dataset self.state.output = self._process_function(self, self.state.batch) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/__init__.py", line 103, in _update y_pred = model(x) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/modeling/model.py", line 113, in forward x, h1 = self.gru1(x, h0) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File 
"/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 819, in forward self.check_forward_args(input, hx, batch_sizes) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 229, in check_forward_args self.check_hidden_size(hidden, expected_hidden_size) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 223, in check_hidden_size raise RuntimeError(msg.format(expected_hidden_size, list(hx.size()))) RuntimeError: Expected hidden size (1, 7, 32), got [1, 16, 32] My Network Is: import torch import torch.nn as nn import torch.nn.functional as F print('cuda', torch.cuda.is_available()) class MusicClassification(nn.Module): def __init__(self, cfg): super(MusicClassification, self).__init__() device = cfg.MODEL.DEVICE num_class = cfg.MODEL.NUM_CLASSES self.np_layers = 4 self.np_filters = [64, 128, 128, 128] self.kernel_size = (3, 3) self.pool_size = [(2, 2), (4, 2)] self.channel_axis = 1 self.frequency_axis = 2 self.time_axis = 3 # self.h0 = torch.zeros((1, 16, 32)).to(device) self.bn0 = nn.BatchNorm2d(num_features=self.channel_axis) self.bn1 = nn.BatchNorm2d(num_features=self.np_filters[0]) self.bn2 = nn.BatchNorm2d(num_features=self.np_filters[1]) self.bn3 = nn.BatchNorm2d(num_features=self.np_filters[2]) self.bn4 = nn.BatchNorm2d(num_features=self.np_filters[3]) self.conv1 = nn.Conv2d(1, self.np_filters[0], kernel_size=self.kernel_size) self.conv2 = nn.Conv2d(self.np_filters[0], self.np_filters[1], kernel_size=self.kernel_size) self.conv3 = nn.Conv2d(self.np_filters[1], self.np_filters[2], kernel_size=self.kernel_size) self.conv4 = nn.Conv2d(self.np_filters[2], self.np_filters[3], kernel_size=self.kernel_size) self.max_pool_2_2 = nn.MaxPool2d(self.pool_size[0]) self.max_pool_4_2 = nn.MaxPool2d(self.pool_size[1]) self.drop_01 = nn.Dropout(0.1) self.drop_03 = nn.Dropout(0.3) self.gru1 = nn.GRU(input_size=128, hidden_size=32, batch_first=True) self.gru2 = nn.GRU(input_size=32, hidden_size=32, batch_first=True) self.activation = nn.ELU() self.dense = nn.Linear(32, num_class) self.softmax = nn.LogSoftmax(dim=1) def forward(self, x): # x [16, 1, 128,938] x = self.bn0(x) # x [16, 1, 128,938] x = F.pad(x, (0, 0, 2, 1)) # x [16, 1, 131,938] x = self.conv1(x) # x [16, 64, 129,936] x = self.activation(x) # x [16, 64, 129,936] x = self.bn1(x) # x [16, 64, 129,936] x = self.max_pool_2_2(x) # x [16, 64, 64,468] x = self.drop_01(x) # x [16, 64, 64,468] x = F.pad(x, (0, 0, 2, 1)) # x [16, 64, 67,468] x = self.conv2(x) # x [16, 128, 65,466] x = self.activation(x) # x [16, 128, 65,466] x = self.bn2(x) # x [16, 128, 65,455] x = self.max_pool_4_2(x) # x [16, 128, 16,233] x = self.drop_01(x) # x [16, 128, 16,233] x = F.pad(x, (0, 0, 2, 1)) # x [16, 128, 19,233] x = self.conv3(x) # x [16, 128, 17,231] x = self.activation(x) # x [16, 128, 17,231] x = self.bn3(x) # x [16, 128, 17,231] x = self.max_pool_4_2(x) # x [16, 128, 4,115] x = self.drop_01(x) # x [16, 128, 4,115] x = F.pad(x, (0, 0, 2, 1)) # x [16, 128, 7,115] x = self.conv4(x) # x [16, 128, 5,113] x = self.activation(x) # x [16, 128, 5,113] x = self.bn4(x) # x [16, 128, 5,113] x = self.max_pool_4_2(x) # x [16, 128, 1,56] x = self.drop_01(x) # x [16, 128, 1,56] x = x.permute(0, 3, 1, 2) # x [16, 56, 128,1] resize_shape = list(x.shape)[2] * list(x.shape)[3] # x [16, 128, 56,1], reshape size is 128 x = torch.reshape(x, (list(x.shape)[0], list(x.shape)[1], resize_shape)) # x [16, 56, 128] device = torch.device("cuda" if 
torch.cuda.is_available() else "cpu") h0 = torch.zeros((1, 16, 32)).to(device) x, h1 = self.gru1(x, h0) # x [16, 56, 32] x, _ = self.gru2(x, h1) # x [16, 56, 32] x = x[:, -1, :] x = self.dense(x) # x [16,10] x = self.softmax(x) # x [16, 10] # x = torch.argmax(x, 1) return x My Dataset is: from __future__ import print_function, division import os import librosa import matplotlib.pyplot as plt import numpy as np import torch import torchaudio from sklearn.preprocessing import OneHotEncoder, LabelEncoder from torch.utils.data import Dataset from utils.util import pad_along_axis print(torch.__version__) print(torchaudio.__version__) # Ignore warnings import warnings warnings.filterwarnings("ignore") plt.ion() import pathlib print(pathlib.Path().absolute()) class GTZANDataset(Dataset): def __init__(self, genre_folder='/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/data/dataset/genres_original', one_hot_encoding=False, sr=16000, n_mels=128, n_fft=2048, hop_length=512, transform=None): self.genre_folder = genre_folder self.one_hot_encoding = one_hot_encoding self.audio_address, self.labels = self.extract_address() self.sr = sr self.n_mels = n_mels self.n_fft = n_fft self.transform = transform self.le = LabelEncoder() self.hop_length = hop_length def __len__(self): return len(self.labels) def __getitem__(self, index): address = self.audio_address[index] y, sr = librosa.load(address, sr=self.sr) S = librosa.feature.melspectrogram(y, sr=sr, n_mels=self.n_mels, n_fft=self.n_fft, hop_length=self.hop_length) sample = librosa.amplitude_to_db(S, ref=1.0) sample = np.expand_dims(sample, axis=0) sample = pad_along_axis(sample, 1024, axis=2) # print(sample.shape) sample = torch.from_numpy(sample) label = self.labels[index] # label = torch.from_numpy(label) print(sample.shape,label) if self.transform: sample = self.transform(sample) return sample, label def extract_address(self): label_map = { 'blues': 0, 'classical': 1, 'country': 2, 'disco': 3, 'hiphop': 4, 'jazz': 5, 'metal': 6, 'pop': 7, 'reggae': 8, 'rock': 9 } labels = [] address = [] # extract all genres' folders genres = [path for path in os.listdir(self.genre_folder)] for genre in genres: # e.g. 
./data/generes_original/country genre_path = os.path.join(self.genre_folder, genre) # extract all sounds from genre_path songs = os.listdir(genre_path) for song in songs: song_path = os.path.join(genre_path, song) genre_id = label_map[genre] # one_hot_targets = torch.eye(10)[genre_id] labels.append(genre_id) address.append(song_path) samples = np.array(address) labels = np.array(labels) # convert labels to one-hot encoding # if self.one_hot_encoding: # labels = OneHotEncoder(sparse=False).fit_transform(labels) # else: # labels = LabelEncoder().fit_transform(labels) return samples, labels and trainer : # encoding: utf-8 import logging from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.handlers import ModelCheckpoint, Timer from ignite.metrics import Accuracy, Loss, RunningAverage def do_train( cfg, model, train_loader, val_loader, optimizer, scheduler, loss_fn, ): log_period = cfg.SOLVER.LOG_PERIOD checkpoint_period = cfg.SOLVER.CHECKPOINT_PERIOD output_dir = cfg.OUTPUT_DIR device = cfg.MODEL.DEVICE epochs = cfg.SOLVER.MAX_EPOCHS model = model.to(device) logger = logging.getLogger("template_model.train") logger.info("Start training") trainer = create_supervised_trainer(model, optimizer, loss_fn, device=device) evaluator = create_supervised_evaluator(model, metrics={'accuracy': Accuracy(), 'ce_loss': Loss(loss_fn)}, device=device) checkpointer = ModelCheckpoint(output_dir, 'mnist', None, n_saved=10, require_empty=False) timer = Timer(average=True) trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpointer, {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}) timer.attach(trainer, start=Events.EPOCH_STARTED, resume=Events.ITERATION_STARTED, pause=Events.ITERATION_COMPLETED, step=Events.ITERATION_COMPLETED) RunningAverage(output_transform=lambda x: x).attach(trainer, 'avg_loss') @trainer.on(Events.ITERATION_COMPLETED) def log_training_loss(engine): iter = (engine.state.iteration - 1) % len(train_loader) + 1 if iter % log_period == 0: logger.info("Epoch[{}] Iteration[{}/{}] Loss: {:.2f}" .format(engine.state.epoch, iter, len(train_loader), engine.state.metrics['avg_loss'])) @trainer.on(Events.EPOCH_COMPLETED) def log_training_results(engine): evaluator.run(train_loader) metrics = evaluator.state.metrics avg_accuracy = metrics['accuracy'] avg_loss = metrics['ce_loss'] logger.info("Training Results - Epoch: {} Avg accuracy: {:.3f} Avg Loss: {:.3f}" .format(engine.state.epoch, avg_accuracy, avg_loss)) if val_loader is not None: @trainer.on(Events.EPOCH_COMPLETED) def log_validation_results(engine): evaluator.run(val_loader) metrics = evaluator.state.metrics avg_accuracy = metrics['accuracy'] avg_loss = metrics['ce_loss'] logger.info("Validation Results - Epoch: {} Avg accuracy: {:.3f} Avg Loss: {:.3f}" .format(engine.state.epoch, avg_accuracy, avg_loss) ) # adding handlers using `trainer.on` decorator API @trainer.on(Events.EPOCH_COMPLETED) def print_times(engine): logger.info('Epoch {} done. Time per batch: {:.3f}[s] Speed: {:.1f}[samples/s]' .format(engine.state.epoch, timer.value() * timer.step_count, train_loader.batch_size / timer.value())) timer.reset() trainer.run(train_loader, max_epochs=epochs)
Errors like this are usually due to your data changing in some unexpected way, as the model is fixed and (as you said) working until a point. I think your error comes from this line in your model.forward() call: h0 = torch.zeros((1, 16, 32)).to(device) As this is hard-coded to a batch size of 16, it fails when your dataloader passes a batch of size 7 at some stage (the last, partial batch of the epoch). There are two ways to fix this. One is to set the h0 batch size dynamically: h0 = torch.zeros((1, x.size(0), 32), device=x.device) The other is to pass drop_last=True when creating your dataloader.
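The drop_last route is a one-argument change when creating the loader; it simply discards the final, smaller batch (the size-7 one that triggered the mismatch). A sketch, assuming a standard torch.utils.data.DataLoader over your existing train_dataset:

from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True, drop_last=True)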
https://stackoverflow.com/questions/67476115/
Assign value to sub-tensor in Pytorch
Suppose I have a PyTorch tensor t of shape (3, 3, 3, 3), and I want to set t[0, 1, x, y] to z for all x and y. I noticed all the following syntaxes work: t[0, 1] = z t[0, 1, :] = z t[0, 1, :, :] = z t[0, 1, ...] = z Is there any difference in terms of how things are executed under the hood? Is any one of these methods preferred over the others?
There is a subtle difference in how : and ... are translated to C++ under the hood. The one-to-one translation between Python and C++ index types (assuming using namespace torch::indexing) is as follows:

Python: None     -> C++: None
Python: Ellipsis -> C++: Ellipsis
Python: ...      -> C++: "..."
Python: : or ::  -> C++: Slice() or Slice(None, None) or Slice(None, None, None)

Practically this has no effect on performance, however. Use whichever style is most readable in your context.
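A quick check that all four spellings produce identical results:

import torch

z = 7.0
tensors = [torch.zeros(3, 3, 3, 3) for _ in range(4)]
tensors[0][0, 1] = z
tensors[1][0, 1, :] = z
tensors[2][0, 1, :, :] = z
tensors[3][0, 1, ...] = z

print(all(torch.equal(tensors[0], t) for t in tensors[1:]))  # True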
https://stackoverflow.com/questions/67479388/
Python pytorch function consumes memory excessively quickly
I am writing a function with pytorch that feeds inputs through a transformer model, then condenses the last embedding layer by computing the average along a specific axis (using a subset of indices defined by a mask). Since the output of the model is very very large, I need to process the inputs in batches. My question does not concern the logic of this function as I believe I have the correct implementation. My issue is that the function I wrote consumes memory excessively quickly and practically makes it unusable. Here is my function: def get_chunk_embeddings(encoded_dataset, batch_size): chunk_embeddings = torch.empty([0,768]) for i in range(len(encoded_dataset['input_ids'])//batch_size): input_ids = encoded_dataset['input_ids'][i*batch_size:i*batch_size + batch_size] attention_mask = encoded_dataset['attention_mask'][i*batch_size:i*batch_size + batch_size] embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask)['last_hidden_state'] embeddings = embeddings * attention_mask[:,:,None] embeddings = embeddings.sum(dim=1)/attention_mask.sum(dim=1)[:,None] chunk_embeddings = torch.cat([chunk_embeddings, embeddings],0) return chunk_embeddings Now let's talk memory (the numbers below assume that I pass a batch_size of 8): I am using google colab and I have ~25 GB of RAM available The model is a BERT model and consumes 413 MB encoded_dataset consumes 0.48 GB input_ids consumes 0.413 MB attention_mask consumes 4.096 KB embeddings at its peak consumption consume 12.6 MB chunk_embeddings adds 0.024576 MB with each iteration So from my understanding, I should be able to allow chunk_embeddings to grow up to: 25GB - 413MB - 0.48GB - 0.413MB - 4.096KB - 12.6MB ~= 24 GB. Enough for almost 1 million iterations.. Here I will walk through an example of what I am experiencing: Before running my function, google colab tells me that I have plenty of memory Now, for the sake of example, I will run the function (for only 3 iterations) To be explicit, I put this at the end of my for loop: if (i == 2):return chunk_embeddings Now I run the code val = get_chunk_embeddings(train_encoded_dataset, 8) So even with just 3 iterations, somehow I consume almost 5.5 GB of RAM. Why is this happening? Also after I have returned from the function all of the local variables should be deleted and there is no way that val is so large. Could someone tell me what it is I am doing wrong or am not understanding? Please let me know if further information is needed.
To expand upon @GoodDeeds' answer, by default the computations within a torch.nn module (model) build the computation graph and preserve the values needed for the gradient (unless you are using torch.no_grad() or something similar). That means that at every iteration of your loop, the computational graph for embeddings is stored within the tensor embeddings. This graph, with its saved intermediate activations, is likely much larger than embeddings itself, because every layer's values needed for the backward pass are maintained. Next, since you use torch.cat, you append embeddings and the associated graph to chunk_embeddings. That means that after a few iterations, chunk_embeddings stores a huge amount of gradient bookkeeping, which is where your memory is going. There are a few solutions: If you need to use chunk embeddings for backpropagation (i.e. training), you should move your loss calculation and optimizer step within the loop, such that the gradients are automatically cleared afterwards. If this function is only used during inference, you can disable the gradient computation entirely (which should also speed computation slightly) using torch.no_grad(), or you can use .detach() on embeddings at each iteration as suggested in the comments. Example: def get_chunk_embeddings(encoded_dataset, batch_size): with torch.no_grad(): chunk_embeddings = torch.empty([0,768]) for i in range(len(encoded_dataset['input_ids'])//batch_size): input_ids = encoded_dataset['input_ids'][i*batch_size:i*batch_size + batch_size] attention_mask = encoded_dataset['attention_mask'][i*batch_size:i*batch_size + batch_size] embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask)['last_hidden_state'] embeddings = embeddings * attention_mask[:,:,None] embeddings = embeddings.sum(dim=1)/attention_mask.sum(dim=1)[:,None] chunk_embeddings = torch.cat([chunk_embeddings, embeddings],0) return chunk_embeddings
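If wrapping the whole loop in torch.no_grad() is not an option (say, part of the computation does need gradients), the detach() variant is a one-line change inside the loop: it keeps the values but drops the attached graph before the tensor is stored.

# inside the loop, just before the concatenation:
embeddings = embeddings.detach()
chunk_embeddings = torch.cat([chunk_embeddings, embeddings], 0)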
https://stackoverflow.com/questions/67480290/
PyTorch: Why create multiple instances of the same type of layer?
This code is from PyTorch transformer: self.linear1 = Linear(d_model, dim_feedforward, **factory_kwargs) self.dropout = Dropout(dropout) self.linear2 = Linear(dim_feedforward, d_model, **factory_kwargs) self.norm1 = LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs) self.norm2 = LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs) self.norm3 = LayerNorm(d_model, eps=layer_norm_eps, **factory_kwargs) self.dropout1 = Dropout(dropout) self.dropout2 = Dropout(dropout) self.dropout3 = Dropout(dropout) Why do they add self.dropout1, ...2, ...3 when self.dropout already exists and is the exact same function? Also, what is the difference between (self.linear1, self.linear2) and self.linear?
They are separate instances because each application site in the network gets its own module object. For the Linear layers this is essential: self.linear1 and self.linear2 each hold their own weights, so they cannot be a single shared instance (unless you deliberately want tied weights). Dropout is different: it has no parameters, so a single instance could in principle be reused everywhere dropout is applied and the behaviour would be identical. Creating dropout1, dropout2, dropout3 is mainly a matter of clarity and flexibility: each site is named explicitly, can later be given its own dropout probability, and can be hooked or disabled independently.
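A sketch making the distinction concrete: reusing one Dropout instance behaves identically to separate instances, whereas each Linear must be its own instance because it owns weights.

import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model=512, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)          # stateless: safe to reuse
        self.linear1 = nn.Linear(d_model, d_model)  # owns its own weights
        self.linear2 = nn.Linear(d_model, d_model)  # separate weights from linear1

    def forward(self, x):
        x = self.dropout(self.linear1(x))  # the same Dropout instance applied twice
        x = self.dropout(self.linear2(x))  # is equivalent to using dropout1/dropout2
        return x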
https://stackoverflow.com/questions/67480473/
TensorFlow equivalent of PyTorch's transforms.Normalize()
I'm trying to inference a TFLite model that was originally built in PyTorch. I have been following along the lines of the PyTorch implementation and have to preprocess images along the RGB channels. I found the closest TensorFlow equivalent of transforms.Normalize() to be tf.image.per_image_standardization() (documentation). Although this is a pretty good match, tf.image.per_image_standardization() does this by taking mean and std across the channels and applies it to them. Here's their full implementation from here def per_image_standardization(image): """Linearly scales `image` to have zero mean and unit norm. This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average of all values in image, and `adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`. `stddev` is the standard deviation of all values in `image`. It is capped away from zero to protect against division by 0 when handling uniform images. Args: image: 3-D tensor of shape `[height, width, channels]`. Returns: The standardized image with same shape as `image`. Raises: ValueError: if the shape of 'image' is incompatible with this function. """ image = ops.convert_to_tensor(image, name='image') _Check3DImage(image, require_static=False) num_pixels = math_ops.reduce_prod(array_ops.shape(image)) image = math_ops.cast(image, dtype=dtypes.float32) image_mean = math_ops.reduce_mean(image) variance = (math_ops.reduce_mean(math_ops.square(image)) - math_ops.square(image_mean)) variance = gen_nn_ops.relu(variance) stddev = math_ops.sqrt(variance) # Apply a minimum normalization that protects us against uniform images. min_stddev = math_ops.rsqrt(math_ops.cast(num_pixels, dtypes.float32)) pixel_value_scale = math_ops.maximum(stddev, min_stddev) pixel_value_offset = image_mean image = math_ops.subtract(image, pixel_value_offset) image = math_ops.div(image, pixel_value_scale) return image whereas PyTorch's transforms.Normalize() allows us to mention the mean and std to be applied across each channel like below. # transformation pose_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) What would be a way to get this functionality in TensorFlow 2.x? Edit: I created a quick botch that seems to solve this by defining a function as such: def normalize_image(image, mean, std): for channel in range(3): image[:,:,channel] = (image[:,:,channel] - mean[channel])/std[channel] return image I'm not sure how efficient this is but seems to get the job done. I still have to convert the output to a tensor before inputing to the model.
The workaround that you mentioned seems ok. But using a for loop to normalize each RGB channel of a single image can be a bit problematic when you deal with a large dataset in the data pipeline (generator or tf.data). But it's ok anyway. Here is a demonstration of your approach, followed by two possible alternatives that might work for you easily. from PIL import Image from matplotlib.pyplot import imshow, subplot, title, hist # load image (RGB) img = Image.open('/content/9.jpg') def normalize_image(image, mean, std): for channel in range(3): image[:,:,channel] = (image[:,:,channel] - mean[channel]) / std[channel] return image OP_approach = normalize_image(np.array(img) / 255.0, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) Now, let's observe the transform properties afterward. plt.figure(figsize=(25,10)) subplot(121); imshow(OP_approach); title(f'Normalized Image \n min-px: \ {OP_approach.min()} \n max-pix: {OP_approach.max()}') subplot(122); hist(OP_approach.ravel(), bins=50, density=True); \ title('Histogram - pixel distribution') The minimum and maximum pixel values after normalization are (-2.1179039301310043, 2.6399999999999997) respectively. Option 2 We can use the tf.keras Normalization preprocessing layer to do the same. It takes two important arguments: mean and variance (the square of the std). from tensorflow.keras.experimental.preprocessing import Normalization input_data = np.array(img)/255 layer = Normalization(mean=[0.485, 0.456, 0.406], variance=[np.square(0.229), np.square(0.224), np.square(0.225)]) plt.figure(figsize=(25,10)) subplot(121); imshow(layer(input_data).numpy()); title(f'Normalized Image \n min-px: \ {layer(input_data).numpy().min()} \n max-pix: {layer(input_data).numpy().max()}') subplot(122); hist(layer(input_data).numpy().ravel(), bins=50, density=True);\ title('Histogram - pixel distribution') The minimum and maximum pixel values after normalization are (-2.1179039, 2.64) respectively. Option 3 This is more like subtracting the average mean and dividing by the average std. norm_img = ((tf.cast(np.array(img), tf.float32) / 255.0) - 0.449) / 0.226 plt.figure(figsize=(25,10)) subplot(121); imshow(norm_img.numpy()); title(f'Normalized Image \n min-px: \ {norm_img.numpy().min()} \n max-pix: {norm_img.numpy().max()}') subplot(122); hist(norm_img.numpy().ravel(), bins=50, density=True); \ title('Histogram - pixel distribution') The minimum and maximum pixel values after normalization are (-1.9867257, 2.4380531) respectively. Lastly, if we compare to the PyTorch way, there is not that much difference among these approaches. import torchvision.transforms as transforms transform_norm = transforms.Compose([ transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) norm_pt = transform_norm(img) plt.figure(figsize=(25,10)) subplot(121); imshow(np.array(norm_pt).transpose(1, 2, 0));\ title(f'Normalized Image \n min-px: \ {np.array(norm_pt).min()} \n max-pix: {np.array(norm_pt).max()}') subplot(122); hist(np.array(norm_pt).ravel(), bins=50, density=True); \ title('Histogram - pixel distribution') The minimum and maximum pixel values after normalization are (-2.117904, 2.64) respectively.
https://stackoverflow.com/questions/67480507/
Why is using the GPU slower than using the CPU?
Consider the following network: %%time import torch from torch.autograd import grad import torch.nn as nn import torch.optim as optim class net_x(nn.Module): def __init__(self): super(net_x, self).__init__() self.fc1=nn.Linear(1, 20) self.fc2=nn.Linear(20, 20) self.out=nn.Linear(20, 400) #a,b,c,d def forward(self, x): x=torch.tanh(self.fc1(x)) x=torch.tanh(self.fc2(x)) x=self.out(x) return x nx = net_x() #input val = 100 t = torch.rand(val, requires_grad = True) #input vector t = torch.reshape(t, (val,1)) #reshape for batch #method dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t) This outputs CPU times: user 11.1 s, sys: 3.52 ms, total: 11.1 s Wall time: 11.1 s However, when I change to using the GPU with .to(device): %%time import torch from torch.autograd import grad import torch.nn as nn import torch.optim as optim device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') class net_x(nn.Module): def __init__(self): super(net_x, self).__init__() self.fc1=nn.Linear(1, 20) self.fc2=nn.Linear(20, 20) self.out=nn.Linear(20, 400) #a,b,c,d def forward(self, x): x=torch.tanh(self.fc1(x)) x=torch.tanh(self.fc2(x)) x=self.out(x) return x nx = net_x() nx.to(device) #input val = 100 t = torch.rand(val, requires_grad = True) #input vector t = torch.reshape(t, (val,1)).to(device) #reshape for batch #method dx = torch.autograd.functional.jacobian(lambda t_: nx(t_), t) This outputs: CPU times: user 18.6 s, sys: 1.5 s, total: 20.1 s Wall time: 19.5 s Update 1: Checking the timing of the process of moving the inputs and model to device: %%time nx.to(device) t.to(device) This outputs: CPU times: user 2.05 ms, sys: 0 ns, total: 2.05 ms Wall time: 2.13 ms Update 2: Looks like @Gulzar was right. I changed my batch size to 1000 (val=1000) and the CPU outputs: Wall time: 8min 44s While the GPU outputs: Wall time: 3min 12s
A hand-wavy answer: GPUs are "weaker" computers, with many more computing cores than CPUs. Data has to be passed to them from RAM to GPU memory (VRAM) in a "costly" manner, every once in a while, so they can process it. If the data is "large", and the processing can be parallelized on that data, it is likely computing there will be faster. If the data is not "big enough", the cost of transferring the data, or the cost of using weaker cores and synchronizing them, can outweigh the benefit of parallelization. When will the GPU be useful? For larger networks, or for heavier computations, such as convolutions, or larger fully connected layers (larger matrix multiplications). For larger batches - batches are a very easy way to parallelize computation, as they are (almost*) independent. *Almost, as they do need to be synchronized programmatically at some point.
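One practical caveat when benchmarking: CUDA kernel launches are asynchronous, so a fair GPU timing must synchronize before reading the clock. A sketch, assuming a CUDA device is available:

import time
import torch

device = torch.device('cuda')
x = torch.randn(8000, 8000, device=device)

torch.cuda.synchronize()           # wait for setup work to finish
start = time.perf_counter()
y = x @ x                          # a large matmul: the kind of workload GPUs excel at
torch.cuda.synchronize()           # wait for the kernel itself to finish
print(time.perf_counter() - start)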
https://stackoverflow.com/questions/67489202/
Why does Dropout not change my input tensor?
Please see the following code associated with output, import torch import torch.nn as nn inputTensor = torch.tensor([1.0, 2.0, 3, 4, 5]) outplace_dropout = nn.Dropout(p=0.4) print(inputTensor) output_afterDropout = outplace_dropout(inputTensor) print(output_afterDropout) print(inputTensor) The output is: tensor([1., 2., 3., 4., 5.]) tensor([1.6667, 3.3333, 0.0000, 6.6667, 0.0000]) tensor([1., 2., 3., 4., 5.]) Could you please elaborate why the input tensor values are still unchanged?
From the documentation of torch.nn.Dropout, you can see that the inplace argument defaults to False. If you wish to change the input tensor in place, change the initialization to: outplace_dropout = nn.Dropout(p=0.4, inplace=True)
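A sketch of the in-place variant mutating the input (which elements are zeroed is random; survivors are scaled by 1/(1-p)):

import torch
import torch.nn as nn

inputTensor = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
inplace_dropout = nn.Dropout(p=0.4, inplace=True)

inplace_dropout(inputTensor)
print(inputTensor)  # e.g. tensor([1.6667, 3.3333, 0.0000, 6.6667, 0.0000]) - changed in place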
https://stackoverflow.com/questions/67490258/
PyTorch NN : RuntimeError: mat1 dim 1 must match mat2 dim 0
I'm new to the Neural Network domain and I have stuck on a problem. I'm trying to create a NN with dropout with 0.1 probability for the hidden fully connected layer. When I code like below: class ConvNet(torch.nn.Module): def __init__(self): super().__init__() self.layers = torch.nn.Sequential( #layer1 torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=16), torch.nn.ReLU(), #layer2 torch.nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=32), torch.nn.ReLU(), #layer3 torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=32), torch.nn.Flatten(), torch.nn.Linear(32,16), torch.nn.Dropout2d(p=0.1), torch.nn.Linear(16, 2) ) def forward(self, x): return self.layers(x) test_convnet = ConvNet().to('cuda') test_input = torch.randn(16, 3, 100, 100, device='cuda') test_output = test_convnet(test_input) print(test_output.shape) And then I got the error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-27-b5ce1c300266> in <module>() 48 test_convnet = ConvNet().to('cuda') 49 test_input = torch.randn(16, 3, 100, 100, device='cuda') ---> 50 test_output = test_convnet(test_input) 51 print(test_output.shape) 6 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1751 if has_torch_function_variadic(input, weight): 1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias) -> 1753 return torch._C._nn.linear(input, weight, bias) 1754 1755 RuntimeError: mat1 dim 1 must match mat2 dim 0 Thanks in advance for all of your help
In layer 3, the first linear layer has the wrong input size. With a 100x100 input, each conv (kernel 3, stride 1) plus max-pool (kernel 2, stride 3) stage reduces the spatial size 100 -> 33 -> 10 -> 3, so after the third stage the feature map has shape (32, 3, 3) and Flatten produces 32 * 3 * 3 = 288 features: #layer3 torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=32), torch.nn.Flatten(), torch.nn.Linear(288,16), torch.nn.Dropout2d(p=0.1), torch.nn.Linear(16, 2) Corrected line: torch.nn.Linear(288,16) This is the wrong shape: torch.nn.Linear(32,16) Full code: from argparse import ArgumentParser import numpy as np from torch.utils.data import random_split, DataLoader, TensorDataset import torch from torch.autograd import Variable from torchvision import transforms import torch from torch import nn from torch.nn import functional as F from torch.optim import Adam from torch.optim.optimizer import Optimizer class ConvNet(torch.nn.Module): def __init__(self): super().__init__() self.layers = torch.nn.Sequential( #layer1 torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=16), torch.nn.ReLU(), #layer2 torch.nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=32), torch.nn.ReLU(), #layer3 torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1), torch.nn.MaxPool2d(kernel_size=2, stride=3), torch.nn.BatchNorm2d(num_features=32), torch.nn.Flatten(), torch.nn.Linear(288,16), torch.nn.Dropout2d(p=0.1), torch.nn.Linear(16, 2) ) def forward(self, x): return self.layers(x) test_convnet = ConvNet() test_input = torch.randn(16, 3, 100, 100) test_output = test_convnet(test_input) print(test_output.shape)
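If you would rather not hand-compute the flattened size, a quick sketch: run a dummy input through the feature extractor (everything up to and including Flatten, copied from the model above) and read off the width:
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 16, 3, 1), nn.MaxPool2d(2, 3), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, 1), nn.MaxPool2d(2, 3), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 32, 3, 1), nn.MaxPool2d(2, 3), nn.BatchNorm2d(32), nn.Flatten(),
)
with torch.no_grad():
    n_features = features(torch.randn(1, 3, 100, 100)).shape[1]
print(n_features)  # 288, i.e. 32 channels * 3 * 3 spatial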
https://stackoverflow.com/questions/67493202/
Problem with freezing pytorch model - requires_grad is always true
I have tried to freeze part of my model but it does not work. Gradient computation is still enabled for each layer. Is that some sort of bug or am I doing something wrong? :) model = models.resnet18(pretrained=True) # To freeze the residual layers for param in model.parameters(): param.require_grad = False for param in model.fc.parameters(): param.require_grad = True # Replace last layer num_features = model.fc.in_features model.fc = nn.Linear(num_features, 2) model.fc = nn.Dropout(0.5) # Find total parameters and trainable parameters total_params = sum(p.numel() for p in model.parameters()) print(f'{total_params:,} total parameters.') # >>> 21,284,672 total parameters. total_trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'{total_trainable_params:,} training parameters.') # >>> 21,284,672 training parameters.
This is because of a typo. Change require_grad to requires_grad: for param in model.parameters(): param.requires_grad = False for param in model.fc.parameters(): param.requires_grad = True Currently, you are declaring a new attribute for the model and assigning it to True and False as appropriate, so it has no effect.
https://stackoverflow.com/questions/67495100/
How to efficiently multiply by torch tensor with repeated rows without storing all the rows in memory or iterating?
Given a torch tensor: # example tensor size 2 x 4 a = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8]]) and another where every n rows are repeated: # example tensor size 4 x 3 where every 2 rows repeated b = torch.Tensor([[1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6]]) how can one perform matrix multiplication: >>> torch.mm(a, b) tensor([[ 28., 38., 48.], [ 68., 94., 120.]]) without copying the whole repeated row tensor into memory or iterating? i.e. only store the first 2 rows: # example tensor size 2 x 3 where only the first two rows from b are actually stored in memory b_abbreviated = torch.Tensor([[1, 2, 3], [4, 5, 6]]) since these rows will be repeated. There is a function torch.expand(), but this does not work when repeating more than a single row; and also, as this question (Repeating a pytorch tensor without copying memory) indicates, and my own tests confirm, it often ends up copying the whole tensor into memory anyway when calling .to(device). It is also possible to do this iteratively, but this is relatively slow. Is there some way to perform this operation efficiently without storing the whole repeated row tensor in memory? Edit explanation: Sorry for not initially clarifying: one was used as the first dimension of the first tensor to keep the example simple, but I am actually looking for a solution to the general case for any two tensors a and b such that their dimensions are compatible for matrix multiplication and the rows of b repeat every n rows. I have updated the example to reflect this.
Assuming that the first dimension of a is 1 as in your example, you could do the following: a = torch.Tensor([[1, 2, 3, 4]]) b_abbreviated = torch.Tensor([[1, 2, 3], [4, 5, 6]]) torch.mm(a.reshape(-1, 2), b_abbreviated).sum(axis=0, keepdim=True) Here, instead of repeating the rows, you multiply a in chunks, then add them up column-wise to get the same result. If the first dimension of a is not necessarily 1, you could try the following: torch.cat(torch.split(torch.mm(a.reshape(-1,2),b_abbreviated), a.shape[0]), dim=1).sum( dim=0, keepdim=True).reshape(a.shape[0], -1) Here, you do the following: With torch.mm(a.reshape(-1,2), b_abbreviated), you again split each row of a into chunks of size 2 and stack them one over the other, and then stack each row over the other. With torch.split(torch.mm(a.reshape(-1,2),b_abbreviated), a.shape[0]), these stacks are then separated row-wise, so that each resultant component of the split corresponds to chunks of a single row. With torch.cat(torch.split(torch.mm(a.reshape(-1,2),b_abbreviated), a.shape[0]), dim=1), these stacks are then concatenated column-wise. With .sum(dim=0, keepdim=True), results corresponding to separate chunks of individual rows in a are added up. With .reshape(a.shape[0], -1), rows of a that were concatenated column-wise are again stacked row-wise. It seems quite slow compared to direct matrix multiplication, which is not surprising, but I have not yet checked in comparison to explicit iteration. There are likely better ways of doing this; I will edit if I think of any.
https://stackoverflow.com/questions/67496315/
RuntimeError: Input, output and indices must be on the current device. (fill_mask("Random text <mask>."))
I am getting "RuntimeError: Input, output and indices must be on the current device." when I run this line. fill_mask("Auto Car .") I am running it on Colab. My Code: from transformers import BertTokenizer, BertForMaskedLM from pathlib import Path from tokenizers import ByteLevelBPETokenizer from transformers import BertTokenizer, BertForMaskedLM paths = [str(x) for x in Path(".").glob("**/*.txt")] print(paths) bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') from transformers import BertModel, BertConfig configuration = BertConfig() model = BertModel(configuration) configuration = model.config print(configuration) model = BertForMaskedLM.from_pretrained("bert-base-uncased") from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=bert_tokenizer, file_path="./kant.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=bert_tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./KantaiBERT", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) trainer.train() from transformers import pipeline fill_mask = pipeline( "fill-mask", model=model, tokenizer=bert_tokenizer ) fill_mask("Auto Car <mask>.") The last line is giving me the error mentioned above. Please let me know what I am doing wrong or what I have to do in order to remove this error.
The trainer trains your model automatically on the GPU (default value no_cuda=False). You can verify this by running: model.device after training. The pipeline does not do this, and that leads to the error you see (i.e. your model is on your GPU but your example sentence is on your CPU). You can fix that by either running the pipeline with GPU support as well: fill_mask = pipeline( "fill-mask", model=model, tokenizer=bert_tokenizer, device=0, ) or by transferring your model to the CPU before initializing the pipeline: model.to('cpu')
https://stackoverflow.com/questions/67496616/
What does x[x!=x] mean?
I don't understand this line: lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs) There is no comment, so is it some well-known Python (or PyTorch?) idiom? Could someone explain what it means, or show a different way that makes the intent clearer? lprobs is a pytorch Tensor, and it could contain any size float type (I doubt this code is intended to support int or complex types). As far as I know, the Tensor classes don't override the __ne__ function.
It's a combination of fancy indexing with a boolean mask, and a "trick" (although intended by design) to check for NaN: x != x holds iff x is NaN (for floats, that is). They could alternatively have written lprobs[torch.isnan(lprobs)] = torch.tensor(-math.inf).to(lprobs) or, probably even more idiomatically, used torch.nan_to_num (but beware that the latter also has special behaviour towards infinities). A non-updating variant of the above would be torch.where(torch.isnan(lprobs), torch.tensor(-math.inf), lprobs)
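A minimal, self-contained demonstration of the idiom:
import math
import torch

lprobs = torch.tensor([0.1, float('nan'), 0.3])
lprobs[lprobs != lprobs] = torch.tensor(-math.inf)  # only the NaN entry matches the mask
print(lprobs)  # tensor([0.1000,   -inf, 0.3000])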
https://stackoverflow.com/questions/67499644/
Can I unpack an `nn.ModuleList` inside `nn.Sequential`?
I am parametrizing the number of hidden layers of a simple ANN using nn.ModuleList. I am wondering if passing this list into a nn.Sequential module as follows would lead to any adverse effects with the execution graph. nn.Sequential is not necessary, however it seems cleaner to me to have the entire architecture explicit in the constructor. class ANN(nn.Module): def __init__( self, in_feats=3, in_hidden=5, n_hidden_layers=3, ): super(ANN, self).__init__() # ====== dynamically register hidden layers ====== self.hidden_linears = nn.ModuleList() for i in range(n_hidden_layers): self.hidden_linears.append(nn.Linear(in_hidden, in_hidden)) self.hidden_linears.append(nn.ReLU()) # ====== sequence of layers ====== self.layers = nn.Sequential( nn.Linear(in_feats, in_hidden), nn.ReLU(), *self.hidden_linears, nn.Linear(in_hidden, 1), nn.Sigmoid(), ) def forward(self, X): return self.layers(X) Also open to suggestions of neater ways to put this together.
Yes, unpacking an nn.ModuleList into nn.Sequential like this is fine and has no adverse effect on the execution graph: the layers end up registered as submodules of the Sequential, and PyTorch deduplicates shared parameters by identity, so nothing is counted twice. For the reverse direction, you could use nn.Sequential.modules() to get every single part of a Sequential and then pack them into a list.
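A minimal sketch of the pattern, with hypothetical layer sizes:
import torch
import torch.nn as nn

hidden = nn.ModuleList([nn.Linear(5, 5), nn.ReLU(), nn.Linear(5, 5), nn.ReLU()])
model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), *hidden, nn.Linear(5, 1), nn.Sigmoid())
print(model(torch.randn(2, 3)).shape)  # torch.Size([2, 1])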
https://stackoverflow.com/questions/67503326/
Pytorch nll_loss returning a constant loss during training loop
I have an image binary classification problem in which I want to classify whether an image is of an ant or a bee. I've scraped the images and did all the cleaning, reshaping, and converting to grayscale. The images are of size 200x200, one-channel grayscale. I first wanted to solve this problem using a feed-forward NN before I jump to conv nets. My problem: during the training loop I am getting a constant loss. I am using the Adam optimizer, F.log_softmax for the last layer in the network, as well as the nll_loss function. My code so far looks as follows: FF - Network class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(in_features , 64) self.fc2 = nn.Linear(64, 64) self.fc3 = nn.Linear(64, 32) self.fc4 = nn.Linear(32, 2) def forward(self, X): X = F.relu(self.fc1(X)) X = F.relu(self.fc2(X)) X = F.relu(self.fc3(X)) X = F.log_softmax(self.fc4(X), dim=1) return X net = Net() Training loop. optimizer = torch.optim.Adam(net.parameters(), lr=0.001) EPOCHS = 10 BATCH_SIZE = 5 for epoch in range(EPOCHS): print(f'Epochs: {epoch+1}/{EPOCHS}') for i in range(0, len(y_train), BATCH_SIZE): X_batch = X_train[i: i+BATCH_SIZE].view(-1,200 * 200) y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor) net.zero_grad() ## or you can say optimizer.zero_grad() outputs = net(X_batch) loss = F.nll_loss(outputs, y_batch) loss.backward() optimizer.step() print("Loss", loss) I suspect that the problem may be with my batching or the loss function. I would appreciate any help. Note: The images are gray-scale images of shape (200, 200).
I've been waiting for answers, but I couldn't even get a comment. I figured out the solution myself; maybe this can help someone in the future. class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(200 * 200 , 64) # 200 * 200 is in_features, i.e. an image of shape 200*200 (gray image) self.fc2 = nn.Linear(64, 64) self.fc3 = nn.Linear(64, 32) self.fc4 = nn.Linear(32, 2) def forward(self, X): X = F.relu(self.fc1(X)) X = F.relu(self.fc2(X)) X = F.relu(self.fc3(X)) X = self.fc4(X) # I removed the activation function here return X net = Net() # I changed the loss function to CrossEntropyLoss() since I didn't apply an activation function on the last layer loss_function = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(net.parameters(), lr=0.001) EPOCHS = 10 BATCH_SIZE = 5 for epoch in range(EPOCHS): print(f'Epochs: {epoch+1}/{EPOCHS}') for i in range(0, len(y_train), BATCH_SIZE): X_batch = X_train[i: i+BATCH_SIZE].view(-1, 200 * 200) y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor) net.zero_grad() ## or you can say optimizer.zero_grad() outputs = net(X_batch) loss = loss_function(outputs, y_batch) loss.backward() optimizer.step() print("Loss", loss)
https://stackoverflow.com/questions/67503469/
Python(PyTorch): TypeError: string indices must be integers
I have written the following code to train a bert model on my dataset but when I execute it I get an error at the part where I implement tqdm. I have written the entire training code below with full description of the error. How to fix this? Code Model TRANSFORMERS = { "bert-multi-cased": (BertModel, BertTokenizer, "bert-base-uncased"), } class Transformer(nn.Module): def __init__(self, model, num_classes=1): """ Constructor Arguments: model {string} -- Transformer to build the model on. Expects "camembert-base". num_classes {int} -- Number of classes (default: {1}) """ super().__init__() self.name = model model_class, tokenizer_class, pretrained_weights = TRANSFORMERS[model] bert_config = BertConfig.from_json_file(MODEL_PATHS[model] + 'bert_config.json') bert_config.output_hidden_states = True self.transformer = BertModel(bert_config) self.nb_features = self.transformer.pooler.dense.out_features self.pooler = nn.Sequential( nn.Linear(self.nb_features, self.nb_features), nn.Tanh(), ) self.logit = nn.Linear(self.nb_features, num_classes) def forward(self, tokens): """ Usual torch forward function Arguments: tokens {torch tensor} -- Sentence tokens Returns: torch tensor -- Class logits """ _, _, hidden_states = self.transformer( tokens, attention_mask=(tokens > 0).long() ) hidden_states = hidden_states[-1][:, 0] # Use the representation of the first token of the last layer ft = self.pooler(hidden_states) return self.logit(ft) Training def fit(model, train_dataset, val_dataset, epochs=1, batch_size=8, warmup_prop=0, lr=5e-4): train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False) optimizer = AdamW(model.parameters(), lr=lr) num_warmup_steps = int(warmup_prop * epochs * len(train_loader)) num_training_steps = epochs * len(train_loader) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps) loss_fct = nn.BCEWithLogitsLoss(reduction='mean').cuda() for epoch in range(epochs): model.train() start_time = time.time() optimizer.zero_grad() avg_loss = 0 for step, (x, y_batch) in tqdm(enumerate(train_loader), total=len(train_loader)): y_pred = model(x.to(device)) loss = loss_fct(y_pred.view(-1).float(), y_batch.float().to(device)) loss.backward() avg_loss += loss.item() / len(train_loader) xm.optimizer_step(optimizer, barrier=True) #optimizer.step() scheduler.step() model.zero_grad() optimizer.zero_grad() model.eval() preds = [] truths = [] avg_val_loss = 0. 
with torch.no_grad(): for x, y_batch in tqdm(val_loader): y_pred = model(x.to(device)) loss = loss_fct(y_pred.detach().view(-1).float(), y_batch.float().to(device)) avg_val_loss += loss.item() / len(val_loader) probs = torch.sigmoid(y_pred).detach().cpu().numpy() preds += list(probs.flatten()) truths += list(y_batch.numpy().flatten()) score = roc_auc_score(truths, preds) dt = time.time() - start_time lr = scheduler.get_last_lr()[0] print(f'Epoch {epoch + 1}/{epochs} \t lr={lr:.1e} \t t={dt:.0f}s \t loss={avg_loss:.4f} \t val_loss={avg_val_loss:.4f} \t val_auc={score:.4f}') Error --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <timed eval> in <module> <ipython-input-19-e47eae808597> in fit(model, train_dataset, val_dataset, epochs, batch_size, warmup_prop, lr) 22 for step, (x, y_batch) in tqdm(enumerate(train_loader), total=len(train_loader)): 23 ---> 24 y_pred = model(x.to(device)) 25 26 loss = loss_fct(y_pred.view(-1).float(), y_batch.float().to(device)) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 724 result = self._slow_forward(*input, **kwargs) 725 else: --> 726 result = self.forward(*input, **kwargs) 727 for hook in itertools.chain( 728 _global_forward_hooks.values(), <ipython-input-11-2002cc7ec843> in forward(self, tokens) 41 ) 42 ---> 43 hidden_states = hidden_states[-1][:, 0] # Use the representation of the first token of the last layer 44 45 ft = self.pooler(hidden_states) TypeError: string indices must be integers
Your code is designed for an older version of the transformers library: AttributeError: 'str' object has no attribute 'dim' in pytorch As such you will need to either downgrade to version 3.0.0, or adapt the code to deal with the new-format output of bert.
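If you choose to adapt rather than downgrade, a hedged sketch of the change inside forward: assuming output_hidden_states=True is set on the config (as above), recent transformers versions return a model-output object whose hidden states live in an attribute rather than at a tuple position, so the unpacking line would become:
outputs = self.transformer(tokens, attention_mask=(tokens > 0).long())
hidden_states = outputs.hidden_states      # tuple of per-layer hidden states
hidden_states = hidden_states[-1][:, 0]    # first token of the last layer, as before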
https://stackoverflow.com/questions/67504836/
How to save and load nn.Parameter() in pytorch, such that one can continue optimisation over it?
I know how to store and load an nn.Module, but cannot find how to make a checkpoint for nn.Parameter. I tried this version, but the optimizer is not changing the nn.Parameter value after restoring. from torch import nn as nn import torch from torch.optim import Adam alpha = torch.ones(10) lr = 0.001 alpha = nn.Parameter(alpha) print(alpha) alpha_optimizer = Adam([alpha], lr=lr) for i in range(10): alpha_loss = - alpha.mean() alpha_optimizer.zero_grad() alpha_loss.backward() alpha_optimizer.step() print(alpha) path = "./test.pt" state = dict(alpha_optimizer=alpha_optimizer.state_dict(), alpha=alpha) torch.save(state, path) checkpoint = torch.load(path) alpha = checkpoint["alpha"] alpha_optimizer.load_state_dict(checkpoint["alpha_optimizer"]) for i in range(10): alpha_loss = - alpha.mean() alpha_optimizer.zero_grad() alpha_loss.backward() alpha_optimizer.step() print(alpha)
The problem is that the optimizer still holds a reference to the old alpha (check id(alpha) vs. id(alpha_optimizer.param_groups[0]["params"][0]) before the last for loop), while a new one is set when you load it from the checkpoint in alpha = checkpoint["alpha"]. You need to update the params of the optimizer before loading its state: # .... torch.save(state, path) checkpoint = torch.load(path) # here's where the reference of alpha changes, and the source of the problem alpha = checkpoint["alpha"] # reset optim alpha_optimizer = Adam([alpha], lr=lr) alpha_optimizer.load_state_dict(checkpoint["alpha_optimizer"]) for i in range(10): alpha_loss = - alpha.mean() alpha_optimizer.zero_grad() alpha_loss.backward() alpha_optimizer.step() print(alpha)
https://stackoverflow.com/questions/67505670/
Pytorch batchwise matrix vector rowwise multiplication
I am using pytorch. If I have a matrix M of size (d1, d2) and a vector V of size d2, doing M*V gives me an output OUT of size (d1, d2), where each row of M has been multiplied by V. I need to do the same thing batch-wise, where the matrix M is fixed and I have a batch of dB vectors. In practice, given a tensor M of size (d1, d2) and a tensor V of size (dB, d2), I need to get as output OUT a tensor of size (dB, d1, d2), so that OUT[i] = M*V[i]. How can I efficiently get it with pytorch?
This simple trick works for the problem: M.unsqueeze(0) * V.unsqueeze(1) This does multiplications of tensors having shapes (1, d1, d2) and (dB, 1, d2) and you get the desired output having shape (dB, d1, d2).
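A quick sanity check of the shapes and of the claim that OUT[i] = M * V[i], with hypothetical sizes:
import torch

d1, d2, dB = 3, 4, 5
M = torch.randn(d1, d2)
V = torch.randn(dB, d2)

OUT = M.unsqueeze(0) * V.unsqueeze(1)    # broadcast (1, d1, d2) * (dB, 1, d2)
assert OUT.shape == (dB, d1, d2)
assert torch.allclose(OUT[0], M * V[0])  # each slice is M with its rows multiplied by V[i]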
https://stackoverflow.com/questions/67507141/
Given multiple prediction vectors, how to efficiently obtain the label with most votes (in numpy/pytorch)?
I have 3 vectors representing the 3 different predictions of labels for the same data: P1=[31, 22, 11, 10, 9, 9, 0, 0, 23 ....] # length over 1M P2=[31, 22, 12, 10, 8, 9, 0, 0, 30 ....] # length over 1M P3=[30, 22, 12, 11, 8, 9, 0, 1, 31 ....] # length over 1M Ans= [31, 22, 12, 10, 8, 9, 0, 0, 23, ....] The basic idea is if a prediction has the highest vote count (e.g. "31" has count 2 in the first column), we pick it, but if all candidates have a different vote (e.g. "23", "30", "31" in the last column), we can pick any one of them. These vectors may be numpy array, list or pytorch tensor. Considering the length of such vector is over 1000,000, what's the most efficient way (mainly runtime) to find Ans?
You can take the mode of the tensors: t = torch.tensor([P1,P2,P3]) t.mode(0).values tensor([31, 22, 12, 10, 8, 9, 0, 0, 23])
https://stackoverflow.com/questions/67510845/
Training PyTorch models on different machines leads to different results
I am training the same model on two different machines, but the trained models are not identical. I have taken the following measures to ensure reproducibility: # set random number random.seed(0) torch.cuda.manual_seed(0) np.random.seed(0) # set the cudnn torch.backends.cudnn.benchmark=False torch.backends.cudnn.deterministic=True # set data loader work threads to be 0 DataLoader(dataset, num_works=0) When I train the same model multiple times on the same machine, the trained model is always the same. However, the trained models on two different machines are not the same. Is this normal? Are there any other tricks I can employ?
There are a number of areas that could additionally introduce randomness, e.g.: PyTorch random number generator You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA): CUDA convolution determinism While disabling CUDA convolution benchmarking (discussed above) ensures that CUDA selects the same algorithm each time an application is run, that algorithm itself may be nondeterministic, unless either torch.use_deterministic_algorithms(True) or torch.backends.cudnn.deterministic = True is set. The latter setting controls only this behavior, unlike torch.use_deterministic_algorithms() which will make other PyTorch operations behave deterministically, too. CUDA RNN and LSTM In some versions of CUDA, RNNs and LSTM networks may have non-deterministic behavior. See torch.nn.RNN() and torch.nn.LSTM() for details and workarounds. DataLoader DataLoader will reseed workers following the "Randomness in multi-process data loading" algorithm. Use worker_init_fn() to preserve reproducibility: https://pytorch.org/docs/stable/notes/randomness.html
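The worker seeding pattern from the linked randomness notes looks roughly like this (dataset stands in for your own dataset object):
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # derive per-worker seeds deterministically from the main process seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

loader = DataLoader(dataset, batch_size=16, num_workers=4,
                    worker_init_fn=seed_worker, generator=g)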
https://stackoverflow.com/questions/67511658/
PipelineException: No mask_token ([MASK]) found on the input
I am getting this error "PipelineException: No mask_token ([MASK]) found on the input" when I run this line. fill_mask("Auto Car .") I am running it on Colab. My Code: from transformers import BertTokenizer, BertForMaskedLM from pathlib import Path from tokenizers import ByteLevelBPETokenizer from transformers import BertTokenizer, BertForMaskedLM paths = [str(x) for x in Path(".").glob("**/*.txt")] print(paths) bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') from transformers import BertModel, BertConfig configuration = BertConfig() model = BertModel(configuration) configuration = model.config print(configuration) model = BertForMaskedLM.from_pretrained("bert-base-uncased") from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=bert_tokenizer, file_path="./kant.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=bert_tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./KantaiBERT", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) trainer.train() from transformers import pipeline fill_mask = pipeline( "fill-mask", model=model, tokenizer=bert_tokenizer, device=0, ) fill_mask("Auto Car <mask>."). # This line is giving me the error... The last line is giving me the error mentioned above. Please let me know what I am doing wrong or what I have to do in order to remove this error. Complete error: "f"No mask_token ({self.tokenizer.mask_token}) found on the input","
Even if you have already found the error, here is a recommendation to avoid it in the future. Instead of calling fill_mask("Auto Car <mask>.") you can do the following to be more flexible when you use different models: MASK_TOKEN = tokenizer.mask_token fill_mask("Auto Car {}.".format(MASK_TOKEN))
https://stackoverflow.com/questions/67511800/
What is bias node in googlenet and how to remove it?
I am new to deep learning. I want to build a model that can identify similar images, and I am reading the "Classification is a Strong Baseline for Deep Metric Learning" research paper. Here they used the phrase: "remove the bias term in the last linear layer". I have no idea what the bias term is or how to remove it from GoogLeNet or other pretrained models. If someone could help me out with this it would be great! :)
To compute the layer n outputs, a linear neural network computes a linear combination of the layer n-1 outputs for each layer n output, adds a scalar constant value to each layer n output (the bias term), and then applies an activation function. In pytorch, one can disable the bias in a linear layer using: layer = torch.nn.Linear(n_in_features, n_out_features, bias=False) To overwrite the existing structure of, say, the googlenet included in torchvision.models defined here, you can simply override the last fully connected layer after initialization (note that num_classes is a keyword argument of the constructor, and that GoogLeNet's final layer takes 1024 input features): from torchvision.models import googlenet num_classes = 1000 # or whatever you need it to be model = googlenet(num_classes=num_classes) model.fc = torch.nn.Linear(1024, num_classes, bias=False)
https://stackoverflow.com/questions/67513214/
How can I add a module into a network without changing the main body of the network?
How can I add a module into a network without changing the main body of the network? For example, I want to add an attention module after layer1 of ResNet. I have tried to use forward_hook to add the new module, but after reading the source code of nn.Module: def __call__(self, *input, **kwargs): result = self.forward(*input, **kwargs) for hook in self._forward_hooks.values(): hook_result = hook(self, input, result) ... return result I found that hooks are run after forward, so I can't use a hook to add new modules. Is there any other way to realize this?
The key point is to make a new forward function and ditch the old one. You can inherit from ResNet directly and just add a new layer, then specify what the forward should do, like: class AttentionResNet(torchvision.models.resnet.ResNet): def __init__(self, *args, **kwargs): super(AttentionResNet, self).__init__(*args, **kwargs) self.attention = Attention() def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) # this is what you asked x = self.attention(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = torch.flatten(x, 1) x = self.fc(x) return x Or, if you want pretrained weights, you can also make the resnet a submodule and avoid calling the resnet's forward directly, like: class AttentionResNet(nn.Module): def __init__(self): super(AttentionResNet, self).__init__() self.attention = Attention() self.resnet = torchvision.models.resnet.resnet50() def forward(self, x): x = self.resnet.conv1(x) x = self.resnet.bn1(x) x = self.resnet.relu(x) x = self.resnet.maxpool(x) x = self.resnet.layer1(x) # this is what you want x = self.attention(x) x = self.resnet.layer2(x) x = self.resnet.layer3(x) x = self.resnet.layer4(x) x = self.resnet.avgpool(x) x = torch.flatten(x, 1) x = self.resnet.fc(x) return x Note that the forward in ResNet looks like this. For me, using forward directly doesn't cause problems, but you can also use _forward_impl() like the original if you want to. def _forward_impl(self, x): # See note [TorchScript super()] x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = torch.flatten(x, 1) x = self.fc(x) return x def forward(self, x): return self._forward_impl(x)
https://stackoverflow.com/questions/67514156/
How to add layers to a pretrained model in PyTorch?
I want to add a layer normalization function just after the AdaptiveAvgPool2d layer and L2 normalization after the fc layer. I wanted my fc layer output to be 200, so I tried to make a new fc layer instead of including the original one, but this did not remove the fc layer that comes with the pretrained model. I am using GoogLeNet. My code: class GoogleNet(nn.Module): def __init__(self): super(GoogleNet,self).__init__() self.model = googlenet_pytorch.GoogLeNet.from_pretrained('googlenet') self.fc = nn.Linear(1024,200, bias=False), def forward(self, x): batch_size ,_,_,_ =x.shape x = self.model.extract_features(x) x = self.model._avg_pooling(x) x = F.layer_norm(x,x.size[1:],elementwise_affine=False) x = self.fc(x) x = F.normalize(x, p=2, dim=1) return x Output I am getting: ..... ..... ..... (aux1): InceptionAux( (conv): BasicConv2d( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (fc1): Linear(in_features=2048, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1000, bias=True) ) (aux2): InceptionAux( (conv): BasicConv2d( (conv): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) ) (fc1): Linear(in_features=2048, out_features=1024, bias=True) (fc2): Linear(in_features=1024, out_features=1000, bias=True) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) (dropout): Dropout(p=0.2, inplace=False) (fc): Linear(in_features=1024, out_features=1000, bias=True) ) ) Output I want: ...... ...... ...... (aux1): None (aux2): None (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) ** layer normalization here** (dropout): Dropout(p=0.2, inplace=False) (fc): Linear(in_features=1024, out_features=200, bias=False) **L2 normalization here** ) If someone needs the solution code for this problem: with the help of iacob's answer I solved it, and I added it as an answer.
When printing a model, PyTorch will print every layer up until it encounters an error in the forward, irrespective of whether the model will run (on appropriately formatted input data). You have a number of issues in your code after loading the backbone GoogLeNet model, and hence all layers you have added after this fail to display when printed. The immediately obvious issues are: you must remove the , after self.fc = nn.Linear(1024,200, bias=False), or it will be interpreted as a tuple GoogLeNet has no attribute _avg_pooling x.size is not subscriptable. Either use a function call x.size()[1:] or use x.shape[1:] F.layer_norm() has no argument elementwise_affine You need to correct these in order to get the model to run, like the sketch below.
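A hedged sketch of the corrected class; the exact name of the backbone's pooling attribute depends on the googlenet_pytorch package, so avgpool below is an assumption, as is the added flatten step needed before the linear layer:
import torch
import torch.nn as nn
import torch.nn.functional as F
import googlenet_pytorch  # third-party package used in the question

class GoogleNet(nn.Module):
    def __init__(self):
        super(GoogleNet, self).__init__()
        self.model = googlenet_pytorch.GoogLeNet.from_pretrained('googlenet')
        self.fc = nn.Linear(1024, 200, bias=False)  # no trailing comma

    def forward(self, x):
        x = self.model.extract_features(x)
        x = self.model.avgpool(x)          # assumption: the pooling layer is named avgpool
        x = torch.flatten(x, 1)            # (N, 1024, 1, 1) -> (N, 1024) before the linear layer
        x = F.layer_norm(x, x.shape[1:])   # F.layer_norm has no elementwise_affine argument
        x = self.fc(x)
        x = F.normalize(x, p=2, dim=1)
        return x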
https://stackoverflow.com/questions/67514943/
Pytorch transfer learning error: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2
Currently, I'm working on an image motion deblurring problem with PyTorch. I have two kinds of images: blurry images (variable = blur_image), which are the input images, and the sharp versions of the same images (variable = sharp_image), which should be the output. Now I wanted to try out transfer learning, but I can't get it to work. Here is the code for my dataloaders: train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle = True) validation_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=batch_size, shuffle = False) test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle = False) Their shape: Trainloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Trainloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 Validationloader - Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Validationloader - Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 Testloader- Shape of blur_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) Testloader- Shape of sharp_image [N, C, H, W]: torch.Size([16, 3, 128, 128]) torch.float32 The way I use transfer learning (I thought that for the 'in_features' I have to put in the amount of pixels): model = models.alexnet(pretrained=True) model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 128) device_string = "cuda" if torch.cuda.is_available() else "cpu" device = torch.device(device_string) model = model.to(device) The way I define my training process: # Define the loss function (MSE was chosen due to the comparsion of pixels # between blurred and sharp images criterion = nn.MSELoss() # Define the optimizer and learning rate optimizer = optim.Adam(model.parameters(), lr=0.001) # Learning rate schedule - If the loss value does not improve after 5 epochs # back-to-back then the new learning rate will be: previous_rate*0.5 #scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( optimizer, mode='min', patience=5, factor=0.5, verbose=True ) def training(model, trainDataloader, epoch): """ Function to define the model training Args: model (Model object): The model that is going to be trained. trainDataloader (Dataloader object): Dataloader object of the trainset. epoch (Integer): Number of training epochs. 
""" # Changing model into trainings mode model.train() # Supporting variable to display the loss for each epoch running_loss = 0.0 running_psnr = 0.0 for i, data in tqdm(enumerate(trainDataloader), total=int(len(train_dataset)/trainDataloader.batch_size)): blur_image = data[0] sharp_image = data[1] # Transfer the blurred and sharp image instance to the device blur_image = blur_image.to(device) sharp_image = sharp_image.to(device) # Sets the gradient of tensors to zero optimizer.zero_grad() outputs = model(blur_image) loss = criterion(outputs, sharp_image) # Perform backpropagation loss.backward() # Update the weights optimizer.step() # Add the loss that was calculated during the trainigs run running_loss += loss.item() # calculate batch psnr (once every `batch_size` iterations) batch_psnr = psnr(sharp_image, blur_image) running_psnr += batch_psnr # Display trainings loss trainings_loss = running_loss/len(trainDataloader.dataset) final_psnr = running_psnr/int(len(train_dataset)/trainDataloader.batch_size) final_ssim = ssim(sharp_image, blur_image, data_range=1, size_average=True) print(f"Trainings loss: {trainings_loss:.5f}") print(f"Train PSNR: {final_psnr:.5f}") print(f"Train SSIM: {final_ssim:.5f}") return trainings_loss, final_psnr, final_ssim And here is my way to start the training: train_loss = [] val_loss = [] train_PSNR_score = [] train_SSIM_score = [] val_PSNR_score = [] val_SSIM_score = [] start = time.time() for epoch in range(nb_epochs): print(f"Epoch {epoch+1}\n-------------------------------") train_epoch_loss = training(model, train_loader, nb_epochs) val_epoch_loss = validation(model, validation_loader, nb_epochs) train_loss.append(train_epoch_loss[0]) val_loss.append(val_epoch_loss[0]) train_PSNR_score.append(train_epoch_loss[1]) train_SSIM_score.append(train_epoch_loss[2]) val_PSNR_score.append(val_epoch_loss[1]) val_SSIM_score.append(val_epoch_loss[2]) scheduler.step(train_epoch_loss[0]) scheduler.step(val_epoch_loss[0]) end = time.time() print(f"Took {((end-start)/60):.3f} minutes to train") But every time when I want to perform the training I receive the following error: 0%| | 0/249 [00:00<?, ?it/s]Epoch 1 ------------------------------- /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([16, 3, 128, 128])) that is different to the input size (torch.Size([16, 128])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
return F.mse_loss(input, target, reduction=self.reduction) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-195-ff0214e227cd> in <module>() 9 for epoch in range(nb_epochs): 10 print(f"Epoch {epoch+1}\n-------------------------------") ---> 11 train_epoch_loss = training(model, train_loader, nb_epochs) 12 val_epoch_loss = validation(model, validation_loader, nb_epochs) 13 train_loss.append(train_epoch_loss[0]) <ipython-input-170-dfa2c212ad23> in training(model, trainDataloader, epoch) 25 optimizer.zero_grad() 26 outputs = model(blur_image) ---> 27 loss = criterion(outputs, sharp_image) 28 29 # Perform backpropagation /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 526 527 def forward(self, input: Tensor, target: Tensor) -> Tensor: --> 528 return F.mse_loss(input, target, reduction=self.reduction) 529 530 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction) 2926 reduction = _Reduction.legacy_get_string(size_average, reduce) 2927 -> 2928 expanded_input, expanded_target = torch.broadcast_tensors(input, target) 2929 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) 2930 /usr/local/lib/python3.7/dist-packages/torch/functional.py in broadcast_tensors(*tensors) 72 if has_torch_function(tensors): 73 return handle_torch_function(broadcast_tensors, tensors, *tensors) ---> 74 return _VF.broadcast_tensors(tensors) # type: ignore 75 76 RuntimeError: The size of tensor a (16) must match the size of tensor b (128) at non-singleton dimension 2 model structure: AlexNet( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU(inplace=True) (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (9): ReLU(inplace=True) (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(6, 6)) (classifier): Sequential( (0): Dropout(p=0.5, inplace=False) (1): Linear(in_features=9216, out_features=4096, bias=True) (2): ReLU(inplace=True) (3): Dropout(p=0.5, inplace=False) (4): Linear(in_features=4096, out_features=4096, bias=True) (5): ReLU(inplace=True) (6): Linear(in_features=4096, out_features=128, bias=True) ) ) I'm a newbie in terms of using Pytorch (and image deblurring in general) and so I rather confused about the meaning of the error message and how to fix it. I tried to change my parameters and nothing worked. Does anyone have any advice for me on how to solve this problem? I would appreciate every input :)
You can't use AlexNet for this task, because the output of your model and sharp_image need to have the same shape. A convnet encodes your image as embeddings, and fully connected layers cannot convert those embeddings back to the image's original size, so you cannot use fully connected layers for decoding; to obtain an output of the same size you need to use ConvTranspose2d() for this task. Your encoder should be: class ConvEncoder(nn.Module): """ A simple Convolutional Encoder Model """ def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 16, (3, 3), padding=(1, 1)) self.relu1 = nn.ReLU(inplace=True) self.maxpool1 = nn.MaxPool2d((2, 2)) self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=(1, 1)) self.relu2 = nn.ReLU(inplace=True) self.maxpool2 = nn.MaxPool2d((2, 2)) self.conv3 = nn.Conv2d(32, 64, (3, 3), padding=(1, 1)) self.relu3 = nn.ReLU(inplace=True) self.maxpool3 = nn.MaxPool2d((2, 2)) self.conv4 = nn.Conv2d(64, 128, (3, 3), padding=(1, 1)) self.relu4 = nn.ReLU(inplace=True) self.maxpool4 = nn.MaxPool2d((2, 2)) def forward(self, x): # Downscale the image with conv maxpool etc. x = self.conv1(x) x = self.relu1(x) x = self.maxpool1(x) x = self.conv2(x) x = self.relu2(x) x = self.maxpool2(x) x = self.conv3(x) x = self.relu3(x) x = self.maxpool3(x) x = self.conv4(x) x = self.relu4(x) x = self.maxpool4(x) return x And your decoder should be (note the channel counts: the encoder ends with 128 channels, so the first transposed convolution takes 128 in-channels, and the last one outputs 3 channels so the reconstruction matches the RGB target): class ConvDecoder(nn.Module): """ A simple Convolutional Decoder Model """ def __init__(self): super().__init__() self.deconv1 = nn.ConvTranspose2d(128, 64, (2, 2), stride=(2, 2)) self.relu1 = nn.ReLU(inplace=True) self.deconv2 = nn.ConvTranspose2d(64, 32, (2, 2), stride=(2, 2)) self.relu2 = nn.ReLU(inplace=True) self.deconv3 = nn.ConvTranspose2d(32, 16, (2, 2), stride=(2, 2)) self.relu3 = nn.ReLU(inplace=True) self.deconv4 = nn.ConvTranspose2d(16, 3, (2, 2), stride=(2, 2)) def forward(self, x): # Upscale the image with convtranspose etc. x = self.deconv1(x) x = self.relu1(x) x = self.deconv2(x) x = self.relu2(x) x = self.deconv3(x) x = self.relu3(x) x = self.deconv4(x) # no final ReLU, since normalized targets can be negative return x encoder = ConvEncoder() decoder = ConvDecoder() You can train your model like this: encoder.train() decoder.train() for batch_idx, (train_img, target_img) in enumerate(train_loader): # Move images to device train_img = train_img.to(device) target_img = target_img.to(device) # Zero grad the optimizer optimizer.zero_grad() # Feed the train images to the encoder enc_output = encoder(train_img) # The output of the encoder is the input to the decoder! dec_output = decoder(enc_output) # The decoder output is the reconstructed image; # compute the loss between it and the original (sharp) target image loss = loss_fn(dec_output, target_img) # Backpropagate loss.backward() # Apply the optimizer to the network by calling step optimizer.step() # If this loop lives inside a train function, return loss.item() at the end You might want to visit this for getting help with your project.
https://stackoverflow.com/questions/67519746/
PyTorch Boolean - Stop Backpropagation?
I need to create a Neural Network where I use binary gates to zero-out certain tensors, which are the output of disabled circuits. To improve runtime speed, I was looking forward to use torch.bool binary gates to stop backpropagation along disabled circuits in the network. However, I created a small experiment using the official PyTorch example for the CIFAR-10 dataset, and the runtime speed is exactly the same for any values for gate_A and gate_B: (this means that the idea is not working) class Net(nn.Module): def __init__(self): super().__init__() self.pool = nn.MaxPool2d(2, 2) self.conv1a = nn.Conv2d(3, 6, 5) self.conv2a = nn.Conv2d(6, 16, 5) self.conv1b = nn.Conv2d(3, 6, 5) self.conv2b = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(32 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Only one gate is supposed to be enabled at random # However, for the experiment, I fixed the values to [1,0] and [1,1] choice = randint(0,1) gate_A = torch.tensor(choice ,dtype = torch.bool) gate_B = torch.tensor(1-choice ,dtype = torch.bool) a = self.pool(F.relu(self.conv1a(x))) a = self.pool(F.relu(self.conv2a(a))) b = self.pool(F.relu(self.conv1b(x))) b = self.pool(F.relu(self.conv2b(b))) a *= gate_A b *= gate_B x = torch.cat( [a,b], dim = 1 ) x = torch.flatten(x, 1) # flatten all dimensions except batch x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x How can i define gate_A and gate_B in such a way that backpropagation effectively stops if they are zero? PS. Changing concatenation dynamically at runtime would also change which weights are assigned to every module. (for example, the weights associated to a could be assigned to b in another pass, disrupting how the network operates).
You could use torch.no_grad (the code below can probably be made more concise): def forward(self, x): # Only one gate is supposed to be enabled at random # However, for the experiment, I fixed the values to [1,0] and [1,1] choice = randint(0,1) gate_A = torch.tensor(choice ,dtype = torch.bool) gate_B = torch.tensor(1-choice ,dtype = torch.bool) if choice: a = self.pool(F.relu(self.conv1a(x))) a = self.pool(F.relu(self.conv2a(a))) a *= gate_A with torch.no_grad(): # disable gradient computation b = self.pool(F.relu(self.conv1b(x))) b = self.pool(F.relu(self.conv2b(b))) b *= gate_B else: with torch.no_grad(): # disable gradient computation a = self.pool(F.relu(self.conv1a(x))) a = self.pool(F.relu(self.conv2a(a))) a *= gate_A b = self.pool(F.relu(self.conv1b(x))) b = self.pool(F.relu(self.conv2b(b))) b *= gate_B x = torch.cat( [a,b], dim = 1 ) x = torch.flatten(x, 1) # flatten all dimensions except batch x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x On a second look, I think the following is a simpler solution to the specific problem: def forward(self, x): # Only one gate is supposed to be enabled at random # However, for the experiment, I fixed the values to [1,0] and [1,1] choice = randint(0,1) if choice: a = self.pool(F.relu(self.conv1a(x))) a = self.pool(F.relu(self.conv2a(a))) b = torch.zeros(shape_of_conv_output) # replace shape of conv output here else: b = self.pool(F.relu(self.conv1b(x))) b = self.pool(F.relu(self.conv2b(b))) a = torch.zeros(shape_of_conv_output) # replace shape of conv output here x = torch.cat( [a,b], dim = 1 ) x = torch.flatten(x, 1) # flatten all dimensions except batch x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x
https://stackoverflow.com/questions/67523716/
Can I use a convolution filter instead of a dense layer for classification?
I was reading a decent paper, S-DCNet, and I came upon a section (page 3, table 1, classifier) where a convolution layer has been used on the feature map in order to produce a binary classification output as part of an internal process. Since I am a noob, and when someone talks to me about classification I automatically make a synapse relating to FCs combined with softmax, I started wondering ... Is this a possible thing to do? Can a convolutional layer indeed be used to classify a binary outcome? The whole concept triggered my imagination so much that I insist on getting answers... Honestly, how does this actually work? What is the difference between using a convolution filter instead of a fully connected layer for classification purposes? Edit (uncertain answer on how it works): I asked a colleague and he told me that using a filter of the same shape as the length-width shape of the feature map at the current stage may lead to a learnable binary output (considering that you also reduce the #channels of the feature map to a single channel). But I still don't understand the motivations behind such a technique...
Using convolutions as FCs can be done (for example) with filters of spatial size (1,1) and with depth of the same size as the FC input size. The resulting feature map would be of the same size as the input feature map, but each pixel would be the output of a "FC" layer whose weights are the weights of the shared 1x1 conv filter. This kind of thing is used mainly for semantic segmentation, meaning classification per pixel. U-Net is a good example, if memory serves. Also see this. Note that 1x1 convolutions have other uses as well; see paperswithcode, where some of the nets probably use this trick.
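A minimal sketch of the equivalence, with hypothetical sizes: a 1x1 convolution over a feature map applies the same small fully connected layer at every spatial position.
import torch
import torch.nn as nn

feat = torch.randn(1, 512, 7, 7)                # (batch, channels, H, W) feature map
fc_as_conv = nn.Conv2d(512, 10, kernel_size=1)  # shared "FC" over the channel dimension
scores = fc_as_conv(feat)
print(scores.shape)                             # torch.Size([1, 10, 7, 7]): per-pixel class scores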
https://stackoverflow.com/questions/67525656/
Bert Tokenizer add_token function not working properly
tokenizer add_tokens is not adding new tokens. Here is my code: from transformers import BertTokenizer bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') new_tokens = [] text = open("parsed_data.txt", "r") for line in text: for word in line.split(): new_tokens.append(word) print(len(new_tokens)) # 53966 print(len(bert_tokenizer)) # 36369 bert_tokenizer.add_tokens(new_tokens) print(len(bert_tokenizer)) # 36369
Yes, if a token already exists, it is skipped. By the way, after changing the tokenizer you also have to update your model. See the last line below. bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') bert_tokenizer.add_tokens(new_tokens) model.resize_token_embeddings(len(bert_tokenizer))
https://stackoverflow.com/questions/67526697/
Multiply by the factor the number of times that the position appears in the array (Python)
I'm trying to find a "pythonic" way or a library (anything except "for" loops) for doing this. my_array = np.array([2,4,6]) my_array[[0,2,0]] = my_array[[0,2,0]] * 2 Expected result: [8,4,12] However, I get: [4,4,12] This doesn't work for me as I need to multiply by the factor (2) twice for position 0 if, through the index, the 0 position repeats n amount of times. "For" loops can't be used as there are billions of data points
You could try: >>> num_occurrences_list = [0,2,0] >>> factor = 2 >>> my_array * np.power(factor, np.bincount(num_occurrences_list)) array([ 8, 4, 12]) But I am not sure if it provides any advantage over using for loops explicitly. Here, np.bincount provides an array where each index contains the number of occurrences of it in your list. np.power with base 2 specifies how many times you want to multiply by 2. Finally, you perform the actual multiplication with the computed weights for each element of the array.
https://stackoverflow.com/questions/67528101/
How to construct batches that return an equal number of images per class
I am working on an image retrieval project. To make the model more fair I want to construct batches that return: 5 images per class, and 75 images per batch. I have 300 classes in total in my dataset, so it is obvious that only 15 classes of images can be contained in each batch. The data is balanced, meaning there is an equal number of images per class. I am using pytorch. I have created a pytorch dataset and I want to add the above functionality to my ImageFolderLoader class, whose code I added below. IMG_EXTENSIONS = [ '.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', ] def is_image_file(filename): return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) def find_classes(dir): classes = os.listdir(dir) classes.sort() class_to_idx = {classes[i]: i for i in range(len(classes))} classes = [clss.split('.')[1] for clss in classes] return classes, class_to_idx def make_dataset(dir, class_to_idx): images = [] for target in os.listdir(dir): d = os.path.join(dir, target) if not os.path.isdir(d): continue for filename in os.listdir(d): if is_image_file(filename): path = '{0}/{1}'.format(target, filename) item = (path, class_to_idx[target]) images.append(item) return images def default_loader(path): return Image.open(path).convert('RGB') class ImageFolderLoader(Dataset): def __init__(self, root, transform=None, loader=default_loader,): classes, class_to_idx = find_classes(root) imgs = make_dataset(root, class_to_idx) self.root = root self.imgs = imgs self.classes = classes self.class_to_idx = class_to_idx self.transform = transform self.loader = loader def __getitem__(self, index): path, target = self.imgs[index] img = self.loader(os.path.join(self.root, path)) if self.transform is not None: img = self.transform(img) return img, target def __len__(self): return len(self.imgs) If there is a way to do this then please let me know. edit:- For anyone who wants to see the solution for this: I added the solution below after solving this problem.
I solved the problem by including a batch_sampler in the DataLoader module. For this I used the pytorch-balanced-sampler git project, which allows awesome customization of the batch_sampler; you should visit that repo. My custom dataset: IMG_EXTENSIONS = [ '.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', ] def is_image_file(filename): return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) def find_classes(dir): classes = os.listdir(dir) classes.sort() class_to_idx = {classes[i]: i for i in range(len(classes))} classes = [clss.split('.')[1] for clss in classes] return classes, class_to_idx def make_dataset(dir, class_to_idx): images = [] for target in os.listdir(dir): d = os.path.join(dir, target) if not os.path.isdir(d): continue for filename in os.listdir(d): if is_image_file(filename): path = '{0}/{1}'.format(target, filename) item = (path, class_to_idx[target]) images.append(item) data_dict = {} for item in images: cls = item[1] if cls not in data_dict.keys(): data_dict[cls] = [item] else: data_dict[cls].append(item) return images,data_dict def default_loader(path): return Image.open(path).convert('RGB') class ImageFolderLoader(Dataset): def __init__(self, root, transform=None, loader=default_loader): classes, class_to_idx = find_classes(root) imgs,instance_labels = make_dataset(root, class_to_idx) self.instance_labels = instance_labels self.root = root self.imgs = imgs self.classes = classes self.class_to_idx = class_to_idx self.transform = transform self.loader = loader def __getitem__(self, index): path, target = self.imgs[index] img = self.loader(os.path.join(self.root, path)) if self.transform is not None: img = self.transform(img) return img, target def __len__(self): return len(self.imgs) Then I used the SamplerFactory class from the pytorch-balanced-sampler project; you need to visit the repository to understand the parameters: train_data = ImageFolderLoader(root=TRAIN_PATH, transform=transform) batch_sampler = SamplerFactory().get( class_idxs=my_list, batch_size=75, n_batches=146, alpha=1, kind="fixed" )
https://stackoverflow.com/questions/67535660/
Reverse operation of torch.cat
Suppose I have a tensor like [A,A,A,A...,A]. How can I quickly obtain [[A],[A],[A],...,[A]] as a tensor in torch?
You can use torch.chunk as the inverse of cat, but it looks like you want unsqueeze(1): A = torch.randn(2, 3) A_rep = (A, A, A, A, A, A, A, A) catted = torch.cat(A_rep) #uncatted = torch.chunk(catted, len(A_rep)) catted.unsqueeze(1)
https://stackoverflow.com/questions/67539912/
Learnable scalar weight in PyTorch
I have two neural networks running in parallel. Each gives a feature map of the same size, say Nx1. Now I want a weighted average of these embeddings, like this: w1 * embed1 + w2 * embed2. I have tried these: 1, 2. But the weights are not updating. Any help would be appreciated. Here is how I am trying to do it: class LinearWeightedAvg(nn.Module): def __init__(self, n_inputs): super(LinearWeightedAvg, self).__init__() self.weight1 = Variable(torch.randn(1), requires_grad=True).cuda() self.weight2 = Variable(torch.randn(1), requires_grad=True).cuda() def forward(self, inp_embed): return self.weight1 * inp_embed[0] + self.weight2 * inp_embed[1] class EmbedBranch(nn.Module): def __init__(self, feat_dim, embedding_dim): super(EmbedBranch, self).__init__() fc_layer1 = fc_layer def forward(self, x): x = self.fc_layer1(x) return x class EmbeddingNetwork(nn.Module): def __init__(self, args, N): super(EmbeddingNetwork, self).__init__() embedding_dim = N self.embed1 = EmbedBranch(N, N) self.embed2 = EmbedBranch(N, N) self.comb_branch = LinearWeightedAvg(metric_dim) self.args = args if args.cuda: self.cuda() def forward(self, emb1, emb2): embeds1 = self.text_branch(emb1) embeds2 = self.image_branch(emb2) combined = self.comb_branch([embeds1, embeds2]) return combined def train_forward(self, embed1, embed2): combined = self(embed1, embed2) embeds = model.train_forward(embed1, embed2) loss = loss_func(embeds, labels) running_loss.update(loss.data.item()) optimizer.zero_grad() loss.backward() Also I want the weights to be within the 0-1 range. Thanks,
You should use self.weightx = torch.nn.Parameter(your_initial_tensor) to register a tensor as a learnable parameter of the model. Assigning an nn.Parameter as a module attribute registers it, so it shows up in model.parameters() and gets updated by the optimizer.
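A sketch of how that fix could look in the question's LinearWeightedAvg, together with one common way to keep the weights in the 0-1 range (the sigmoid squashing is my assumption about the intent, not part of the original answer):
import torch
import torch.nn as nn

class LinearWeightedAvg(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.Parameter registers the tensors, so they appear in model.parameters()
        # and receive gradients during backward
        self.weight1 = nn.Parameter(torch.randn(1))
        self.weight2 = nn.Parameter(torch.randn(1))

    def forward(self, inp_embed):
        # sigmoid maps each raw weight into (0, 1)
        w1 = torch.sigmoid(self.weight1)
        w2 = torch.sigmoid(self.weight2)
        return w1 * inp_embed[0] + w2 * inp_embed[1]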
https://stackoverflow.com/questions/67540438/
Apply hooks on inner layers of ResNet
The pytorch official implementation of resnet results in the following model: ResNet( (conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (shortcut): Sequential() ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (shortcut): Sequential() ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (shortcut): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (shortcut): Sequential() ) ) ### Skipping layers 3 and 4 (linear): Linear(in_features=512, out_features=10, bias=True) ) I tried applying hook to the conv1 in the first BasicBlock of layer2 using handle = model.layer2[0][0].register_forward_hook(batchout_pre_hook) but got the following error : TypeError: 'BasicBlock' object does not support indexing I am able to apply hook to the BasicBlock using handle = model.layer2[0].register_forward_hook(batchout_pre_hook) but cannot apply hook in the modules present inside the BasicBlock
For attaching a hook to conv1 in layer2's 0th block, you need to use handle = model.layer2[0].conv1.register_forward_hook(batchout_pre_hook) This is because inside the 0th block, the modules are named as conv1, bn1, etc. and are not a list to be accessed via an index.
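For completeness, a minimal sketch of what the hook function itself could look like; the name batchout_pre_hook comes from the question, and the body here is just an illustration:
def batchout_pre_hook(module, inputs, output):
    # a forward hook receives the module, a tuple of its inputs, and its output
    print(module.__class__.__name__, output.shape)

handle = model.layer2[0].conv1.register_forward_hook(batchout_pre_hook)
# ... run a forward pass so the hook fires ...
handle.remove()  # detach the hook when it is no longer needed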
https://stackoverflow.com/questions/67544726/
Predict new samples with PyTorch model
I am a newbie in neural networks. I have trained my model and now I want to test it. I wrote the code with Google's help, but it does not work. The problem is that I do not understand where the 4th dimension comes from. The code is the following: import matplotlib.pyplot as plt import numpy as np import torch import torchvision import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F import torch.optim as optim transform = transforms.Compose([ transforms.Resize(32), transforms.CenterCrop(32), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) main_path = 'D:/RTU/dataset/ready_dataset_2classes' train_data_path = main_path + '/train' #test_data_path = main_path + '/test' weigths_path = 'D:/RTU/dataset/weights_done/weights_noise_original037-97%.pt' train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=transform) class Net(nn.Module): def __init__(self): super(Net, self).__init__() # convolutional layer (sees 32x32x3 image tensor) self.conv1 = nn.Conv2d(3, 16, 3, padding=1) # convolutional layer (sees 16x16x16 tensor) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) # convolutional layer (sees 8x8x32 tensor) self.conv3 = nn.Conv2d(32, 64, 3, padding=1) # max pooling layer self.pool = nn.MaxPool2d(2, 2) # linear layer (64 * 4 * 4 -> 500) self.fc1 = nn.Linear(64 * 4 * 4, 500) # linear layer (500 -> 10) self.fc2 = nn.Linear(500, 250) self.fc3 = nn.Linear(250, 2) # dropout layer (p=0.25) self.dropout = nn.Dropout(0.25) #0.25 def forward(self, x): # add sequence of convolutional and max pooling layers x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) # flatten image input x = x.view(-1, 64 * 4 * 4) # add dropout layer x = self.dropout(x) # add 1st hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add 2nd hidden layer, with relu activation function x = F.relu(self.fc2(x)) x = self.dropout(x) x = self.fc3(x) return x # Disable grad with torch.no_grad(): # Retrieve item index = 1 item = train_data[index] image = item[0] true_target = item[1] # Loading the saved model mlp = Net() optimizer = optim.SGD(mlp.parameters(), lr=0.01) epoch=5 valid_loss_min = np.Inf checkpoint = torch.load(weigths_path , map_location=torch.device('cpu')) mlp.load_state_dict(checkpoint['model_state_dict']) optimizer.load_state_dict(checkpoint['optimizer_state_dict']) epoch = checkpoint['epoch'] valid_loss_min = checkpoint['valid_loss_min'] mlp.eval() # Generate prediction prediction = mlp(image) # Predicted class value using argmax predicted_class = np.argmax(prediction) # Reshape image #image = image.reshape(28, 28, 1) # Show result plt.imshow(image, cmap='gray') plt.title(f'Prediction: {predicted_class} - Actual target: {true_target}') plt.show() The code seems to work until the "mlp.eval()" call, and then I am getting the error Expected 4-dimensional input for 4-dimensional weight [16, 3, 3, 3], but got 3-dimensional input of size [3, 32, 32] instead. What am I doing wrong? Error
When you are training neural nets, you are feeding small batches of input data to your model. Indeed, even if it's not clearly specified when writing layers in PyTorch, if you look at the documentation, here you can see that layers receive 4D arrays, with N corresponding to batch size and C to the number of channels, here 3 because you are using RGB images. So when testing your model once trained, the testing data should be built the same way to be fed into the network. Thus if you want to feed 1 image to your network you must reshape it properly myimage.reshape(-1,3,32,32) print(myimage.shape) # (1, 3, 32, 32)
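Since image is already a torch tensor here, an equivalent and arguably more idiomatic way to add the missing batch dimension is unsqueeze; a small sketch:
image = train_data[1][0]      # shape (3, 32, 32)
batch = image.unsqueeze(0)    # shape (1, 3, 32, 32) - a batch of one image
prediction = mlp(batch)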
https://stackoverflow.com/questions/67547554/
How to convert torchscript model in PyTorch to ordinary nn.Module?
I am loading the torchscript model in the following way: model = torch.jit.load("model.pt").to(device) The children modules of this model are identified as RecursiveScriptModule. I would like to finetune the loaded weights and, in order to make it simpler, cast them to torch.float32. It is preferable to convert all this stuff to an ordinary PyTorch nn.Module. In the official docs https://pytorch.org/docs/stable/jit.html it is explained how to convert an nn.Module to torchscript, but I have not found any examples of doing this in the opposite direction. Is there a way to do this? P.S. An example of loading the pretrained model is given here: https://github.com/openai/CLIP/blob/main/notebooks/Interacting_with_CLIP.ipynb
You may try to load it as is, e.g. state_dict = torch.load(src).state_dict(). Then manually convert every key and value: new_v = state_dict[k].cpu().float().
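Put together, a minimal sketch of that idea, assuming you can instantiate the original architecture as a plain nn.Module with matching parameter names (MyModule below is hypothetical):
import torch

scripted = torch.jit.load("model.pt", map_location="cpu")
# cast every tensor in the state dict to float32 on the CPU
state_dict = {k: v.cpu().float() for k, v in scripted.state_dict().items()}

plain_model = MyModule()  # hypothetical: your hand-written nn.Module re-implementation
plain_model.load_state_dict(state_dict)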
https://stackoverflow.com/questions/67549262/
How to generate an attention mask after padding?
After padding a tensor A, which has 4 dimensions [4, 5, 129, 24], in the second and third dimensions to [4, 6, 136, 24], how can I generate its attention mask? I have figured out two solutions: The first one is to create a zero tensor A_attention shaped like A_pad, and then traverse A to fill 1 at the relevant positions in A_attention. The second one is to create the attention mask during the padding procedure. But it seems a little troublesome because the initial tensor A has 4 dimensions. Are there any ways to generate the attention mask more efficiently after padding? Is there an API? Thank you very much.
You can try to use the Transformers library from Hugging Face, which provides a really useful tokenizer. I suggest you go through the entire quickstart, but in principle, this is the part you are interested in.
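As a small illustration of what the tokenizer gives you (a text-centric example, which is an assumption about the use case; the tokenizer pads a batch and returns the matching mask in one call):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a short sentence", "a noticeably longer sentence than the first"],
                  padding=True, return_tensors="pt")
print(batch["attention_mask"])  # 1 for real tokens, 0 for padding positions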
https://stackoverflow.com/questions/67554536/
How to load pretrained pytorch weight for
I am following this blog https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html and want to run a PyTorch model in ONNX Runtime. In the example a pretrained weight is given as a URL; how do I load a pretrained weight from local disk? # Load pretrained model weights model_url = 'https://s3.amazonaws.com/pytorch/test_data/export/superres_epoch100-44c6958e.pth' path = "/content/best.pt" batch_size = 1 # just a random number # Initialize model with the pretrained weights map_location = lambda storage, loc: storage if torch.cuda.is_available(): map_location = None torch_model.load_state_dict(model_zoo.load_url(model_url, map_location=map_location)) # set the model to inference mode torch_model.eval() I want to load the weights from the path defined above.
If you want to load the state dict from a path, this is what you should do: torch_model.load_state_dict(torch.load(path)) This should work.
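If the checkpoint was saved on a GPU and you are loading it on a CPU-only machine (mirroring the map_location logic in the tutorial snippet), a small variant:
torch_model.load_state_dict(torch.load(path, map_location=torch.device('cpu')))
torch_model.eval()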
https://stackoverflow.com/questions/67566751/
Python: BERT Tokenizer cannot be loaded
I am working on the bert-base-multilingual-uncased model but when I try to set the TOKENIZER in the config it throws an OSError. Model Config class config: DEVICE = "cuda:0" MAX_LEN = 256 TRAIN_BATCH_SIZE = 8 VALID_BATCH_SIZE = 4 EPOCHS = 1 BERT_PATH = {"bert-base-multilingual-uncased": "workspace/data/jigsaw-multilingual/input/bert-base-multilingual-uncased"} MODEL_PATH = "workspace/data/jigsaw-multilingual/model.bin" TOKENIZER = transformers.BertTokenizer.from_pretrained( BERT_PATH["bert-base-multilingual-uncased"], do_lower_case=True) Error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-33-83880b6b788e> in <module> ----> 1 class config: 2 # def __init__(self): 3 4 DEVICE = "cuda:0" 5 MAX_LEN = 256 <ipython-input-33-83880b6b788e> in config() 11 TOKENIZER = transformers.BertTokenizer.from_pretrained( 12 BERT_PATH["bert-base-multilingual-uncased"], ---> 13 do_lower_case=True) /opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs) 1138 1139 """ -> 1140 return cls._from_pretrained(*inputs, **kwargs) 1141 1142 @classmethod /opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1244 ", ".join(s3_models), 1245 pretrained_model_name_or_path, -> 1246 list(cls.vocab_files_names.values()), 1247 ) 1248 ) OSError: Model name 'workspace/data/jigsaw-multilingual/input/bert-base-multilingual-uncased' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'workspace/data/jigsaw-multilingual/input/bert-base-multilingual-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. As I interpret the error, it says that the vocab.txt file was not found at the given location, but actually it's present. The following are the files available in the bert-base-multilingual-uncased folder: config.json pytorch_model.bin vocab.txt I am new to working with bert, so I am not sure if there is a different way to define the tokenizer.
I think this should work: from transformers import BertTokenizer TOKENIZER = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True) It will download the tokenizer from huggingface.
https://stackoverflow.com/questions/67567587/
How to create n-dimensional sparse tensor? (pytorch)
I want to initialize a tensor as a sparse tensor. When the tensor is 2-dimensional, I can use torch.nn.init.sparse_(tensor, sparsity=0.1) import torch dim = torch.Size([3,2]) w = torch.Tensor(dim) torch.nn.init.sparse_(w, sparsity=0.1) Result tensor([[ 0.0000, 0.0147], [-0.0190, 0.0004], [-0.0004, 0.0000]]) But when the tensor has more than 2 dimensions, this function doesn't work. v = torch.Tensor(torch.Size([5,5,30,2])) torch.nn.init.sparse_(v, sparsity=0.1) Result ValueError: Only tensors with 2 dimensions are supported I need this because I want to use it to initialize the convolution weights. The torch.nn.init.sparse_() function's definition is below: def sparse_(tensor, sparsity, std=0.01): r"""Fills the 2D input `Tensor` as a sparse matrix, where the non-zero elements will be drawn from the normal distribution :math:`\mathcal{N}(0, 0.01)`, as described in `Deep learning via Hessian-free optimization` - Martens, J. (2010). Args: tensor: an n-dimensional `torch.Tensor` sparsity: The fraction of elements in each column to be set to zero std: the standard deviation of the normal distribution used to generate the non-zero values Examples: >>> w = torch.empty(3, 5) >>> nn.init.sparse_(w, sparsity=0.1) """ if tensor.ndimension() != 2: raise ValueError("Only tensors with 2 dimensions are supported") rows, cols = tensor.shape num_zeros = int(math.ceil(sparsity * rows)) with torch.no_grad(): tensor.normal_(0, std) for col_idx in range(cols): row_indices = torch.randperm(rows) zero_indices = row_indices[:num_zeros] tensor[zero_indices, col_idx] = 0 return tensor How could I make an n-dimensional sparse tensor? Is there a way in pytorch to create this kind of tensor? Or can I make it another way?
This function is an implementation of the following method: The best random initialization scheme we found was one of our own design, "sparse initialization". In this scheme we hard limit the number of non-zero incoming connection weights to each unit (we used 15 in our experiments) and set the biases to 0 (or 0.5 for tanh units). Deep learning via Hessian-free optimization - Martens, J. (2010). The reason it is not supported for higher order tensors is that it maintains the same proportion of zeros in each column, and it is not clear which [subset of] dimensions this condition should be maintained across for higher order tensors. You can implement this initialization strategy with dropout or an equivalent function, e.g.: import torch import torch.nn.functional as F def sparse_(tensor, sparsity, std=0.01): with torch.no_grad(): tensor.normal_(0, std) # F.dropout zeroes a `sparsity` fraction of the elements and rescales the rest by 1/(1-sparsity) tensor = F.dropout(tensor, sparsity) return tensor If you wish to enforce column-, channel-, etc.-wise proportions of zeros (as opposed to just the total proportion) you can implement logic similar to the original function.
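Applied to the 4-D convolution-weight shape from the question, a quick usage sketch:
import torch
import torch.nn.functional as F

v = torch.empty(5, 5, 30, 2)
v = sparse_(v, sparsity=0.1)
print((v == 0).float().mean())  # roughly 0.1 of the entries end up zero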
https://stackoverflow.com/questions/67570342/
Derive the gradient through torch.topk
I want to derive the gradient through the torch.topk function. Suppose the input is a vector x, which is then transformed by a parameter matrix W, and the top k values of the resulting vector Wx are selected: y = topk(Wx). The result vector is further transformed by element-wise multiplication, and finally the loss L is computed from the result. I wonder, is the loss differentiable with respect to W? Formally, can we calculate the gradient ∂L/∂W?
The topk() operation is simply a linear transformation to pick the top k elements of a tensor. Since this is a W @ X or matrix-vector multiplication kind of operation, this is also differentiable. Example: Below I have computed the pipelined operation topk(Wx) in two ways and showed the gradients resulting from both are identical. In [1]: import torch In [2]: x1 = torch.rand(6, requires_grad = True) In [3]: W1 = torch.rand(6, 6, requires_grad = True) In [4]: x1 Out[4]: tensor([0.1511, 0.5990, 0.6338, 0.5137, 0.5203, 0.0560], requires_grad=True) In [5]: W1 Out[5]: tensor([[0.2541, 0.6699, 0.5311, 0.7801, 0.5042, 0.5475], [0.7523, 0.1331, 0.7670, 0.8132, 0.0524, 0.0269], [0.3974, 0.2880, 0.9142, 0.9906, 0.4401, 0.3984], [0.7956, 0.2071, 0.2209, 0.6192, 0.2054, 0.7693], [0.8587, 0.8415, 0.6033, 0.3812, 0.2498, 0.9813], [0.9033, 0.0417, 0.2272, 0.1576, 0.9087, 0.3284]], requires_grad=True) In [6]: y1 = W1 @ x1 In [7]: y1 Out[7]: tensor([1.4699, 1.1260, 1.5721, 0.8523, 1.3969, 0.8776], grad_fn=<MvBackward>) In [8]: yk, _ = torch.topk(y1, 3) In [9]: yk Out[9]: tensor([1.5721, 1.4699, 1.3969], grad_fn=<TopkBackward>) In [10]: loss1 = (yk ** 2).sum() In [11]: loss1.backward() In [12]: W1.grad Out[12]: tensor([[0.4442, 1.7609, 1.8633, 1.5102, 1.5296, 0.1646], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4751, 1.8833, 1.9928, 1.6152, 1.6359, 0.1760], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4222, 1.6734, 1.7706, 1.4352, 1.4535, 0.1564], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]) Now let us evaluate the same set of operations but using topk() as a linear transformation explicitly. Note that the constructed Wk matrix selectively picks out top k (here 3) elements from the 6 element tensor through multiplication. In [13]: x2 = torch.tensor([0.1511, 0.5990, 0.6338, 0.5137, 0.5203, 0.0560], req ...: uires_grad=True) In [14]: W2 = torch.tensor([[0.2541, 0.6699, 0.5311, 0.7801, 0.5042, 0.5475], ...: [0.7523, 0.1331, 0.7670, 0.8132, 0.0524, 0.0269], ...: [0.3974, 0.2880, 0.9142, 0.9906, 0.4401, 0.3984], ...: [0.7956, 0.2071, 0.2209, 0.6192, 0.2054, 0.7693], ...: [0.8587, 0.8415, 0.6033, 0.3812, 0.2498, 0.9813], ...: [0.9033, 0.0417, 0.2272, 0.1576, 0.9087, 0.3284]], requires_gra ...: d=True) In [15]: y2 = W2 @ x2 In [16]: y2 Out[16]: tensor([1.4700, 1.1260, 1.5721, 0.8523, 1.3969, 0.8776], grad_fn=<MvBackward>) # Use the indices obtained earlier to construct the matrix In [19]: _ Out[19]: tensor([2, 0, 4]) In [20]: k = 3 In [21]: Wk = torch.zeros(k, y2.shape[0]) In [22]: Wk[torch.arange(k), _] = 1 In [23]: Wk.requires_grad = True In [24]: Wk Out[24]: tensor([[0., 0., 1., 0., 0., 0.], [1., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0.]], requires_grad=True) In [25]: yk2 = Wk @ y2 In [26]: yk2 Out[26]: tensor([1.5721, 1.4700, 1.3969], grad_fn=<MvBackward>) In [27]: loss2 = (yk2 ** 2).sum() In [28]: loss2.backward() Now compare the gradients obtained in both cases: In [29]: W2.grad Out[29]: tensor([[0.4442, 1.7611, 1.8634, 1.5103, 1.5297, 0.1646], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4751, 1.8834, 1.9929, 1.6152, 1.6360, 0.1761], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4222, 1.6735, 1.7707, 1.4352, 1.4536, 0.1565], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]) In [30]: W1.grad Out[30]: tensor([[0.4442, 1.7609, 1.8633, 1.5102, 1.5296, 0.1646], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4751, 1.8833, 1.9928, 1.6152, 1.6359, 0.1760], [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [0.4222, 1.6734, 1.7706, 1.4352, 1.4535, 0.1564], [0.0000, 
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]) In [31]: x1.grad Out[31]: tensor([4.3955, 5.2256, 6.1213, 6.4732, 3.5637, 5.6037]) In [32]: x2.grad Out[32]: tensor([4.3957, 5.2261, 6.1215, 6.4733, 3.5641, 5.6040]) As you can see, the results are identical up to some floating-point errors, which were introduced when I copied the values of x1 and W1 without taking their full precision.
https://stackoverflow.com/questions/67570529/
How to build YOLACT++ using Docker?
I have to build yolact++ in a docker environment (I'm using a sagemaker notebook). Like this: ARG PYTORCH="1.3" ARG CUDA="10.1" ARG CUDNN="7" FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel And I want to run this: COPY yolact/external/DCNv2/setup.py /opt/ml/code/external/DCNv2/setup.py RUN cd /opt/ml/code/external/DCNv2 && \ python setup.py build develop But I got this error: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' Traceback (most recent call last): File "setup.py", line 64, in <module> ext_modules=get_extensions(), File "setup.py", line 41, in get_extensions raise NotImplementedError('Cuda is not available') NotImplementedError: Cuda is not available But the environment supports CUDA. Does anyone have an idea where the problem is? Thank you.
SOLUTION: I edited /etc/docker/daemon.json with the content: { "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } }, "default-runtime": "nvidia" } Then I restarted the docker daemon: sudo systemctl restart docker That solved my problem.
https://stackoverflow.com/questions/67570694/
How can I load a pretrained model with PyTorch? (mmfashion)
import io import torch import torch.nn as nn from torchvision import models from PIL import Image import torchvision.transforms as transforms checkpoint_path = 'C:/venvs/ai/aiproduct/latest.pth' pretrained_weights = torch.load(checkpoint_path, map_location='cpu', strict=False) model = models.resnet50(pretrained=True) model.load_state_dict(pretrained_weights) this brings TypeError: 'strict' is an invalid keyword argument for load() and import io import torch import torch.nn as nn from torchvision import models from PIL import Image import torchvision.transforms as transforms checkpoint_path = 'C:/venvs/ai/aiproduct/latest.pth' pretrained_weights = torch.load(checkpoint_path, map_location='cpu') model = models.resnet50(pretrained=True) model.load_state_dict(pretrained_weights) # model.eval() print(model) # model.summary() if I get rid of "strict": Traceback (most recent call last): File "c:\venvs\ai\aiproduct\test.py", line 13, in <module> model.load_state_dict(pretrained_weights) File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 1223, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for ResNet: Missing key(s) in state_dict: "conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.0.conv3.weight", "layer1.0.bn3.weight", "layer1.0.bn3.bias", "layer1.0.bn3.running_mean", "layer1.0.bn3.running_var", "layer1.0.downsample.0.weight", "layer1.0.downsample.1.weight", "layer1.0.downsample.1.bias", "layer1.0.downsample.1.running_mean", "layer1.0.downsample.1.running_var", "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_var", "layer1.1.conv2.weight", "layer1.1.bn2.weight", "layer1.1.bn2.bias", "layer1.1.bn2.running_mean", "layer1.1.bn2.running_var", "layer1.1.conv3.weight", "layer1.1.bn3.weight", "layer1.1.bn3.bias", "layer1.1.bn3.running_mean", "layer1.1.bn3.running_var", "layer1.2.conv1.weight", "layer1.2.bn1.weight", "layer1.2.bn1.bias", "layer1.2.bn1.running_mean", "layer1.2.bn1.running_var", "layer1.2.conv2.weight", "layer1.2.bn2.weight", "layer1.2.bn2.bias", "layer1.2.bn2.running_mean", "layer1.2.bn2.running_var", "layer1.2.conv3.weight", "layer1.2.bn3.weight", "layer1.2.bn3.bias", "layer1.2.bn3.running_mean", "layer1.2.bn3.running_var", "layer2.0.conv1.weight", "layer2.0.bn1.weight", "layer2.0.bn1.bias", "layer2.0.bn1.running_mean", "layer2.0.bn1.running_var", "layer2.0.conv2.weight", "layer2.0.bn2.weight", "layer2.0.bn2.bias", "layer2.0.bn2.running_mean", "layer2.0.bn2.running_var", "layer2.0.conv3.weight", "layer2.0.bn3.weight", "layer2.0.bn3.bias", "layer2.0.bn3.running_mean", "layer2.0.bn3.running_var", "layer2.0.downsample.0.weight", "layer2.0.downsample.1.weight", "layer2.0.downsample.1.bias", "layer2.0.downsample.1.running_mean", "layer2.0.downsample.1.running_var", "layer2.1.conv1.weight", "layer2.1.bn1.weight", "layer2.1.bn1.bias", "layer2.1.bn1.running_mean", "layer2.1.bn1.running_var", "layer2.1.conv2.weight", "layer2.1.bn2.weight", "layer2.1.bn2.bias", "layer2.1.bn2.running_mean", "layer2.1.bn2.running_var", "layer2.1.conv3.weight", "layer2.1.bn3.weight", "layer2.1.bn3.bias", "layer2.1.bn3.running_mean",
"layer2.1.bn3.running_var", "layer2.2.conv1.weight", "layer2.2.bn1.weight", "layer2.2.bn1.bias", "layer2.2.bn1.running_mean", "layer2.2.bn1.running_var", "layer2.2.conv2.weight", "layer2.2.bn2.weight", "layer2.2.bn2.bias", "layer2.2.bn2.running_mean", "layer2.2.bn2.running_var", "layer2.2.conv3.weight", "layer2.2.bn3.weight", "layer2.2.bn3.bias", "layer2.2.bn3.running_mean", "layer2.2.bn3.running_var", "layer2.3.conv1.weight", "layer2.3.bn1.weight", "layer2.3.bn1.bias", "layer2.3.bn1.running_mean", "layer2.3.bn1.running_var", "layer2.3.conv2.weight", "layer2.3.bn2.weight", "layer2.3.bn2.bias", "layer2.3.bn2.running_mean", "layer2.3.bn2.running_var", "layer2.3.conv3.weight", "layer2.3.bn3.weight", "layer2.3.bn3.bias", "layer2.3.bn3.running_mean", "layer2.3.bn3.running_var", "layer3.0.conv1.weight", "layer3.0.bn1.weight", "layer3.0.bn1.bias", "layer3.0.bn1.running_mean", "layer3.0.bn1.running_var", "layer3.0.conv2.weight", "layer3.0.bn2.weight", "layer3.0.bn2.bias", "layer3.0.bn2.running_mean", "layer3.0.bn2.running_var", "layer3.0.conv3.weight", "layer3.0.bn3.weight", "layer3.0.bn3.bias", "layer3.0.bn3.running_mean", "layer3.0.bn3.running_var", "layer3.0.downsample.0.weight", "layer3.0.downsample.1.weight", "layer3.0.downsample.1.bias", "layer3.0.downsample.1.running_mean", "layer3.0.downsample.1.running_var", "layer3.1.conv1.weight", "layer3.1.bn1.weight", "layer3.1.bn1.bias", "layer3.1.bn1.running_mean", "layer3.1.bn1.running_var", "layer3.1.conv2.weight", "layer3.1.bn2.weight", "layer3.1.bn2.bias", "layer3.1.bn2.running_mean", "layer3.1.bn2.running_var", "layer3.1.conv3.weight", "layer3.1.bn3.weight", "layer3.1.bn3.bias", "layer3.1.bn3.running_mean", "layer3.1.bn3.running_var", "layer3.2.conv1.weight", "layer3.2.bn1.weight", "layer3.2.bn1.bias", "layer3.2.bn1.running_mean", "layer3.2.bn1.running_var", "layer3.2.conv2.weight", "layer3.2.bn2.weight", "layer3.2.bn2.bias", "layer3.2.bn2.running_mean", "layer3.2.bn2.running_var", "layer3.2.conv3.weight", "layer3.2.bn3.weight", "layer3.2.bn3.bias", "layer3.2.bn3.running_mean", "layer3.2.bn3.running_var", "layer3.3.conv1.weight", "layer3.3.bn1.weight", "layer3.3.bn1.bias", "layer3.3.bn1.running_mean", "layer3.3.bn1.running_var", "layer3.3.conv2.weight", "layer3.3.bn2.weight", "layer3.3.bn2.bias", "layer3.3.bn2.running_mean", "layer3.3.bn2.running_var", "layer3.3.conv3.weight", "layer3.3.bn3.weight", "layer3.3.bn3.bias", "layer3.3.bn3.running_mean", "layer3.3.bn3.running_var", "layer3.4.conv1.weight", "layer3.4.bn1.weight", "layer3.4.bn1.bias", "layer3.4.bn1.running_mean", "layer3.4.bn1.running_var", "layer3.4.conv2.weight", "layer3.4.bn2.weight", "layer3.4.bn2.bias", "layer3.4.bn2.running_mean", "layer3.4.bn2.running_var", "layer3.4.conv3.weight", "layer3.4.bn3.weight", "layer3.4.bn3.bias", "layer3.4.bn3.running_mean", "layer3.4.bn3.running_var", "layer3.5.conv1.weight", "layer3.5.bn1.weight", "layer3.5.bn1.bias", "layer3.5.bn1.running_mean", "layer3.5.bn1.running_var", "layer3.5.conv2.weight", "layer3.5.bn2.weight", "layer3.5.bn2.bias", "layer3.5.bn2.running_mean", "layer3.5.bn2.running_var", "layer3.5.conv3.weight", "layer3.5.bn3.weight", "layer3.5.bn3.bias", "layer3.5.bn3.running_mean", "layer3.5.bn3.running_var", "layer4.0.conv1.weight", "layer4.0.bn1.weight", "layer4.0.bn1.bias", "layer4.0.bn1.running_mean", "layer4.0.bn1.running_var", "layer4.0.conv2.weight", "layer4.0.bn2.weight", "layer4.0.bn2.bias", "layer4.0.bn2.running_mean", "layer4.0.bn2.running_var", "layer4.0.conv3.weight", "layer4.0.bn3.weight", "layer4.0.bn3.bias", 
"layer4.0.bn3.running_mean", "layer4.0.bn3.running_var", "layer4.0.downsample.0.weight", "layer4.0.downsample.1.weight", "layer4.0.downsample.1.bias", "layer4.0.downsample.1.running_mean", "layer4.0.downsample.1.running_var", "layer4.1.conv1.weight", "layer4.1.bn1.weight", "layer4.1.bn1.bias", "layer4.1.bn1.running_mean", "layer4.1.bn1.running_var", "layer4.1.conv2.weight", "layer4.1.bn2.weight", "layer4.1.bn2.bias", "layer4.1.bn2.running_mean", "layer4.1.bn2.running_var", "layer4.1.conv3.weight", "layer4.1.bn3.weight", "layer4.1.bn3.bias", "layer4.1.bn3.running_mean", "layer4.1.bn3.running_var", "layer4.2.conv1.weight", "layer4.2.bn1.weight", "layer4.2.bn1.bias", "layer4.2.bn1.running_mean", "layer4.2.bn1.running_var", "layer4.2.conv2.weight", "layer4.2.bn2.weight", "layer4.2.bn2.bias", "layer4.2.bn2.running_mean", "layer4.2.bn2.running_var", "layer4.2.conv3.weight", "layer4.2.bn3.weight", "layer4.2.bn3.bias", "layer4.2.bn3.running_mean", "layer4.2.bn3.running_var", "fc.weight", "fc.bias". Unexpected key(s) in state_dict: "meta", "state_dict", "optimizer". what should I do? I just want to make some cloth attribute predicting web app with pretrained model (mmfashion https://github.com/open-mmlab/mmfashion/blob/master/docs/MODEL_ZOO.md) but I fail to use pretrained model.
Let's say you downloaded the weights for wide_resnet50_2 and you are performing the same task that the downloaded weights were trained on; then: import torchvision model = torchvision.models.wide_resnet50_2(pretrained=True) for param in model.parameters(): param.requires_grad = False and then the parameters you downloaded can be loaded as: # Load model state dict model.load_state_dict(torch.load('C:/venvs/ai/aiproduct/latest.pth')) # path of your weights model.eval() model.cuda()
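One thing worth adding, judging from the error output in the question: the Unexpected key(s) "meta", "state_dict", "optimizer" suggest the mmfashion checkpoint is a wrapper dict, so the actual weights most likely live under its 'state_dict' key. A hedged sketch (mmfashion's layer names may not line up with torchvision's resnet50, so strict=False or key remapping may still be needed):
import torch

checkpoint = torch.load('C:/venvs/ai/aiproduct/latest.pth', map_location='cpu')
state_dict = checkpoint['state_dict']  # unwrap the inner weights
# strict=False skips keys that do not match the model's own parameter names
incompatible = model.load_state_dict(state_dict, strict=False)
print(incompatible.missing_keys, incompatible.unexpected_keys)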
https://stackoverflow.com/questions/67572091/
How can I pick a subset of a dataset in Pytorch?
I'm trying to run https://github.com/menardai/FashionGenAttnGAN in Google Colab on a GPU with a disk size of 30 GB. The code file with its dataset files is about 15 GB. After I extract this code, the remaining disk space is about 14 GB. When I try to run Pretrain.py, I can see the captions loading, but suddenly I get an "Assertion Error". Since I've not gotten any proper answer for the cause of this error, I think it is because of the lack of space in the Colab environment. The solution that came to my mind is to write some code to tell the model to select just 30% of the train and test datasets to load. But I don't know how to do this. Can anyone help me, please?
data is your total dataset; you can divide it however you want, just edit valid_size. import numpy as np import torch from torch.utils.data.sampler import SubsetRandomSampler valid_size=0.3 num_train = len(data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders (combine dataset and sampler) train_loader = torch.utils.data.DataLoader(data, batch_size=4, sampler=train_sampler, num_workers=2) valid_loader = torch.utils.data.DataLoader(data, batch_size=4, sampler=valid_sampler, num_workers=2) If a memory issue occurs, just reduce batch_size.
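If you only need a fixed fraction of the data rather than parallel train/valid loaders, torch.utils.data.Subset is a simpler alternative to the sampler approach above; a sketch:
import numpy as np
import torch
from torch.utils.data import Subset

indices = np.random.permutation(len(data)).tolist()
k = int(0.3 * len(data))  # keep 30% of the dataset
subset = Subset(data, indices[:k])
loader = torch.utils.data.DataLoader(subset, batch_size=4, shuffle=True, num_workers=2)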
https://stackoverflow.com/questions/67572438/
What is the recommended way to sum a list in TorchScript
Because Python builtin <built-in function sum> is currently not supported in Torchscript: I'm looking for the recommended way to do something like: class Model(nn.Module): def __init__(self): super().__init__() def forward(self, x): return sum(x.tolist()) model = Model() model = torch.jit.script(model) model(torch.arange(10))
Easiest approach would be to use PyTorch's sum directly: class Model(nn.Module): def __init__(self): super().__init__() def forward(self, x): return torch.sum(x) If, for some reason, this is not an option, you have to use type specification with explicit loop (please notice type hints!): import typing import torch import torch.nn as nn class Model(nn.Module): def __init__(self): super().__init__() def forward(self, x) -> int: x: typing.List[int] = x.tolist() result = 0 for elem in x: result += elem return result
https://stackoverflow.com/questions/67576054/
Torchaudio C++ extension is not available - Python
Having just started learning Torchaudio, I got this error. I fixed the first part but even after some deep internet surfing, I can't find a fix. I use Windows 10. C:\Users\bala006\Miniconda39\lib\site-packages\torchaudio\extension\extension.py:13: UserWarning: torchaudio C++ extension is not available. warnings.warn('torchaudio C++ extension is not available.') Thanks for any help !
https://pytorch.org/audio/stable/backend.html The warning UserWarning: torchaudio C++ extension is not available is expected here: as @ex4 said, this extension is not available on Windows.
https://stackoverflow.com/questions/67581152/
Does torch.manual_seed include the operation of torch.cuda.manual_seed_all?
Does torch.manual_seed include the operation of torch.cuda.manual_seed_all? If yes, we can just use torch.manual_seed to set the seed. Otherwise we should call both functions.
Yes, torch.manual_seed() does include CUDA: You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA): https://pytorch.org/docs/stable/notes/randomness.html
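A tiny check that shows the effect:
import torch

torch.manual_seed(0)
a = torch.rand(2)
torch.manual_seed(0)
b = torch.rand(2)
print(torch.equal(a, b))  # True - the same seed reproduces the same numbers,
                          # and the same call also seeds the CUDA generators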
https://stackoverflow.com/questions/67581281/
Print the validation loss in each epoch in PyTorch
I want to print the model's validation loss in each epoch, what is the right way to get and print the validation loss? Is it like this: criterion = nn.CrossEntropyLoss(reduction='mean') for x, y in validation_loader: optimizer.zero_grad() out = model(x) loss = criterion(out, y) loss.backward() optimizer.step() losses += loss display_loss = losses/len(validation_loader) print(display_loss) or like this criterion = nn.CrossEntropyLoss(reduction='mean') for x, y in validation_loader: optimizer.zero_grad() out = model(x) loss = criterion(out, y) loss.backward() optimizer.step() losses += loss display_loss = losses/len(validation_loader.dataset) print(display_loss) or something else? Thank you.
NO!!!! Under no circumstances should you train your model (i.e., call loss.backward() + optimizer.step()) using validation / test data!!! If you want to validate your model: model.eval() # handle drop-out/batch norm layers loss = 0 with torch.no_grad(): for x,y in validation_loader: out = model(x) # only forward pass - NO gradients!! loss += criterion(out, y) # total loss - divide by number of batches val_loss = loss / len(validation_loader) Note how optimizer has nothing to do with evaluating the model on the validation set. You do not change the model according to the validation data - only validate it.
https://stackoverflow.com/questions/67581589/
Remove zero dimension in pytorch
I have a torch.tensor which has dimensions of 0 x 240 x 3 x 540 x 960. I just want to remove the 0 dimension, so I want to have a pytorch tensor of size 240 x 3 x 540 x 960. I used tensor = torch.squeeze(tensor) to try that, but the zero dimension was not removed... In my case the size of the tensor is variable, so I can't hard-code it into torch.squeeze... Is there any simple way to remove an undesired dimension from a pytorch tensor?
You can't. The size (=number of elements) in a tensor of shape 0 x 240 x 3 x 540 x 960 is 0. You can't reshape it to a tensor of shape 240 x 3 x 540 x 960 because that has 373248000 elements. squeeze() doc: Returns a tensor with all the dimensions of input of size 1 removed. This removes 1 sized dims, not 0 sized dims, which makes sense. Anyway, the amount of data in your original tensor is 0, which can be presented as any shape with at least one zero dim, but not as shapes whose size > 0.
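A quick sketch that makes this concrete: a tensor with a zero-sized dim holds no data at all, and squeeze() only drops dims of size 1:
import torch

t = torch.zeros(0, 240, 3)
print(t.numel())               # 0 - the tensor contains no elements
print(torch.squeeze(t).shape)  # torch.Size([0, 240, 3]) - nothing is removed

u = torch.zeros(1, 240, 3)
print(torch.squeeze(u).shape)  # torch.Size([240, 3]) - the size-1 dim is removed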
https://stackoverflow.com/questions/67581687/
Pytorch Tensor::data_ptr() not working on Linux
I cannot link my program to pytorch under Linux, get the following error: /tmp/ccbgkLx2.o: In function `long long* at::Tensor::data<long long>() const': test.cpp:(.text._ZNK2at6Tensor4dataIxEEPT_v[_ZNK2at6Tensor4dataIxEEPT_v]+0x14): undefined reference to `long long* at::Tensor::data_ptr<long long>() const' I am building a very simple minimal example: #include "torch/script.h" #include <iostream> int main() { auto options = torch::TensorOptions().dtype(torch::kInt64); torch::NoGradGuard no_grad; auto T = torch::zeros(20, options).view({ 10, 2 }); long long *data = (long long *)T.data<long long>(); data[0] = 1; return 0; } The command used to build it: g++ -w -std=c++17 -o test-torch test.cpp -D_GLIBCXX_USE_CXX11_ABI=1 -Wl,--whole-archive -ldl -lpthread -Wl,--no-whole-archive -I../libtorch/include -L../libtorch/lib -ltorch -ltorch_cpu -lc10 -Wl,-rpath,../libtorch/lib Pytorch has been downloaded from the link https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.7.0%2Bcpu.zip and unzipped (so I have the libtorch folder next to the folder with test.cpp). Any ideas how to solve this problem? Same program works just fine under Visual C++. P.S. I know pytorch is kind of designed for cmake, but I have zero experience with cmake and no desire to write a cmake-based build system for my app. Also, the examples they give are seemingly supposed to only work if pytorch is "installed" in the system. So I cannot just download the .zip with libs? And if I "install" it (e.g. from sources or in whatever other way) on an AVX512 system, will the binary I link to it and distribute to end-users work on non-AVX512? The documentation is completely incomprehensible for newbies. UPDATE: I tried to do this via CMake following the tutorial https://pytorch.org/cppdocs/installing.html and got exactly the same error. Specifically, I renamed my directory to example-app and the source file to example-app.cpp. Then I created CMakeLists.txt in this directory with the following contents: cmake_minimum_required(VERSION 3.0 FATAL_ERROR) project(example-app) find_package(Torch REQUIRED) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") add_executable(example-app example-app.cpp) target_link_libraries(example-app "${TORCH_LIBRARIES}") set_property(TARGET example-app PROPERTY CXX_STANDARD 14) Then mkdir build cd build cmake -DCMAKE_PREFIX_PATH=../../libtorch .. cmake --build . --config Release And here's the output: CMakeFiles/example-app.dir/example-app.cpp.o: In function `long long* at::Tensor::data<long long>() const': example-app.cpp:(.text._ZNK2at6Tensor4dataIxEEPT_v[_ZNK2at6Tensor4dataIxEEPT_v]+0x14): undefined reference to `long long* at::Tensor::data_ptr<long long>() const' Makes me think, maybe I forgot to include some header or define some variable? Oh, this is all on Mint 19.2 (equivalent to Ubuntu 18.04), g++ version is 7.5.0, glibc is 2.27. Compiling with g++-8 gives the same result.
This is not a cmake-related error, it's just how the library was implemented. I do not know why, but it appears that the specialization of T* at::Tensor::data<T> const with T = long long was forgotten/omitted. If you want to get your signed 64-bit pointer, you can still get it with int64_t: auto data = T.data<int64_t>(); It's good practice in general to use these types whose size is explicit, in order to avoid compatibility issues.
https://stackoverflow.com/questions/67584843/
Optimizing with BCE is not working, nothing will change
I have the following code: import torch import torch.nn as nn import torch.nn.functional as F from tqdm import tqdm import matplotlib.pyplot as plt import os import keras from random import choice import sys devicet = 'cuda' if torch.cuda.is_available() else 'cpu' device = torch.device(devicet) if devicet == 'cpu': print ('Using CPU') else: print ('Using GPU') cuda0 = torch.device('cuda:0') class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.step1 = nn.Linear(5, 25) self.step2 = nn.Linear(25, 50) self.step3 = nn.Linear(50, 100) self.step4 = nn.Linear(100, 100) self.step5 = nn.Linear(100, 10) self.step6 = nn.Linear(10, 1) def forward(self, x): x = F.relu(x) x = self.step1(x) x = F.relu(x) x = self.step2(x) x = F.relu(x) x = self.step3(x) x = F.relu(x) x = self.step4(x) x = F.relu(x) x = self.step5(x) x = F.relu(x) x = self.step6(x) x = F.relu(x) return (x) net = Net() x = torch.rand(10,5) num = choice(range(10)) zero_tensor = torch.zeros(num, 1) one_tensor = torch.ones(10-num, 1) y = torch.cat((zero_tensor,one_tensor),0) x.to(devicet) y.to(devicet) learning_rate = 1e-3 optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate) loss_fn = torch.nn.BCELoss() acc_list = [] for i in tqdm(range(1000),desc='Training'): y_pred = net(x) loss = loss_fn(y_pred, y) loss.backward() optimizer.step() acc_list.append(abs(net(x).detach().numpy()[0]-y.detach().numpy()[0])) with torch.no_grad(): for param in net.parameters(): param -= learning_rate * param.grad optimizer.zero_grad() print ('\nFinished training in {} epochs.'.format(len(acc_list))) plt.plot(range(len(acc_list)),acc_list) plt.show() for i in range(10): print (str(net(x).detach().numpy()[i][0])+', '+str(y.detach().numpy()[i][0])) When I run this, it consistently just prints out the following: Image Why won't it do any training? It works if I use MSE loss (actually, it only works sometimes with MSE loss, sometimes it does the same thing as in the image) , it's only when I use BCE that it stops working entirely.
Final layer activation You are outputting only positive values, those should be between 0 and 1 for starters, these lines specifically: x = F.relu(x) return (x) Use torch.sigmoid with BCELoss or even better, just output x and use torch.nn.BCEWithLogitsLoss which uses logits directly Training You are using the Adam optimizer and doing SGD manually here: with torch.no_grad(): for param in net.parameters(): param -= learning_rate * param.grad Essentially you are applying the optimization step twice, which is probably too much and might destroy the weights. optimizer.step() already does this, no need for both! Accuracy This part: abs(net(x).detach().numpy()[0]-y.detach().numpy()[0]) I assume you want to calculate accuracy, it would go like this (also do not push data through the network twice via net(x), you already have y_pred!): # Assuming sigmoid activation def accuracy(y_pred, y_true): # For logits use # predicted_labels = y_pred > 0.0 predicted_labels = y_pred > 0.5 return torch.mean((y_true == predicted_labels).float())
https://stackoverflow.com/questions/67588487/
Pytorch: File-specific action for each image in the batch
I have a dataset of images each of them having an additional attribute "channel_no". Each image should be processed with the nn layer according to its channel_no: images with channel_no=1 have to be processed with layer1 images with channel_no=2 have to be processed with layer2 images with channel_no=3 have to be processed with layer3 etc... The problem is that when the batch contains more than one image, forward() function gets a torch tensor with the batch of images as input, and each of the images has different channel_no. So it is not clear how to process each image separately. Here is the code for the case when the batch has 1 image only: class Net(nn.Module): def __init__ (self, weight): super(Net, self).__init__() self.layer1 = nn.Linear(hidden_sizes[0], hidden_sizes[1]) self.layer2 = nn.Linear(hidden_sizes[0], hidden_sizes[1]) self.layer3 = nn.Linear(hidden_sizes[0], hidden_sizes[1]) self.outp = nn.Linear(hidden_sizes[1], output_size) def forward(self, x, channel_no): channel_no = channel_no[0] #extract channel_no from the batch list x = x.view(-1,hidden_sizes[0]) if channel_no == 1: x = F.relu(self.layer1(x)) if channel_no == 2: x = F.relu(self.layer2(x)) if channel_no == 3: x = F.relu(self.layer3(x)) x = torch.sigmoid(self.outp(x)) return x Is it possible to process each image separately using batch size > 1 ?
To process images separately you probably need separate tensors. I'm not sure if there's a fast way to do it, but you could split the tensor in the batch dimension to get individual image tensors and then iterate through them to sort them by channel number. Then join each group of images with the same channel number into a new tensor and process that tensor specially.
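A minimal sketch of the grouping idea described above, done with boolean masks instead of a Python loop over single images (hidden_sizes and the layer names are taken from the question; the mask-based grouping itself is my addition, and it assumes channel_no is a 1-D integer tensor aligned with the batch and every value is in {1, 2, 3}):
import torch
import torch.nn.functional as F

def forward(self, x, channel_no):
    x = x.view(-1, hidden_sizes[0])
    out = torch.empty(x.size(0), hidden_sizes[1], device=x.device)
    for c, layer in ((1, self.layer1), (2, self.layer2), (3, self.layer3)):
        mask = channel_no == c
        if mask.any():
            out[mask] = F.relu(layer(x[mask]))  # each group goes through its own layer
    return torch.sigmoid(self.outp(out))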
https://stackoverflow.com/questions/67591570/
Flatten 3D tensor
I have a tensor of the shape T x B x N (training data for a RNN, T is max seq length, B is number of batches, and N number of features) and I'd like to flatten all the features across timesteps, such that I get a tensor of the shape B x TN. Haven't been able to figure out how to do this..
You need to permute your axes before flattening, like so: t = t.swapdims(0,1) # (T,B,N) -> (B,T,N) t = t.reshape(B,-1) # (B,T,N) -> (B,T*N) (reshape rather than view, because the tensor is no longer contiguous after swapping dims)
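A quick shape check of the recipe with made-up sizes:
import torch

T, B, N = 7, 4, 5
t = torch.randn(T, B, N)
out = t.swapdims(0, 1).reshape(B, T * N)
print(out.shape)  # torch.Size([4, 35])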
https://stackoverflow.com/questions/67595244/
Passing Parameters in Python/Pytorch Classmethod
I am confused about passing the parameters in these methods. The code is shown below: def gather_feature(fmap, index): # fmap.shape(B, k1, 1) index.shape(B, k2) dim = fmap.size(-1) index = index.unsqueeze(len(index.shape)).expand(*index.shape, dim) # this works fmap = fmap.gather(dim=1, index=index) # out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1. return fmap def gather_feature(fmap, index): # fmap.shape(B, k1, 1) index.shape(B, k2) dim = fmap.size(-1) index = index.unsqueeze(len(index.shape)) index = index.expand(*index.shape, dim) # raises an error fmap = fmap.gather(dim=1, index=index) # out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1. return fmap Once index.unsqueeze() has been done, the shape of index is changed to (B, k2, 1). If the index.shape passed to the expand() method is (B, k2, 1), an error is raised. However, if these two methods are written in one line, namely index.unsqueeze().expand(), the index.shape passed to the expand() method seems to be (B, k2). Has the index.shape been computed and stored before performing .unsqueeze()? Then the .unsqueeze() won't affect the index.shape that is passed to .expand(). That is my guess, but I cannot figure out another one. Thank you for your time.
Consider the first case - index.unsqueeze(len(index.shape)).expand(*index.shape, dim) This is equivalent to the following code - A = index.unsqueeze(len(index.shape)) index = A.expand(*index.shape, dim) Note that Tensor index has not been changed after the execution of the first line. So when you then execute A.expand(*index.shape, dim) the original shape of index is used. However in the second case you when you first do index = index.unsqueeze(len(index.shape)) , you are changing index. So in the next step the new unsqueezed index's shape is used.
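A tiny demonstration of the point: unsqueeze() is out-of-place, so index keeps its old shape until you reassign it:
import torch

index = torch.zeros(2, 3, dtype=torch.long)
a = index.unsqueeze(len(index.shape))  # a new tensor of shape (2, 3, 1)
print(index.shape, a.shape)            # torch.Size([2, 3]) torch.Size([2, 3, 1])
# hence index.unsqueeze(...).expand(*index.shape, dim) still sees the original (2, 3)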
https://stackoverflow.com/questions/67595786/
How to iterate over PyTorch tensor
I have a tensor data of size (1000,110) and I want to iterate over the first index of the tensor and calculate the following. data = torch.randn(size=(1000,110)).to(device) male_poor = torch.tensor(0).float().to(device) male_rich = torch.tensor(0).float().to(device) female_poor = torch.tensor(0).float().to(device) female_rich = torch.tensor(0).float().to(device) for i in data: if torch.argmax(i[64:66]) == 0 and torch.argmax(i[108:110]) == 0: female_poor += 1 if torch.argmax(i[64:66]) == 0 and torch.argmax(i[108:110]) == 1: female_rich += 1 if torch.argmax(i[64:66]) == 1 and torch.argmax(i[108:110]) == 0: male_poor += 1 if torch.argmax(i[64:66]) == 1 and torch.argmax(i[108:110]) == 1: male_rich += 1 disparity = ((female_rich/(female_rich + female_poor))) / ((male_rich/(male_rich + male_poor))) Is there a faster way than for loop to do this?
The key in pytorch (as well as numpy) is vectorizataion, that is if you can remove loops by operating on matrices it will be a lot faster. Loops in python are quite slow compared to the loops in the underlying compiled C code. On my machine the execution time for your code was about 0.091s, the following vectorized code was about 0.002s so about x50 faster: import torch torch.manual_seed(0) device = torch.device('cpu') data = torch.randn(size=(1000, 110)).to(device) import time t = time.time() #vectorize over first dimension argmax64_0 = torch.argmax(data[:, 64:66], dim=1) == 0 argmax64_1 = torch.argmax(data[:, 64:66], dim=1) == 1 argmax108_0 = torch.argmax(data[:, 108:110], dim=1) == 0 argmax108_1 = torch.argmax(data[:, 108:110], dim=1) == 1 female_poor = (argmax64_0 & argmax108_0).sum() female_rich = (argmax64_0 & argmax108_1).sum() male_poor = (argmax64_1 & argmax108_0).sum() male_rich = (argmax64_1 & argmax108_1).sum() disparity = ((female_rich / (female_rich + female_poor))) / ((male_rich / (male_rich + male_poor))) print(time.time()-t) print(disparity)
https://stackoverflow.com/questions/67595954/
In pytorch, which part cost most time? Loss calculation or backward+step?
In PyTorch, which part costs the most time? Loss calculation (no backward, just computing the loss), backward, or step? Please explain a bit if you can.
Not just in PyTorch but in every framework, the backward step takes the most time. The backward pass has to propagate gradients through every operation recorded in the forward graph (roughly twice the work of the forward pass), whereas computing the loss itself is usually a single cheap operation and optimizer.step() is just an element-wise update of the parameters.
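If you want to see it on your own model, a minimal timing sketch (on a GPU you would need torch.cuda.synchronize() before reading each timer, since CUDA calls are asynchronous):
import time
import torch

model = torch.nn.Linear(1000, 1000)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(256, 1000)
y = torch.randn(256, 1000)

t0 = time.time()
loss = ((model(x) - y) ** 2).mean()  # forward + loss
t1 = time.time()
loss.backward()                      # backward
t2 = time.time()
opt.step()                           # parameter update
t3 = time.time()
print(f"forward+loss {t1 - t0:.4f}s, backward {t2 - t1:.4f}s, step {t3 - t2:.4f}s")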
https://stackoverflow.com/questions/67595993/
Is there a vectorized way to create a matrix in which each element is the the row-wise dot product of a matrix?
Apologies for the vagueness in the title; I spent some time rephrasing it but could not get it quite right. For example, I have a 2*3 matrix as a Pytorch tensor: test = torch.tensor([[1, 10, 100], [2, 20, 200]]) What I would like to have is a final matrix that is torch.tensor([[10101, 20202], [20202, 40404]]) where, as we can see, the (0,0) position is the first row's dot product with itself, (0,1) and (1,0) are the dot product of the first row and the second row, and (1,1) is the dot product of the second row with itself. Thanks!
You can either do matrix multiplication: test @ test.T Or a torch.einsum: torch.einsum('ij,kj->ik', test, test)
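Checking it against the example from the question:
import torch

test = torch.tensor([[1, 10, 100], [2, 20, 200]])
print(test @ test.T)
# tensor([[10101, 20202],
#         [20202, 40404]])
print(torch.einsum('ij,kj->ik', test, test))  # same result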
https://stackoverflow.com/questions/67610935/
Calculating the Euclidean distance between 2 points on a circle in D-dimensional space
It's fairly straightforward to calculate a direct Euclidean distance between 2 points: import torch p1 = torch.tensor([1.0, 3.5]) p2 = torch.tensor([5.0, 9.2]) dis = torch.sqrt(torch.sum(torch.square(p1-p2))) dis >>> tensor(6.9635) However, how can I calculate the distance between 2 points on a circle without going through the circle? That is, the distance along the circle's perimeter, in D-dimensional space. Clearly, in 2D a circle is just a circle: import numpy as np import matplotlib.pyplot as plt def circle_points(r, n): circles = [] for r, n in zip(r, n): t = np.linspace(0, 2*np.pi, n, endpoint=False) x = r * np.cos(t) y = r * np.sin(t) circles.append(np.c_[x, y]) return circles r = [2] n = [20] circles = circle_points(r, n) fig, ax = plt.subplots() for circle in circles: ax.scatter(circle[:, 0], circle[:, 1]) ax.set_aspect('equal') plt.show() point_1 = circles[0][0] point_2 = circles[0][11] print('point_1: ', point_1, 'point_2: ', point_2) >>> point_1: [2. 0.] point_2: [-1.90211303 -0.61803399] While in 3D it will be a sphere, in 4D a hypersphere, etc.
Let center be the center of your circle. Then you can compute the angle between the two center-to-point vectors with the dot product formula, which relates the dot product to the cosine of the angle and the norms of the vectors (look for the geometric definition of the dot product): normalize = lambda vect: vect/vect.norm() v1 = normalize(point_1 - center) v2 = normalize(point_2 - center) angle = torch.arccos(v1.dot(v2)) Then, the length of the arc is the radius of the circle times the angle, i.e. distance = angle*r
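Applying the formula to the two 2-D points from the question (center at the origin, r = 2):
import torch

center = torch.tensor([0.0, 0.0])
point_1 = torch.tensor([2.0, 0.0])
point_2 = torch.tensor([-1.90211303, -0.61803399])

normalize = lambda vect: vect / vect.norm()
v1 = normalize(point_1 - center)
v2 = normalize(point_2 - center)
angle = torch.arccos(v1.dot(v2))
print(angle * 2.0)  # arc length ~ 5.655; arccos returns the angle in [0, pi],
                    # so this is the length of the shorter arc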
https://stackoverflow.com/questions/67610957/
How to run a proper Bayesian Logistic Regression
I'm trying to run a bayesian logistic regression on the wine dataset provided from the sklearn package. As variables, I decided to use alcohol, color_intensity, flavanoids, hue and magnesium where alcohol is my response variable and the rest the predictors. To do so, I'm using pyro and torch packages: import pyro import torch import pyro.distributions as dist import pyro.optim as optim from pyro.infer import SVI, Trace_ELBO import pandas as pd import numpy as np from pyro.infer import Predictive import torch.distributions.constraints as constraints from sklearn import datasets pyro.set_rng_seed(0) #loading data and prepearing dataframe wine = datasets.load_wine() data = pd.DataFrame(columns = wine['feature_names'], data=wine['data'] ) #choosiing variables: response and predictors variables = data[['alcohol', 'color_intensity', 'flavanoids', 'hue', 'magnesium']] #standardization variables = (variables-variables.min())/(variables.max()-variables.min()) #tensorizing alcohol = torch.tensor(variables['alcohol'].values, dtype=torch.float) predictors = torch.stack([torch.tensor(variables[column].values, dtype=torch.float) for column in ['alcohol', 'color_intensity', 'flavanoids', 'hue', 'magnesium']], 1) #splitting data k = int(0.8 * len(variables)) x_train, y_train = predictors[:k], alcohol[:k] x_test, y_test = predictors[k:], alcohol[k:] #modelling def model_alcohol(predictors, alcohol): n_observations, n_predictors = predictors.shape #weights w = pyro.sample('w', dist.Normal(torch.zeros(n_predictors), torch.ones(n_predictors))) epsilon = pyro.sample('epsilon', dist.Normal(0.,1.)) #non-linearity y_hat = torch.sigmoid((w*predictors).sum(dim=1) + epsilon) sigma = pyro.sample("sigma", dist.Uniform(0.,3.)) with pyro.plate('alcohol', len(alcohol)): y=pyro.sample('y', dist.Normal(y_hat, sigma), obs=alcohol) def guide_alcohol(predictors, alcohol=None): n_observations, n_predictors = predictors.shape w_loc = pyro.param('w_loc', torch.rand(n_predictors)) w_scale = pyro.param('w_scale', torch.rand(n_predictors), constraint=constraints.positive) w = pyro.sample('w', dist.Normal(w_loc, w_scale)) epsilon_loc = pyro.param('b_loc', torch.rand(1)) epsilon_scale = pyro.param('b_scale', torch.rand(1), constraint=constraints.positive) epsilon = pyro.sample('epsilon', dist.Normal(epsilon_loc, epsilon_scale)) sigma_loc = pyro.param('sigma_loc', torch.rand(n_predictors)) sigma_scale = pyro.param('sigma_scale', torch.rand(n_predictors), constraint=constraints.positive) sigma = pyro.sample('sigma', dist.Normal(sigma_loc, sigma_scale)) alcohol_svi = SVI(model=model_alcohol, guide=guide_alcohol, optim=optim.ClippedAdam({'lr' : 0.0002}), loss=Trace_ELBO()) losses = [] for step in range(10000): loss = alcohol_svi.step(x_train, y_train)/len(x_train) losses.append(loss) As I have to use Stochastic Variational Inference, I have defined both the model and the guide. My problem is now at matching tensor sizes, as I now I get the error: RuntimeError: The size of tensor a (142) must match the size of tensor b (5) at non-singleton dimension 0 Trace Shapes: Param Sites: Sample Sites: w dist 5 | value 5 | epsilon dist | value 1 | sigma dist | value 5 | alcohol dist | value 142 | I'm kinda new to the idea of modelling on my own, so clearly there are mistakes around the code (hopefully not on the theory behind it). Still, I see I should adjust dimension on the guide maybe? I'm not entirely sure on how to honestly.
Your main problem is that w is not declared as a single event (.to_event(1)), and your variance (sigma) should have the same dim as your observations (()). The model and guide below fix this; I suggest you look at auto-generated guides in Pyro, and a different prior on sigma. def model_alcohol(predictors, alcohol): n_observations, n_predictors = predictors.shape # weights # w is a single event w = pyro.sample('w', dist.Normal(torch.zeros(n_predictors), torch.ones(n_predictors)).to_event(1)) epsilon = pyro.sample('epsilon', dist.Normal(0., 1.)) # non-linearity y_hat = torch.sigmoid(predictors @ w + epsilon) # (predictors * weight).sum(1) == predictors @ w sigma = pyro.sample("sigma", dist.Uniform(0., 3.)) with pyro.plate('alcohol', len(alcohol)): pyro.sample('y', dist.Normal(y_hat, sigma), obs=alcohol) def guide_alcohol(predictors, alcohol=None): n_observations, n_predictors = predictors.shape w_loc = pyro.param('w_loc', torch.rand(n_predictors)) w_scale = pyro.param('w_scale', torch.rand(n_predictors), constraint=constraints.positive) pyro.sample('w', dist.Normal(w_loc, w_scale).to_event(1)) epsilon_loc = pyro.param('b_loc', torch.rand(1)) epsilon_scale = pyro.param('b_scale', torch.rand(1), constraint=constraints.positive) epsilon = pyro.sample('epsilon', dist.Normal(epsilon_loc, epsilon_scale)) sigma_loc = pyro.param('sigma_loc', torch.rand(1)) sigma_scale = pyro.param('sigma_scale', torch.rand(1), constraint=constraints.positive) pyro.sample('sigma', dist.HalfNormal(sigma_loc, sigma_scale)) # MUST BE POSITIVE
https://stackoverflow.com/questions/67617781/
Transforming the indices back through view in pytorch
I have the following code, where I define a torch tensor of zeros, change one item to be equal to 1, pass through three reshape functions. Then, after all the transformations, I obtain the index of 1. I am wondering if it is possible to somehow use the max_idx and the information about the permutations/.view to obtain the index of 1 in the initial B1 tensor (which should be equal to 1234). A1 = np.zeros(10*18*40*28) A1[1234] = 1 A1 = A1.reshape(10, 18, 40, 28) B1 = torch.Tensor(A1) print('B1: ', B1.shape, torch.nonzero(B1)) C1 = B1.permute(0, 2, 3, 1) print('C1: ', C1.shape, torch.nonzero(C1)) D1 = C1.contiguous().view(C1.shape[0], C1.shape[1], C1.shape[2], 3, 6) print('D1: ', D1.shape, torch.nonzero(D1)) E1 = D1.contiguous().view(D1.shape[0], -1, 6) print('E1: ', E1.shape, torch.nonzero(E1)) max_idx = torch.nonzero(E1) I would love to hear any hints on how I can try to do this :)
For each dimension, you can check in which index the value you refer to lies. After you find the right index, you subtract the number of indices that came before the next dimension and solve the same problem for a smaller sub-array. Or you can just use the function 'numpy.unravel_index' that does the exact same thing. import numpy as np import torch A1 = np.zeros(10*18*40*28) idx = 1234 A1[idx] = 1 A1 = A1.reshape(10, 18, 40, 28) B1 = torch.Tensor(A1) print('B1: ', B1.shape, torch.nonzero(B1)) idx_temp = idx+0 idxB1 = np.zeros((B1.dim(),), dtype = int) for i in range(B1.dim()): idxB1[i] = idx_temp//np.prod(B1.shape[i+1:]) idx_temp -= np.prod(B1.shape[i+1:])*idxB1[i] idxB1np = np.unravel_index(idx, B1.shape) print(f'idxB1 = {idxB1}') print(f'idxB1np = {idxB1np}') output: B1: torch.Size([10, 18, 40, 28]) tensor([[0, 1, 4, 2]]) idxB1 = [0 1 4 2] idxB1np = (0, 1, 4, 2)
https://stackoverflow.com/questions/67618462/
Documenting jit-compiled PyTorch Class Method (Sphinx)
I am having a problem trying to document custom PyTorch models with Sphinx: methods that are jit-compiled show up without docstrings in the documentation. How do I fix this? I checked out Python Sphinx autodoc and decorated members and How to autodoc decorated methods with sphinx? but the proposed solutions don't seem to work. When I try using ..automethod I get
AttributeError: '_CachedForward' object has no attribute '__getattr__'
Here's a MWE; at the moment I circumvent the problem by writing a my_forward and calling it in the forward.
from torch import jit, Tensor

class DummyModel(jit.ScriptModule):
    """My dummy model"""

    def __init__(self, const: float):
        super(DummyModel, self).__init__()
        self.const = Tensor(const)

    @jit.script_method
    def forward(self, x: Tensor) -> Tensor:
        """This method does show as a :undoc-member: in the documentation"""
        return self.my_forward(x)

    def my_forward(self, x: Tensor) -> Tensor:
        """This method shows up as a :member: in the documentation"""
        return x + self.const
Set the PYTORCH_JIT environment variable to 0 when running Sphinx. This will disable script and tracing annotations (decorators). See https://pytorch.org/docs/stable/jit.html#disable-jit-for-debugging.
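For reference, a minimal sketch of how that can look in practice (the build command and module layout are hypothetical): either set the variable on the command line when building the docs,
PYTORCH_JIT=0 sphinx-build -b html docs docs/_build
or set it at the very top of docs/conf.py, before the documented package (and hence torch.jit) is imported:
import os
os.environ["PYTORCH_JIT"] = "0"  # must run before the first `import torch` / model import
With scripting disabled, forward is a plain Python method again and autodoc can read its docstring.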
https://stackoverflow.com/questions/67618471/
How to train pytorch model using batches with partial information
In this PyTorch model I have two streams of data with two different modalities that are input into the model at the same time. The streams of data are blocks of sequential data. So I have modality one M1 = [[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]] and modality two M2 = [[11,11,11,11],[22,22,22,22],[33,33,33,33],[44,44,44,44]]. I want to train this model with a scheme in which, during training, batches of sequential data randomly have partial or full information. So there will be three possibilities during training: M1 and M2 both have their full sequential data; or M1 is clipped, meaning that its sequential data is set to zero (e.g. M1 = [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]]) while M2 has its full sequential data; or M2 is clipped, meaning that its sequential data is set to zero, while M1 has its full sequential data. Are there any PyTorch functions that will do that automatically for me, or does anyone know what would be a good way to implement this?
Let M1 and M2 be 2D tensors or 3D batched tensors for training, and let p1,p2 and p3 be the probabilities that M1, M2, or neither is zeroed: p1 = 0.5 # for example p2 = 0.3 # for example # p3 = 1- p1 - p2 = 0.2 randn = torch.rand(1) if randn < p1: M1 *= 0 elif randn > p1 and randn < p1+p2: M2 *= 0 # pass M1 and M2 to your model As a secondary note, if possible you may want to consider zeroing the gradient for whichever modality had its data erased. You don't really want your network to learn that the zero-values have any significance, which you are at risk of learning. Depending on the structure of your model, this may be possible. For instance, if M1 is processed by 1 network branch and M2 is processed by a different branch, you could constrain so that loss is only back-propagated through the branch that received non-zeroed inputs. For other network structures though, this may not be possible.
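To sketch the gradient-isolation idea (assuming a hypothetical model with one sub-network per modality — branch1, branch2 and head are placeholder names, not from the question), you can detach the features of the zeroed branch so no gradient flows back into it:
# branch1, branch2, head are hypothetical sub-modules of your model
feat1 = branch1(M1)
feat2 = branch2(M2)
if m1_was_zeroed:           # flag set when M1 *= 0 was applied above
    feat1 = feat1.detach()  # blocks backprop into branch1 for this batch
elif m2_was_zeroed:
    feat2 = feat2.detach()  # blocks backprop into branch2 for this batch
out = head(torch.cat([feat1, feat2], dim=-1))
The head still trains on every batch, but a branch only receives gradients when it saw real data.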
https://stackoverflow.com/questions/67619611/
Create dataset out of x_train and y_train
How to put the x_train and y_train into a model for training? The x_train is a tensor of size (3000, 13). The y_train is of size (3000, 1) That is for each element of x_train (1, 13), the respective y label is one digit from y_train. if I do: train_data = (train_feat, train_labels) print(train_data[0].shape) print(train_data[1].shape) torch.Size([3082092, 13]) torch.Size([3082092, 1]) train_loader = data.DataLoader(dataset=train_data, batch_size= 7, shuffle=True) The dataloader does not return the batch size, but returns the whole dataset instead
You can use the TensorDataset constructor: import torch.utils.data as data_utils dataset = data_utils.TensorDataset(train_feat, train_labels) train_loader = data_utils.DataLoader(dataset, batch_size=7, shuffle=True)
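A quick usage sketch (assuming the shapes from the question): each iteration now yields a batch of 7 rows and their labels instead of the whole dataset.
for batch_feat, batch_labels in train_loader:
    print(batch_feat.shape)    # torch.Size([7, 13])
    print(batch_labels.shape)  # torch.Size([7, 1])
    break  # the final batch may be smaller if the dataset size isn't divisible by 7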
https://stackoverflow.com/questions/67620076/
How to replace torch.sparse with other pytorch function?
I would like to replace the torch.sparse function using the other Pytorch function. i = torch.LongTensor([[0, 1, 1], [2, 0, 2]]) v = torch.FloatTensor([3, 4, 5]) out1 = torch.sparse.FloatTensor(i, v, torch.Size([2,3])).to_dense() out2 = ??? out1 == out2 > tensor(True) Background: I'm converting a Pytorch model to CoreML, but the sparse_coo_tensor operator defined in the torch.norm function is not implemented with CoreMLTools. A few people have trouble with this problem, but CoreMLTools is still unsupported. So I'd like to replace it without the operator used in torch.sparse.FloatTensor. I have tried torch.sparse_coo_tensor but it was not supported. I have created a simple colaboratory notebook that reproduces this. Please test it using the following colab. https://colab.research.google.com/drive/1TzpeJqEcmCy4IuXhhl6LChFocfZVaR1Q?usp=sharing I've asked about different operators on stackoverflow before, so please refer to that. How to replace torch.norm with other pytorch function?
If I understand the sparse_coo format correctly, the two rows of i are just the coordinates at which to copy the values of v. Which means that you can instead create your matrix like:
def dense_from_coo(i, v):
    rows = i[0].max() + 1
    cols = i[1].max() + 1
    out = torch.zeros(rows, cols)
    out[i[0], i[1]] = v
    return out

print(dense_from_coo(i, v))
>>> tensor([[0., 0., 3.],
        [4., 0., 5.]])

print(torch.sparse.FloatTensor(i, v, torch.Size([2,3])).to_dense())
>>> tensor([[0., 0., 3.],
        [4., 0., 5.]])
https://stackoverflow.com/questions/67620416/
Why is saving state_dict getting slower as training progresses?
I'm saving my model's and optimizer's state dict as follows:
if epoch % 50000 == 0:  # checkpoint save every 50000 epochs
    print('\nSaving model... Loss is: ', loss)
    torch.save({
        'epoch': epoch,
        'model': self.state_dict(),
        'optimizer_state_dict': self.optimizer.state_dict(),
        'scheduler': self.scheduler.state_dict(),
        'loss': loss,
        'losses': self.losses,
    }, PATH)
When I first start the training it saves in less than 5 seconds. However, after a couple of hours of training it takes over two minutes to save. The only reason I could think of is the list of losses. But I can't see how that would increase the time by that much.
Update 1: I have my losses as:
self.losses = []
I'm appending the loss at each epoch to this list as follows:
# ... loss calculation
loss.backward()
self.optimizer.step()
self.scheduler.step()
self.losses.append(loss)
As mentioned in the comments, the instruction self.losses.append(loss) is definitely the culprit, and should be replaced with
self.losses.append(loss.item())
The reason is that when you store the tensor loss, you also store the whole computational graph alongside it (all the information that is required to perform the backprop). In other words, you are not merely storing a tensor, but also the pointers to all the tensors that have been involved in the computation of the loss and their relations (which ones were added, multiplied, etc.). So it grows really big really fast. When you do loss.item() (or loss.detach(), which would work as well), you detach the tensor from the computational graph, and thus you only store what you intended: the loss value itself, as a simple float value.
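In loop form, a minimal sketch of the corrected bookkeeping (the surrounding training code is assumed from the question):
# ... loss calculation
loss.backward()
self.optimizer.step()
self.scheduler.step()
self.losses.append(loss.item())  # stores a plain Python float, not the graph
With this change the checkpoint size, and hence the save time, stays constant over training.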
https://stackoverflow.com/questions/67620556/
Implementing a 3D gaussian blur using separable 2D convolutions in pytorch
I'm trying to implement a gaussian-like blurring of a 3D volume in pytorch. I can do a 2D blur of a 2D image by convolving with a 2D gaussian kernel easy enough, and the same approach seems to work for 3D with a 3D gaussian kernel. However, it is very slow in 3D (especially with larger sigmas/kernel sizes). I understand this can also be done instead by convolving 3 times with the 2D kernel which should be much faster, but I can't get this to work. My test case is below. import torch import torch.nn.functional as F VOL_SIZE = 21 def make_gaussian_kernel(sigma): ks = int(sigma * 5) if ks % 2 == 0: ks += 1 ts = torch.linspace(-ks // 2, ks // 2 + 1, ks) gauss = torch.exp((-(ts / sigma)**2 / 2)) kernel = gauss / gauss.sum() return kernel def test_3d_gaussian_blur(blur_sigma=2): # Make a test volume vol = torch.zeros([VOL_SIZE] * 3) vol[VOL_SIZE // 2, VOL_SIZE // 2, VOL_SIZE // 2] = 1 # 3D convolution vol_in = vol.reshape(1, 1, *vol.shape) k = make_gaussian_kernel(blur_sigma) k3d = torch.einsum('i,j,k->ijk', k, k, k) k3d = k3d / k3d.sum() vol_3d = F.conv3d(vol_in, k3d.reshape(1, 1, *k3d.shape), stride=1, padding=len(k) // 2) # Separable 2D convolution vol_in = vol.reshape(1, *vol.shape) k2d = torch.einsum('i,j->ij', k, k) k2d = k2d / k2d.sum() k2d = k2d.expand(VOL_SIZE, 1, *k2d.shape) for i in range(3): vol_in = vol_in.permute(0, 3, 1, 2) vol_in = F.conv2d(vol_in, k2d, stride=1, padding=len(k) // 2, groups=VOL_SIZE) vol_3d_sep = vol_in torch.allclose(vol_3d, vol_3d_sep) # --> False Any help would be very much appreciated!
You theoretically can compute the 3d-gaussian convolution using three 2d-convolutions, but that would mean you have to reduce the size of the 2d-kernel, as you're effectively convolving in each direction twice. But computationally more efficient (and what you usually want) is a separation into 1d-kernels. I changed the second part of your function to implement this. (And I must say I really liked your permutation-based approach!) Since you're using a 3d volume you can't really use the conv2d or conv1d functions well, so the best thing is really just using conv3d even if you're just computing 1d-convolutions.
Note that allclose uses a threshold of 1e-8 which we do not reach with this method, probably due to cancellation errors.
def test_3d_gaussian_blur(blur_sigma=2):
    # Make a test volume
    vol = torch.randn([VOL_SIZE] * 3)  # using something other than zeros
    vol[VOL_SIZE // 2, VOL_SIZE // 2, VOL_SIZE // 2] = 1

    # 3D convolution
    vol_in = vol.reshape(1, 1, *vol.shape)
    k = make_gaussian_kernel(blur_sigma)
    k3d = torch.einsum('i,j,k->ijk', k, k, k)
    k3d = k3d / k3d.sum()
    vol_3d = F.conv3d(vol_in, k3d.reshape(1, 1, *k3d.shape), stride=1, padding=len(k) // 2)

    # Separable 1D convolution
    vol_in = vol[None, None, ...]
    # k2d = torch.einsum('i,j->ij', k, k)
    # k2d = k2d / k2d.sum()  # not necessary if kernel already sums to one, check:
    # print(f'{k2d.sum()=}')
    k1d = k[None, None, :, None, None]
    for i in range(3):
        vol_in = vol_in.permute(0, 1, 4, 2, 3)
        vol_in = F.conv3d(vol_in, k1d, stride=1, padding=(len(k) // 2, 0, 0))
    vol_3d_sep = vol_in
    print((vol_3d - vol_3d_sep).abs().max())  # something ~1e-7
    print(torch.allclose(vol_3d, vol_3d_sep))  # allclose checks if it is around 1e-8
Addendum: If you really want to abuse conv2d to process the volumes you can try
# separate 3d kernel into 1d + 2d
vol_in = vol[None, None, ...]
k2d = torch.einsum('i,j->ij', k, k)
k2d = k2d.expand(VOL_SIZE, 1, len(k), len(k))
# k2d = k2d / k2d.sum()  # not necessary if kernel already sums to one, check:
# print(f'{k2d.sum()=}')
k1d = k[None, None, :, None, None]
vol_in = F.conv3d(vol_in, k1d, stride=1, padding=(len(k) // 2, 0, 0))
vol_in = vol_in[0, ...]
# abuse conv2d-groups argument for volume dimension, works only for 1 channel volumes
vol_in = F.conv2d(vol_in, k2d, stride=1, padding=(len(k) // 2, len(k) // 2), groups=VOL_SIZE)
vol_3d_sep = vol_in
Or using exclusively conv2d you could do:
# separate 3d kernel into 1d + 2d
vol_in = vol[None, ...]
# 1d kernel
k1d = k[None, None, :, None]
k1d = k1d.expand(VOL_SIZE, 1, len(k), 1)
# 2d kernel
k2d = torch.einsum('i,j->ij', k, k)
k2d = k2d.expand(VOL_SIZE, 1, len(k), len(k))
vol_in = vol_in.permute(0, 2, 1, 3)
vol_in = F.conv2d(vol_in, k1d, stride=1, padding=(len(k) // 2, 0), groups=VOL_SIZE)
vol_in = vol_in.permute(0, 2, 1, 3)
vol_in = F.conv2d(vol_in, k2d, stride=1, padding=(len(k) // 2, len(k) // 2), groups=VOL_SIZE)
vol_3d_sep = vol_in
These should still be faster than three consecutive 2d convolutions.
https://stackoverflow.com/questions/67633879/
Python: BERT Model Pooling Error - mean() received an invalid combination of arguments - got (str, int)
I am writing the code to train a BERT model on my dataset. But when I run the code it throws an error in the average pool layer. I am unable to understand what causes this error.
Model
class BERTBaseUncased(nn.Module):
    def __init__(self, bert_path):
        super(BERTBaseUncased, self).__init__()
        self.bert_path = bert_path
        self.bert = transformers.BertModel.from_pretrained(self.bert_path)
        self.bert_drop = nn.Dropout(0.3)
        self.out = nn.Linear(768 * 2, 1)

    def forward(self, ids, mask, token_type_ids):
        o1, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        apool = torch.mean(o1, 1)
        mpool, _ = torch.max(o1, 1)
        cat = torch.cat((apool, mpool), 1)
        bo = self.bert_drop(cat)
        p2 = self.out(bo)
        return p2
Error
Exception in device=TPU:0: mean() received an invalid combination of arguments - got (str, int), but expected one of:
 * (Tensor input, *, torch.dtype dtype)
 * (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out)
 * (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out)
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 228, in _start_fn
    fn(gindex, *args)
  File "<ipython-input-12-94e926c1f4df>", line 4, in _mp_fn
    a = _run()
  File "<ipython-input-5-ef9fa564682f>", line 146, in _run
    train_loop_fn(para_loader.per_device_loader(device), model, optimizer, device, scheduler=scheduler)
  File "<ipython-input-5-ef9fa564682f>", line 22, in train_loop_fn
    token_type_ids=token_type_ids
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 577, in __call__
    result = self.forward(*input, **kwargs)
  File "<ipython-input-11-9196e0d23668>", line 73, in forward
    apool = torch.mean(o1, 1)
TypeError: mean() received an invalid combination of arguments - got (str, int), but expected one of:
 * (Tensor input, *, torch.dtype dtype)
 * (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out)
 * (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out)
I am trying to run this on a Kaggle TPU. How to fix this?
Since one of the 3.x updates, the models now return task-specific output objects (which are dictionaries) instead of plain tuples. You can either force the model to return a tuple by specifying return_dict=False:
o1, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids, return_dict=False)
or utilize the BaseModelOutputWithPoolingAndCrossAttentions object:
o = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
# you can view the other attributes with o.keys()
o1 = o.last_hidden_state
https://stackoverflow.com/questions/67635055/
Is there a significant speed improvement when using transformers tokenizer over batch compared to per item?
Is calling the tokenizer on a batch significantly faster than calling it on each item in a batch? e.g.
encodings = tokenizer(sentences)
# vs
encodings = [tokenizer(x) for x in sentences]
i ended up just timing both in case it's interesting for someone else %%timeit for _ in range(10**4): tokenizer("Lorem ipsum dolor sit amet, consectetur adipiscing elit.") 785 ms ± 24.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %%timeit tokenizer(["Lorem ipsum dolor sit amet, consectetur adipiscing elit."]*10**4) 266 ms ± 6.52 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
https://stackoverflow.com/questions/67639478/
pytorch cross-entropy-loss weights not working
I was playing around with some code and it behaved differently than what I expected. So I dumbed it down to a minimally working example:
import torch

test_act = torch.tensor([[2.,0.]])
test_target = torch.tensor([0])

loss_function_test = torch.nn.CrossEntropyLoss()
loss_test = loss_function_test(test_act, test_target)
print(loss_test)
> tensor(0.1269)

weights = torch.tensor([0.1,0.5])
loss_function_test = torch.nn.CrossEntropyLoss(weight=weights)
loss_test = loss_function_test(test_act, test_target)
print(loss_test)
> tensor(0.1269)
As you can see, the outputs are the same regardless of whether weights are present or not. But I would expect the second output to be 0.0127. Is there some normalization going on that I don't know about? Or is it possibly bugged?
In this example, I add a second datum with a different target class, and the effect of weights is visible.
import torch

test_act = torch.tensor([[2.,1.],[1.,4.]])
test_target = torch.tensor([0,1])

loss_function_test = torch.nn.CrossEntropyLoss()
loss_test = loss_function_test(test_act, test_target)
print(loss_test)
>>> tensor(0.1809)

weights = torch.tensor([0.1,0.5])
loss_function_test = torch.nn.CrossEntropyLoss(weight=weights)
loss_test = loss_function_test(test_act, test_target)
print(loss_test)
>>> tensor(0.0927)
This effect is because "The losses are averaged across observations for each minibatch. If the weight argument is specified then this is a weighted average", but only across the minibatch. Personally I find this a bit strange and would think it would be useful to apply the weights globally (i.e. even if all classes are not present in each minibatch). One of the prominent uses of the weight parameter would ostensibly be to give more weight to classes that are under-represented in the dataset, but by this formulation the minority classes are only given higher weights for the minibatches in which they are present (which, of course, is a low percentage because they are a minority class). In any case that is how Pytorch defines this operation.
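To see that weighted average concretely, here is a minimal sketch (reusing test_act, test_target and weights from above) that reproduces nn.CrossEntropyLoss(weight=...) by hand: each sample's cross-entropy is scaled by the weight of its target class, and the sum is divided by the sum of those weights, not by the batch size.
logp = torch.log_softmax(test_act, dim=1)
per_sample = -logp[torch.arange(len(test_target)), test_target]  # unweighted CE per sample
w = weights[test_target]                                         # weight of each sample's target class
manual = (w * per_sample).sum() / w.sum()                        # weighted mean over the minibatch
print(manual)  # ~tensor(0.0927), matching loss_test above
This also makes the first example's behaviour clear: with a single sample, the weight appears in both numerator and denominator and cancels out.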
https://stackoverflow.com/questions/67639540/
Python image types, shapes, and channels for segmentation
I am using this tutorial for instance segmentation in PyTorch. The test data the tutorial uses includes images and accompanying image masks from a dataset available here. I have an example of one of the image masks from that data set here (example data for this question). That mask looks like this by default in the dataset: The tutorial uses this code: mask.putpalette([ 0, 0, 0, # black background 255, 0, 0, # index 1 is red 255, 255, 0, # index 2 is yellow 255, 153, 0, # index 3 is orange ]) as explanatory step for the mask to make it look like this: but that code is not requisite in the segmentation process itself. It's just used to show what the mask contains. I am trying to use my own image data. I created masks for the images in G.I.M.P. This is one of the masks I made. It looks like this by default. As I try to run the tutorial code, I have problems with the masks. This code chunk creates a class that creates PyTorch datasets. import os import numpy as np import torch import torch.utils.data from PIL import Image class PennFudanDataset(torch.utils.data.Dataset): def __init__(self, root, transforms=None): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set # of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) masks = torch.as_tensor(masks, dtype=torch.uint8) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["masks"] = masks target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) dataset = PennFudanDataset('PennFudanPed/') dataset[0] The last line returns: (<PIL.Image.Image image mode=RGB size=559x536 at 0x7FCB4267C390>, {'area': tensor([35358., 36225.]), 'boxes': tensor([[159., 181., 301., 430.], [419., 170., 534., 485.]]), 'image_id': tensor([0]), 'iscrowd': tensor([0, 0]), 'labels': tensor([1, 1]), 'masks': tensor([[[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 
0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]], dtype=torch.uint8)}) When I run this code with my data, ... dataset = four_chs('drive/MyDrive/chambers/') dataset[0] I get this error: /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:38: DeprecationWarning: elementwise comparison failed; this will raise an error in the future. --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-17-12074ae9ab35> in <module>() 1 len(dataset) ----> 2 dataset[0] <ipython-input-1-99ab92a46ebe> in __getitem__(self, idx) 42 boxes = [] 43 for i in range(num_objs): ---> 44 pos = np.where(masks[i]) 45 xmin = np.min(pos[1]) 46 xmax = np.max(pos[1]) TypeError: 'bool' object is not subscriptable I'm not sure exactly what is going on, but there is a difference between my mask and the mask that is in the test data. They are both PNG file types, but it seems that mine has the red, blue, green channels split out with another channel or something, but I don't know what it is, based on the shape of the object in Python. This comes from one of the masks I made: mask2 = np.array(mask1) mask2.shape (5312, 2988, 4) For one of the test data masks: mask2 = np.array(mask) mask2.shape (536, 559) There appears to be only one channel? So with their different shapes, I guess that why I get the error from (this is an excerpt from the code I pasted earlier) ... mask_path = os.path.join(self.root, "masks", self.masks[idx]) # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set # of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) for i in range(num_objs): pos = np.where(masks[i]) ... How do I get my masks' shapes to match those of the masks in the test data so that I can use the rest of segmentation code create my PyTorch compatible dataset that will work with the segmentation algorithm? I'm not trying to get the same height and width, but change the number of channels/layers, but I don't think I want it to be grayscale. EDIT after HansHirse's comment: I went to back to G.I.M.P and used the Image Mode menu to change the image to grayscale. I exported with that setting. I tried to run the code with that file and it did not work. I also found a way to convert the R.B.G. image to grayscale with upon import with Image.open().convert("L"). This does not work either. In both cases, the problem has to do with speckles of colors that I thought were separate being mixed together. For example, I used HansHirse's advice to fill in the areas of interest with grey "colors" of 1,2,3,and 4, while the background stayed 0. Upon importing the file that created, the values of those files are 3,5,8, and 10. And while one of the shapes may be mostly of value 3, there are vagrant pixels with that value in the other shapes, so there is no value that is contained entirely in one shape. With that situation, the code draws bounding boxes that surround all 4 shapes, instead of around one shape. I'm aware of using the hue, saturation, value (H.S.V.) color space and tried converting to that color space. That still doesn't solve the problem for me. 
I'm trying to figure out how to use something like np.where( mask[<buffered shape1 xmin>,<buffered shape1 xmax>, <buffered shape1 ymin>, <buffered shape1 ymax>,0] == <majority color value for shape>) to sort of quarter up the mask, filter based on the main color's value for that shape in that quarter, and get the actual x and y values for the shape in that quarter. With those values, I figure I can use the min and max from the actual values to create my bounding boxes. Another note is that when exporting from G.I.M.P., there is a drop-down menu in the export window to set the file to 8 bit R.G.B. or Gray. Select 8bpc Gray for the format needed.
Following is an example how to create a grayscale image representing classes for a segmentation task or similar. On some black background, draw some shapes with fill values in the range of 1, ..., #classes. For visualization purposes, this mask is plotted as perceived as a regular grayscale image as well as scaled to the said value range – to emphasize that the mask looks all black in general, but there's actual content in it. This mask is saved as a lossless PNG image, and then opened using Pillow, and converted to mode P. Last step is to set up a proper palette for the desired number of colors, and apply that palette using Image.putpalette. import cv2 import matplotlib.pyplot as plt import numpy as np from PIL import Image # Generate mask: 0 - Background | 1 - Class 1 | 2 - Class 2, and so on. mask = np.zeros((300, 300), np.uint8) cv2.rectangle(mask, (30, 40), (75, 60), 1, cv2.FILLED) cv2.circle(mask, (230, 50), 85, 2, cv2.FILLED) cv2.ellipse(mask, (230, 230), (60, 40), 0, 0, 360, 3, cv2.FILLED) cv2.line(mask, (20, 240), (80, 260), 4, 5) # Save mask as lossless PNG image cv2.imwrite('mask.png', mask) # Visualization plt.figure(1, figsize=(18, 6)) plt.subplot(1, 3, 1), plt.imshow(mask, vmin=0, vmax=255, cmap='gray') plt.colorbar(), plt.title('Mask when shown as regular image') plt.subplot(1, 3, 2), plt.imshow(mask, cmap='gray') plt.colorbar(), plt.title('Mask when shown scaled to values 0 - 4') # Open mask with Pillow, and convert to mode 'P' mask = Image.open('mask.png').convert('P') # Set up and apply palette data mask.putpalette([ 0, 0, 0, # Background - Black 255, 0, 0, # Class 1 - Red 0, 255, 0, # Class 2 - Green 0, 0, 255, # Class 3 - Blue 255, 255, 0]) # Class 4 - Yellow # More visualization plt.subplot(1, 3, 3), plt.imshow(mask) plt.title('Mask when shown as indexed image') plt.tight_layout(), plt.show() The first steps generating the actual mask can be done in GIMP, of course. Please be sure to use black background, and fill values in the range 1, ..., #classes. If you have difficulties to do that because these colors are all nearly black, draw your shapes in some bright, distinguishable colors, and later just fill these with values 1, 2, and so on. ---------------------------------------- System information ---------------------------------------- Platform: Windows-10-10.0.19041-SP0 Python: 3.9.1 PyCharm: 2021.1.1 Matplotlib: 3.4.2 NumPy: 1.20.3 OpenCV: 4.5.2 Pillow: 8.2.0 ----------------------------------------
https://stackoverflow.com/questions/67642262/
Pytorch Model always outputs 0.5 for an unkown reason
I have a pytorch model I'm trying to use to do facial recognition. I am using the same model structure, loss, and optimizer as a working code, but it seems like the backprop won't do anything, any output of the NN is just 0.5. Here is the code, any help/suggestions is/are appreciated. import cv2 import numpy as np import PIL import torchvision.transforms as transforms import torch from tqdm import tqdm import torch.nn as nn import torch.nn.functional as F devicet = 'cuda' if torch.cuda.is_available() else 'cpu' device = torch.device(devicet) if devicet == 'cpu': print ('Using CPU') else: print ('Using GPU') cuda0 = torch.device('cuda:0') class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.step1 = nn.Linear(10000, 200) self.step2 = nn.Linear(200, 200) self.step3 = nn.Linear(200, 1) def forward(self, x): x = F.relu(x) x = self.step1(x) x = F.relu(x) x = self.step2(x) x = F.relu(x) x = self.step3(x) x = F.relu(x) x = torch.sigmoid(x) return (x) net = Net() transformer = transforms.ToTensor() original_image = cv2.imread('group.jpg', cv2.IMREAD_GRAYSCALE) face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml") detected_faces = face_cascade.detectMultiScale(image=original_image, scaleFactor=1.05, minNeighbors=2) tensor_collection_faces = [] tensor_collection_output = [] for (x, y, width, height) in detected_faces: im_pil = PIL.Image.fromarray(original_image) im_pil = im_pil.crop((x,y,x+width,y+height)) im_pil = im_pil.resize((100,100)) im_pil = PIL.ImageOps.grayscale(im_pil) curr_image_tens = (transformer(im_pil)).reshape((10000)).numpy() tensor_collection_faces.append(curr_image_tens) display(im_pil) tensor_collection_output.append([float(input('Expected Result: '))]) input_tensor = torch.tensor(tensor_collection_faces) output_tensor = torch.tensor(tensor_collection_output) loss_fn = torch.nn.BCELoss() optimizer = torch.optim.Adam(net.parameters(), lr=1e-3) for i in tqdm(range(500), desc='Training'): y_pred = net(input_tensor) loss = loss_fn(y_pred, output_tensor) optimizer.zero_grad() loss.backward() optimizer.step() print (net(input_tensor)) The output of the code: Training: 100%|██████████| 500/500 [00:11<00:00, 44.40it/s]tensor([[0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000], [0.5000]], grad_fn=<SigmoidBackward>)
You applied both relu and sigmoid to your final output. In this case, you want to apply only sigmoid. def forward(self, x): x = F.relu(x) x = self.step1(x) x = F.relu(x) x = self.step2(x) x = F.relu(x) x = self.step3(x) x = F.relu(x) # <---- delete this line x = torch.sigmoid(x) return (x) What's happening is your network is outputting negative values in the last layer (before relu or sigmoid are applied), which when passed to relu go to 0. sigmoid(0) = 0.5, which is why you are seeing 0.5. x = self.step3(x) # x = some negative value x = F.relu(x) # relu(negative) = 0 x = torch.sigmoid(x) # sigmoid(0) = 0.5 There might be other issues with your code, but it's hard to say without having access to the data/labels (or even toy data).
https://stackoverflow.com/questions/67644421/
Cannot export Yolov5 model using export
I've trained Yolov5s model in colab.research env. After training I've moved best.pt to main yolov5 directory and renamed file to yolov5s.pt. After calling export.py i got error listed below !python models/export.py Namespace(batch_size=1, img_size=[640, 640], weights='./yolov5s.pt') Traceback (most recent call last): File "models/export.py", line 33, in <module> model = attempt_load(opt.weights, map_location=torch.device('cpu')) # load FP32 model File "./models/experimental.py", line 137, in attempt_load model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 587, in load with _open_zipfile_reader(opened_file) as opened_zipfile: File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 242, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory I'm trying to export this model to use it in Android App. If I use best.pt I've got other error on loading network: E/AndroidRuntime: FATAL EXCEPTION: main Process: org.pytorch.demo.objectdetection, PID: 6935 java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.demo.objectdetection/org.pytorch.demo.objectdetection.MainActivity}: com.facebook.jni.CppException: [enforce fail at inline_container.cc:222] . file not found: archive/constants.pkl (no backtrace available) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3449) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3601) at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:85) at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135) at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2066) at android.os.Handler.dispatchMessage(Handler.java:106) at android.os.Looper.loop(Looper.java:223) at android.app.ActivityThread.main(ActivityThread.java:7656) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947) Caused by: com.facebook.jni.CppException: [enforce fail at inline_container.cc:222] . file not found: archive/constants.pkl (no backtrace available) at org.pytorch.NativePeer.initHybridAndroidAsset(Native Method) at org.pytorch.NativePeer.<init>(NativePeer.java:27) at org.pytorch.PyTorchAndroid.loadModuleFromAsset(PyTorchAndroid.java:31) at org.pytorch.demo.objectdetection.MainActivity.onCreate(MainActivity.java:165) at android.app.Activity.performCreate(Activity.java:8000) at android.app.Activity.performCreate(Activity.java:7984) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1309) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3422) ... 11 more I/Process: Sending signal. PID: 6935 SIG: 9 I haven't found any solutions on my own. Do you have any idea how can I fix this error? Edit: Fixed - I could export model if I linked to directory: !python models/export.py --weights ./runs/train/exp7/weights/best.pt --img 640 --batch 1
Fixed by linking to the model in the runs directory: !python models/export.py --weights ./runs/train/exp/weights/best.pt --img 640 --batch 1
https://stackoverflow.com/questions/67644548/
How to reshape a 32x512 into 128x128, keeping it as four consistent 32x128 parts stacked?
So I want to turn 32x512 into 128x128, keeping it as four consistent 32x128 parts stacked vertically (the original post illustrated this with an image). How to do such a thing inside the PyTorch model? So here is what I tried based on this answer:
import torch
x = torch.arange(32*512*3*2).reshape(2, 3, 32, 512)
print(x.shape)
x = list(x.split(4, dim=3))
print(f" xx {len(x)} {iii.shape for iii in x }")
x = torch.cat(x)
print(f" xs {x.shape}")
And this outputs:
torch.Size([2, 3, 32, 512])
 xx 128 <generator object <genexpr> at 0x7f358f1edad0>
 xs torch.Size([256, 3, 32, 4])
While I wanted xs torch.Size([2, 3, 128, 128])
You can use, for tensor t: torch.cat(t.split(128, dim=1)) To transform in the reverse direction, you can use: torch.cat(t.split(32, dim=0)) For your updated question: torch.cat(t.split(128, dim=3), dim=2) For the reverse: torch.cat(t.split(32, dim=2), dim=3) In general, the dim for torch.split tells which dimension you want to split over, and the dim for torch.cat tells which dimension you want to concatenate over.
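As a quick check with the shapes from the question, the round trip works as expected (a minimal sketch):
import torch

x = torch.arange(2*3*32*512).reshape(2, 3, 32, 512)
y = torch.cat(x.split(128, dim=3), dim=2)   # stack the four 32x128 parts vertically
print(y.shape)                              # torch.Size([2, 3, 128, 128])
z = torch.cat(y.split(32, dim=2), dim=3)    # reverse the operation
print(torch.equal(x, z))                    # True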
https://stackoverflow.com/questions/67645394/
PyTorch: Error>> expected scalar type float but found double
I've just started using pytorch and I am trying a simple multi-layer perceptron . My ReLU Activation Function is the following: def ReLU_activation_func(outputs): print(type(outputs)) result = torch.where(outputs > 0, outputs, 0.) result = float(result) return result So I am trying to maintain the value which is greater than 0 and change the value to 0 if the value is smaller than 0. And this is a part of the main code where I use the ReLU Function (where I have the error): def forward_pass(train_loader): for batch_idx, (image, label) in enumerate(train_loader): print(image.size()) x = image.view(-1, 28 * 28) print(x.size()) input_node_num = 28 * 28 hidden_node_num = 100 output_node_num = 10 W_ih = torch.rand(input_node_num, hidden_node_num) W_ho = torch.rand(hidden_node_num, output_node_num) final_output_n = ReLU_activation_func(torch.matmul(x, W_ih)) and when I run the code, I get the following error: RuntimeError: 1 forward_pass(train_loader) in forward_pass(train_loader) -----14 W_ih = torch.rand(input_node_num, hidden_node_num) -----15 W_ho = torch.rand(hidden_node_num, output_node_num) ---->16 final_output_n = ReLU_activation_func(torch.matmul(x, W_ih)) in ReLU_activation_func(outputs) -----10 print(type(outputs)) ---->11 result = torch.where(outputs > 0, outputs, 0.) -----12 result = float(result) -----13 return result RuntimeError: expected scalar type float but found double Any help?
The issue is not with result, it's either with x, W_ih, or torch.where(outputs > 0, outputs, 0.). If you don't set the dtype argument of torch.rand(), it will assign the dtype based on PyTorch's global default value. The global default can be changed using torch.set_default_tensor_type(). Or go the easy route:
def ReLU_activation_func(outputs):
    print(outputs.dtype)
    result = torch.where(outputs > 0, outputs, torch.zeros_like(outputs)).float()
    return result

# for the forward pass function, convert the tensors to floats before matmul
def forward_pass(train_loader):
    for batch_idx, (image, label) in enumerate(train_loader):
        ... # <your code>
        x, W_ih = x.float(), W_ih.float()
        final_output_n = ReLU_activation_func(torch.matmul(x, W_ih))
https://stackoverflow.com/questions/67645837/
Methods to Load Saved Python List in Pytorch?
I have a question on how to load saved python lists in Pytorch. The reference environment is Google Colab. Suppose I saved the regular list as follows. drive.mount('/content/gdrive') training_sets = ["a","b","c"] model_save_name = 'training_sets.pt' path = F"/content/gdrive/My Drive/Documents" torch.save(training_sets,path) Does pytorch support loading for regular python lists? Or does it only apply for pytorch tensors? Thank you.
Of course yes! You can save any kind of python object using torch.save Example import torch a = [1,2,3] torch.save(a,'a.pth') b = torch.load('a.pth') >>> b [1, 2, 3]
https://stackoverflow.com/questions/67646158/
Pytorch error: 'BiSeNet' object has no attribute 'module'
I'm new to pytorch and I'm trying to study BiSeNet for image segmentation (code taken from a github repo: https://github.com/ooooverflow/BiSeNet/blob/master/train.py). During the training phase, after some training epochs, the net performs validation and tries to save the parameters of the model if the results of the val are better than the previous ones. During this last operation, I get this error in line 109 (and also 102 during training):
AttributeError: 'BiSeNet' object has no attribute 'module'
I do not paste all the code here, but just the main steps. First of all, they build the model like that:
os.environ['CUDA_VISIBLE_DEVICES'] = args.cuda
model = BiSeNet(args.num_classes, args.context_path)
if torch.cuda.is_available() and args.use_gpu:
    model = model.cuda()
So there exists a BiSeNet object created thanks to an imported module called "model" where there is a file named build_BiSeNet.py; in this script the class BiSeNet is defined and there is no attribute named module. Looking at the pytorch documentation, it seems like in the Model class there is an attribute called modules which contains the module I'd like to save. In the docs, they also suggest doing torch.save(model.state_dict(), ...) in order to save the model, without calling the module attribute (like it is done in line 109). So, finally, my question is: in order to avoid the error it gives me, should I remove .module in line 109 (and so also in line 102), or maybe change this attribute to .modules?
Look at their demo.py, they are defining the model: model = BiSeNet(args.num_classes, args.context_path) if torch.cuda.is_available() and args.use_gpu: model = torch.nn.DataParallel(model).cuda() Once you wrap the model with nn.DataParallel you get an "extra" .module in your way. The saved checkpoint does not have this extra .module and therefore, when loading a saved checkpoint: model.module.load_state_dict(torch.load(args.checkpoint_path)) It is not loaded to the model directly, but rather to the model.module.
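For the reverse situation — a checkpoint that was saved from a DataParallel-wrapped model and needs to be loaded into a plain (unwrapped) model — a common sketch is to strip the 'module.' prefix from the state-dict keys instead:
state_dict = torch.load(args.checkpoint_path)
# drop the leading 'module.' that nn.DataParallel adds to every parameter name
state_dict = {k[len('module.'):] if k.startswith('module.') else k: v
              for k, v in state_dict.items()}
model.load_state_dict(state_dict)
Either approach works; the key point is that the wrapping of the model at save time and at load time must match, or the key names must be adjusted.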
https://stackoverflow.com/questions/67647138/
Mean squared logarithmic error using pytorch
Hello, I'm new to PyTorch and I would like to use mean squared logarithmic error as a loss function in my neural network for training my DQN agent, but I can't find MSLE in torch.nn.functional. What's the best way to implement it?
Well, it is not hard to do: MSLE is just MSE applied in log space, i.e. MSLE = (1/n) * Σ (log(1 + pred_i) - log(1 + actual_i))². As a user on the PyTorch forum suggested, you can add it as a class like this (note the torch.sqrt: this class actually computes RMSLE, the root of MSLE — drop the sqrt if you want plain MSLE):
class RMSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        return torch.sqrt(self.mse(torch.log(pred + 1), torch.log(actual + 1)))
then you can call it normally
criterion = RMSLELoss()
rmsle = criterion(pred, actual)
https://stackoverflow.com/questions/67648033/
Is there a vectorized approach in PyTorch to get this result?
I would like to get this result with torch functions. Do you have suggestions? import torch test_tensor=torch.tensor([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]] ) print(test_tensor) ''' I would like to get: t_1 = torch.tensor([[6], #1+2+3 [24]])#7+8+9 t_2 = torch.tensor([[9], #1+3+5 [27]])#7+9+11 '''
Using standard Python stride notation import torch test_tensor=torch.tensor([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]] ) t1 = test_tensor[:, :3].sum(dim=1) print(t1) t2 = test_tensor[:, ::2].sum(dim=1) print(t2)
https://stackoverflow.com/questions/67649104/
Find euclidean / cosine distance between a tensor and all tensors stored in a column of dataframe efficently
I have a tensor 'input_sentence_embed' with shape torch.Size([1, 768]). There is a dataframe 'matched_df' which looks like
  INCIDENT_NUMBER   enc_rep
0 INC000030884498   [[tensor(-0.2556), tensor(0.0188), tensor(0.02...
1 INC000029956111   [[tensor(-0.3115), tensor(0.2535), tensor(0.20...
2 INC000029555353   [[tensor(-0.3082), tensor(0.2814), tensor(0.24...
3 INC000029555338   [[tensor(-0.2759), tensor(0.2604), tensor(0.21...
The shape of each tensor element in the dataframe looks like
matched_df['enc_rep'].iloc[0].size()
torch.Size([1, 768])
I want to find the euclidean / cosine similarity between 'input_sentence_embed' and each row of 'matched_df' efficiently. If they were scalar values, I could have easily broadcast 'input_sentence_embed' as a new column in 'matched_df' and then found the cosine similarity between the two columns. I am struggling with two problems:
How to broadcast 'input_sentence_embed' as a new column to the 'matched_df'
How to find cosine similarity between tensors stored in two columns
Maybe someone can also suggest other, easier methods to achieve the end goal of finding the similarity between a tensor value and all tensors stored in a column of a dataframe efficiently.
Input data:
import pandas as pd
import numpy as np
from torch import tensor

match_df = pd.DataFrame({'INCIDENT_NUMBER': ['INC000030884498',
                                             'INC000029956111',
                                             'INC000029555353',
                                             'INC000029555338'],
                         'enc_rep': [[[tensor(0.2971), tensor(0.4831), tensor(0.8239), tensor(0.2048)]],
                                     [[tensor(0.3481), tensor(0.8104), tensor(0.2879), tensor(0.9747)]],
                                     [[tensor(0.2210), tensor(0.3478), tensor(0.2619), tensor(0.2429)]],
                                     [[tensor(0.2951), tensor(0.6698), tensor(0.9654), tensor(0.5733)]]]})
input_sentence_embed = [[tensor(0.0590), tensor(0.3919), tensor(0.7821), tensor(0.1967)]]
How to broadcast 'input_sentence_embed' as a new column to the 'matched_df'
match_df["input_sentence_embed"] = [input_sentence_embed] * len(match_df)
How to find cosine similarity between tensors stored in two columns
a = np.vstack(match_df["enc_rep"])
b = np.hstack(input_sentence_embed)
# note axis=1: each row needs its own norm for a proper per-row cosine similarity
match_df["cosine_similarity"] = a.dot(b) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b))
Output result:
  INCIDENT_NUMBER   enc_rep                                            input_sentence_embed                               cosine_similarity
0 INC000030884498   [[tensor(0.2971), tensor(0.4831), tensor(0.823... [[tensor(0.0590), tensor(0.3919), tensor(0.782... 0.971748
1 INC000029956111   [[tensor(0.3481), tensor(0.8104), tensor(0.287... [[tensor(0.0590), tensor(0.3919), tensor(0.782... 0.624403
2 INC000029555353   [[tensor(0.2210), tensor(0.3478), tensor(0.261... [[tensor(0.0590), tensor(0.3919), tensor(0.782... 0.820259
3 INC000029555338   [[tensor(0.2951), tensor(0.6698), tensor(0.965... [[tensor(0.0590), tensor(0.3919), tensor(0.782... 0.952969
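A torch-native alternative sketch: assuming the row embeddings can be stacked into a float tensor A of shape [N, 768] and the query is a tensor q of shape [1, 768] (names hypothetical, stand-in data below), PyTorch's built-in cosine similarity broadcasts over rows in one vectorized call:
import torch
import torch.nn.functional as F

A = torch.rand(4, 768)  # stand-in for the stacked 'enc_rep' rows
q = torch.rand(1, 768)  # stand-in for 'input_sentence_embed'

cos = F.cosine_similarity(A, q, dim=1)  # shape [N], one similarity per row
eucl = torch.cdist(A, q).squeeze(1)     # euclidean distances, shape [N]
The results can then be assigned back to the dataframe directly, e.g. match_df["cosine_similarity"] = cos.numpy().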
https://stackoverflow.com/questions/67656142/
Setting constraints for parameters in pytorch
I am trying to set some constraints for weight parameters in PyTorch, e.g. the sum of every row of the weight matrix be exactly one for a fully connected layer: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.layer1 = nn.Linear(28*28, 10*10) self.layer2 = nn.Linear(10*10, 5*5) self.layer3 = nn.Linear(5*5, 10) def forward(self, x): x=torch.sigmoid(self.layer1(x)) x=torch.sigmoid(self.layer2(x)) x=torch.sigmoid(self.layer3(x)) model=Net() The constraint for this example network would be: torch.sum(model.linear1.weight,0)==1 torch.sum(model.linear2.weight,0)==1 torch.sum(model.linear3.weight,0)==1 A commonly used method to set a constraint, clamp, is used to set constraints for every element, but in this case, I would be setting a constraint for every row, instead of any particular element of the weight matrix. Are there any ways to implement this kind of constraint?
A way I can think about is for example to normalize the vector of your choosing by its norm, which will give its direction, with size 1. w0 = model.linear1.weight[0, :] w0_hat = w0 / torch.linalg.norm(w0) # direction of w0, norm=1 I don't really see a way of doing this for the .sum, but I also don't see why one would want to. Using L1 norm would do that for the .sum if you can guarantee the weights are all non-negative. Probably this isn't what you want. If you insist of normalizing such that the .sum will equal to 1, you have to define the normalization yourself, because there is no single algorithm to decide which weight index gets changed, and by how much.
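For the sum-to-one constraint itself, one common sketch is a hard projection after each optimizer step (using the Net from the question; this assumes the column sums are nonzero, and note the optimizer is not aware of the projection):
optimizer.step()
with torch.no_grad():
    for layer in (model.layer1, model.layer2, model.layer3):
        # divide each column by its sum so torch.sum(layer.weight, 0) == 1
        layer.weight.div_(layer.weight.sum(dim=0, keepdim=True))
An alternative is to reparametrize, e.g. store unconstrained weights and apply a softmax over dim 0 inside forward, which keeps the constraint satisfied by construction.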
https://stackoverflow.com/questions/67657700/
PyTorch: TypeError: 'int' object is not subscriptable
I'm new to pytorch and I am creating a one hot encoding function for multi-layer perceptron but I'm having some issues. Here is the code: def one_hot_encoding(label): for idx, val in enumerate(label): one_hot_outputs = [0]*len(label) idx_n = idx[val] one_hot_outputs[idx_n] = 1 return one_hot_outputs I'm having a type error saying: in one_hot_encoding(label) 2 for idx, val in enumerate(label): 3 one_hot_outputs = [0]*len(label) > 4 idx_n = idx[val] 5 one_hot_outputs[idx_n] = 1 6 return one_hot_outputs TypeError: 'int' object is not subscriptable Any help?
This is not a direct answer, but an alternative. PyTorch already has functionality for this: torch.nn.functional.one_hot. So, if you have a label tensor label and n classes, just call: torch.nn.functional.one_hot(label, num_classes=n)
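A quick usage sketch:
import torch

label = torch.tensor([0, 2, 1])
print(torch.nn.functional.one_hot(label, num_classes=3))
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])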
https://stackoverflow.com/questions/67658887/
Pytorch Data Loader concatenate an image to input images
In PyTorch data loader, how could I concatenate an image (let's say x.jpg) in-band wise to each and every input images. ie, in effect I will have 4 band input ( 3 band input jpg with 1 band x.jpg. How to implement it. Please find below example of my current dataloader just to load the images. To this, I want to add x.jpg to "image"(ie input image, not to mask) from PIL import Image class lakeDataSet(Dataset): def __init__(self, root, transform): super().__init__() self.root = root self.img_dir = os.path.join(root,'image-c3/c3-crop') #9UAV self.mask_dir = os.path.join(root,'label-c3/c3-crop') # self.mask_dir = os.path.join(root,'test') self.files = [fname for fname in os.listdir(self.img_dir) if fname.endswith('.jpg')] self.transform = transform def __len__(self): return len(self.files) def __getitem__(self,I): fname = self.files[i] img_path = os.path.join(self.img_dir, fname) mask_path = os.path.join(self.mask_dir, fname) img = self.transform(Image.open(img_path)) mask = self.transform(Image.open(mask_path)) return img, mask
I suppose the self.transform already has ToTensor. Otherwise you should specify it as well. Then you can just concat the first dimension. Like x_jpg = self.transform(Image.open('x.jpg')) img = torch.cat((img, x_jpg), 0) The x.jpg has to have only 1 channel, if it's an RGB then obviously it'll become 6 channels instead of 4.
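As a small refinement sketch (assuming x.jpg is the same for every sample), you could load and transform it once in __init__ rather than on every __getitem__ call:
# in __init__ (after self.transform is set); the file location is an assumption
self.extra_band = self.transform(Image.open('x.jpg'))  # 1-channel tensor, shape [1, H, W]

# in __getitem__
img = torch.cat((img, self.extra_band), 0)  # 3 + 1 = 4 channels
This avoids re-reading and re-transforming the same file for each item of the dataset.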
https://stackoverflow.com/questions/67659757/
What and where am I going wrong in this code for pytorch based object detection?
I am using Yolov5 for this project Here is my code import numpy as np import cv2 import torch import torch.backends.cudnn as cudnn from models.experimental import attempt_load from utils.general import non_max_suppression weights = '/Users/nidhi/Desktop/yolov5/best.pt' device = torch.device('cpu') model = attempt_load(weights, map_location=device) # load FP32 model stride = int(model.stride.max()) # model stride cudnn.benchmark = True # Capture with opencv and detect object cap = cv2.VideoCapture('Pothole testing.mp4') width, height = (352, 352) # quality cap.set(3, width) # width cap.set(4, height) # height while(cap.isOpened()): time.sleep(0.2) # wait for 0.2 second ret, frame = cap.read() if ret ==True: now = time.time() img = torch.from_numpy(frame).float().to(device).permute(2, 0, 1) img /= 255.0 # 0 - 255 to 0.0 - 1.0 if img.ndimension() == 3: img = img.unsqueeze(0) pred = model(img, augment=False)[0] pred = non_max_suppression(pred, 0.39, 0.45, classes=0, agnostic=True) # img, conf, iou, classes, ... print('time -> ', time.time()-now) else: break cap.release() The error I am getting: File "run.py", line 38, in <module> pred = model(img, augment=False)[0] File "/Users/nidhi/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/nidhi/Desktop/yolov5/models/yolo.py", line 118, in forward return self.forward_once(x, profile) # single-scale inference, train File "/Users/nidhi/Desktop/yolov5/models/yolo.py", line 134, in forward_once x = m(x) # run File "/Users/nidhi/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/nidhi/Desktop/yolov5/models/common.py", line 152, in forward return torch.cat(x, self.d) RuntimeError: Sizes of tensors must match except in dimension 1. Got 108 and 107 in dimension 3 (The offending index is 1) Operating system: macOS Big Sur 11.2.3 Python version: 3.8.2 The model is used best.pt which I had trained on Google Colab, I used yolov5l model to train the dataset.
Are you getting your error in the following line? pred = model(img, augment=False)[0] It might be because YOLO expects inputs of the image size which are multiple of 32. So 320×320, 352×352 etc. But you are 352x288. You will either have to resize it, or pad the 288 dimension with white/black pixels to make it 352. If you are not sure about where you are getting the error, can you attach the whole error?
https://stackoverflow.com/questions/67663681/
No such file or directory
import os
from PIL import Image

path = 'D:/SomeExperiments/KITTRawData/2011_09_26/2011_09_26_drive_0091_sync/image_03/data/0000000100.jpg'
with open(path, 'rb') as f:
    pass
error as below
[Errno 2] No such file or directory: 'D:/SomeExperiments/KITTRawData/2011_09_26/2011_09_26_drive_0091_sync/image_03/data/0000000224.jpg'
but I can find this path on my computer (the original post included a file-explorer screenshot of the directory).
Possible solution Hold the shift key and right click on the image file. Select copy as path. Then paste the copied path into your python script. This SHOULD work for sure. Possible reasons for the error You are using slashes instead of backslashes - My version of windows and python supports using slashes but I'm not sure if it supports all. File extension mismatch - Maybe your files are .jpeg and yet you are trying .jpg in your script. Path is not case sensitive in Windows but it's always a good practice to use correct case in the path, for example if your file is aBc.TXT you should use the exact same name not abc.txt.
https://stackoverflow.com/questions/67665824/
Convert an Array to PyTorch IValue on Android
Simple question. I have an array of ints in Android (Kotlin). I need to convert it to an org.pytorch.IValue. I believe that such a conversion can be done, but I can't figure out how to do it. This is my array val array = Array(34) { IntArray(1) } How do I get this wrapped in an IValue?
I found the way to do this by way of the Tensor object. The Tensor class has a static method for creating tensor buffers, such as by Tensor.allocateIntBuffer(). The argument for that call accepts the number of elements that the buffer will contain. The .put() method on the buffer places items within the buffer. Once the buffer is populated, it can be converted to an IValue with IValue.from(sourceTensor).
https://stackoverflow.com/questions/67666966/
Shuffling along a given axis in PyTorch
I have a dataset that gets loaded with the following dimensions: [batch_size, seq_len, n_features] (e.g. torch.Size([16, 600, 130])). I want to be able to shuffle this data along the sequence length axis (axis=1) without altering the batch ordering or the feature vector ordering in PyTorch. Further explanation: for exemplification, let's say my batch size is 3, sequence length is 3 and number of features is 2.
example: tensor([[[1,1],[2,2],[3,3]],[[4,4],[5,5],[6,6]],[[7,7],[8,8],[9,9]]])
I want to be able to randomly shuffle the following way:
tensor([[[3,3],[1,1],[2,2]],[[6,6],[5,5],[4,4]],[[8,8],[7,7],[9,9]]])
Are there any PyTorch functions that will do that automatically for me, or does anyone know what would be a good way to implement this?
You can use torch.randperm. For tensor t, you can use: t[:,torch.randperm(t.shape[1]),:] For your example: >>> t = torch.tensor([[[1,1],[2,2],[3,3]],[[4,4],[5,5],[6,6]],[[7,7],[8,8],[9,9]]]) >>> t tensor([[[1, 1], [2, 2], [3, 3]], [[4, 4], [5, 5], [6, 6]], [[7, 7], [8, 8], [9, 9]]]) >>> t[:,torch.randperm(t.shape[1]),:] tensor([[[2, 2], [3, 3], [1, 1]], [[5, 5], [6, 6], [4, 4]], [[8, 8], [9, 9], [7, 7]]])
https://stackoverflow.com/questions/67672528/
Use other source to download from than pip in PythonVirtualenvOperator
Say I'm using the PythonVirtualenvOperator and have PyTorch as a requirement. When calling "pip freeze" I get:

# requirements.txt
...
torch==1.8.1+cpu

and defining my task as:

# tasks.py
from airflow.operators.python import PythonVirtualenvOperator

t1 = PythonVirtualenvOperator(
    task_id="test",
    python_version="3.7",
    python_callable=test_func,
    requirements=["torch==1.8.1+cpu"]
)

throws the error:

ERROR: Could not find a version that satisfies the requirement torch==1.8.1+cpu

In the documentation from PyTorch we install it by

pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

i.e. downloading it from their webpage and not from PyPI (if I understand it correctly), which might be why pip fails in the venv. Thus I would like to make the venv (created by Airflow for the PythonVirtualenvOperator) download torch from the link specified above instead of from PyPI. Is that doable? And is there a difference between torch==1.8.1+cpu and just torch==1.8.1 when using the CPU, i.e. does it make a difference if I just remove the +cpu?
This seems to be working (tested on Py3.7):

requirements=["torch==1.8.1+cpu", "-f", "https://download.pytorch.org/whl/torch_stable.html"]

Logs from the task:

[2021-05-24 18:37:20,762] {process_utils.py:135} INFO - Executing cmd: virtualenv /tmp/venv9kpx2ahm --system-site-packages --python=python3.7
[2021-05-24 18:37:20,781] {process_utils.py:139} INFO - Output:
[2021-05-24 18:37:21,365] {process_utils.py:143} INFO - created virtual environment CPython3.7.10.final.0-64 in 436ms
[2021-05-24 18:37:21,367] {process_utils.py:143} INFO - creator CPython3Posix(dest=/tmp/venv9kpx2ahm, clear=False, no_vcs_ignore=False, global=True)
[2021-05-24 18:37:21,369] {process_utils.py:143} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
[2021-05-24 18:37:21,370] {process_utils.py:143} INFO - added seed packages: pip==21.1.1, setuptools==56.0.0, wheel==0.36.2
[2021-05-24 18:37:21,371] {process_utils.py:143} INFO - activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
[2021-05-24 18:37:21,386] {process_utils.py:135} INFO - Executing cmd: /tmp/venv9kpx2ahm/bin/pip install torch==1.8.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
[2021-05-24 18:37:21,401] {process_utils.py:139} INFO - Output:
[2021-05-24 18:37:22,455] {process_utils.py:143} INFO - Looking in links: https://download.pytorch.org/whl/torch_stable.html
[2021-05-24 18:37:34,259] {process_utils.py:143} INFO - Collecting torch==1.8.1+cpu
[2021-05-24 18:37:34,820] {process_utils.py:143} INFO - Downloading https://download.pytorch.org/whl/cpu/torch-1.8.1%2Bcpu-cp37-cp37m-linux_x86_64.whl (169.1 MB)
[2021-05-24 18:41:46,125] {process_utils.py:143} INFO - Requirement already satisfied: numpy in /usr/local/lib/python3.7/site-packages (from torch==1.8.1+cpu) (1.20.3)
[2021-05-24 18:41:46,128] {process_utils.py:143} INFO - Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/site-packages (from torch==1.8.1+cpu) (3.7.4.3)
[2021-05-24 18:41:49,211] {process_utils.py:143} INFO - Installing collected packages: torch
[2021-05-24 18:41:57,106] {process_utils.py:143} INFO - Successfully installed torch-1.8.1+cpu

However, I'm not sure if installing PyTorch for every task/DAG run is optimal. By installing required dependencies on the workers you can reduce the overhead (in my case installing PyTorch took 5 minutes).
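Putting that together with the task from the question (a sketch; test_func and the task_id are the asker's placeholders):

from airflow.operators.python import PythonVirtualenvOperator

t1 = PythonVirtualenvOperator(
    task_id="test",
    python_version="3.7",
    python_callable=test_func,
    requirements=[
        "torch==1.8.1+cpu",
        "-f", "https://download.pytorch.org/whl/torch_stable.html",
    ],
)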
https://stackoverflow.com/questions/67674252/
Calculate `dot` product of image and vector
I would like to calculate something like a dot product of a vector and an image, with shapes (3) and (3, 1080, 1080), and the output should be (1, 1080, 1080).

img = torch.rand((3, 1080, 1080))
R_coefficient = float(0.2126)
G_coefficient = float(0.7152)
B_coefficient = float(0.0722)
Y = img[0, ...]*R_coefficient + img[1, ...]*G_coefficient + img[2, ...]*B_coefficient

The code above gives me the results I look for, but I would like to use PyTorch utils like torch.dot, torch.matmul, etc. I have tried these:

TensorImage(torch.dot(img[:, ...], RGB_vector[:]))
TensorImage(torch.matmul(img, RGB_vector[:]))

These two options give me errors connected with shape, so I have rejected them.

RGB_vector = torch.tensor([[[0.2126, 0.7152, 0.0722]]], dtype=img.dtype).permute(2, 1, 0)  # torch.t(RGB_vector)
print(RGB_vector.shape)
print(img.shape)
return TensorImage(img[:, ...]*RGB_vector)  # .unsqueeze_(0)

This sample above works, but I am getting an image with shape (3, 1080, 1080), and I need to get 1 instead of 3 in the first dimension.

Current working example:

import torch
img = torch.rand((3, 1080, 1080))
RGB_vector = torch.tensor([[[0.2126, 0.7152, 0.0722]]], dtype=img.dtype).permute(2, 1, 0)
print(RGB_vector.shape)
print(img.shape)
TensorImage(img[:, ...]*RGB_vector).unsqueeze_(0)

Greetings, DA
(This delivers the correct shape, but I'm not sure if that's exactly what you need.)

A matrix multiplication where the second tensor has dim > 2 works when the last dimension of the first tensor equals the second-to-last dimension of the second tensor; the leading dimensions are treated as batch dimensions.

v = torch.rand(3).unsqueeze(0)
m = torch.rand(3, 1080, 1080).transpose(0, 1)
r = torch.matmul(v, m).transpose(0, 1)
print(r.shape)
>>> torch.Size([1, 1080, 1080])
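An equivalent formulation with torch.einsum may read more directly (my suggestion, not part of the original answer):

import torch

img = torch.rand(3, 1080, 1080)
rgb = torch.tensor([0.2126, 0.7152, 0.0722])

# contract the channel axis: out[h, w] = sum_c rgb[c] * img[c, h, w]
y = torch.einsum('c,chw->hw', rgb, img).unsqueeze(0)
print(y.shape)  # torch.Size([1, 1080, 1080])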
https://stackoverflow.com/questions/67678904/
OOP structure of Pytorch
I have started learning OOP for implementing my DL models in PyTorch. Where can I find the OOP structure of PyTorch? By structure I mean the class structures (i.e. the inheritance structure of classes, like the Sequential container). I know the Module class is the base for most structures.

Also, I was exploring PyTorch on GitHub and I am confused about how import statements in PyTorch work. For example, the Module class is defined in torch/nn/modules/module.py, but we only import torch.nn and write nn.Module to represent the Module class. Shouldn't it be nn.modules.module.Module, going by the way it is stored on GitHub?

# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

The Sequential class is defined in the /torch/nn/modules/container.py file, but we just use nn.Sequential here? Also, I didn't find a __call__ method in the Sequential class, so how does code like the above call a Sequential instance as a function?
In Python, what you import inside a file becomes part of that file's namespace. So when nn.modules.module.Module is imported inside __init__.py (which is inside the nn folder), it becomes part of the nn module. Here is a quick example. Say we have three files, file1.py, file2.py and file3.py, and a variable var = 10 defined in file1.py. If we import this variable inside file2.py, then file3.py can import the variable directly from file2.py.

file1.py

var = 10

file2.py

from file1 import var

file3.py

from file2 import var
print(var)  # Prints 10

Now let's go back to your question. As you said, the Module class is defined inside torch/nn/modules/module.py. But this class is imported first inside nn/modules/__init__.py and then inside nn/__init__.py. That is why you can import the Module class from the nn package. Importing it elsewhere does not change the type of the objects you create from the class:

from torch.nn import Module

module = Module()
print(type(module))  # torch.nn.modules.module.Module

As for __call__: Sequential does not define it itself; it inherits it from Module, whose __call__ dispatches to the forward method that Sequential overrides.
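A minimal illustration of that dispatch (my own sketch, mirroring in simplified form what nn.Module does):

class Module:
    def __call__(self, *args, **kwargs):
        # simplified: the real nn.Module also runs hooks here
        return self.forward(*args, **kwargs)

class Doubler(Module):
    def forward(self, x):
        return 2 * x

print(Doubler()(21))  # 42: calling the instance runs forward()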
https://stackoverflow.com/questions/67680144/
Compare two segmentation maps predictions
I am using consistency between two predicted segmentation maps on unlabeled data. For labeled data, I'm using nn.BCEWithLogitsLoss and Dice Loss.

I'm working on videos, which is why the outputs have 5 dimensions: (batch_size, channels, frames, height, width). I want to know how we can compare two predicted segmentation maps.

# gt_seg - Ground truth segmentation map - (8, 1, 8, 112, 112)
# aug_gt_seg - Augmented ground truth segmentation map - (8, 1, 8, 112, 112)

predicted_seg_1 = model(data, targets)  # (8, 1, 8, 112, 112)
predicted_seg_2 = model(augmented_data, augmented_targets)  # (8, 1, 8, 112, 112)

# define criterion
seg_criterion_1 = nn.BCEWithLogitsLoss(size_average=True)
seg_criterion_2 = nn.DiceLoss()

# labeled losses
supervised_loss_1 = seg_criterion_1(predicted_seg_1, gt_seg)
supervised_loss_2 = seg_criterion_2(predicted_seg_1, gt_seg)

# Consistency loss
if consistency_loss == "l2":
    consistency_criterion = nn.MSELoss()
    cons_loss = consistency_criterion(predicted_seg_1, predicted_seg_2)
elif consistency_loss == "l1":
    consistency_criterion = nn.L1Loss()
    cons_loss = consistency_criterion(predicted_seg_1, predicted_seg_2)

total_supervised_loss = supervised_loss_1 + supervised_loss_2
total_consistency_loss = cons_loss

Is this the right way to apply consistency between two predicted segmentation maps?

I'm mainly confused by the definition on the torch website: it describes a comparison of an input x with a target y. I thought my usage looks correct, since I want both predicted segmentation maps to be similar. But the 2nd segmentation map is not a target; that's why I'm confused. Because if this were valid, then every loss function could be applied in some way or another, and that doesn't look appealing to me. If it is the correct way to compare, can it be extended to other segmentation-based losses such as Dice Loss, IoU Loss, etc.?

One more query, regarding loss computation on labeled data:

# gt_seg - Ground truth segmentation map
# aug_gt_seg - Augmented ground truth segmentation map

predicted_seg_1 = model(data, targets)
predicted_seg_2 = model(augmented_data, augmented_targets)

# define criterion
seg_criterion_1 = nn.BCEWithLogitsLoss(size_average=True)
seg_criterion_2 = nn.DiceLoss()

# labeled losses
supervised_loss_1 = seg_criterion_1(predicted_seg_1, gt_seg)
supervised_loss_2 = seg_criterion_2(predicted_seg_1, gt_seg)

# augmented labeled losses
aug_supervised_loss_1 = seg_criterion_1(predicted_seg_2, aug_gt_seg)
aug_supervised_loss_2 = seg_criterion_2(predicted_seg_2, aug_gt_seg)

total_supervised_loss = supervised_loss_1 + supervised_loss_2 + aug_supervised_loss_1 + aug_supervised_loss_2

Is the calculation of total_supervised_loss correct? Can I apply loss.backward() on this?
Yes, this is a valid way to implement consistency loss. The nomenclature used by the PyTorch documentation lists one input as the target and the other as the prediction, but consider that L1, L2, Dice, and IoU loss are all symmetric (that is, Loss(a, b) = Loss(b, a)). So any of these functions will accomplish a form of consistency loss with no regard for whether one input is actually a ground truth or "target".
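One practical detail worth flagging (my addition, not from the original answer): since the model presumably outputs logits for BCEWithLogitsLoss, you may want the consistency term computed on probabilities rather than raw logits, e.g.:

import torch
import torch.nn as nn

predicted_seg_1 = torch.randn(8, 1, 8, 112, 112)  # stand-ins for the model outputs (logits)
predicted_seg_2 = torch.randn(8, 1, 8, 112, 112)

consistency_criterion = nn.MSELoss()
# compare sigmoid probabilities instead of raw logits
cons_loss = consistency_criterion(torch.sigmoid(predicted_seg_1),
                                  torch.sigmoid(predicted_seg_2))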
https://stackoverflow.com/questions/67682106/
difference between Dataset and TensorDataset in pyTorch
What is the difference between torch.utils.data.TensorDataset and torch.utils.data.Dataset? The docs are not clear about that, and I could not find any answers on Google.
The Dataset class is an abstract class that is used to define new types of (custom) datasets. In contrast, TensorDataset is a ready-to-use class that represents your data as a list of tensors.

You can define your custom dataset in the following way:

class CustomDataset(torch.utils.data.Dataset):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Your code
        self.instances = your_data

    def __getitem__(self, idx):
        return self.instances[idx]  # In case you stored your data in a list called instances

    def __len__(self):
        return len(self.instances)

If you just want to create a dataset that contains tensors for input features and labels, then use TensorDataset directly:

dataset = TensorDataset(input_features, labels)

Note that input_features and labels must match on the length of the first dimension.
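A small end-to-end sketch of the TensorDataset path (the shapes here are arbitrary assumptions for illustration):

import torch
from torch.utils.data import TensorDataset, DataLoader

input_features = torch.randn(100, 10)   # 100 samples, 10 features each
labels = torch.randint(0, 2, (100,))    # one label per sample

dataset = TensorDataset(input_features, labels)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for x_batch, y_batch in loader:
    print(x_batch.shape, y_batch.shape)  # torch.Size([16, 10]) torch.Size([16])
    break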
https://stackoverflow.com/questions/67683406/
Define nn.parameters with a for loop
I am interested in defining L weights in a custom neural network with PyTorch. If L is known, it is not a problem to define them one by one, but if L is not known I want to use a for loop to define them. My idea is to do something like this (which does not work):

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.nl = nn.ReLU()
        for i in range(L):
            namew = 'weight' + str(i)
            self.namew = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)

This should do something like this (which instead works, but is limited to a specific number of weights):

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.nl = nn.ReLU()
        self.weight1 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
        self.weight2 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
        self.weight3 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)

The problem with what I tried is that instead of a "dynamic" attribute name built from the string in namew, Python just assigns to the literal attribute namew. Therefore, instead of L weights, just 1 weight is defined. Is there some way to solve this problem?
Best way to accomplish this

You can accomplish this by using a ParameterDict or a ModuleDict (for nn.Module layers):

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.nl = nn.ReLU()

        # define some nn.Module layers
        self.layers = nn.ModuleDict()
        for i in range(L):
            self.layers["layer{}".format(i)] = torch.nn.Linear(i + 1, i + 2)  # sizes here are illustrative

        # define some non-module parameters
        self.weights = torch.nn.ParameterDict()
        for i in range(L):
            self.weights["weights{}".format(i)] = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
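To verify that the parameters really get registered (a quick check of my own, with L fixed to 3 for illustration):

import torch
import torch.nn as nn

L = 3

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.weights = nn.ParameterDict()
        for i in range(L):
            self.weights["weights{}".format(i)] = nn.Parameter(torch.randn(2, 2))

net = Network()
print([name for name, _ in net.named_parameters()])
# ['weights.weights0', 'weights.weights1', 'weights.weights2']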
https://stackoverflow.com/questions/67689104/
How to integrate a pytorch model into a dynamic optimization, for example in Pyomo or gekko
Let's say I have a PyTorch model describing the evolution of some multidimensional system based on its own state x and an external actuator u, so x_(t+1) = f(x_t, u_t) with f being the artificial neural network from PyTorch.

Now I want to solve a dynamic optimization problem to find an optimal sequence of u-values to minimize an objective that depends on x. Something like this:

min sum over all timesteps phi(x_t)
s.t.: x_(t+1) = f(x_t, u_t)

Additionally, I also have some upper and lower bounds on some of the variables in x.

Is there an easy way to do this using a dynamic optimization toolbox like Pyomo or GEKKO?

I already wrote some code that transforms a feedforward neural network into a numpy function, which can then be passed as a constraint to Pyomo. The problem with this approach is that it requires significant reprogramming effort every time the structure of the neural network changes, so quick testing becomes difficult. Also, integration of recurrent neural networks gets difficult because hidden cell states would have to be added as additional variables to the optimization problem.

I think a good solution could be to do the function evaluations and gradient calculations in torch and somehow pass the results to the dynamic optimizer. I'm just not sure how to do this.

Thanks a lot for your help!
TensorFlow and PyTorch models can't be directly integrated into GEKKO at this moment. But I believe you can retrieve the derivatives from TensorFlow and PyTorch, which allows you to pass them to GEKKO.

There is a GEKKO Brain module, with examples in the links below:

- GEKKO Brain feedforward neural network examples
- MIMO MPC example with a GEKKO neural network model

A recurrent neural network library in the GEKKO Brain module is currently being developed, which will allow all of GEKKO's dynamic optimization functions to be used easily. In the meantime, you can use a sequential method by wrapping the TensorFlow or PyTorch model in an available optimization solver, such as the scipy optimization module. Check out the link below for a dynamic optimization example with a Keras LSTM model and scipy optimize:

- Keras LSTM MPC
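To illustrate the sequential approach, here is a minimal sketch of my own (not from the original answer): a stand-in linear layer plays the role of the trained dynamics network, the objective phi is a placeholder quadratic, and a PyTorch rollout is wrapped in scipy.optimize with exact gradients from autograd:

import numpy as np
import torch
from scipy.optimize import minimize

f = torch.nn.Linear(3, 2)      # stand-in for the trained net: x_{t+1} = f([x_t, u_t])
T, x0 = 10, torch.zeros(2)     # horizon and initial state (assumed)

def objective(u_flat):
    u = torch.tensor(u_flat, dtype=torch.float32, requires_grad=True)
    x, cost = x0, torch.tensor(0.0)
    for t in range(T):
        x = f(torch.cat([x, u[t:t+1]]))   # roll the network forward one step
        cost = cost + (x ** 2).sum()      # phi(x_t): placeholder quadratic objective
    cost.backward()                       # autograd gives d(cost)/du exactly
    return cost.item(), u.grad.numpy().astype(np.float64)

res = minimize(objective, np.zeros(T), jac=True, bounds=[(-1.0, 1.0)] * T)
print(res.x)  # optimal actuator sequence under the sketch's assumptions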
https://stackoverflow.com/questions/67693181/
Torch to tensorflow
Is there a way to convert PyTorch code to TensorFlow? While I am a little familiar with TensorFlow, I am totally new to PyTorch. For example,

def get_variation_uncertainty(prediction_score_vectors: List[torch.tensor],
                              matrix_size: Tuple) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    prediction_score_vectors = torch.stack(tuple(prediction_score_vectors))
    wt_var = np.var(np.sum(prediction_score_vectors[:, :, 1:].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    tc_var = np.var(np.sum(prediction_score_vectors[:, :, [1, 3]].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    et_var = np.var(prediction_score_vectors[:, :, 3].cpu().numpy(), axis=0).reshape(matrix_size) * 100
    return wt_var.astype(np.uint8), tc_var.astype(np.uint8), et_var.astype(np.uint8)

How can I get the TensorFlow equivalent of the above code?
Per the comment, I would recommend using more tf functions to improve the performance and reduce the amount of GPU-CPU communication necessary. Here is an example:

@tf.function
def get_variation_uncertainty_tf(prediction_score_vectors, matrix_size):
    prediction_score_vectors = tf.stack(prediction_score_vectors)
    wt_var_tmp = tf.math.square(tf.math.reduce_std(tf.reduce_sum(prediction_score_vectors[:, :, 1:], axis=2), axis=0))  # Two steps because that was getting long
    wt_var = tf.reshape(wt_var_tmp, matrix_size) * 100
    tc_var_tmp = tf.math.square(tf.math.reduce_std(prediction_score_vectors[:, :, 1] + prediction_score_vectors[:, :, 3], axis=0))
    tc_var = tf.reshape(tc_var_tmp, matrix_size) * 100
    et_var_tmp = tf.math.square(tf.math.reduce_std(prediction_score_vectors[:, :, 3], axis=0))
    et_var = tf.reshape(et_var_tmp, matrix_size) * 100
    return tf.cast(wt_var, dtype=tf.uint8), tf.cast(tc_var, dtype=tf.uint8), tf.cast(et_var, dtype=tf.uint8)
    # if you need to return np arrays, do that instead of casting, i.e. (wt_var.numpy()).astype(np.uint8)

Tested and works here, although choosing which method is best will depend heavily on the shape of your data; feel free to try changing the shape to estimate which is best. In my testing the mostly-numpy code is actually better unless you have huge dimensions or will be running it in batches. https://colab.research.google.com/drive/1miOG6FV9MInanwwQxkYeSXVYirVeUh1r?usp=sharing
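A quick usage check of the function above (the shapes here are arbitrary assumptions; matrix_size must multiply out to the per-sample vector count, 20 in this sketch):

import tensorflow as tf

vectors = [tf.random.uniform((20, 4)) for _ in range(5)]  # 5 samples of (20, 4) score vectors
wt, tc, et = get_variation_uncertainty_tf(vectors, (4, 5))
print(wt.shape, wt.dtype)  # (4, 5) <dtype: 'uint8'>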
https://stackoverflow.com/questions/67694665/